Sample records for model fits based

  1. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    PubMed

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class-enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions Fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.
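    The class-enumeration bookkeeping that the plot condenses can be sketched in a few lines. The Python sketch below tabulates AIC and BIC across candidate class counts; the log-likelihoods and parameter counts are illustrative, not from any real fit, and the published tool is an R program with many more diagnostics.

```python
import math

def fit_criteria(log_lik, n_params, n_obs):
    """Return (AIC, BIC) for one fitted model."""
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * math.log(n_obs)
    return aic, bic

# One row per candidate number of latent trajectory classes
# (hypothetical log-likelihoods and free-parameter counts).
candidates = [
    (1, -1520.4, 4),
    (2, -1461.2, 8),
    (3, -1449.8, 12),
    (4, -1447.1, 16),
]
n_obs = 500
for k, ll, p in candidates:
    aic, bic = fit_criteria(ll, p, n_obs)
    print(f"classes={k}  AIC={aic:8.1f}  BIC={bic:8.1f}")
```

    Laying such rows side by side (together with entropy, group sizes, and other adequacy measures) is exactly the kind of one-page overview the fit-criteria assessment plot provides graphically.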

  2. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  3. Simulation Study on Fit Indexes in CFA Based on Data with Slightly Distorted Simple Structure

    ERIC Educational Resources Information Center

    Beauducel, Andre; Wittmann, Werner W.

    2005-01-01

Fit indexes were compared with respect to a specific type of model misspecification. Simple structure was violated by some secondary loadings that were present in the true models but not specified in the estimated models. The χ2 test, Comparative Fit Index, Goodness-of-Fit Index, Incremental Fit Index, Nonnormed Fit Index, root mean…

  4. Network growth models: A behavioural basis for attachment proportional to fitness

    NASA Astrophysics Data System (ADS)

    Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris

    2017-02-01

    Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example.
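    The attachment process whose behavioural basis the paper analyses is easy to simulate: each arriving node links to an existing node chosen with probability proportional to that node's fitness. A minimal sketch (fitness values are arbitrary illustrative draws, not from the paper):

```python
import random

def grow_network(n_nodes, fitness, rng):
    """Grow a network; returns the edge list. fitness[i] is node i's fitness."""
    edges = []
    for new in range(1, n_nodes):
        targets = list(range(new))                 # existing nodes
        weights = [fitness[t] for t in targets]
        total = sum(weights)
        r = rng.random() * total                   # roulette-wheel selection:
        acc = 0.0                                  # P(attach to t) = f_t / sum f
        for t, w in zip(targets, weights):
            acc += w
            if r <= acc:
                edges.append((new, t))
                break
    return edges

rng = random.Random(0)
fitness = [rng.uniform(0.1, 1.0) for _ in range(200)]
edges = grow_network(200, fitness, rng)
print(len(edges))  # 199 edges: one per arriving node
```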

  5. Student Background, School Climate, School Disorder, and Student Achievement: An Empirical Study of New York City's Middle Schools

    ERIC Educational Resources Information Center

    Chen, Greg; Weikart, Lynne A.

    2008-01-01

    This study develops and tests a school disorder and student achievement model based upon the school climate framework. The model was fitted to 212 New York City middle schools using the Structural Equations Modeling Analysis method. The analysis shows that the model fits the data well based upon test statistics and goodness of fit indices. The…

  6. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  7. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from the basic optical defocus principle, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position with the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance, even under low-light conditions.
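    The three-sample estimation idea can be illustrated with a stand-in model. The sketch below fits a parabola through three (motor position, sharpness) samples and takes its vertex as the estimated in-focus position; note this parabola is only a hedged stand-in, as the paper's actual ODFM fit is derived from defocus optics, not a quadratic.

```python
def estimate_focus(positions, sharpness):
    """Vertex of the parabola through three (position, sharpness) points."""
    (x1, x2, x3), (y1, y2, y3) = positions, sharpness
    # Coefficients of the quadratic y = a*x^2 + b*x + c through the points.
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)  # vertex = estimated in-focus motor position

# Synthetic sharpness values sampled from a curve peaking at position 40.
print(estimate_focus((10, 30, 60), (0.1, 0.9, 0.6)))  # ≈ 40
```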

  8. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of the transformed model errors. The BC-GED model unifies all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow a zero-mean Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted with the BC-GED model; e.g. the sensitivity to high flow of indicators with a large power of model errors results from the low probability of large model errors under the distribution those indicators assume. To assess the effect of the BC-GED parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the β > 1 class of distance-based goodness-of-fit indicators captures high flow very well but mimics baseflow very badly, whereas calibration with the β ≤ 1 class mimics baseflow very well. This is because, first, the larger the value of β, the greater the emphasis placed on high flow, and second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, in which case maximum likelihood estimation drives the model errors as close to zero as possible. The BC-GED approach, which estimates the parameters λ and β together with the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also fulfill the statistical assumptions best. However, in some cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the BC-GED parameters, because the model validation of the MAE is best in such cases.
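    The MSE/MAE correspondence described above follows directly from the GED negative log-likelihood, which, up to terms not involving the errors, is proportional to the sum of |e|^β. A minimal sketch of that objective:

```python
def ged_objective(errors, beta):
    """Distance-based calibration objective implied by a zero-mean GED:
    beta = 2 gives the squared-error (Gaussian/MSE) objective,
    beta = 1 gives the absolute-error (Laplace/MAE) objective."""
    return sum(abs(e) ** beta for e in errors)

errors = [0.5, -1.0, 2.0, -0.25]
mse_like = ged_objective(errors, 2.0)   # sum of squared errors = 5.3125
mae_like = ged_objective(errors, 1.0)   # sum of absolute errors = 3.75
print(mse_like, mae_like)
```

    Larger β weights large errors (high flows) more heavily, which is the class separation the abstract describes.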

  9. Comparative evaluation of a new lactation curve model for pasture-based Holstein-Friesian dairy cows.

    PubMed

    Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O

    2012-09-01

Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model with the other models. Nine empirical and 5 mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, 5 of the best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit was similar in all but one of the models in terms of fitting the average lactation, but the models differed in their ability to predict individual lactations; the widely used incomplete gamma model displayed this failing most clearly. The new log-quadratic model was robust in fitting average and individual lactations, was less affected by the sampling of the data, and was more parsimonious, having only 3 parameters, each of which lends itself to biological interpretation. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
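    The abstract does not give the log-quadratic model's exact parameterization, so the sketch below assumes one plausible 3-parameter form, log(yield) quadratic in log(days in milk), and fits it by ordinary least squares via the normal equations; treat the form as an illustrative assumption, not the paper's definition.

```python
import math

def fit_log_quadratic(days, yields):
    """Least-squares fit of log(y) = a + b*log(t) + c*log(t)**2 (assumed form)."""
    X = [[1.0, math.log(t), math.log(t) ** 2] for t in days]
    z = [math.log(y) for y in yields]
    n = 3
    # Normal equations (X'X) beta = X'z, solved by Gaussian elimination.
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    rhs = [sum(r[i] * zi for r, zi in zip(X, z)) for i in range(n)]
    for col in range(n):                           # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [arj - f * acj for arj, acj in zip(A[r], A[col])]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * n                               # back substitution
    for i in reversed(range(n)):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

# Synthetic test-day yields generated from known coefficients.
a, b, c = 2.0, 0.8, -0.12
days = [5, 15, 30, 60, 100, 150, 200, 250, 300]
yields = [math.exp(a + b * math.log(t) + c * math.log(t) ** 2) for t in days]
print([round(v, 3) for v in fit_log_quadratic(days, yields)])  # recovers a, b, c
```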

  10. Accuracy of Digital Impressions and Fitness of Single Crowns Based on Digital Impressions

    PubMed Central

    Yang, Xin; Lv, Pin; Liu, Yihong; Si, Wenjie; Feng, Hailan

    2015-01-01

In this study, the accuracy (precision and trueness) of digital impressions and the fitness of single crowns manufactured based on digital impressions were evaluated. #14-17 epoxy resin dentitions were made, while full-crown preparations of extracted natural teeth were embedded at #16. (1) To assess precision, deviations among repeated scan models made by the intraoral scanners TRIOS and MHT and the model scanners D700 and inEos were calculated through a best-fit algorithm and three-dimensional (3D) comparison. Root mean square (RMS) values and color-coded difference images were produced. (2) To assess trueness, micro computed tomography (micro-CT) was used to obtain the reference model (REF). Deviations between REF and the repeated scan models (from (1)) were calculated. (3) To assess fitness, single crowns were manufactured based on the TRIOS, MHT, D700 and inEos scan models. The adhesive gaps were evaluated under a stereomicroscope after cross-sectioning. Digital impressions showed lower precision and better trueness. Except for MHT, the means of RMS for precision were lower than 10 μm. Digital impressions showed better internal fitness. Fitness of single crowns based on digital impressions was up to the clinical standard. Digital impressions could be an alternative method for single-crown manufacturing. PMID:28793417
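    The RMS figure reported by such comparisons is the root-mean-square deviation between corresponding points after alignment. The sketch below shows the translation-only part of that computation (centroid alignment); a full best-fit, such as the Kabsch algorithm, also optimizes rotation, which is omitted here.

```python
import math

def rms_after_centroid_alignment(pts_a, pts_b):
    """RMS deviation of corresponding 3D points after centroid alignment."""
    n = len(pts_a)
    ca = [sum(p[i] for p in pts_a) / n for i in range(3)]
    cb = [sum(p[i] for p in pts_b) / n for i in range(3)]
    sq = 0.0
    for pa, pb in zip(pts_a, pts_b):
        sq += sum(((pa[i] - ca[i]) - (pb[i] - cb[i])) ** 2 for i in range(3))
    return math.sqrt(sq / n)

scan1 = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (0, 0, 3)]
scan2 = [(5, 5, 5), (6, 5, 5), (5, 7, 5), (5, 5, 8)]  # scan1 shifted by (5,5,5)
print(rms_after_centroid_alignment(scan1, scan2))  # → 0.0 (pure translation)
```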

  11. Genetic algorithm dynamics on a rugged landscape

    NASA Astrophysics Data System (ADS)

    Bornholdt, Stefan

    1998-04-01

    The genetic algorithm is an optimization procedure motivated by biological evolution and is successfully applied to optimization problems in different areas. A statistical mechanics model for its dynamics is proposed based on the parent-child fitness correlation of the genetic operators, making it applicable to general fitness landscapes. It is compared to a recent model based on a maximum entropy ansatz. Finally it is applied to modeling the dynamics of a genetic algorithm on the rugged fitness landscape of the NK model.

  12. DNA from fecal immunochemical test can replace stool for detection of colonic lesions using a microbiota-based model.

    PubMed

    Baxter, Nielson T; Koumpouras, Charles C; Rogers, Mary A M; Ruffin, Mack T; Schloss, Patrick D

    2016-11-14

    There is a significant demand for colorectal cancer (CRC) screening methods that are noninvasive, inexpensive, and capable of accurately detecting early stage tumors. It has been shown that models based on the gut microbiota can complement the fecal occult blood test and fecal immunochemical test (FIT). However, a barrier to microbiota-based screening is the need to collect and store a patient's stool sample. Using stool samples collected from 404 patients, we tested whether the residual buffer containing resuspended feces in FIT cartridges could be used in place of intact stool samples. We found that the bacterial DNA isolated from FIT cartridges largely recapitulated the community structure and membership of patients' stool microbiota and that the abundance of bacteria associated with CRC were conserved. We also found that models for detecting CRC that were generated using bacterial abundances from FIT cartridges were equally predictive as models generated using bacterial abundances from stool. These findings demonstrate the potential for using residual buffer from FIT cartridges in place of stool for microbiota-based screening for CRC. This may reduce the need to collect and process separate stool samples and may facilitate combining FIT and microbiota-based biomarkers into a single test. Additionally, FIT cartridges could constitute a novel data source for studying the role of the microbiome in cancer and other diseases.

  13. Introduction: Occam’s Razor (SOT - Fit for Purpose workshop introduction)

    EPA Science Inventory

    Mathematical models provide important, reproducible, and transparent information for risk-based decision making. However, these models must be constructed to fit the needs of the problem to be solved. A “fit for purpose” model is an abstraction of a complicated problem that allow...

  14. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    PubMed Central

    2010-01-01

    Background The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/. PMID:20482791

  15. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.

    PubMed

    Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin

    2010-05-18

    The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
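    The regression idea behind the tool can be sketched simply: in a growth competition assay, log(mutant/wild-type) is roughly linear in time, and its slope estimates the net fitness difference, using all time points rather than just two. The actual tool adds measurement-error models and a dilution factor not shown in this hedged sketch, and the counts below are synthetic.

```python
import math

def fitness_slope(times, mutant, wildtype):
    """OLS slope of log(mutant/wild-type) against time."""
    y = [math.log(m / w) for m, w in zip(mutant, wildtype)]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(y) / n
    num = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

times = [0, 1, 2, 3, 4]                       # days
wildtype = [1000, 2000, 4000, 8000, 16000]    # doubles daily
mutant = [1000, 2400, 5760, 13824, 33177.6]   # grows 1.2x faster per day
print(round(fitness_slope(times, mutant, wildtype), 4))  # ln(1.2) ≈ 0.1823
```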

  16. A Comparison of Four Estimators of a Population Measure of Model Fit in Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Zhang, Wei

    2008-01-01

    A major issue in the utilization of covariance structure analysis is model fit evaluation. Recent years have witnessed increasing interest in various test statistics and so-called fit indexes, most of which are actually based on or closely related to F[subscript 0], a measure of model fit in the population. This study aims to provide a systematic…

  17. Transformation Model Choice in Nonlinear Regression Analysis of Fluorescence-based Serial Dilution Assays

    PubMed Central

    Fong, Youyi; Yu, Xuesong

    2016-01-01

    Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
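    For reference, one common parameterization of the 5PL mean function is sketched below; the paper's actual contribution, the pairing of this curve with transformation and variance-model choices, is not reproduced here.

```python
def five_pl(x, a, b, c, d, g):
    """5PL: approaches a as x -> 0 and d as x -> infinity (b, c, g > 0);
    g = 1 reduces this to the familiar 4PL curve."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# With g = 1, the curve sits exactly halfway between asymptotes at x = c.
print(five_pl(10.0, a=0.0, b=1.5, c=10.0, d=2.0, g=1.0))  # → 1.0
```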

  18. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507
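    The RMF construction itself is compact: each genotype σ gets fitness F(σ) = -c·d(σ, σ*) + η(σ), where d is the Hamming distance to a reference sequence and the η are i.i.d. random terms; c → 0 gives an uncorrelated MLM-like landscape, large c a smooth one. A small sketch that counts local fitness maxima at the two extremes:

```python
import itertools, random

def rmf_landscape(L, c, rng):
    """Rough Mount Fuji fitness over all binary genotypes of length L."""
    ref = (0,) * L
    return {g: -c * sum(gi != ri for gi, ri in zip(g, ref)) + rng.gauss(0, 1)
            for g in itertools.product((0, 1), repeat=L)}

def local_maxima(landscape, L):
    """Count genotypes fitter than all single-mutation neighbours."""
    count = 0
    for g, f in landscape.items():
        neighbours = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]
        if all(f > landscape[n] for n in neighbours):
            count += 1
    return count

rng = random.Random(1)
rugged = local_maxima(rmf_landscape(8, 0.0, rng), 8)   # uncorrelated limit
smooth = local_maxima(rmf_landscape(8, 50.0, rng), 8)  # slope dominates noise
print(rugged, smooth)  # smooth landscape has a single peak at the reference
```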

  19. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  20. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
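    The CPO computation needs only per-site likelihoods evaluated over posterior draws: the conditional predictive ordinate for site i is the harmonic mean of that site's likelihood across the sample, and LPML is the sum of the log CPOs. A minimal sketch with illustrative numbers:

```python
import math

def lpml(site_likelihoods):
    """site_likelihoods[i][s]: likelihood of site i under posterior draw s.
    Returns the log pseudomarginal likelihood, sum of log CPOs."""
    total = 0.0
    for liks in site_likelihoods:
        cpo = len(liks) / sum(1.0 / l for l in liks)  # harmonic mean over draws
        total += math.log(cpo)
    return total

# Two sites, three posterior draws each (illustrative likelihoods).
print(round(lpml([[0.2, 0.25, 0.4], [0.05, 0.05, 0.05]]), 4))
```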

  1. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will propagate into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of automatic fitting, which combines geostatistical principles with optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method has been used, and the best model is introduced to the user as the output. To check the capability of the proposed objective function and procedure, 3 case studies are presented.
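    The core loop of simulated-annealing variogram fitting can be sketched briefly: minimize the squared misfit between an experimental variogram and a spherical model by random perturbation with Metropolis acceptance. This is a hedged illustration only; the paper's MATLAB program also searches over model type, number of nested structures, and an improved weighted objective.

```python
import math, random

def spherical(h, c0, c, a):
    """Spherical variogram: nugget c0, partial sill c, range a."""
    if h >= a:
        return c0 + c
    r = h / a
    return c0 + c * (1.5 * r - 0.5 * r ** 3)

def objective(params, lags, gammas):
    return sum((spherical(h, *params) - g) ** 2 for h, g in zip(lags, gammas))

def anneal(lags, gammas, start, rng, steps=20000, t0=1.0):
    cur = list(start)
    cur_obj = objective(cur, lags, gammas)
    best, best_obj = list(cur), cur_obj
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9       # linear cooling schedule
        cand = [max(1e-6, p + rng.gauss(0, 0.05 * (abs(p) + 0.1))) for p in cur]
        cand_obj = objective(cand, lags, gammas)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if cand_obj < cur_obj or rng.random() < math.exp((cur_obj - cand_obj) / temp):
            cur, cur_obj = cand, cand_obj
            if cur_obj < best_obj:
                best, best_obj = list(cur), cur_obj
    return best, best_obj

# Experimental variogram generated from a known spherical model.
true = (0.1, 0.9, 50.0)
lags = [5, 10, 20, 30, 40, 50, 60, 80]
gammas = [spherical(h, *true) for h in lags]
rng = random.Random(0)
start = (0.5, 0.5, 30.0)
fit, fit_obj = anneal(lags, gammas, start, rng)
print([round(p, 2) for p in fit], round(fit_obj, 6))
```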

  2. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  3. Predicting future protection of respirator users: Statistical approaches and practical implications.

    PubMed

    Hu, Chengcheng; Harber, Philip; Su, Jing

    2016-01-01

    The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
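    The prediction step rests on a standard fact: if initial and future log fit factors are jointly normal (as under a linear mixed model), the future value given an observed initial result is again normal, with a conditional mean shrunk toward the population mean. The parameter values below are illustrative, not from the study.

```python
def conditional_normal(mu1, mu2, var1, var2, cov, observed1):
    """Mean and variance of X2 | X1 = observed1 for a bivariate normal."""
    mean = mu2 + (cov / var1) * (observed1 - mu1)
    var = var2 - cov ** 2 / var1
    return mean, var

# Log10 fit factors: same marginal at both sessions, correlation 0.6.
mu, v0 = 2.3, 0.09
cov = 0.6 * v0
mean, v = conditional_normal(mu, mu, v0, v0, cov, observed1=2.6)
print(round(mean, 3), round(v, 4))  # → 2.48 0.0576 (shrinks toward mu)
```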

  4. Predictive modeling of surimi cake shelf life at different storage temperatures

    NASA Astrophysics Data System (ADS)

    Wang, Yatong; Hou, Yanhua; Wang, Quanfu; Cui, Bingqing; Zhang, Xiangyu; Li, Xuepeng; Li, Yujin; Liu, Yuanping

    2017-04-01

An Arrhenius model for shelf-life prediction based on the TBARS index was established in this study. The results showed significant changes in AV, POV, COV and TBARS as temperature increased, and the reaction rate constant k was obtained from a first-order reaction kinetics model. A secondary model was then fitted based on the Arrhenius equation. TBARS gave the best fitting accuracy in both the primary and secondary model fits (R2 ≥ 0.95). A verification test indicated that the relative error between the shelf life predicted by the model and the actual value was within ±10%, suggesting the model can predict the shelf life of surimi cake.
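    The two-stage approach can be sketched directly: first-order kinetics gives a rate constant k at each storage temperature; regressing ln k on 1/T (the Arrhenius equation) then lets shelf life, the time for TBARS to reach its limit, be predicted at any temperature. The numbers below are illustrative, not the paper's data.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(temps_k, ks):
    """OLS of ln k on 1/T; returns (ln A, Ea) from ln k = ln A - Ea/(R*T)."""
    x = [1.0 / t for t in temps_k]
    y = [math.log(k) for k in ks]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar, -slope * R

def shelf_life(c0, c_limit, ln_a, ea, temp_k):
    """First-order growth of the index: t = ln(c_limit / c0) / k(T)."""
    k = math.exp(ln_a - ea / (R * temp_k))
    return math.log(c_limit / c0) / k

# Rate constants generated from known ln A = 20 and Ea = 60 kJ/mol.
temps = [277.0, 288.0, 298.0]
ks = [math.exp(20 - 60000 / (R * t)) for t in temps]
ln_a, ea = arrhenius_fit(temps, ks)
print(round(ln_a, 3), round(ea))                      # recovers ~20 and ~60000
print(round(shelf_life(1.0, 2.0, ln_a, ea, 277.0), 1))  # time for index to double
```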

  5. Blueprint XAS: a Matlab-based toolbox for the fitting and analysis of XAS spectra.

    PubMed

    Delgado-Jaime, Mario Ulises; Mewis, Craig Philip; Kennepohl, Pierre

    2010-01-01

    Blueprint XAS is a new Matlab-based program developed to fit and analyse X-ray absorption spectroscopy (XAS) data, most specifically in the near-edge region of the spectrum. The program is based on a methodology that introduces a novel background model into the complete fit model and that is capable of generating any number of independent fits with minimal introduction of user bias [Delgado-Jaime & Kennepohl (2010), J. Synchrotron Rad. 17, 119-128]. The functions and settings on the five panels of its graphical user interface are designed to suit the needs of near-edge XAS data analyzers. A batch function allows for the setting of multiple jobs to be run with Matlab in the background. A unique statistics panel allows the user to analyse a family of independent fits, to evaluate fit models and to draw statistically supported conclusions. The version introduced here (v0.2) is currently a toolbox for Matlab. Future stand-alone versions of the program will also incorporate several other new features to create a full package of tools for XAS data processing.

  6. Model-based analysis of multi-shell diffusion MR data for tractography: How to get over fitting problems

    PubMed Central

    Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ

    2012-01-01

    In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
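    The appeal of a Gamma distribution of diffusivities is that the averaged signal has a closed form: the mean of exp(-b·d) over d ~ Gamma(shape k, scale θ) is (1 + b·θ)^(-k), which decays more slowly at high b-values than a mono-exponential with the same mean diffusivity k·θ, exactly the non-mono-exponential behaviour described above. A minimal comparison:

```python
import math

def gamma_signal(b, k, theta):
    """Mean of exp(-b*d) over d ~ Gamma(k, theta): the Laplace transform."""
    return (1.0 + b * theta) ** (-k)

def mono_signal(b, d):
    return math.exp(-b * d)

k, theta = 4.0, 0.25            # mean diffusivity k * theta = 1.0 (arb. units)
for b in (0.0, 1.0, 3.0):
    print(b, round(gamma_signal(b, k, theta), 4), round(mono_signal(b, k * theta), 4))
```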

  7. Analytical fitting model for rough-surface BRDF.

    PubMed

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  8. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  9. The l z ( p ) * Person-Fit Statistic in an Unfolding Model Context.

    PubMed

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.
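    The l_z(p)* statistic itself is defined under the graded unfolding model, but the underlying idea is the classical standardized log-likelihood statistic l_z for dichotomous items, which is easy to sketch (the item probabilities below are hypothetical):

```python
import numpy as np

def lz_person_fit(responses, p):
    # Standardized log-likelihood person-fit statistic l_z (Drasgow-style):
    # (observed log-likelihood - its expectation) / its standard deviation.
    responses = np.asarray(responses, float)
    p = np.asarray(p, float)
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)

# Model-implied success probabilities for seven items, easy to hard.
p = np.array([0.9, 0.8, 0.7, 0.5, 0.3, 0.2, 0.1])
lz_typical = lz_person_fit([1, 1, 1, 1, 0, 0, 0], p)   # Guttman-consistent
lz_aberrant = lz_person_fit([0, 0, 0, 0, 1, 1, 1], p)  # reversed pattern
print(f"consistent pattern: lz = {lz_typical:+.2f}")
print(f"aberrant pattern:   lz = {lz_aberrant:+.2f}")
```

    Large negative values flag response patterns that are unlikely under the fitted model.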

  10. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data.

    PubMed

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model (e.g., a Gaussian or another bell-shaped curve) to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to reliably determine the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs of concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches. PMID:26785378
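    The contrast between fit-based and model-free estimates of a tuning-curve feature can be sketched in a few lines: fit a bell-shaped (circular Gaussian) curve for the preferred direction, then compare it with the response-weighted circular mean of the raw data, which assumes no functional form. The firing rates below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical direction tuning: mean firing rates at 8 motion directions.
directions = np.arange(0, 360, 45)                       # degrees
rates = np.array([5., 8., 20., 42., 55., 38., 18., 7.])  # peak near 180 deg

def tuning(d, baseline, amp, pref, width):
    # Circular-Gaussian (von Mises-like) tuning curve.
    delta = np.deg2rad(d - pref)
    return baseline + amp * np.exp((np.cos(delta) - 1) / width ** 2)

popt, _ = curve_fit(tuning, directions, rates, p0=[5., 50., 170., 1.])
pref_fit = popt[2] % 360

# Model-free alternative: response-weighted circular mean, no curve assumed.
z = np.sum(rates * np.exp(1j * np.deg2rad(directions)))
pref_free = np.rad2deg(np.angle(z)) % 360

print(f"fit-based preferred direction:  {pref_fit:.1f} deg")
print(f"model-free preferred direction: {pref_free:.1f} deg")
```

    When a good fit exists the two estimates agree closely; the circular mean remains usable for irregular tuning shapes where no bell-shaped model fits.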

  12. The Nonstationary Dynamics of Fitness Distributions: Asexual Model with Epistasis and Standing Variation

    PubMed Central

    Martin, Guillaume; Roques, Lionel

    2016-01-01

    Various models describe asexual evolution by mutation, selection, and drift. Some focus directly on fitness, typically modeling drift but ignoring or simplifying both epistasis and the distribution of mutation effects (traveling wave models). Others follow the dynamics of quantitative traits determining fitness (Fisher’s geometric model), imposing a complex but fixed form of mutation effects and epistasis, and often ignoring drift. In all cases, predictions are typically obtained in high or low mutation rate limits and for long-term stationary regimes, thus losing information on transient behaviors and the effect of initial conditions. Here, we connect fitness-based and trait-based models into a single framework, and seek explicit solutions even away from stationarity. The expected fitness distribution is followed over time via its cumulant generating function, using a deterministic approximation that neglects drift. In several cases, explicit trajectories for the full fitness distribution are obtained for arbitrary mutation rates and standing variance. For nonepistatic mutations, especially with beneficial mutations, this approximation fails over the long term but captures the early dynamics, thus complementing stationary stochastic predictions. The approximation also handles several diminishing-returns epistasis models (e.g., with an optimal genotype); it can be applied at and away from equilibrium. General results arise at equilibrium, where fitness distributions display a “phase transition” with mutation rate. Beyond this phase transition, in Fisher’s geometric model, the full trajectory of fitness and trait distributions takes a simple form, robust to the details of the mutant phenotype distribution. Analytical arguments are explored regarding why and when the deterministic approximation applies. PMID:27770037

  13. Dark matter and MOND dynamical models of the massive spiral galaxy NGC 2841

    NASA Astrophysics Data System (ADS)

    Samurović, S.; Vudragović, A.; Jovanović, M.

    2015-08-01

    We study dynamical models of the massive spiral galaxy NGC 2841 using both Newtonian models with Navarro-Frenk-White (NFW) and isothermal dark haloes, and various MOND (MOdified Newtonian Dynamics) models. We use observations from several publicly available databases: radio data, near-infrared photometry and spectroscopic observations. In our models, we find that both tested Newtonian dark matter approaches can successfully fit the observed rotational curve of NGC 2841. The three tested MOND models (standard, simple and, for the first time applied to a spiral galaxy other than the Milky Way, Bekenstein's toy model) provide fits of the observed rotational curve with various degrees of success: the best result was obtained with the standard MOND model. For both approaches, Newtonian and MOND, the values of the mass-to-light ratios of the bulge are consistent with the predictions from stellar population synthesis (SPS) based on the Salpeter initial mass function (IMF). Also, for the Newtonian and the simple and standard MOND models, the estimated stellar mass-to-light ratios of the disc agree with the predictions from the SPS models based on the Kroupa IMF, whereas the toy MOND model provides too low a value of the stellar mass-to-light ratio, incompatible with the predictions of the tested SPS models. In all our MOND models, we vary the distance to NGC 2841: the best-fitting standard and toy models use values higher than the Cepheid-based distance to the galaxy, whereas the best-fitting simple MOND model is based on a lower value. The best-fitting NFW model is inconsistent with the predictions of the Λ cold dark matter cosmology, because the inferred concentration index is too high for the established virial mass.
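    As an illustration of the Newtonian side of such fits, the NFW halo has an analytic circular-velocity profile that can be fitted directly to a rotation curve. A sketch with hypothetical data points (not the NGC 2841 measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def v_nfw(r, v0, rs):
    # NFW halo: rho(r) = rho0 / [(r/rs)(1 + r/rs)^2] gives
    # V^2(r) = V0^2 * [ln(1+x) - x/(1+x)] / x with x = r/rs,
    # where V0^2 = 4*pi*G*rho0*rs^2 is absorbed into one amplitude parameter.
    x = r / rs
    return v0 * np.sqrt((np.log(1 + x) - x / (1 + x)) / x)

# Hypothetical rotation curve (kpc, km/s), roughly flat at large radii.
r = np.array([2., 5., 10., 15., 20., 30., 40., 50.])
v = np.array([180., 250., 290., 300., 305., 300., 295., 290.])

(v0, rs), _ = curve_fit(v_nfw, r, v, p0=[400., 20.],
                        bounds=([0.0, 0.1], [2000.0, 200.0]))
print(f"best-fit amplitude V0 = {v0:.0f} km/s, scale radius rs = {rs:.1f} kpc")
```

    In a real analysis the stellar disc and bulge contributions (scaled by their mass-to-light ratios) would be added in quadrature before comparing with the observed curve.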

  14. A mathematical description of the inclusive fitness theory.

    PubMed

    Wakano, Joe Yuichiro; Ohtsuki, Hisashi; Kobayashi, Yutaka

    2013-03-01

    Recent developments in the inclusive fitness theory have revealed that the direction of evolution can be analytically predicted in a wider class of models than previously thought, such as those models dealing with network structure. This paper aims to provide a mathematical description of the inclusive fitness theory. Specifically, we provide a general framework based on a Markov chain that can implement basic models of inclusive fitness. Our framework is based on the probability distribution of the "offspring-to-parent map", from which the key concepts of the theory, such as the fitness function, relatedness and inclusive fitness, are derived in a straightforward manner. We prove theorems showing that inclusive fitness always provides a correct prediction as to which of two competing genes appears more frequently in the long run in the Markov chain. As an application of the theorems, we prove a general formula for the optimal dispersal rate in Wright's island model with recurrent mutations. We also show the existence of a critical mutation rate, which does not depend on the number of islands and below which a positive dispersal rate evolves. Our framework can also be applied to lattice- or network-structured populations. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    1997-01-01

    A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
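    The majorization idea can be sketched concretely for a weighted low-rank fit, where the ordinary-least-squares step is a truncated SVD: each iteration blends the data with the current model according to the (rescaled) weights and then refits by plain OLS, which provably never increases the weighted loss. Data and weights below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))                 # data matrix
W = rng.uniform(0.2, 1.0, size=X.shape)      # elementwise weights
rank = 2

def truncated_svd(A, r):
    # Ordinary least-squares rank-r fit (Eckart-Young).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

M = truncated_svd(X, rank)                   # unweighted solution as start
w = W / W.max()                              # weights rescaled into (0, 1]
for _ in range(200):
    Y = w * X + (1 - w) * M                  # majorization: blend data/model
    M = truncated_svd(Y, rank)               # ordinary (unweighted) OLS step

wls_loss = np.sum(W * (X - M) ** 2)
print(f"weighted loss after majorizing iterations: {wls_loss:.3f}")
```

    The appeal of the scheme is that only an *unweighted* solver is ever needed, so any existing OLS algorithm for the model can be reused unchanged.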

  16. [Fitting of the reconstructed craniofacial hard and soft tissues based on 2-D digital radiographs].

    PubMed

    Feng, Yao-Pu; Qiao, Min; Zhou, Hong; Zhang, Yan-Ning; Si, Xin-Qin

    2017-02-01

    In this study, we reconstructed the craniofacial hard and soft tissues based on data from digital cephalometric radiographs and laser scanning. Effective fitting of the craniofacial hard and soft tissues was performed in order to raise the level of orthognathic diagnosis and treatment and to promote communication between doctors and patients. A small lead point was placed on the face of a volunteer, and frontal and lateral digital cephalometric radiographs were taken. A 3-D reconstruction system for craniofacial hard tissue based on 2-D digital radiographs was used to obtain the hard-tissue model by means of hard-tissue deformation modeling. A 3-D model of the facial soft tissue was obtained from the laser-scanning data. By matching the lead-point coordinates, the hard and soft tissues were fitted. The rebuilt 3-D model of the craniofacial hard and soft tissues reflects the real craniofacial tissue structure, and effective fitting of the two was realized, which lays a foundation for further orthognathic simulation and facial appearance prediction. The fitting result is reliable and could be used in clinical practice.
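    The landmark-matching step can be illustrated with the standard rigid-alignment (Kabsch) procedure, which finds the rotation and translation that superimpose corresponding marker coordinates; the landmark sets below are hypothetical, not the study's data:

```python
import numpy as np

def kabsch_align(P, Q):
    # Return R, t minimizing ||P @ R.T + t - Q|| over proper rotations R.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - cP @ R.T
    return R, t

# Hypothetical landmarks: Q is P rotated 30 degrees about z and translated.
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.],
                   [np.sin(ang),  np.cos(ang), 0.],
                   [0., 0., 1.]])
P = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
              [0., 0., 10.], [5., 5., 1.]])
Q = P @ R_true.T + np.array([1.0, -2.0, 3.0])

R, t = kabsch_align(P, Q)
rmsd = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
print(f"post-alignment RMSD: {rmsd:.2e}")
```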

  17. Peplau's Theory of Interpersonal Relations: An Alternate Factor Structure for Patient Experience Data?

    PubMed

    Hagerty, Thomas A; Samuels, William; Norcini-Pala, Andrea; Gigliotti, Eileen

    2017-04-01

    A confirmatory factor analysis of data from the responses of 12,436 patients to 16 items on the Consumer Assessment of Healthcare Providers and Systems-Hospital survey was used to test a latent factor structure based on Peplau's middle-range theory of interpersonal relations. A two-factor model based on Peplau's theory fit these data well, whereas a three-factor model also based on Peplau's theory fit them excellently and provided a suitable alternate factor structure for the data. Though neither the two- nor three-factor model fit as well as the original factor structure, these results support using Peplau's theory to demonstrate nursing's extensive contribution to the experiences of hospitalized patients.

  18. Genomic Prediction Accounting for Residual Heteroskedasticity

    PubMed Central

    Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.

    2015-01-01

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950

  19. FragFit: a web-application for interactive modeling of protein segments into cryo-EM density maps.

    PubMed

    Tiemann, Johanna K S; Rose, Alexander S; Ismer, Jochen; Darvish, Mitra D; Hilal, Tarek; Spahn, Christian M T; Hildebrand, Peter W

    2018-05-21

    Cryo-electron microscopy (cryo-EM) is a standard method to determine the three-dimensional structures of molecular complexes. However, easy-to-use tools for modeling protein segments into cryo-EM maps are sparse. Here, we present the FragFit web-application, a web server for interactive modeling of segments of up to 35 amino acids in length into cryo-EM density maps. The fragments are provided by a regularly updated database, currently containing about 1 billion entries extracted from PDB structures, and can be readily integrated into a protein structure. Fragments are selected based on geometric criteria, sequence similarity and fit into a given cryo-EM density map. Web-based molecular visualization with the NGL Viewer allows interactive selection of fragments. The FragFit web-application, accessible at http://proteinformatics.de/FragFit, is free and open to all users, without any login requirements.

  20. A modified active appearance model based on an adaptive artificial bee colony.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, fitting the model to an original image remains a challenging task. The state of the art shows that optimization methods can resolve this problem, although applying optimization introduces difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of the AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of accuracy of face recognition.
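    The paper's adaptive variant and the AAM cost function are not reproduced here, but the baseline artificial bee colony (ABC) search it builds on can be sketched on a toy objective: employed bees perturb candidate solutions, onlookers reinforce promising ones, and scouts re-seed sources that stop improving:

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x):
    return np.sum(x ** 2)            # toy cost; minimum 0 at the origin

dim, n_food, limit, cycles = 4, 15, 20, 300
lo, hi = -5.0, 5.0
food = rng.uniform(lo, hi, size=(n_food, dim))
cost = np.array([objective(f) for f in food])
trials = np.zeros(n_food, int)

def try_neighbor(i):
    # Perturb one coordinate of source i relative to a random other source.
    k = rng.integers(n_food - 1)
    k = k if k < i else k + 1
    j = rng.integers(dim)
    cand = food[i].copy()
    cand[j] += rng.uniform(-1, 1) * (food[i][j] - food[k][j])
    cand = np.clip(cand, lo, hi)
    c = objective(cand)
    if c < cost[i]:
        food[i], cost[i], trials[i] = cand, c, 0   # greedy replacement
    else:
        trials[i] += 1

best = cost.min()
for _ in range(cycles):
    for i in range(n_food):                        # employed-bee phase
        try_neighbor(i)
    prob = 1.0 / (1.0 + cost)                      # onlooker phase:
    for i in rng.choice(n_food, size=n_food, p=prob / prob.sum()):
        try_neighbor(i)                            # favor low-cost sources
    for i in np.where(trials > limit)[0]:          # scout phase
        food[i] = rng.uniform(lo, hi, size=dim)
        cost[i], trials[i] = objective(food[i]), 0
    best = min(best, cost.min())

print(f"best objective found: {best:.6f}")
```

    In AAM fitting, `objective` would be the appearance-reconstruction error as a function of the model parameters rather than this toy sphere function.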

  1. Mechanisms of complex network growth: Synthesis of the preferential attachment and fitness models

    NASA Astrophysics Data System (ADS)

    Golosovsky, Michael

    2018-06-01

    We analyze growth mechanisms of complex networks and focus on their validation by measurements. To this end we consider the equation ΔK = A(t)(K + K0)Δt, where K is the node's degree, ΔK is its increment, A(t) is the aging constant, and K0 is the initial attractivity. This equation has been commonly used to validate the preferential attachment mechanism. We show that this equation is undiscriminating and holds for the fitness model [Caldarelli et al., Phys. Rev. Lett. 89, 258702 (2002), 10.1103/PhysRevLett.89.258702] as well. In other words, the accepted method of validating the microscopic mechanism of network growth does not discriminate between "rich-gets-richer" and "good-gets-richer" scenarios. This means that the growth mechanism of many natural complex networks can be based on the fitness model rather than on preferential attachment, contrary to what has been believed so far. The fitness model yields the long-sought explanation for the initial attractivity K0, an elusive parameter which was left unexplained within the framework of the preferential attachment model. We show that the initial attractivity is determined by the width of the fitness distribution. We also present a network growth model based on recursive search with memory and show that this model contains both the preferential attachment and the fitness models as extreme cases.
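    The validation procedure this equation describes can be reproduced in simulation: grow a network with attachment probability proportional to K + K0, record how often nodes of each degree class receive new edges, and fit the linear kernel back, recovering K0 as intercept/slope. A rough sketch with parameters kept small for speed:

```python
import numpy as np

rng = np.random.default_rng(1)
m, K0_true = 3, 1.0                       # edges per new node, attractivity
degrees = [m] * (m + 1)                   # small fully connected seed

picked = []        # degree of a node at the moment it receives a new edge
exposure = {}      # node-steps spent at each degree (after burn-in)

for step in range(4000):
    w = np.asarray(degrees, float) + K0_true
    chosen = rng.choice(len(degrees), size=m, replace=False, p=w / w.sum())
    if step >= 1000:                      # skip the early transient
        for k, c in zip(*np.unique(degrees, return_counts=True)):
            exposure[k] = exposure.get(k, 0) + c
        picked.extend(degrees[c] for c in chosen)
    for c in chosen:
        degrees[c] += 1
    degrees.append(m)

# Empirical attachment rate per degree class: picks/exposure ~ A*(K + K0).
ks = np.array([k for k in sorted(exposure) if exposure[k] >= 20000 and k <= 12])
rate = np.array([sum(1 for p in picked if p == k) / exposure[k] for k in ks])
A, intercept = np.polyfit(ks, rate, 1)    # rate = A*K + A*K0
print(f"estimated K0 = {intercept / A:.2f} (simulated with K0 = {K0_true})")
```

    The article's point is precisely that a fitness-model simulation, analyzed the same way, would produce an equally linear kernel, so this measurement alone cannot distinguish the two mechanisms.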

  2. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    PubMed Central

    Weiss, Brandi A.; Dardick, William

    2015-01-01

    This article introduces an entropy-based measure of data–model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data–model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data–model fit to assess how well logistic regression models classify cases into observed categories. PMID:29795897
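    The article's exact estimator is not reproduced here, but the core idea (per-case Shannon entropy of the predicted probabilities as a measure of classification fuzziness) can be sketched directly; the probability vectors below are hypothetical:

```python
import numpy as np

def mean_entropy(p):
    # Average binary Shannon entropy (bits) of predicted probabilities:
    # 0 when every case is classified with certainty, 1 when all sit at .5.
    p = np.clip(np.asarray(p, float), 1e-12, 1 - 1e-12)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return h.mean()

confident = [0.02, 0.97, 0.01, 0.99, 0.95]   # well-separated groups
fuzzy = [0.45, 0.55, 0.50, 0.48, 0.52]       # poorly separated groups

ci_confident = 1 - mean_entropy(confident)   # certainty/separation index
ci_fuzzy = 1 - mean_entropy(fuzzy)
print(f"confident model: index = {ci_confident:.3f}")
print(f"fuzzy model:     index = {ci_fuzzy:.3f}")
```

    A model can have acceptable likelihood-based fit yet a low index, which is the kind of complementary information the article argues entropy contributes.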

  4. Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Lee, Taehun; Cai, Li

    2012-01-01

    Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…

  5. Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Bonta, Dacian Viorel

    Modeling of volume effects for treatment toxicity is paramount for the optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and pathological bases for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The model parameters were fit by maximum likelihood estimation. Within the framework of the maximum likelihood approach, I estimated the parameter uncertainties for each complication prediction model. The quality of fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicity. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance of the fitted parameter values. Computer simulation was also used to estimate the population size that keeps the uncertainty in the predicted complication probability to a specified level (3%), and the same method was used to estimate the size of the patient population needed for an accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 patients (for two-parameter models) or 500 patients (for three-parameter models) are needed for an accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated within a patient, although the correlation coefficient is rather small.
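    The model-selection machinery described above (maximum-likelihood fits of competing dose-response models compared by an information criterion) can be sketched as follows; the doses and outcomes are synthetic, not the thesis data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic complication data drawn from a logistic dose-response.
rng = np.random.default_rng(7)
dose = rng.uniform(40.0, 80.0, size=300)                 # Gy
p_true = 1.0 / (1.0 + np.exp(-(dose - 65.0) / 4.0))
tox = (rng.uniform(size=dose.shape) < p_true).astype(float)

def neg_loglik(params, link):
    # Binomial negative log-likelihood for a two-parameter response model.
    d50, scale = params
    p = np.clip(link((dose - d50) / scale), 1e-9, 1 - 1e-9)
    return -np.sum(tox * np.log(p) + (1 - tox) * np.log(1 - p))

results = {}
for name, link in [("logistic", lambda z: 1 / (1 + np.exp(-z))),
                   ("probit", norm.cdf)]:
    res = minimize(neg_loglik, x0=[60.0, 5.0], args=(link,),
                   method="Nelder-Mead")
    results[name] = {"d50": res.x[0],
                     "aic": 2 * 2 + 2 * res.fun}         # AIC = 2k - 2 logL
    print(f"{name}: D50 = {res.x[0]:.1f} Gy, AIC = {results[name]['aic']:.1f}")
```

    Bootstrap bias/variance estimates follow the same pattern: resample (dose, tox) pairs with replacement and repeat the fit on each resample.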

  6. The effects of changes in physical fitness on academic performance among New York City youth.

    PubMed

    Bezold, Carla P; Konty, Kevin J; Day, Sophia E; Berger, Magdalena; Harr, Lindsey; Larkin, Michael; Napier, Melanie D; Nonas, Cathy; Saha, Subir; Harris, Tiffany G; Stark, James H

    2014-12-01

    To evaluate whether a change in fitness is associated with academic outcomes in New York City (NYC) middle-school students using longitudinal data and to evaluate whether this relationship is modified by student household poverty. This was a longitudinal study of 83,111 New York City middle-school students enrolled between 2006-2007 and 2011-2012. Fitness was measured as a composite percentile based on three fitness tests and categorized based on change from the previous year. The effect of the fitness change level on academic outcomes, measured as a composite percentile based on state standardized mathematics and English Language Arts test scores, was estimated using a multilevel growth model. Models were stratified by sex, and additional models were tested stratified by student household poverty. For both girls and boys, a substantial increase in fitness from the previous year resulted in a greater improvement in academic ranking than was seen in the reference group (girls: .36 greater percentile point improvement, 95% confidence interval: .09-.63; boys: .38 greater percentile point improvement, 95% confidence interval: .09-.66). A substantial decrease in fitness was associated with a decrease in academics in both boys and girls. Effects of fitness on academics were stronger in high-poverty boys and girls than in low-poverty boys and girls. Academic rankings improved for boys and girls who increased their fitness level by >20 percentile points compared to other students. Opportunities for increased physical fitness may be important to support academic performance. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  7. A new region-edge based level set model with applications to image segmentation

    NASA Astrophysics Data System (ADS)

    Zhi, Xuhao; Shen, Hong-Bin

    2018-04-01

    The level set model has advantages in handling complex shapes and topological changes and is widely used in image-processing tasks. Level set models for image segmentation can be grouped into region-based and edge-based models, both of which have merits and drawbacks. A region-based level set model relies on fitting the color intensity of separated regions but is not sensitive to edge information; an edge-based level set model evolves by fitting local gradient information but is easily affected by noise. We propose a region-edge based level set model, which incorporates saliency information into the energy function and fuses color intensity with local gradient information. The evolution of the proposed model is implemented by a hierarchical two-stage protocol, and the experimental results show flexible initialization, robust evolution and precise segmentation.

  8. A CAD System for Evaluating Footwear Fit

    NASA Astrophysics Data System (ADS)

    Savadkoohi, Bita Ture; de Amicis, Raffaele

    With the great growth in footwear demand, the footwear manufacturing industry must, to achieve commercial success, be able to provide footwear that fulfills consumers' requirements better than its competitors. An accurate fit is an important factor in shoe comfort and functionality. Footwear fitting has long relied on manual measurement, but the development of 3D acquisition devices and the advent of powerful 3D visualization and modeling techniques for automatically analyzing, searching and interpreting models have now made automatic determination of different foot dimensions feasible. In this paper, we propose an approach for finding footwear fit within a shoe-last database. We first properly align the 3D models using "weighted" principal component analysis (WPCA). After solving the alignment problem, we use an efficient algorithm for cutting the 3D model in order to find the footwear fit from the shoe-last database.
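    The alignment step can be sketched as follows: principal axes are computed from a weighted covariance of the vertices and the model is rotated into that frame, giving every scan a canonical pose. The point cloud and weights below are synthetic, not shoe-last data:

```python
import numpy as np

def wpca_align(points, weights):
    # Weighted PCA: weighted centroid and covariance define principal axes;
    # projecting onto them puts the model in a canonical pose.
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    centered = points - centroid
    cov = (w[:, None] * centered).T @ centered
    eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
    return centered @ eigvec[:, ::-1]           # largest-variance axis first

# Synthetic elongated cloud (stand-in for a scanned last), rotated about z.
rng = np.random.default_rng(3)
cloud = rng.normal(size=(500, 3)) * np.array([8.0, 2.0, 0.5])
ang = np.deg2rad(40)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.],
               [np.sin(ang),  np.cos(ang), 0.],
               [0., 0., 1.]])
aligned = wpca_align(cloud @ Rz.T, np.ones(len(cloud)))

var_aligned = np.var(aligned, axis=0)           # variance per recovered axis
print(var_aligned)
```

    With uniform weights this reduces to ordinary PCA; non-uniform weights let noisy or unreliable vertices contribute less to the recovered axes.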

  9. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
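    The broken-line fitting strategy, including the grid of starting values and an information-criterion comparison, can be sketched as follows. The ratios and G:F values are hypothetical (not the study's data), there are no random effects here, and the BIC uses the Gaussian concentrated likelihood:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Trp:Lys ratios (%) and G:F responses on a linear-plateau.
x = np.array([14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0])
y = np.array([0.55, 0.58, 0.61, 0.64, 0.67, 0.67, 0.67, 0.67])

def bll(x, plateau, slope, bp):
    # Broken-line linear ascending: rises until the breakpoint bp, then flat.
    return plateau - slope * np.maximum(bp - x, 0.0)

def qp(x, a, b, c):
    # Quadratic polynomial competitor.
    return a + b * x + c * x ** 2

def bic(resid, k):
    # Gaussian concentrated likelihood: BIC = k*ln(n) + n*ln(RSS/n) + const.
    n = len(resid)
    rss = max(np.sum(resid ** 2), 1e-12)
    return k * np.log(n) + n * np.log(rss / n)

# Grid search over breakpoint starting values to dodge poor local optima.
best = None
for bp0 in np.arange(15.0, 18.0, 0.5):
    try:
        p, _ = curve_fit(bll, x, y, p0=[0.67, 0.05, bp0])
        rss = np.sum((bll(x, *p) - y) ** 2)
        if best is None or rss < best[1]:
            best = (p, rss)
    except RuntimeError:
        pass

p_bll, rss_bll = best
p_qp, _ = curve_fit(qp, x, y)
bic_bll = bic(bll(x, *p_bll) - y, 3)
bic_qp = bic(qp(x, *p_qp) - y, 3)
print(f"BLL breakpoint: {p_bll[2]:.2f}; BIC BLL = {bic_bll:.1f}, BIC QP = {bic_qp:.1f}")
```

    The lower BIC of the broken-line model on these plateau-shaped data mirrors the ordering reported in the abstract, although the real analysis was done with mixed models (GLIMMIX/NLMIXED) and heteroskedastic residuals.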

  10. A Person Fit Test for IRT Models for Polytomous Items

    ERIC Educational Resources Information Center

    Glas, C. A. W.; Dagohoy, Anna Villa T.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…

  11. Reverse engineering the gap gene network of Drosophila melanogaster.

    PubMed

    Perkins, Theodore J; Jaeger, Johannes; Reinitz, John; Glass, Leon

    2006-05-01

    A fundamental problem in functional genomics is to determine the structure and dynamics of genetic networks based on expression data. We describe a new strategy for solving this problem and apply it to recently published data on early Drosophila melanogaster development. Our method is orders of magnitude faster than current fitting methods and allows us to fit different types of rules for expressing regulatory relationships. Specifically, we use our approach to fit models using a smooth nonlinear formalism for modeling gene regulation (gene circuits) as well as models using logical rules based on activation and repression thresholds for transcription factors. Our technique also allows us to infer regulatory relationships de novo or to test network structures suggested by the literature. We fit a series of models to test several outstanding questions about gap gene regulation, including regulation of and by hunchback and the role of autoactivation. Based on our modeling results and validation against the experimental literature, we propose a revised network structure for the gap gene system. Interestingly, some relationships in standard textbook models of gap gene regulation appear to be unnecessary for or even inconsistent with the details of gap gene expression during wild-type development.

  12. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  13. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
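    The idea of model-based information modulating a model-free temporal-difference update can be conveyed with a toy two-stage task. The code below is a loose sketch under invented reward probabilities and parameter values; the weight `w` standing in for the eligibility adjustment is a hypothetical name, not the authors' implementation.

```python
import numpy as np

# Toy two-stage task: stage-1 action a leads to second-stage state s2
# (common transition 70%), which pays reward with a state-specific probability.
# The weight w scales how strongly the stage-2 prediction error is passed
# back to the stage-1 value, mimicking an adjustable eligibility trace.
rng = np.random.default_rng(0)
alpha, w = 0.1, 0.7        # learning rate; eligibility-adjustment weight
q1 = np.zeros(2)           # stage-1 action values
q2 = np.zeros(2)           # second-stage state values

for trial in range(300):
    a = int(q1[1] > q1[0]) if rng.random() > 0.1 else int(rng.integers(2))
    s2 = a if rng.random() < 0.7 else 1 - a          # common vs rare transition
    r = float(rng.random() < (0.8 if s2 == 0 else 0.2))
    delta2 = r - q2[s2]                              # stage-2 prediction error
    q2[s2] += alpha * delta2
    delta1 = q2[s2] - q1[a]                          # stage-1 prediction error
    q1[a] += alpha * (delta1 + w * delta2)           # trace-weighted credit

print("stage-1 values:", q1, "stage-2 values:", q2)
```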

  14. Genomic Prediction Accounting for Residual Heteroskedasticity.

    PubMed

    Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M

    2015-11-12

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.

  15. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

    Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fit parameter estimates (e.g. the power law slope or exponent). However, the predictive capacity of power-law-based upscaling models can be questioned because of the difficulty of linking model parameters to the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitted parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4). The PL exponent tends to lower values as the tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution).
The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need for any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL fails to reproduce the ensemble of BTCs at late time, while the LOG model provides results consistent with the PLCO model, though without a clear mechanistic link between physical properties and model parameters. It is concluded that, while all parametric models may work equally well (or equally poorly) for the empirical fitting of the experimental BTC tails due to the effects of subsampling, for predictive purposes this is not true. A careful selection of the proper heavily tailed models and corresponding parameters is required to ensure physically-based transport predictions.
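    The competing tail models can be compared on synthetic data in a few lines. The sketch below fits a pure power law (PL) and a power law with exponential cutoff (PLCO) to an artificial heavy-tailed late-time curve; the functional forms and the generated data are generic illustrations, not the paper's simulations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic late-time tail: power law with a known exponential cutoff
t = np.logspace(0, 3, 60)
c_obs = t ** -1.0 * np.exp(-2e-3 * t)
c_obs *= 1 + 0.05 * np.random.default_rng(1).standard_normal(t.size)

def pl(t, a, alpha):
    # Pure power law: the cutoff gets absorbed into a steeper slope
    return a * t ** -alpha

def plco(t, a, alpha, lam):
    # Power law with exponential cutoff rate lam
    return a * t ** -alpha * np.exp(-lam * t)

p_pl, _ = curve_fit(pl, t, c_obs, p0=[1.0, 1.5])
p_plco, _ = curve_fit(plco, t, c_obs, p0=[1.0, 1.0, 1e-3])

print("PL slope  :", p_pl[1])
print("PLCO slope:", p_plco[1], "cutoff rate:", p_plco[2])
```

    With the cutoff modelled explicitly, the recovered slope stays near the generating exponent while λ carries the tail persistence, mirroring the αCO ≈ 1 result described above.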

  16. How to constrain multi-objective calibrations of the SWAT model using water balance components

    USDA-ARS?s Scientific Manuscript database

    Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...

  17. Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations

    PubMed Central

    Nigh, Gordon

    2015-01-01

    Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for its good management. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472

  18. Examining the dimensional structure models of secondary traumatic stress based on DSM-5 symptoms.

    PubMed

    Mordeno, Imelu G; Go, Geraldine P; Yangson-Serondo, April

    2017-02-01

    The latent factor structure of Secondary Traumatic Stress (STS) has previously been examined using the Posttraumatic Stress Disorder (PTSD) nomenclature of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV). With the advent of DSM-5, there is a pressing need to reexamine STS using DSM-5 symptoms in light of the most up-to-date PTSD models in the literature. The study investigated and determined the best-fitting PTSD models using DSM-5 PTSD criteria symptoms. Confirmatory factor analysis (CFA) was conducted to examine model fit using the Secondary Traumatic Stress Scale in 241 registered and practicing Filipino nurses (166 females and 75 males) who worked in the Philippines and gave direct nursing services to patients. Based on multiple fit indices, the results showed that the 7-factor hybrid model, comprising intrusion, avoidance, negative affect, anhedonia, externalizing behavior, anxious arousal, and dysphoric arousal factors, has an excellent fit to STS. This model asserts that: (1) the hyperarousal criterion needs to be divided into anxious and dysphoric arousal factors; (2) symptoms characterizing negative and positive affect need to be separated into two distinct factors; and (3) a new factor would categorize externalized, self-initiated impulse- and control-deficit behaviors. Comparison of nested and non-nested models showed the hybrid model to have superior fit over the other models. The specificity of the symptom structure of STS based on DSM-5 PTSD criteria suggests developing more specific interventions addressing the more elaborate symptom groupings that would alleviate the condition of nurses exposed to STS on a daily basis. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  20. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    PubMed

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  1. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modelling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. These methods were evaluated using Malaysia's mortality data, measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method outperformed the others in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in the case of out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately by sex, the LCnone variant performed better for the male population and the LM method for the female population.
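    The core Lee-Carter decomposition, log m(x,t) = a_x + b_x·k_t, can be recovered with a singular value decomposition of the row-centred log death-rate matrix, which is the common starting point before the LM or BMS adjustments. The rates below are synthetic placeholders, not Malaysian ASDR data.

```python
import numpy as np

# Minimal Lee-Carter sketch: log m(x,t) = a_x + b_x * k_t, estimated by SVD
# of the row-centred log death-rate matrix (synthetic rates, not real ASDR).
rng = np.random.default_rng(2)
ages, years = 10, 30
k_true = -np.linspace(0.0, 3.0, years)        # declining mortality index
b_true = np.full(ages, 1.0 / ages)
log_m = (-4.0 + 0.2 * np.arange(ages)[:, None]
         + b_true[:, None] * k_true
         + 0.01 * rng.standard_normal((ages, years)))

a_x = log_m.mean(axis=1)                      # average age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                 # constraint: sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()
k_t -= k_t.mean()                             # constraint: sum(k_t) = 0

fitted = a_x[:, None] + b_x[:, None] * k_t
print("in-sample RMSE:", np.sqrt(np.mean((fitted - log_m) ** 2)))
```

    Forecasting then reduces to extrapolating k_t (typically as a random walk with drift), which is where the LC variants chiefly differ.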

  2. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise with a Bayesian model-fitting algorithm using three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but constraining the model with data both greatly improves the fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.

  3. A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony

    PubMed Central

    Othman, Zulaiha Ali

    2014-01-01

    The active appearance model (AAM) is one of the most popular model-based approaches and has been used extensively to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, fitting such a model to an original image is a challenging task. The state of the art shows that optimization methods can resolve this problem, although applying optimization raises difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of the AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy. PMID:25165748

  4. Induced subgraph searching for geometric model fitting

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs that includes the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  5. A Model-Free Diagnostic for Single-Peakedness of Item Responses Using Ordered Conditional Means.

    PubMed

    Polak, Marike; de Rooij, Mark; Heiser, Willem J

    2012-09-01

    In this article we propose a model-free diagnostic for single-peakedness (unimodality) of item responses. Presuming a unidimensional unfolding scale and a given item ordering, we approximate item response functions of all items based on ordered conditional means (OCM). The proposed OCM methodology is based on Thurstone & Chave's (1929) criterion of irrelevance, which is a graphical, exploratory method for evaluating the "relevance" of dichotomous attitude items. We generalized this criterion to graded response items and quantified the relevance by fitting a unimodal smoother. The resulting goodness-of-fit was used to determine item fit and aggregated scale fit. Based on a simulation procedure, cutoff values were proposed for the measures of item fit. These cutoff values showed high power rates and acceptable Type I error rates. We present 2 applications of the OCM method. First, we apply the OCM method to personality data from the Developmental Profile; second, we analyze attitude data collected by Roberts and Laughlin (1996) concerning opinions of capital punishment.
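    The flavour of the OCM diagnostic can be conveyed with a small sketch: order respondents (here by a known latent score for simplicity, where the real method uses the remaining items of the scale), average an item's responses within ordered groups, and check the resulting means for a single peak. The data and the crude unimodality check are illustrative, not the authors' procedure or cutoff values.

```python
import numpy as np

# Simulate graded responses that peak where respondents match the item location
rng = np.random.default_rng(3)
theta = np.sort(rng.uniform(-2, 2, 300))          # ordered latent positions
resp = np.exp(-(theta - 0.0) ** 2) + 0.1 * rng.standard_normal(300)

groups = np.array_split(resp, 10)                 # 10 ordered groups
ocm = np.array([g.mean() for g in groups])        # ordered conditional means

def is_single_peaked(m, tol=1e-9):
    # Crude check: means rise monotonically to the peak, then fall
    peak = int(np.argmax(m))
    rising = np.all(np.diff(m[: peak + 1]) >= -tol)
    falling = np.all(np.diff(m[peak:]) <= tol)
    return bool(rising and falling)

print(ocm.round(3), is_single_peaked(ocm))
```

    In the published method the judgment is made by fitting a unimodal smoother and scoring its goodness of fit rather than by a hard monotonicity test like the one above.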

  6. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of χ² statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function so that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
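    The weighted χ² minimization such a routine performs can be sketched with SciPy in place of the original FORTRAN 77 program; the exponential model, data, and uncertainties below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weighted nonlinear least squares: minimize
#   chi^2 = sum(((y - f(x, p)) / sigma)^2)
def model(x, a, b):
    return a * np.exp(-b * x)

x = np.linspace(0, 4, 20)
sigma = np.full_like(x, 0.02)                     # measurement uncertainties
y = model(x, 2.0, 0.8) + 0.02 * np.random.default_rng(4).standard_normal(x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0], sigma=sigma,
                       absolute_sigma=True)
chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
perr = np.sqrt(np.diag(pcov))                     # parameter standard errors

print("a, b =", popt, "chi2/dof =", chi2 / (x.size - 2))
```

    A reduced χ² near 1 indicates the fit is consistent with the stated uncertainties, which is the goodness-of-fit information the program reports.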

  7. New formulation feed method in tariff model of solar PV in Indonesia

    NASA Astrophysics Data System (ADS)

    Djamal, Muchlishah Hadi; Setiawan, Eko Adhi; Setiawan, Aiman

    2017-03-01

    Geographically, Indonesia spans 18 degrees of latitude, which correlates strongly with the potential solar radiation available for implementing solar photovoltaic (PV) technologies. This is the basic assumption behind developing a proportional model of Feed In Tariff (FIT), whereby the FIT varies according to latitude across Indonesia. This paper proposes a new formulation of the solar PV FIT based on the potential of solar radiation and several independent variables such as latitude, longitude, Levelized Cost of Electricity (LCOE), and socio-economic factors. The Principal Component Regression (PCR) method is used to analyze the correlations among the six independent variables C1-C6, and three FIT models are presented. Model FIT-2 is chosen because it has a small residual value and a higher financial benefit than the other models. This study shows that a FIT tied to the solar energy potential of each region can reduce the total FIT to be paid by the state by around 80 billion rupiah over 10 years of 1 MW photovoltaic operation in each of Indonesia's 34 provinces.

  8. Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses

    ERIC Educational Resources Information Center

    Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu

    2011-01-01

    Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…

  9. A rigorous multiple independent binding site model for determining cell-based equilibrium dissociation constants.

    PubMed

    Drake, Andrew W; Klakamp, Scott L

    2007-01-10

    A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism®) used for analysis of ligand/receptor binding data assumes only the K_D influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K_D being measured, this assumption of always being under K_D-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data.
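    A hedged sketch of the underlying idea: rather than assuming free ligand ≈ total ligand, the exact treatment solves the quadratic mass balance for bound ligand, and a signal scale plus background bring the parameter count to four. The functional form below is the standard quadratic binding solution; the concentrations, scale, and noise are invented, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def mibs(L, kd, Rt, s, b):
    # Exact bound-ligand solution of the quadratic mass balance:
    # bound = ((K_D + L + R_t) - sqrt((K_D + L + R_t)^2 - 4*L*R_t)) / 2
    term = kd + L + Rt
    disc = np.maximum(term ** 2 - 4.0 * L * Rt, 0.0)   # guard round-off
    return s * (term - np.sqrt(disc)) / 2.0 + b

L = np.logspace(-1, 2, 15)            # titrated ligand, nM
y = mibs(L, 1.0, 0.5, 10.0, 0.1)      # K_D = 1 nM, R_t = 0.5 nM (made up)
y *= 1 + 0.02 * np.random.default_rng(5).standard_normal(L.size)

popt, _ = curve_fit(mibs, L, y, p0=[0.5, 1.0, 5.0, 0.0], maxfev=20000)
print("K_D estimate (nM):", popt[0], " total receptor R_t (nM):", popt[1])
```

    When R_t is comparable to K_D, as here, the 2-parameter hyperbola would misestimate K_D, which is the failure mode the abstract warns about.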

  10. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching that generates further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. The minimal RMSE values show clear improvements in data fitting over the other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  12. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  13. Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance.

    ERIC Educational Resources Information Center

    Cheung, Gordon W.; Rensvold, Roger B.

    2002-01-01

    Examined 20 goodness-of-fit indexes based on the minimum fit function using a simulation under the 2-group situation. Results support the use of the delta comparative fit index, delta Gamma hat, and delta McDonald's Noncentrality Index to evaluate measurement invariance. These three approaches are independent of model complexity and sample size.…
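
    A minimal sketch of the ΔCFI criterion described above, using one common form of the comparative fit index; the chi-square values below are hypothetical, and the 0.01 cutoff follows Cheung and Rensvold's recommendation:

```python
def cfi(chi2, df, chi2_null, df_null):
    """Comparative fit index from model and null-model chi-square values."""
    d = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, d, 0.0)
    return 1.0 - d / d_null if d_null > 0 else 1.0

# Hypothetical fit results for a configural and a metric-invariance model.
cfi_configural = cfi(chi2=120.5, df=80, chi2_null=950.0, df_null=105)
cfi_metric = cfi(chi2=135.2, df=88, chi2_null=950.0, df_null=105)

delta_cfi = cfi_configural - cfi_metric
# Cheung & Rensvold's criterion: a drop larger than 0.01 suggests non-invariance.
invariance_supported = delta_cfi <= 0.01
```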

  14. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    PubMed

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one, and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second-order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, outperforming other methods while having superior convergence properties.
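
    The Gauss-Newton scheme named above can be illustrated on a toy nonlinear least-squares problem. This is a generic sketch of the update p ← p − (JᵀJ)⁻¹Jᵀr on an exponential-decay model, standing in for the structure of AAM fitting, not the AAM cost itself:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=30):
    """Generic Gauss-Newton iteration: p <- p - (J^T J)^-1 J^T r."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Toy problem: fit y = a * exp(-b * t) to noiseless data.
t = np.linspace(0, 4, 50)
y = 2.0 * np.exp(-0.7 * t)

def residual(p):
    a, b = p
    return a * np.exp(-b * t) - y

def jacobian(p):
    a, b = p
    e = np.exp(-b * t)
    return np.column_stack([e, -a * t * e])   # d r / d a, d r / d b

p_hat = gauss_newton(residual, jacobian, p0=[1.0, 0.2])
```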

  15. Fitting neuron models to spike trains.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  16. Estimating thermal performance curves from repeated field observations

    USGS Publications Warehouse

    Childress, Evan; Letcher, Benjamin H.

    2017-01-01

    Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature, reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
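
    A much-simplified sketch of the curve-fitting core: a Gaussian-shaped thermal performance curve fitted to noisy field-style observations by grid search, with the amplitude solved in closed form. The paper itself uses a hierarchical Bayesian model with MCMC and latent growth; none of that is reproduced here, and the curve form and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(T, p_max, T_opt, width):
    """Gaussian-shaped thermal performance curve (one common simple form)."""
    return p_max * np.exp(-0.5 * ((T - T_opt) / width) ** 2)

# Simulated observations: growth at measured temperatures plus noise.
T_obs = rng.uniform(5, 25, 200)
growth = performance(T_obs, p_max=1.2, T_opt=16.0, width=4.0)
growth += rng.normal(0.0, 0.02, T_obs.size)

# Coarse grid search over (T_opt, width); p_max has a closed-form
# least-squares solution because the model is linear in p_max.
best = None
for T_opt in np.linspace(10, 22, 61):
    for width in np.linspace(1, 8, 71):
        basis = np.exp(-0.5 * ((T_obs - T_opt) / width) ** 2)
        p_max = (basis @ growth) / (basis @ basis)
        sse = np.sum((growth - p_max * basis) ** 2)
        if best is None or sse < best[0]:
            best = (sse, p_max, T_opt, width)

_, p_max_hat, T_opt_hat, width_hat = best
```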

  17. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
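
    In the same spirit of fitting closed-form firing-rate expressions rather than numerically integrating the model equations, here is a sketch using the classic leaky integrate-and-fire f-I formula (not the AdEx approximation derived in the paper) fitted to synthetic f-I data by grid search; units and values are illustrative:

```python
import numpy as np

def lif_rate(I, tau_m, R, V_th):
    """Closed-form LIF f-I curve (E_L = V_reset = 0, no refractory period)."""
    drive = R * I
    rate = np.zeros_like(I, dtype=float)
    supra = drive > V_th
    rate[supra] = 1.0 / (tau_m * np.log(drive[supra] / (drive[supra] - V_th)))
    return rate

# Synthetic f-I data from a "true" cell.
I = np.linspace(0.0, 2.0, 40)                 # input current (arbitrary units)
f_data = lif_rate(I, tau_m=0.02, R=20.0, V_th=15.0)

# Grid search over (tau_m, V_th) with R held fixed: each candidate costs
# only a closed-form evaluation, not an ODE integration.
best = None
for tau_m in np.linspace(0.005, 0.05, 46):
    for V_th in np.linspace(5.0, 25.0, 81):
        sse = np.sum((lif_rate(I, tau_m, 20.0, V_th) - f_data) ** 2)
        if best is None or sse < best[0]:
            best = (sse, tau_m, V_th)

_, tau_m_hat, V_th_hat = best
```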

  18. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Correlation structures, including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)], were then added to the optimal branch size mixed-effect model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. In order to describe heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function significantly improved the fitting effect of the branch angle mixed model, whereas the CF2 function significantly improved the fitting effect of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compared to the traditional regression model, for the branch size prediction of Pinus koraiensis plantation.

  19. Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior

    NASA Astrophysics Data System (ADS)

    Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui

    2003-08-01

    In a previous work we have introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which describes jointly the packet arrival and the packet size processes. The packet arrival process is modeled through a L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
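
    A toy stochastic L-system in the sense described above: symbols stand for packet arrival-rate levels, and each symbol rewrites into a string of symbols with some probability. The alphabet, production rules, and rates below are illustrative, not the fitted rules from the paper:

```python
import random

random.seed(42)

# Stochastic production rules: each symbol rewrites into a pair of symbols
# with the given probabilities (probabilities per symbol sum to 1).
RULES = {
    "L": [(0.7, ["L", "H"]), (0.3, ["L", "L"])],   # low arrival rate
    "H": [(0.5, ["H", "H"]), (0.5, ["H", "L"])],   # high arrival rate
}
RATES = {"L": 100.0, "H": 1000.0}  # packets/s per symbol (illustrative)

def rewrite(symbol):
    """Pick one production for a symbol according to the rule probabilities."""
    r, acc = random.random(), 0.0
    for prob, production in RULES[symbol]:
        acc += prob
        if r < acc:
            return production
    return RULES[symbol][-1][1]

def generate(axiom, depth):
    """Apply the rewriting rules `depth` times starting from the axiom."""
    string = list(axiom)
    for _ in range(depth):
        string = [s for sym in string for s in rewrite(sym)]
    return string

symbols = generate("L", depth=8)
rates = [RATES[s] for s in symbols]   # piecewise-constant arrival-rate process
```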

  20. Analysis of the Best-Fit Sky Model Produced Through Redundant Calibration of Interferometers

    NASA Astrophysics Data System (ADS)

    Storer, Dara; Pober, Jonathan

    2018-01-01

    21 cm cosmology provides unique insights into the formation of stars and galaxies in the early universe, and particularly the Epoch of Reionization. Detection of the 21 cm line is challenging because it is generally 4-5 orders of magnitude weaker than the emission from foreground sources, and therefore the instruments used for detection must be carefully designed and calibrated. 21 cm cosmology is primarily conducted using interferometers, which are difficult to calibrate because of their complex structure. Here I explore the relationship between sky-based calibration, which relies on an accurate and comprehensive sky model, and redundancy-based calibration, which makes use of redundancies in the layout of the interferometer's dishes. In addition to producing calibration parameters, redundant calibration also produces a best-fit model of the sky. In this work I examine that sky model and explore the possibility of using that best-fit model as an additional input to improve on sky-based calibration.

  1. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of these methods have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its characteristic smoothness. A background correction simulation experiment indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) after background correction, compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After random noise with different signal-to-noise ratios was added to the spectrum, the spline interpolation method still acquired large SBR values, whereas polynomial fitting and the model-free method obtained low SBR values. All of the background correction methods improved the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
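
    A sketch of the background-correction idea on a synthetic spectrum: estimate the continuous background from local minima and interpolate through them. Piecewise-linear interpolation is used here as a simplified stand-in for the paper's spline, and all peak positions and amplitudes are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectrum": smooth continuous background plus a few narrow peaks.
x = np.linspace(0, 100, 1001)
background = 5.0 + 0.04 * x + 10.0 * np.exp(-((x - 50) / 40) ** 2)
peaks = sum(a * np.exp(-((x - c) / 0.5) ** 2)
            for a, c in [(40, 20), (60, 45), (30, 75)])
spectrum = background + peaks + rng.normal(0, 0.05, x.size)

# Take the minimum of each 50-point window as a background knot, then
# interpolate through the knots and subtract the result.
window = 50
idx = [np.argmin(spectrum[i:i + window]) + i
       for i in range(0, x.size - window, window)]
bg_est = np.interp(x, x[idx], spectrum[idx])
corrected = spectrum - bg_est   # peaks now sit on a near-zero baseline
```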

  2. Clinical risk stratification model for advanced colorectal neoplasia in persons with negative fecal immunochemical test results.

    PubMed

    Jung, Yoon Suk; Park, Chan Hyuk; Kim, Nam Hee; Park, Jung Ho; Park, Dong Il; Sohn, Chong Il

    2018-01-01

    The fecal immunochemical test (FIT) has low sensitivity for detecting advanced colorectal neoplasia (ACRN); thus, a considerable portion of FIT-negative persons may have ACRN. We aimed to develop a risk-scoring model for predicting ACRN in FIT-negative persons. We reviewed the records of participants aged ≥40 years who underwent a colonoscopy and FIT during a health check-up. We developed a risk-scoring model for predicting ACRN in FIT-negative persons. Of 11,873 FIT-negative participants, 255 (2.1%) had ACRN. On the basis of the multivariable logistic regression model, point scores were assigned as follows among FIT-negative persons: age (per year from 40 years old), 1 point; current smoker, 10 points; overweight, 5 points; obese, 7 points; hypertension, 6 points; previous cerebrovascular accident (CVA), 15 points. Although the proportion of ACRN in FIT-negative persons increased as risk scores increased (from 0.6% in the group with 0-4 points to 8.1% in the group with 35-39 points), it was significantly lower than that in FIT-positive persons (14.9%). However, there was no statistical difference between the proportion of ACRN in FIT-negative persons with ≥40 points and in FIT-positive persons (10.5% vs. 14.9%, P = 0.321). FIT-negative persons may need to undergo screening colonoscopy if they clinically have a high risk of ACRN. The scoring model based on age, smoking habits, overweight or obesity, hypertension, and previous CVA may be useful in selecting and prioritizing FIT-negative persons for screening colonoscopy.
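
    The point assignments reported above translate directly into a scoring function; a minimal sketch, with the function name chosen here for illustration and the ≥40-point threshold taken from the abstract:

```python
def acrn_risk_score(age, current_smoker, bmi_category, hypertension, prior_cva):
    """Point score for ACRN risk in FIT-negative persons, following the
    point assignments reported in the abstract."""
    score = max(age - 40, 0)          # 1 point per year of age beyond 40
    if current_smoker:
        score += 10
    if bmi_category == "overweight":
        score += 5
    elif bmi_category == "obese":
        score += 7
    if hypertension:
        score += 6
    if prior_cva:
        score += 15
    return score

# Example: a 60-year-old obese current smoker with hypertension, no prior CVA.
score = acrn_risk_score(60, True, "obese", True, False)
high_risk = score >= 40   # the threshold where risk approached FIT-positives
```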

  3. Extreme value modelling of Ghana stock exchange index.

    PubMed

    Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe

    2015-01-01

    Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for modelling of such rare events leading to these crises have become quite essential in the finance and risk management fields. This paper models the extreme values of the Ghana stock exchange all-shares index (2000-2010) by applying the extreme value theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach of the EVT was preferred and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedastic terms present in the returns series, before the EVT method was applied. The Peak Over Threshold approach of the EVT, which fits a Generalized Pareto Distribution (GPD) model to excesses above a certain selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The sizes of extreme daily Ghanaian stock market movements were then computed using the value at risk and expected shortfall risk measures at some high quantiles, based on the fitted GPD model.
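
    A sketch of the peaks-over-threshold step on simulated heavy-tailed returns. Method-of-moments GPD estimates are used here as a simple stand-in for the maximum-likelihood fit in the paper, followed by the standard POT value-at-risk formula; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated heavy-tailed "returns"; keep the excesses over a high threshold.
returns = rng.standard_t(df=4, size=100_000)
threshold = np.quantile(returns, 0.95)
excesses = returns[returns > threshold] - threshold

# Method-of-moments GPD estimators (for exponential tails these reduce
# to xi = 0, sigma = mean, as a sanity check).
m, v = excesses.mean(), excesses.var()
xi_hat = 0.5 * (1.0 - m * m / v)           # shape
sigma_hat = 0.5 * m * (m * m / v + 1.0)    # scale

# Value at risk at the 99.9% level via the standard POT formula.
p, n, n_u = 0.999, returns.size, excesses.size
var_999 = threshold + sigma_hat / xi_hat * ((n / n_u * (1 - p)) ** (-xi_hat) - 1)
```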

  4. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, with 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
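
    The multiple-testing mechanism behind the Bonferroni recommendation can be illustrated with a generic simulation, using normal fit statistics as a stand-in for RUMM's actual chi-square/F statistics: testing 25 items at α = 0.05 flags misfit in most replications even when the data truly fit, unless the threshold is Bonferroni-adjusted:

```python
import math
import random

random.seed(3)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal fit statistic."""
    return math.erfc(abs(z) / math.sqrt(2.0))

n_items, n_reps, alpha = 25, 2000, 0.05
plain_hits = bonf_hits = 0
for _ in range(n_reps):
    # Under the null (the data truly fit), each item's statistic is noise.
    p_values = [two_sided_p(random.gauss(0.0, 1.0)) for _ in range(n_items)]
    if min(p_values) < alpha:
        plain_hits += 1                      # any flagged item is a false alarm
    if min(p_values) < alpha / n_items:      # Bonferroni-adjusted threshold
        bonf_hits += 1

type1_plain = plain_hits / n_reps   # roughly 1 - 0.95**25, i.e. about 0.72
type1_bonf = bonf_hits / n_reps     # held near the nominal 0.05
```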

  5. Modified hyperbolic sine model for titanium dioxide-based memristive thin films

    NASA Astrophysics Data System (ADS)

    Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana

    2018-03-01

    Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models were based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model and the hyperbolic-sine function based model. Although the hyperbolic-sine function model could predict the memristor's electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. The addition of a window function could not provide an improved fit. Multiplying Yakopcic's state variable model with Chang's model, on the other hand, resulted in closer agreement with the TiO2 thin-film experimental data. The percentage error was approximately 2.15%.

  6. On the Model-Based Bootstrap with Missing Data: Obtaining a "P"-Value for a Test of Exact Fit

    ERIC Educational Resources Information Center

    Savalei, Victoria; Yuan, Ke-Hai

    2009-01-01

    Evaluating the fit of a structural equation model via bootstrap requires a transformation of the data so that the null hypothesis holds exactly in the sample. For complete data, such a transformation was proposed by Beran and Srivastava (1985) for general covariance structure models and applied to structural equation modeling by Bollen and Stine…

  7. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    DOE PAGES

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...

    2015-11-09

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.

  8. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

    Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain this electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A 2-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
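
    As a minimal illustration of an equivalent circuit whose elements carry physical meaning, here is the impedance of a simple Randles-type circuit (series resistance, plus charge-transfer resistance in parallel with a double-layer capacitance). This is a generic textbook circuit with illustrative parameter values, not one of the paper's 12 candidate models:

```python
import numpy as np

def randles_impedance(freq, R_s, R_ct, C_dl):
    """Impedance of a Randles-type circuit: R_s in series with
    (R_ct parallel to the double-layer capacitance C_dl)."""
    omega = 2 * np.pi * freq
    return R_s + R_ct / (1 + 1j * omega * R_ct * C_dl)

# Illustrative parameter values (ohms, farads), not fitted values.
freq = np.logspace(-1, 4, 100)            # Hz
Z = randles_impedance(freq, R_s=0.01, R_ct=0.05, C_dl=0.3)

# Classic EIS features: the high-frequency limit tends to R_s, the
# low-frequency limit to R_s + R_ct (the semicircle diameter is R_ct).
z_hi = randles_impedance(np.array([1e7]), 0.01, 0.05, 0.3)[0]
z_lo = randles_impedance(np.array([1e-5]), 0.01, 0.05, 0.3)[0]
```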

  9. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    PubMed Central

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.

    2016-01-01

    Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387

  10. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    NASA Astrophysics Data System (ADS)

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

    To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high enthalpy flows with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state equation model and curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.

  11. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies, we have shown that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue regards the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
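
    The dependence of the tuning parameters on the acceptance rate can be sketched with a common adaptive heuristic on a toy one-dimensional target (a standard normal, not the MPL posterior): scale the random-walk proposal until the empirical acceptance rate approaches a target value. The target rate of 0.4 and the batch scheme are illustrative choices, not taken from the paper:

```python
import math
import random

random.seed(0)

def log_target(x):
    """Toy log-density (standard normal), standing in for the MPL posterior."""
    return -0.5 * x * x

def tune_step(step=5.0, target_rate=0.4, batches=50, batch_size=200):
    """Adapt the random-walk Metropolis proposal scale until the empirical
    acceptance rate sits near the target (a common heuristic)."""
    x = 0.0
    rate = 0.0
    for _ in range(batches):
        accepted = 0
        for _ in range(batch_size):
            proposal = x + random.gauss(0.0, step)
            if math.log(random.random()) < log_target(proposal) - log_target(x):
                x = proposal
                accepted += 1
        rate = accepted / batch_size
        step *= math.exp(rate - target_rate)  # grow step if accepting too often
    return step, rate

step, rate = tune_step()
```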

  12. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
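
    The Pareto-frontier definition above is straightforward to implement; a sketch on random goodness-of-fit errors for two calibration targets, contrasted with two weighted-sum picks whose identities depend on the (arbitrary) weights:

```python
import numpy as np

rng = np.random.default_rng(11)

# Each candidate input set has one goodness-of-fit error per calibration
# target (lower is better); two targets here for illustration.
gof = rng.uniform(0.0, 1.0, size=(500, 2))

def pareto_frontier(errors):
    """Indices of input sets not dominated by any other set (no other set
    fits all targets at least as well and at least one target better)."""
    keep = []
    for i, e in enumerate(errors):
        dominated = np.any(np.all(errors <= e, axis=1) &
                           np.any(errors < e, axis=1))
        if not dominated:
            keep.append(i)
    return keep

frontier = pareto_frontier(gof)

# Contrast: each weighted-sum GOF score picks a single "best" set, and
# different weights generally pick different sets.
best_w1 = int(np.argmin(gof @ np.array([0.9, 0.1])))
best_w2 = int(np.argmin(gof @ np.array([0.1, 0.9])))
```

Any minimizer of a positively weighted sum is itself Pareto-optimal, so both weighted-sum picks must lie on the frontier; the frontier simply keeps all such candidates without committing to weights.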

  13. A Combined SRTM Digital Elevation Model for Zanjan State of Iran Based on the Corrective Surface Idea

    NASA Astrophysics Data System (ADS)

    Kiamehr, Ramin

    2016-04-01

    A one-arc-second high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used by geoscientists in different disciplines and applications. They are essential data in geoid computation procedures, e.g., to determine the topographic, downward continuation (DWC) and atmospheric corrections, and they can also be used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface and is subject to errors. The most important part of the error can come from bias in the height datum. On the other hand, the accuracy of a DEM is usually published in a global sense, and it is important to have an estimate of its accuracy in the area of interest before using it. One of the best ways to obtain a reasonable indication of the accuracy of a DEM is to compare its heights against precise national GPS/levelling data, by determining the Root-Mean-Square (RMS) error of the fit between the DEM and levelling heights. The errors in the DEM can be approximated by different kinds of functions in order to fit the DEM to a set of GPS/levelling data using least squares adjustment. In the current study, several models, ranging from a simple linear regression to a seven-parameter similarity transformation model, are used in the fitting procedure. The seven-parameter model gives the best fit, with the minimum standard deviation, for all selected DEMs in the study area. Based on 35 precise GPS/levelling points, we obtained an RMS of the seven-parameter fit of 5.5 m for the SRTM DEM. The corrective surface model was generated from the transformation parameters and applied to the original SRTM model. The fit of the combined model was then evaluated with independent GPS/levelling data. The result shows a great improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
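
    The corrective-surface idea can be sketched with a simple planar least-squares fit to DEM-minus-GPS/levelling height differences. This three-parameter plane is a simplified stand-in for the seven-parameter similarity transformation, and all coordinates and heights below are synthetic, not the Zanjan data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic control points: GPS/levelling heights vs DEM heights that
# contain a datum bias plus a gentle tilt (illustrative numbers).
n = 35
lon = rng.uniform(47.0, 49.5, n)    # degrees
lat = rng.uniform(35.5, 37.5, n)
h_gps = rng.uniform(1500, 2200, n)
h_dem = h_gps + 4.0 + 2.5 * (lon - 48.0) - 1.5 * (lat - 36.5) \
        + rng.normal(0.0, 0.8, n)

# Fit a planar corrective surface d = a0 + a1*lon' + a2*lat' by least
# squares to the height differences, then subtract it from the DEM.
A = np.column_stack([np.ones(n), lon - 48.0, lat - 36.5])
coef, *_ = np.linalg.lstsq(A, h_dem - h_gps, rcond=None)

h_corrected = h_dem - A @ coef
rms_before = np.sqrt(np.mean((h_dem - h_gps) ** 2))
rms_after = np.sqrt(np.mean((h_corrected - h_gps) ** 2))
```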

  14. Testing spectral models for stellar populations with star clusters - II. Results

    NASA Astrophysics Data System (ADS)

    González Delgado, Rosa M.; Cid Fernandes, Roberto

    2010-04-01

    High spectral resolution evolutionary synthesis models have become a routinely used ingredient in extragalactic work, and as such deserve thorough testing. Star clusters are ideal laboratories for such tests. This paper applies the spectral fitting methodology outlined in Paper I to a sample of clusters, mainly from the Magellanic Clouds and spanning a wide range in age and metallicity, fitting their integrated light spectra with a suite of modern evolutionary synthesis models for single stellar populations. The combinations of model plus spectral library employed in this investigation are Galaxev/STELIB, Vazdekis/MILES, SED@/GRANADA and Galaxev/MILES+GRANADA, which provide a representative sample of models currently available for spectral fitting work. A series of empirical tests are performed with these models, comparing the quality of the spectral fits and the values of age, metallicity and extinction obtained with each of them. A comparison is also made between the properties derived from these spectral fits and literature data on these nearby, well studied clusters. These comparisons are done with the general goal of providing useful feedback for model makers, as well as guidance to the users of such models. We find the following. (i) All models are able to derive ages that are in good agreement both with each other and with literature data, although ages derived from spectral fits are on average slightly older than those based on the S-colour-magnitude diagram (S-CMD) method as calibrated by Girardi et al. (ii) There is less agreement between the models for the metallicity and extinction. In particular, Galaxev/STELIB models underestimate the metallicity by ~0.6 dex, and the extinction is overestimated by 0.1 mag. (iii) New generations of models using the GRANADA and MILES libraries are superior to STELIB-based models both in terms of spectral fit quality and regarding the accuracy with which age and metallicity are retrieved. 
Accuracies of about 0.1 dex in age and 0.3 dex in metallicity can be achieved as long as the models are not extrapolated beyond their expected range of validity.

  15. Frequency dependence 3.0: an attempt at codifying the evolutionary ecology perspective.

    PubMed

    Metz, Johan A J; Geritz, Stefan A H

    2016-03-01

    The fitness concept and perforce the definition of frequency independent fitnesses from population genetics is closely tied to discrete time population models with non-overlapping generations. Evolutionary ecologists generally focus on trait evolution through repeated mutant substitutions in populations with complicated life histories. This goes with using the per capita invasion speed of mutants as their fitness. In this paper we develop a concept of frequency independence that attempts to capture the practical use of the term by ecologists, which although inspired by population genetics rarely fits its strict definition. We propose to call the invasion fitnesses of an eco-evolutionary model frequency independent when the phenotypes can be ranked by competitive strength, measured by who can invade whom. This is equivalent to the absence of weak priority effects, protected dimorphisms and rock-scissor-paper configurations. Our concept differs from that of Heino et al. (TREE 13:367-370, 1998) in that it is based only on the signs of the invasion fitnesses, whereas Heino et al. based their definitions on the structure of the feedback environment, summarising the effect of all direct and indirect interactions between individuals on fitness. As it turns out, according to our new definition an eco-evolutionary model has frequency independent fitnesses if and only if the effect of the feedback environment on the fitness signs can be summarised by a single scalar with monotonic effect. This may be compared with Heino et al.'s concept of trivial frequency dependence defined by the environmental feedback influencing fitness, and not just its sign, in a scalar manner, without any monotonicity restriction. As it turns out, absence of the latter restriction leaves room for rock-scissor-paper configurations. 
Since in 'realistic' (as opposed to toy) models frequency independence is exceedingly rare, we also define a concept of weak frequency dependence, which can be interpreted intuitively as almost frequency independence, and analyse in which sense and to what extent the restrictions on the potential model outcomes of the frequency independent case stay intact for models with weak frequency dependence.

  16. A robust and fast active contour model for image segmentation with intensity inhomogeneity

    NASA Astrophysics Data System (ADS)

    Ding, Keyan; Weng, Guirong

    2018-04-01

    In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing local image intensity fitting functions before the curve evolution, the proposed model can effectively segment images with intensity inhomogeneity. The computational cost is low because the fitting functions do not need to be updated at each iteration. Experiments have shown that the proposed model has higher segmentation efficiency than some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.

  17. Predicting responses from Rasch measures.

    PubMed

    Linacre, John M

    2010-01-01

    There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are estimated from the current data but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models based on Singular Value Decomposition (SVD) and Boltzmann Machines are proposed.
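
    The SVD idea mentioned above can be sketched as a rank-k reconstruction of a ratings matrix. This is a generic illustration, not the paper's model; the function name and the mean-filling of missing entries are assumptions.

    ```python
    import numpy as np

    def svd_predict(R, mask, k=2):
        """Rank-k SVD reconstruction of a ratings matrix R.
        Missing entries (mask == 0) are first filled with the item (column)
        mean; the truncated SVD then supplies predictions for all cells."""
        filled = R.astype(float).copy()
        obs = np.maximum(mask.sum(axis=0), 1)
        col_means = (R * mask).sum(axis=0) / obs
        filled[mask == 0] = np.take(col_means, np.where(mask == 0)[1])
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k, :]   # low-rank prediction matrix
    ```

    Overfit shows up here exactly as the abstract warns: a large k reproduces the observed ratings ever more closely while predicting held-out ratings worse.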

  18. In search of best fitted composite model to the ALAE data set with transformed Gamma and inversed transformed Gamma families

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu

    2017-05-01

    In this paper, a recent novel approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach, and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE) data. Of all the composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be the strongest candidate, as it best fits the loss data. The final part applies backtesting to validate the VaR and CTE risk measures.

  19. Rank-based methods for modeling dependence between loss triangles.

    PubMed

    Côté, Marie-Pier; Genest, Christian; Abdallah, Anas

    2016-01-01

    In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.

  20. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    NASA Astrophysics Data System (ADS)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    Based on the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A "preventing increase" (restrained growth) model is introduced in this paper, and its parameters are solved with Matlab. The validity of the model is confirmed through a numerical experiment; the results show that its precision is satisfactory.
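
    A second-order curve fit of this kind can be sketched as follows. The reading of "second multiplication curve fitting" as a quadratic least-squares fit is an assumption, as are the function name and the synthetic data.

    ```python
    import numpy as np

    def quadratic_fit_forecast(years, counts, horizon=1):
        """Fit counts ~ a*t^2 + b*t + c by least squares and extrapolate
        `horizon` steps ahead; t is re-origined for numerical conditioning."""
        years = np.asarray(years, dtype=float)
        t = years - years[0]
        a, b, c = np.polyfit(t, counts, 2)          # highest degree first
        t_new = t[-1] + np.arange(1, horizon + 1)
        return a * t_new ** 2 + b * t_new + c
    ```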

  1. A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test

    DTIC Science & Technology

    2012-01-01

    used the coefficient of determination (R2) and the P-values based on Bartels's test of randomness of the residual error to quantify the goodness-of-fit ... we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess ... individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels's

  2. Development of an Advanced Respirator Fit-Test Headform

    PubMed Central

    Bergman, Michael S.; Zhuang, Ziqing; Hanson, David; Heimbuch, Brian K.; McDonald, Michael J.; Palmiero, Andrew J.; Shaffer, Ronald E.; Harnish, Delbert; Husband, Michael; Wander, Joseph D.

    2015-01-01

    Improved respirator test headforms are needed to measure the fit of N95 filtering facepiece respirators (FFRs) for protection studies against viable airborne particles. A Static (i.e., non-moving, non-speaking) Advanced Headform (StAH) was developed for evaluating the fit of N95 FFRs. The StAH was developed based on the anthropometric dimensions of a digital headform reported by the National Institute for Occupational Safety and Health (NIOSH) and has a silicone polymer skin with defined local tissue thicknesses. Quantitative fit factor evaluations were performed on seven N95 FFR models of various sizes and designs. Donnings were performed with and without a pre-test leak checking method. For each method, four replicate FFR samples of each of the seven models were tested with two donnings per replicate, resulting in a total of 56 tests per donning method. Each fit factor evaluation was comprised of three 86-sec exercises: “Normal Breathing” (NB, 11.2 liters per min (lpm)), “Deep Breathing” (DB, 20.4 lpm), then NB again. A fit factor for each exercise and an overall test fit factor were obtained. Analysis of variance methods were used to identify statistical differences among fit factors (analyzed as logarithms) for different FFR models, exercises, and testing methods. For each FFR model and for each testing method, the NB and DB fit factor data were not significantly different (P > 0.05). Significant differences were seen in the overall exercise fit factor data for the two donning methods among all FFR models (pooled data) and in the overall exercise fit factor data for the two testing methods within certain models. Utilization of the leak checking method improved the rate of obtaining overall exercise fit factors ≥100. The FFR models, which are expected to achieve overall fit factors ≥ 100 on human subjects, achieved overall exercise fit factors ≥ 100 on the StAH. 
Further research is needed to evaluate the correlation of FFRs fitted on the StAH to FFRs fitted on people. PMID:24369934

  3. Experimental rugged fitness landscape in protein sequence space.

    PubMed

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-12-20

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12-130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7x10(4)-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18-24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.
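
    The n-k landscape underlying this analysis can be sketched directly. In Kauffman's model, each of n sites contributes a random value determined by its own state and the states of k other sites; fitness is the mean contribution. The sketch below uses the k sites to the right (cyclically) as the interacting neighbours, which is one common convention and an assumption here, as is the function name.

    ```python
    import numpy as np

    def nk_fitness(genome, n, k, seed=0):
        """Fitness of a binary genome on an NK landscape: site i's
        contribution is looked up from a random table indexed by the
        (k+1)-bit pattern of site i and its k right-hand neighbours."""
        rng = np.random.default_rng(seed)           # fixes one landscape
        tables = rng.random((n, 2 ** (k + 1)))      # one lookup table per site
        total = 0.0
        for i in range(n):
            idx = 0
            for j in range(k + 1):                  # pack k+1 bits into an index
                idx = (idx << 1) | int(genome[(i + j) % n])
            total += tables[i, idx]
        return total / n
    ```

    Larger k couples more sites, so a single substitution perturbs more contributions and the landscape grows more rugged; the estimate k = 18-24 above places the phage landscape well into the rugged regime.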

  4. Experimental Rugged Fitness Landscape in Protein Sequence Space

    PubMed Central

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-01-01

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12–130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7×104-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18–24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region. PMID:17183728

  5. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  6. Bone-conduction circuit model for chinchilla part I: Defining parameters by fitting to air-conduction data

    NASA Astrophysics Data System (ADS)

    Bowers, Peter; Rosowski, John J.

    2018-05-01

    An air-conduction circuit model that will serve as the basis for a model of bone-conduction hearing is developed for chinchilla. The lumped-element model is based on the classic Zwislocki model of the human middle ear. Model parameters are fit to various measurements of chinchilla middle-ear transfer functions and impedances. The model is in agreement with studies of the effects of middle-ear cavity holes in experiments that require access to the middle-ear air space.

  7. Perceived sports competence mediates the relationship between childhood motor skill proficiency and adolescent physical activity and fitness: a longitudinal assessment.

    PubMed

    Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R

    2008-08-08

    The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth.

  8. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925

  9. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.

    PubMed

    Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G

    2016-03-01

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online.

  10. The Gold Medal Fitness Program: A Model for Teacher Change

    ERIC Educational Resources Information Center

    Wright, Jan; Konza, Deslea; Hearne, Doug; Okely, Tony

    2008-01-01

    Background: Following the 2000 Sydney Olympics, the NSW Premier, Mr Bob Carr, launched a school-based initiative in NSW government primary schools called the "Gold Medal Fitness Program" to encourage children to be fitter and more active. The Program was introduced into schools through a model of professional development, "Quality…

  11. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  12. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla ) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  13. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
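
    The continuous-time version of the test can be sketched as follows. For a constant-rate model, the rescaled ISIs tau = rate * isi should be Exp(1) when the model is correct, so u = 1 - exp(-tau) should be Uniform(0,1); the KS distance measures the departure. This is the classical construction that the paper corrects for discrete time, and the function name is hypothetical.

    ```python
    import numpy as np

    def ks_time_rescaling(spike_times, rate):
        """KS distance between the rescaled ISIs of a constant-rate model
        and Uniform(0,1). Small distance = model consistent with the data."""
        isi = np.diff(np.sort(spike_times))
        u = np.sort(1.0 - np.exp(-rate * isi))      # should be ~ Uniform(0,1)
        n = len(u)
        ecdf_hi = np.arange(1, n + 1) / n           # empirical CDF just after u_i
        ecdf_lo = np.arange(0, n) / n               # empirical CDF just before u_i
        return max(np.max(ecdf_hi - u), np.max(u - ecdf_lo))
    ```

    With time discretized into bins, the rescaled times computed this way are no longer exponential even for a correct model, which is exactly the false-positive problem the two adaptations above address.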

  14. The 6300 A O/1-D/ airglow and dissociative recombination

    NASA Technical Reports Server (NTRS)

    Wickwar, V. B.; Cogger, L. L.; Carlson, H. C.

    1974-01-01

    Measurements of night-time 6300 A airglow intensities at the Arecibo Observatory have been compared with dissociative recombination calculations based on electron densities derived from simultaneous incoherent backscatter measurements. The agreement indicates that the nightglow can be fully accounted for by dissociative recombination. The comparisons are examined to determine the importance of quenching, heavy ions, ionization above the F-layer peak, and the temperature parameter of the model atmosphere. Comparable fits between the observed and calculated intensities are found for several available model atmospheres. The least-squares fitting process, used to make the comparisons, produces comparable fits over a wide range of combinations of neutral densities and of reaction constants. Yet, the fitting places constraints upon the possible combinations; these constraints indicate that the latest laboratory chemical constants and densities extrapolated to a base altitude are mutually consistent.

  15. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
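
    The Pareto-optimality criterion above reduces to a simple dominance check over the per-target error vectors. A minimal sketch (function name and data layout are assumptions; errors are arranged as one row per input set, one column per calibration target, lower = better fit):

    ```python
    import numpy as np

    def pareto_frontier(errors):
        """Indices of input sets on the Pareto frontier: a set is kept
        unless some other set fits every target at least as well and at
        least one target strictly better."""
        e = np.asarray(errors, dtype=float)
        frontier = []
        for i in range(len(e)):
            dominated = np.any(
                np.all(e <= e[i], axis=1) & np.any(e < e[i], axis=1)
            )
            if not dominated:
                frontier.append(i)
        return frontier
    ```

    No weights appear anywhere in the check, which is the point: the frontier is invariant to how one might have traded the targets off against each other in a weighted-sum GOF score.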

  16. Extracting harmonic signal from a chaotic background with local linear model

    NASA Astrophysics Data System (ADS)

    Li, Chenlong; Su, Liyun

    2017-02-01

    In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods using a local linear (LL) model are put forward. The LL model has been extensively researched and successfully applied to fitting and forecasting chaotic signals in many fields; here we enlarge its modeling capacity substantially. Firstly, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect the frequencies from the fitting error by periodogram; a property of the fitting error is proposed which has not been addressed before, and this property ensures that the detected frequencies match those of the harmonic signal. Secondly, we establish a two-layer LL model to estimate the deterministic harmonic signal in a strong chaotic background. To estimate this simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters, which are hard to search for exhaustively. In the method, based on the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function for estimating the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility for modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the backfitting algorithm converges within 3-5 iterations.

  17. MODELING THE NONLINEAR CLUSTERING IN MODIFIED GRAVITY MODELS. I. A FITTING FORMULA FOR THE MATTER POWER SPECTRUM OF f(R) GRAVITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Gong-Bo, E-mail: gongbo@icosmology.info; Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX

    2014-04-01

    Based on a suite of N-body simulations of the Hu-Sawicki model of f(R) gravity with different sets of model and cosmological parameters, we develop a new fitting formula with a numeric code, MGHalofit, to calculate the nonlinear matter power spectrum P(k) for the Hu-Sawicki model. We compare the MGHalofit predictions at various redshifts (z ≤ 1) to the f(R) simulations and find that the relative error of the MGHalofit fitting formula for P(k) is no larger than 6% at k ≤ 1 h Mpc⁻¹ and 12% at k in (1, 10] h Mpc⁻¹. Based on a sensitivity study of an ongoing and a future spectroscopic survey, we estimate the detectability of a modified-gravity signal described by the Hu-Sawicki model using the power spectrum up to quasi-nonlinear scales.

  18. Bayesian Revision of Residual Detection Power

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2013-01-01

    This paper addresses some issues with quality assessment and quality assurance in response surface modeling experiments executed in wind tunnels. The role of data volume on quality assurance for response surface models is reviewed. Specific wind tunnel response surface modeling experiments are considered for which apparent discrepancies exist between fit quality expectations based on implemented quality assurance tactics and the actual fit quality achieved in those experiments. These discrepancies are resolved by using Bayesian inference to account for certain imperfections in the assessment methodology. Estimates of the fraction of out-of-tolerance model predictions based on traditional frequentist methods are revised to account for uncertainty in the residual assessment process. The number of sites in the design space for which residuals are out of tolerance is seen to exceed the number of sites where the model actually fails to fit the data. A method is presented to estimate how much of the design space is inadequately modeled by low-order polynomial approximations to the true but unknown underlying response function.
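
    A simplified grid-posterior sketch of the general idea (not the paper's exact formulation; the site counts and assessment error rates below are illustrative assumptions): if the residual check itself flags a site "out of tolerance" with some false-positive rate even when the model fits, the naive flagged fraction overstates the true failure fraction, and a Bayesian update corrects it.

```python
import numpy as np

n, k = 200, 30          # sites checked / sites flagged (illustrative)
fp, tp = 0.10, 0.90     # assumed false-positive rate and power of the check

p = np.linspace(0, 1, 1001)              # candidate true failure fractions
q = p * tp + (1 - p) * fp                # implied probability of a flag
log_lik = k * np.log(q) + (n - k) * np.log(1 - q)   # binomial log-likelihood
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                       # flat prior -> normalized posterior

p_mean = float(np.sum(p * post))
print(f"naive flagged fraction: {k/n:.3f}, "
      f"posterior mean of true failure fraction: {p_mean:.3f}")
```

As in the paper, the number of flagged sites exceeds the number of sites where the model actually fails, and the revision quantifies by how much.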

  19. PREdator: a python based GUI for data analysis, evaluation and fitting

    PubMed Central

    2014-01-01

    The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of the mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly Python script for data analysis, evaluation and fitting. PREdator is demonstrated using the example of NMR paramagnetic relaxation enhancement analysis.
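
    The fitting step such a script performs can be sketched in a few lines (PREdator itself is not reproduced here; the decay model, parameter values, and noise level are illustrative assumptions): choose a model reflecting the underlying law, here an exponential decay, and estimate its parameters by least squares on log(y).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 3, 40)
# Synthetic data: exponential decay with multiplicative noise.
y = 2.0 * np.exp(-0.8 * x) * np.exp(rng.normal(0, 0.02, x.size))

slope, intercept = np.polyfit(x, np.log(y), 1)   # linear fit in log space
a_fit, k_fit = np.exp(intercept), -slope
print(f"a = {a_fit:.2f}, k = {k_fit:.2f}")   # close to the true (2.0, 0.8)
```

Linearizing via the logarithm keeps the sketch dependency-free; a nonlinear least-squares routine would be used for models that cannot be linearized.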

  20. Development and design of a late-model fitness test instrument based on LabView

    NASA Astrophysics Data System (ADS)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of national rejuvenation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical exercise plans according to their own situation. Following this trend, a late-model fitness test instrument based on LabVIEW has been designed to remedy defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, built on LabVIEW, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featuring modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  1. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that the finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
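
    In continuous time, the rescaling step looks like the sketch below (a toy example with a known sinusoidal rate, not the paper's GLM fits): intervals of the cumulative intensity between spikes are Exp(1) when the rate model is correct, and a KS statistic checks this.

```python
import numpy as np

rng = np.random.default_rng(2)
t_max = 100.0
lam = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t / 10.0)   # rate (spikes/s)

# Simulate an inhomogeneous Poisson spike train by thinning (lam_max = 9).
t, spikes = 0.0, []
while t < t_max:
    t += rng.exponential(1 / 9.0)
    if t < t_max and rng.uniform() < lam(t) / 9.0:
        spikes.append(t)
spikes = np.array(spikes)

# Rescale ISIs with the known cumulative intensity Lambda(t) = int_0^t lam;
# if the model is correct, the taus are iid Exp(1).
Lam = lambda t: 5.0 * t - (40.0 / (2 * np.pi)) * (np.cos(2 * np.pi * t / 10.0) - 1)
taus = np.diff(Lam(spikes))

# One-sample KS statistic against Exp(1), via the uniform transform.
u = np.sort(1.0 - np.exp(-taus))
n = u.size
ks = np.max(np.maximum(np.arange(1, n + 1) / n - u, u - np.arange(n) / n))
print(f"n = {n}, KS statistic = {ks:.3f}")   # small -> consistent with Exp(1)
```

The paper's point is that when time is binned, these taus are no longer exponential even for a correct model, which is what the two discrete-time corrections address.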

  2. Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models

    NASA Astrophysics Data System (ADS)

    Wong, K.; Ellul, C.

    2016-10-01

    Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
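
    The per-building metrics are simple aggregates; a sketch with hypothetical counts (the field names and numbers below are illustrative, not the authors' data model) shows the kind of summary involved.

```python
# Hypothetical per-building geometry counts extracted from a 3D city model.
buildings = [
    {"vertices": 8,  "faces": 6},    # a simple block (LoD1-like)
    {"vertices": 24, "faces": 14},   # roof detail
    {"vertices": 96, "faces": 60},   # complex geometry
]

mean_v = sum(b["vertices"] for b in buildings) / len(buildings)
mean_f = sum(b["faces"] for b in buildings) / len(buildings)
ratios = [b["vertices"] / b["faces"] for b in buildings]

print(f"mean vertices per building: {mean_v:.1f}")
print(f"mean faces per building: {mean_f:.1f}")
print("vertex/face ratios:", [round(r, 2) for r in ratios])
```

Spread in such ratios across a dataset is what signals mixed geometric complexity, which a single level-of-detail label would hide.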

  3. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
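
    The RSE comparison can be sketched on simulated diameters (not the study's measurements; the sample size and lognormal width below are illustrative assumptions): fit a lognormal by maximum likelihood on the log-diameters and compare the relative standard errors of the fitted mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(3)
d = rng.lognormal(mean=np.log(27.6), sigma=0.08, size=500)   # diameters, nm

logs = np.log(d)
mu_hat, sig_hat = logs.mean(), logs.std(ddof=1)
se_mu = sig_hat / np.sqrt(logs.size)              # SE of the fitted mean
se_sig = sig_hat / np.sqrt(2 * (logs.size - 1))   # approx. SE of the fitted sd

rse_mu, rse_sig = se_mu / mu_hat, se_sig / sig_hat
print(f"RSE(mean) = {rse_mu:.4f}, RSE(sd) = {rse_sig:.4f}")
# As in the interlaboratory study, the breadth of the distribution is
# estimated far less precisely, in relative terms, than its location.
```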

  4. Dynamical modeling and multi-experiment fitting with PottersWheel

    PubMed Central

    Maiwald, Thomas; Timmer, Jens

    2008-01-01

    Motivation: Modelers in Systems Biology need a flexible framework that allows them to easily create new dynamic models, investigate their properties and fit several experimental datasets simultaneously. Multi-experiment fitting is a powerful approach to estimate parameter values, to check the validity of a given model, and to discriminate competing model hypotheses. It requires high-performance integration of ordinary differential equations and robust optimization. Results: We here present the comprehensive modeling framework PottersWheel (PW) including novel functionalities to satisfy these requirements with strong emphasis on the inverse problem, i.e. data-based modeling of partially observed and noisy systems like signal transduction pathways and metabolic networks. PW is designed as a MATLAB toolbox and includes numerous user interfaces. Deterministic and stochastic optimization routines are combined by fitting in logarithmic parameter space allowing for robust parameter calibration. Model investigation includes statistical tests for model-data-compliance, model discrimination, identifiability analysis and calculation of Hessian- and Monte-Carlo-based parameter confidence limits. A rich application programming interface is available for customization within the user's own MATLAB code. Within an extensive performance analysis, we identified and significantly improved an integrator-optimizer pair which decreases the fitting duration for a realistic benchmark model by a factor of over 3000 compared to MATLAB with the Optimization Toolbox. Availability: PottersWheel is freely available for academic usage at http://www.PottersWheel.de/. The website contains a detailed documentation and introductory videos. The program has been intensively used since 2005 on Windows, Linux and Macintosh computers and does not require special MATLAB toolboxes. Contact: maiwald@fdm.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18614583
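
    A toy illustration of multi-experiment fitting (not PottersWheel itself; the decay model, parameter values, and noise are assumptions): two experiments share the rate k but have their own amplitudes, and fitting log(y) both linearizes the problem and, in spirit, calibrates parameters in logarithmic space.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 4, 30)
# Two datasets sharing the decay rate k = 0.5 but with amplitudes a1, a2.
y1 = 3.0 * np.exp(-0.5 * t) * np.exp(rng.normal(0, 0.01, t.size))
y2 = 1.2 * np.exp(-0.5 * t) * np.exp(rng.normal(0, 0.01, t.size))

# Joint design matrix over both experiments: columns [log a1, log a2, k],
# with indicator columns selecting each experiment's amplitude.
A = np.zeros((2 * t.size, 3))
A[: t.size, 0] = 1.0
A[t.size :, 1] = 1.0
A[:, 2] = -np.concatenate([t, t])
b = np.log(np.concatenate([y1, y2]))

la1, la2, k = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"a1 = {np.exp(la1):.2f}, a2 = {np.exp(la2):.2f}, k = {k:.3f}")
```

Fitting both datasets jointly constrains the shared parameter far better than either experiment alone, which is the core benefit the abstract describes.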

  5. ThermoFit: A Set of Software Tools, Protocols and Schema for the Organization of Thermodynamic Data and for the Development, Maintenance, and Distribution of Internally Consistent Thermodynamic Data/Model Collections

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2013-12-01

    Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases, generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates creation, maintenance and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schema for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates the rapid visualization of predicted model outcomes and permits the user to modify these outcomes using tactile- or mouse-based GUI interaction, with real-time updates that reflect users' choices, preferences, and priorities involving derived model results. This ability permits some resolution of the problem of correlated model parameters in the common situation where thermodynamic models must be calibrated from inadequate data resources. It also allows modeling constraints to be imposed using natural data and observations (i.e. petrologic or geochemical intuition). Once formulated, ThermoFit facilitates deployment of data/model collections by automated creation of web services. Users consume these services via web-, Excel-, or desktop-clients. ThermoFit is currently under active development and not yet generally available; a limited-capability prototype system has been coded for Macintosh computers and utilized to construct thermochemical models for H2O-CO2 mixed fluid saturation in silicate liquids. The longer-term goal is to release ThermoFit as a web portal application client with server-based cloud computations supporting the modeling environment.

  6. Investigating the correlation of the U.S. Air Force Physical Fitness Test to combat-based fitness: a women-only study.

    PubMed

    Mitchell, Tarah; White, Edward D; Ritschel, Daniel

    2014-06-01

    The primary objective in this research involves determining the Air Force Physical Fitness Test's (AFPFT) predictability of combat fitness and whether measures within the AFPFT require modification to increase this predictability further. We recruited 60 female volunteers and compared their performance on the AFPFT to the Marine Combat Fitness Test, the proxy for combat fitness. We discovered little association between the two (R² of 0.35); however, this association significantly increased (adjusted R² of 0.56) when utilizing the raw scores of the AFPFT instead of the gender/age scoring tables. Improving on these associations, we develop and propose a simple ordinary least squares regression model that minimally impacts the AFPFT testing routine. This two-event model for predicting combat fitness incorporates the 1.5-mile run along with the number of repetitions of a 30-lb dumbbell lifted from chest height to overhead with arms extended during a 2-minute time span. These two events predicted combat fitness as assessed by the Marine Combat Fitness Test with an adjusted R² of 0.82. By adopting this model, we greatly improve the Air Force's ability to assess combat fitness for women. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
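
    An ordinary least squares sketch of a two-predictor model like the one described (the data below are synthetic with assumed coefficients, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
run_min = rng.normal(13.0, 1.5, n)                  # 1.5-mile run time (min)
lifts = rng.integers(20, 80, n).astype(float)       # 30-lb lifts in 2 min
# Synthetic combat-fitness score: faster runs and more lifts score higher.
score = 300 - 8.0 * run_min + 1.5 * lifts + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), run_min, lifts])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta
r2 = 1 - resid.var() / score.var()
p = 2                                               # number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"coefficients: {np.round(beta, 2)}, adjusted R^2 = {adj_r2:.2f}")
```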

  7. PET-based compartmental modeling of (124)I-A33 antibody: quantitative characterization of patient-specific tumor targeting in colorectal cancer.

    PubMed

    Zanzonico, Pat; Carrasquillo, Jorge A; Pandit-Taskar, Neeta; O'Donoghue, Joseph A; Humm, John L; Smith-Jones, Peter; Ruan, Shutian; Divgi, Chaitanya; Scott, Andrew M; Kemeny, Nancy E; Fong, Yuman; Wong, Douglas; Scheinberg, David; Ritter, Gerd; Jungbluth, Achem; Old, Lloyd J; Larson, Steven M

    2015-10-01

    The molecular specificity of monoclonal antibodies (mAbs) directed against tumor antigens has proven effective for targeted therapy of human cancers, as shown by a growing list of successful antibody-based drug products. We describe a novel, nonlinear compartmental model using PET-derived data to determine the "best-fit" parameters and model-derived quantities for optimizing biodistribution of intravenously injected (124)I-labeled antitumor antibodies. As an example of this paradigm, quantitative image and kinetic analyses of anti-A33 humanized mAb (also known as "A33") were performed in 11 colorectal cancer patients. Serial whole-body PET scans of (124)I-labeled A33 and blood samples were acquired and the resulting tissue time-activity data for each patient were fit to a nonlinear compartmental model using the SAAM II computer code. Excellent agreement was observed between fitted and measured parameters of tumor uptake, "off-target" uptake in bowel mucosa, blood clearance, tumor antigen levels, and percent antigen occupancy. This approach should be generally applicable to antibody-antigen systems in human tumors for which the masses of antigen-expressing tumor and of normal tissues can be estimated and for which antibody kinetics can be measured with PET. Ultimately, based on each patient's resulting "best-fit" nonlinear model, a patient-specific optimum mAb dose (in micromoles, for example) may be derived.

  8. Predicting the risk for colorectal cancer with personal characteristics and fecal immunochemical test.

    PubMed

    Li, Wen; Zhao, Li-Zhong; Ma, Dong-Wang; Wang, De-Zheng; Shi, Lei; Wang, Hong-Lei; Dong, Mo; Zhang, Shu-Yi; Cao, Lei; Zhang, Wei-Hua; Zhang, Xi-Peng; Zhang, Qing-Huai; Yu, Lin; Qin, Hai; Wang, Xi-Mo; Chen, Sam Li-Sheng

    2018-05-01

    We aimed to predict colorectal cancer (CRC) based on the demographic features and clinical correlates of personal symptoms and signs from Tianjin community-based CRC screening data. A total of 891,199 residents who were aged 60 to 74 and were screened in 2012 were enrolled. The Lasso logistic regression model was used to identify the predictors for CRC. Predictive validity was assessed by the receiver operating characteristic (ROC) curve. A bootstrapping method was also performed to validate this prediction model. CRC was best predicted by a model that included age, sex, education level, occupation, diarrhea, constipation, colon mucosa and bleeding, gallbladder disease, a stressful life event, family history of CRC, and a positive fecal immunochemical test (FIT). The area under the curve (AUC) for the questionnaire with a FIT was 84% (95% CI: 82%-86%), followed by 76% (95% CI: 74%-79%) for a FIT alone, and 73% (95% CI: 71%-76%) for the questionnaire alone. With 500 bootstrap replications, the estimated optimism (<0.005) shows good discrimination in validation of the prediction model. A risk prediction model for CRC based on a series of symptoms and signs related to enteric diseases in combination with a FIT was developed from the first round of screening. The results of the current study are useful for increasing the awareness of high-risk subjects and for individual-risk-guided invitations or strategies to achieve mass screening for CRC.
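
    The AUC used to validate such a model has a simple rank interpretation: it is the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case. A minimal sketch of that computation (the Mann-Whitney form; the scores and labels below are illustrative, not the study's Lasso pipeline):

```python
import numpy as np

def auc(scores, labels):
    """ROC area via the Mann-Whitney statistic (ties count as 1/2)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]   # one non-case outranks a case
print(round(float(auc(scores, labels)), 3))    # → 0.917
```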

  9. Multi-scale analysis of a household level agent-based model of landcover change.

    PubMed

    Evans, Tom P; Kelley, Hugh

    2004-08-01

    Scale issues have significant implications for the analysis of social and biophysical processes in complex systems. These same scale implications are likewise considerations for the design and application of models of landcover change. Scale issues have wide-ranging effects from the representativeness of data used to validate models to aggregation errors introduced in the model structure. This paper presents an analysis of how scale issues affect an agent-based model (ABM) of landcover change developed for a research area in the Midwest, USA. The research presented here explores how scale factors affect the design and application of agent-based landcover change models. The ABM is composed of a series of heterogeneous agents who make landuse decisions on a portfolio of cells in a raster-based programming environment. The model is calibrated using measures of fit derived from both spatial composition and spatial pattern metrics from multi-temporal landcover data interpreted from historical aerial photography. A model calibration process is used to find a best-fit set of parameter weights assigned to agents' preferences for different landuses (agriculture, pasture, timber production, and non-harvested forest). Previous research using this model has shown how a heterogeneous set of agents with differing preferences for a portfolio of landuses produces the best fit to landcover changes observed in the study area. The scale dependence of the model is explored by varying the resolution of the input data used to calibrate the model (observed landcover), ancillary datasets that affect land suitability (topography), and the resolution of the model landscape on which agents make decisions. To explore the impact of these scale relationships the model is run with input datasets constructed at the following spatial resolutions: 60, 90, 120, 150, 240, 300 and 480 m. The results show that the distribution of landuse-preference weights differs as a function of scale. In addition, with the gradient descent model fitting method used in this analysis the model was not able to converge to an acceptable fit at the 300 and 480 m spatial resolutions. This is a product of the ratio of the input cell resolution to the average parcel size in the landscape. This paper uses these findings to identify scale considerations in the design, development, validation and application of ABMs of landcover change.

  10. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher’s Geometric Model?

    PubMed Central

    Blanquart, François; Bataillon, Thomas

    2016-01-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher’s model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher’s model was able to explain several statistical properties of the landscapes—including the mean and SD of selection and epistasis coefficients—it was often unable to explain the full structure of fitness landscapes. PMID:27052568

  11. [Predicting Incidence of Hepatitis E in China Using Fuzzy Time Series Based on Fuzzy C-Means Clustering Analysis].

    PubMed

    Luo, Yi; Zhang, Tao; Li, Xiao-song

    2016-05-01

    This study explored the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those resulting from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a mean squared error (MSE) of fitting of 0.0011 and an MSE of forecasting of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 from the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.
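
    The clustering step such a model relies on can be sketched with a minimal fuzzy c-means implementation (illustrative, not the paper's code; the incidence values are made up): soft cluster memberships over the universe of incidence values replace the equal-width intervals of a traditional fuzzy time series model.

```python
import numpy as np

def fuzzy_cmeans(x, c, m=2.0, iters=100, seed=0):
    """Cluster 1-D data into c fuzzy clusters; returns (centers, memberships)."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    rng = np.random.default_rng(seed)
    centers = rng.choice(x.ravel(), size=c, replace=False).reshape(-1, 1)
    for _ in range(iters):
        d = np.abs(x - centers.T) + 1e-12           # (n, c) point-center distances
        u = d ** (-2.0 / (m - 1.0))                 # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
        centers = ((u ** m).T @ x) / (u ** m).T.sum(axis=1, keepdims=True)
    return centers.ravel(), u

# Illustrative monthly incidence values (three rough regimes):
incidence = [0.8, 0.9, 1.1, 2.9, 3.1, 3.0, 5.8, 6.1, 6.0]
centers, u = fuzzy_cmeans(incidence, c=3)
print(np.round(np.sort(centers), 2))
```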

  12. Forecasting plant phenology: evaluating the phenological models for Betula pendula and Padus racemosa spring phases, Latvia.

    PubMed

    Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta

    2015-02-01

    A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch (Betula pendula) and bird cherry (Padus racemosa), in Latvia. Model stability is estimated by performing multiple model fitting runs using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal development rate-temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in the case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by the DDcos model, is 5.6 °C for B. pendula leaf unfolding and 6.7 °C for the start of flowering; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
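
    A plain degree-day model, the simplest of the family compared above, can be sketched as follows (not the paper's DDcos or UniChill code; the required heat sum of 90 degree-days and the warming curve are illustrative assumptions, while 5.6 °C is the paper's fitted base temperature for B. pendula leaf unfolding):

```python
def predicted_day(daily_mean_temps, base=5.6, required=90.0):
    """Accumulate daily mean temperature above `base`; return the 1-based day
    on which the heat sum reaches `required`, or None if it never does."""
    heat = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        heat += max(0.0, temp - base)
        if heat >= required:
            return day
    return None

# Illustrative spring warming curve: day d has mean temperature 2 + 0.2*d (°C).
temps = [2 + 0.2 * d for d in range(1, 121)]
print(predicted_day(temps, base=5.6, required=90.0))   # → day 48
```

The collinearity the paper reports is visible here: raising `base` while lowering `required` can leave the predicted day almost unchanged, so the two parameters are hard to identify separately.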

  13. Using resource modelling to inform decision making and service planning: the case of colorectal cancer screening in Ireland.

    PubMed

    Sharp, Linda; Tilson, Lesley; Whyte, Sophie; Ceilleachair, Alan O; Walsh, Cathal; Usher, Cara; Tappenden, Paul; Chilcott, James; Staines, Anthony; Barry, Michael; Comber, Harry

    2013-03-19

    Organised colorectal cancer screening is likely to be cost-effective, but cost-effectiveness results alone may not help policy makers to make decisions about programme feasibility or service providers to plan programme delivery. For these purposes, estimates of the impact on the health services of actually introducing screening in the target population would be helpful. However, these types of analyses are rarely reported. As an illustration of such an approach, we estimated annual health service resource requirements and health outcomes over the first decade of a population-based colorectal cancer screening programme in Ireland. A Markov state-transition model of colorectal neoplasia natural history was used. Three core screening scenarios were considered: (a) flexible sigmoidoscopy (FSIG) once at age 60, (b) biennial guaiac-based faecal occult blood tests (gFOBT) at 55-74 years, and (c) biennial faecal immunochemical tests (FIT) at 55-74 years. Three alternative FIT roll-out scenarios were also investigated relating to age-restricted screening (55-64 years) and staggered age-based roll-out across the 55-74 age group. Parameter estimates were derived from literature review, existing screening programmes, and expert opinion. Results were expressed in relation to the 2008 population (4.4 million people, of whom 700,800 were aged 55-74). FIT-based screening would deliver the greatest health benefits, averting 164 colorectal cancer cases and 272 deaths in year 10 of the programme. Capacity would be required for 11,095-14,820 diagnostic and surveillance colonoscopies annually, compared to 381-1,053 with FSIG-based, and 967-1,300 with gFOBT-based, screening. With FIT, in year 10, these colonoscopies would result in 62 hospital admissions for abdominal bleeding, 27 bowel perforations and one death. Resource requirements for pathology, diagnostic radiology, radiotherapy and colorectal resection were highest for FIT. Estimates depended on screening uptake. Alternative FIT roll-out scenarios had lower resource requirements. While FIT-based screening would quite quickly generate attractive health outcomes, it has heavy resource requirements. These could impact on the feasibility of a programme based on this screening modality. Staggered age-based roll-out would allow time to increase endoscopy capacity to meet programme requirements. Resource modelling of this type complements conventional cost-effectiveness analyses and can help inform policy making and service planning.

  14. Factor Structure of the Hare Psychopathy Checklist: Youth Version (PCL: YV) in Adolescent Females

    ERIC Educational Resources Information Center

    Kosson, David S.; Neumann, Craig S.; Forth, Adelle E.; Salekin, Randall T.; Hare, Robert D.; Krischer, Maya K.; Sevecke, Kathrin

    2013-01-01

    Despite substantial evidence for the fit of the 3- and 4-factor models of Psychopathy Checklist-based ratings of psychopathy in adult males and adolescents, evidence is less consistent in adolescent females. However, prior studies used samples much smaller than recommended for examining model fit. To address this issue, we conducted a confirmatory…

  15. Validation of a Cognitive Diagnostic Model across Multiple Forms of a Reading Comprehension Assessment

    ERIC Educational Resources Information Center

    Clark, Amy K.

    2013-01-01

    The present study sought to fit a cognitive diagnostic model (CDM) across multiple forms of a passage-based reading comprehension assessment using the attribute hierarchy method. Previous research on CDMs for reading comprehension assessments served as a basis for the attributes in the hierarchy. The two attribute hierarchies were fit to data from…

  16. Introduction to Architectures: HSCB Information - What It Is and How It Fits (or Doesn’t Fit)

    DTIC Science & Technology

    2010-10-01

    Simulation Interoperability Workshop, 01E-SIW-080 [15] Barry G. Silverman, Gnana Bharathy, Kevin O'Brien, Jason Cornwell, "Human Behavior Models for Agents…Workshop, 10F-SIW-023, September 2010. [17] Christiansen, John H., "A flexible object-based software framework for modelling complex systems with

  17. Gonioreflectometric properties of metal surfaces

    NASA Astrophysics Data System (ADS)

    Jaanson, P.; Manoocheri, F.; Mäntynen, H.; Gergely, M.; Widlowski, J.-L.; Ikonen, E.

    2014-12-01

    Angularly resolved measurements of scattered light from surfaces can provide useful information in various fields of research and industry, such as computer graphics, satellite based Earth observation etc. In practice, empirical or physics-based models are needed to interpolate the measurement results, because a thorough characterization of the surfaces under all relevant conditions may not be feasible. In this work, plain and anodized metal samples were prepared and measured optically for bidirectional reflectance distribution function (BRDF) and mechanically for surface roughness. Two models for BRDF (Torrance-Sparrow model and a polarimetric BRDF model) were fitted to the measured values. A better fit was obtained for plain metal surfaces than for anodized surfaces.

  18. Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2017-12-01

    Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68

  19. Percolation on fitness landscapes: effects of correlation, phenotype, and incompatibilities

    PubMed Central

    Gravner, Janko; Pitman, Damien; Gavrilets, Sergey

    2009-01-01

    We study how correlations in the random fitness assignment may affect the structure of fitness landscapes, in three classes of fitness models. The first is a phenotype space in which individuals are characterized by a large number n of continuously varying traits. In a simple model of random fitness assignment, viable phenotypes are likely to form a giant connected cluster percolating throughout the phenotype space provided the viability probability is larger than 1/2^n. The second model explicitly describes genotype-to-phenotype and phenotype-to-fitness maps, allows for neutrality at both phenotype and fitness levels, and results in a fitness landscape with tunable correlation length. Here, phenotypic neutrality and correlation between fitnesses can reduce the percolation threshold, and correlations at the point of phase transition between local and global correlations are most conducive to the formation of the giant cluster. In the third class of models, particular combinations of alleles or values of phenotypic characters are “incompatible” in the sense that the resulting genotypes or phenotypes have zero fitness. This setting can be viewed as a generalization of the canonical Bateson-Dobzhansky-Muller model of speciation and is related to K-SAT problems, prominent in computer science. We analyze the conditions for the existence of viable genotypes, their number, as well as the structure and the number of connected clusters of viable genotypes. We show that analysis based on expected values can easily lead to wrong conclusions, especially when fitness correlations are strong. We focus on pairwise incompatibilities between diallelic loci, but we also address multiple alleles, complex incompatibilities, and continuous phenotype spaces. In the case of diallelic loci, the number of clusters is stochastically bounded and each cluster contains a very large sub-cube.
Finally, we demonstrate that the discrete NK model shares some signature properties of models with high correlations. PMID:17692873

  20. A Division-Dependent Compartmental Model for Computing Cell Numbers in CFSE-based Lymphocyte Proliferation Assays

    DTIC Science & Technology

    2012-02-12

    is the total number of data points, is an approximately unbiased estimate of the "expected relative Kullback-Leibler distance" (information loss…possible models). Thus, after each model from Table 2 is fit to a data set, we can compute the Akaike weights for the set of candidate models and use…computed from the OLS best-fit model solution (top), from a deconvolution of the data using normal curves (middle) and from a deconvolution of the data

  1. Dual learning processes underlying human decision-making in reversal learning tasks: functional significance and evidence from the model fit to human behavior

    PubMed Central

    Bai, Yu; Katahira, Kentaro; Ohira, Hideki

    2014-01-01

    Humans are capable of correcting their actions based on actions performed in the past, and this ability enables them to adapt to a changing environment. The computational field of reinforcement learning (RL) has provided a powerful explanation for understanding such processes. Recently, the dual learning system, modeled as a hybrid model that incorporates value update based on reward-prediction error and learning rate modulation based on the surprise signal, has gained attention as a model for explaining various neural signals. However, the functional significance of the hybrid model has not been established. In the present study, we used computer simulation to address the functional significance of the hybrid model in a probabilistic reversal learning task. The hybrid model was found to perform better than the standard RL model over a wide range of parameter settings. These results suggest that the hybrid model is more robust against the mistuning of parameters compared with the standard RL model when decision-makers must continue to learn stimulus-reward contingencies that can change abruptly. The parameter fitting results also indicated that the hybrid model fit better than the standard RL model for more than 50% of the participants, which suggests that the hybrid model has more explanatory power for the behavioral data than the standard RL model. PMID:25161635
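The hybrid update rule described, a reward-prediction-error value update with a surprise-modulated learning rate, can be sketched as follows; the associability recursion and all parameter names are illustrative assumptions in the spirit of Pearce-Hall, not the authors' exact specification:

```python
def hybrid_rl(rewards, q0=0.5, alpha0=0.3, gamma_assoc=0.7):
    """Sketch of a hybrid learner: value updated by reward-prediction error,
    with the learning rate scaled by a running "surprise" (associability) term.
    All parameter values here are arbitrary choices for illustration."""
    q, assoc = q0, 1.0
    for r in rewards:
        delta = r - q                       # reward-prediction error
        q += alpha0 * assoc * delta         # surprise-modulated value update
        # associability decays toward the recent magnitude of surprise
        assoc = gamma_assoc * assoc + (1 - gamma_assoc) * abs(delta)
    return q
```

When contingencies reverse, |delta| jumps, associability rises, and the effective learning rate increases; once the environment is stable again, associability shrinks and the value estimate stops chasing noise.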

  2. Ultra wideband (0.5-16 kHz) MR elastography for robust shear viscoelasticity model identification.

    PubMed

    Liu, Yifei; Yasar, Temel K; Royston, Thomas J

    2014-12-21

    Changes in the viscoelastic parameters of soft biological tissues often correlate with progression of disease, trauma or injury, and response to treatment. Identifying the most appropriate viscoelastic model, then estimating and monitoring the corresponding parameters of that model, can improve insight into the underlying tissue structural changes. MR Elastography (MRE) provides a quantitative method of measuring tissue viscoelasticity. In a previous study by the authors (Yasar et al 2013 Magn. Reson. Med. 70 479-89), a silicone-based phantom material was examined over the frequency range of 200 Hz-7.75 kHz using MRE, an unprecedented bandwidth at that time. Six viscoelastic models, four integer-order and two fractional-order, were fit to the wideband viscoelastic data (measured storage and loss moduli as a function of frequency). The 'fractional Voigt' model (spring and springpot in parallel) exhibited the best fit and was even able to fit the entire frequency band well when it was identified based only on a small portion of the band. This paper is an extension of that study with a wider frequency range from 500 Hz to 16 kHz. Furthermore, more fractional-order viscoelastic models are added to the comparison pool. It is found that the added complexity of these models provides only marginal improvement over the 'fractional Voigt' model. Again, the fractional-order models show significant improvement over integer-order viscoelastic models that have as many or more fitting parameters.
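As an illustration of the fractional Voigt element (a spring of stiffness mu in parallel with a springpot), the sketch below fits its complex shear modulus G*(w) = mu + eta*(i*w)^alpha to synthetic wideband data; the parameter values and the SciPy-based fitting are assumptions for demonstration, not the authors' pipeline:

```python
import numpy as np
from scipy.optimize import least_squares

# Fractional Voigt element: spring (mu) in parallel with a springpot (eta, alpha).
# Complex shear modulus G*(w) = mu + eta*(i*w)**alpha, i.e.
#   storage modulus G'(w)  = mu + eta * w**alpha * cos(alpha*pi/2)
#   loss modulus    G''(w) =      eta * w**alpha * sin(alpha*pi/2)
def frac_voigt(w, mu, eta, alpha):
    gp = mu + eta * w**alpha * np.cos(alpha * np.pi / 2)
    gpp = eta * w**alpha * np.sin(alpha * np.pi / 2)
    return gp, gpp

# Synthetic wideband "measurements" spanning 500 Hz - 16 kHz.
w = 2 * np.pi * np.logspace(np.log10(500), np.log10(16e3), 40)
gp_obs, gpp_obs = frac_voigt(w, 2e3, 5.0, 0.6)  # arbitrary true parameters

def residuals(p):
    gp, gpp = frac_voigt(w, *p)
    # fit storage and loss moduli jointly, on a relative scale
    return np.concatenate([(gp - gp_obs) / gp_obs, (gpp - gpp_obs) / gpp_obs])

fit = least_squares(residuals, x0=[1e3, 1.0, 0.5],
                    bounds=([0, 0, 0], [np.inf, np.inf, 1]))
mu_hat, eta_hat, alpha_hat = fit.x
```

Because the loss modulus fixes eta and alpha on a log-log scale and the storage modulus then fixes mu, only three parameters are needed to cover the whole band, which is what makes the model competitive with more complex alternatives.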

  3. Suicide risk factors for young adults: testing a model across ethnicities.

    PubMed

    Gutierrez, P M; Rodriguez, P J; Garcia, P

    2001-06-01

    A general path model based on existing suicide risk research was developed to test factors contributing to current suicidal ideation in young adults. A sample of 673 undergraduate students completed a packet of questionnaires containing the Beck Depression Inventory, Adult Suicidal Ideation Questionnaire, and Multi-Attitude Suicide Tendency Scale. They also provided information on history of suicidality and exposure to attempted and completed suicide in others. Structural equation modeling was used to test the fit of the data to the hypothesized model. Goodness-of-fit indices were adequate and supported the interactive effects of exposure, repulsion by life, depression, and history of self-harm on current ideation. Model fit for three subgroups based on race/ethnicity (i.e., White, Black, and Hispanic) determined that repulsion by life and depression function differently across groups. Implications of these findings for current methods of suicide risk assessment and future research are discussed in the context of the importance of culture.

  4. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  5. Random-growth urban model with geographical fitness

    NASA Astrophysics Data System (ADS)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
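The growth mechanism described, random city creation plus fitness-weighted preferential attachment, can be sketched in a few lines; the creation probability `p_new` and the uniform fitness distribution are illustrative assumptions, not the paper's calibrated values:

```python
import random

def simulate_cities(steps, p_new=0.05, seed=1):
    """Toy preferential-attachment growth with multiplicative fitness.
    Each step adds one unit of population: with probability p_new a new
    city of size 1 is created; otherwise an existing city i receives the
    unit with probability proportional to fitness_i * size_i."""
    rng = random.Random(seed)
    fitness = [rng.uniform(0.5, 1.5)]  # geographical fitness of each city
    size = [1]
    for _ in range(steps):
        if rng.random() < p_new:
            fitness.append(rng.uniform(0.5, 1.5))
            size.append(1)
        else:
            weights = [f * s for f, s in zip(fitness, size)]
            i = rng.choices(range(len(size)), weights=weights)[0]
            size[i] += 1
    return size

sizes = simulate_cities(5000)
```

Because randomness enters only at city creation and site choice, a fixed seed yields a reproducible population trajectory, matching the paper's point about smooth growth paths compared with Gibrat-style models.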

  6. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    PubMed

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
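The flavor of ABC model fitting can be shown with the simplest rejection variant (the paper's scheme is adaptive, with kernel-density summaries; this sketch shows only the core idea, and the function names and toy exponential model are assumptions):

```python
import random
import statistics

def abc_rejection(data, prior_sample, simulate, eps, n_draws, seed=0):
    """Minimal rejection ABC: keep parameter draws whose simulated summary
    statistic (here the mean) lies within eps of the observed summary."""
    rng = random.Random(seed)
    data_mean = statistics.mean(data)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        sim = simulate(theta, rng, len(data))
        if abs(statistics.mean(sim) - data_mean) < eps:
            accepted.append(theta)
    return accepted

# Toy example: infer an exponential rate from inter-event times near 1.0.
data = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.0]
post = abc_rejection(
    data,
    prior_sample=lambda rng: rng.uniform(0.1, 5.0),  # uniform prior on the rate
    simulate=lambda lam, rng, n: [rng.expovariate(lam) for _ in range(n)],
    eps=0.1,
    n_draws=20000,
)
```

The accepted draws approximate the posterior; an adaptive scheme like the one in the paper tightens `eps` over rounds so the user does not have to tune it by hand.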

  7. Can a one-layer optical skin model including melanin and inhomogeneously distributed blood explain spatially resolved diffuse reflectance spectra?

    NASA Astrophysics Data System (ADS)

    Karlsson, Hanna; Pettersson, Anders; Larsson, Marcus; Strömberg, Tomas

    2011-02-01

    Model-based analysis of calibrated diffuse reflectance spectroscopy can be used for determining oxygenation and concentration of skin chromophores. This study aimed at assessing the effect of including melanin in addition to hemoglobin (Hb) as chromophores, and of compensating for inhomogeneously distributed blood (vessel packaging), in a single-layer skin model. Spectra from four humans were collected during different provocations using a two-channel fiber optic probe with source-detector separations of 0.4 and 1.2 mm. Absolute calibrated spectra using data from either a single distance or both distances were analyzed using inverse Monte Carlo for light transport and Levenberg-Marquardt for non-linear fitting. The model fitting was excellent using a single distance. However, the estimated model failed to explain spectra from the other distance. The two-distance model did not fit the data well at either distance. Model fitting was significantly improved by including melanin and vessel packaging. The most prominent effect when fitting data from the larger separation compared to the smaller separation was a different light scattering decay with wavelength, while the tissue fraction of Hb and saturation were similar. For modeling spectra at both distances, we propose using either a multi-layer skin model or a more advanced model for the scattering phase function.

  8. History, Epidemic Evolution, and Model Burn-In for a Network of Annual Invasion: Soybean Rust.

    PubMed

    Sanatkar, M R; Scoglio, C; Natarajan, B; Isard, S A; Garrett, K A

    2015-07-01

    Ecological history may be an important driver of epidemics and disease emergence. We evaluated the role of history and two related concepts, the evolution of epidemics and the burn-in period required for fitting a model to epidemic observations, for the U.S. soybean rust epidemic (caused by Phakopsora pachyrhizi). This disease allows evaluation of replicate epidemics because the pathogen reinvades the United States each year. We used a new maximum likelihood estimation approach for fitting the network model based on observed U.S. epidemics. We evaluated the model burn-in period by comparing model fit based on each combination of other years of observation. When the miss error rates were weighted by 0.9 and false alarm error rates by 0.1, the mean error rate did decline, for most years, as more years were used to construct models. Models based on observations in years closer in time to the season being estimated gave lower miss error rates for later epidemic years. The weighted mean error rate was lower in backcasting than in forecasting, reflecting how the epidemic had evolved. Ongoing epidemic evolution, and potential model failure, can occur because of changes in climate, host resistance and spatial patterns, or pathogen evolution.
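The weighted mean error rate used above is a simple convex combination of the two error types; a one-line sketch (weights taken from the text, function name assumed):

```python
def weighted_error(miss_rate, false_alarm_rate, w_miss=0.9, w_fa=0.1):
    """Weighted mean error rate as described: misses weighted 0.9,
    false alarms 0.1, reflecting that a missed epidemic is costlier
    than a false alarm."""
    return w_miss * miss_rate + w_fa * false_alarm_rate
```

For example, a model that misses 20% of invaded counties but false-alarms on 50% of clean ones scores 0.9*0.2 + 0.1*0.5 = 0.23.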

  9. Health-Related Fitness Knowledge Development through Project-Based Learning

    ERIC Educational Resources Information Center

    Hastie, Peter A.; Chen, Senlin; Guarino, Anthony J.

    2017-01-01

    Purpose: The purpose of this study was to examine the process and outcome of an intervention using the project-based learning (PBL) model to increase students' health-related fitness (HRF) knowledge. Method: The participants were 185 fifth-grade students from three schools in Alabama (PBL group: n = 109; control group: n = 76). HRF knowledge was…

  10. Regression-Based Norms for a Bi-factor Model for Scoring the Brief Test of Adult Cognition by Telephone (BTACT).

    PubMed

    Gurnani, Ashita S; John, Samantha E; Gavett, Brandon E

    2015-05-01

    The current study developed regression-based normative adjustments for a bi-factor model of the Brief Test of Adult Cognition by Telephone (BTACT). Archival data from the Midlife Development in the United States-II Cognitive Project were used to develop eight separate linear regression models that predicted bi-factor BTACT scores, accounting for age, education, gender, and occupation, alone and in various combinations. All regression models provided statistically significant fit to the data. A three-predictor regression model fit best and accounted for 32.8% of the variance in the global bi-factor BTACT score. The fit of the regression models was not improved by gender. Eight different regression models are presented to allow the user flexibility in applying demographic corrections to the bi-factor BTACT scores. Occupation corrections, while not widely used, may provide useful demographic adjustments for adult populations or for those individuals who have attained an occupational status not commensurate with expected educational attainment. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
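Regression-based norming of this kind turns a raw score into a standardized deviation from the demographically predicted score. A minimal sketch, with entirely hypothetical coefficients (not the published BTACT norms):

```python
def adjusted_z(raw, age, edu, occ,
               coef=(1.20, -0.015, 0.060, 0.040), resid_sd=0.8):
    """Demographically adjusted z-score: (observed - predicted) / residual SD.
    Coefficients and residual SD here are invented for illustration; a real
    application would use the published regression estimates."""
    b0, b_age, b_edu, b_occ = coef
    predicted = b0 + b_age * age + b_edu * edu + b_occ * occ
    return (raw - predicted) / resid_sd
```

A 50-year-old with 16 years of education and occupation code 3 who scores exactly at the predicted value gets z = 0; scores above or below prediction map to positive or negative z, which is what makes the norms comparable across demographic groups.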

  11. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  12. Model selection for the North American Breeding Bird Survey: A comparison of methods

    USGS Publications Warehouse

    Link, William; Sauer, John; Niven, Daniel

    2017-01-01

    The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
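WAIC itself is straightforward to compute from a matrix of pointwise log-likelihoods over posterior draws. A minimal sketch of the standard estimator (log pointwise predictive density minus a pointwise-variance penalty, reported on the deviance scale); this is the generic formula, not the BBS analysis code:

```python
import math

def waic(loglik):
    """WAIC from an S x N matrix of pointwise log-likelihoods
    (S posterior draws, N data points), on the deviance scale:
    WAIC = -2 * (lppd - p_waic)."""
    S, N = len(loglik), len(loglik[0])
    lppd, p_waic = 0.0, 0.0
    for i in range(N):
        col = [loglik[s][i] for s in range(S)]
        m = max(col)
        # log of the posterior-mean likelihood, computed stably (log-sum-exp)
        lppd += m + math.log(sum(math.exp(c - m) for c in col) / S)
        mean = sum(col) / S
        # penalty: sample variance of the pointwise log-likelihood
        p_waic += sum((c - mean) ** 2 for c in col) / (S - 1)
    return -2 * (lppd - p_waic)
```

When all posterior draws agree exactly, the penalty vanishes and WAIC reduces to the plug-in deviance, which gives a quick sanity check on an implementation.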

  13. Optimum extrusion-cooking conditions for improving physical properties of fish-cereal based snacks by response surface methodology.

    PubMed

    Singh, R K Ratankumar; Majumdar, Ranendra K; Venkateshwarlu, G

    2014-09-01

    To establish the effect of barrel temperature, screw speed, total moisture and fish flour content on the expansion ratio and bulk density of the fish-based extrudates, response surface methodology was adopted in this study. The experiments were designed using a five-level, four-factor central composite design. Analysis of variance was carried out to study the main and interaction effects of the factors, and regression analysis was carried out to explain the variability. A second-order model in the coded variables was fitted for each response. The response surface plots were developed as a function of two independent variables while keeping the other two independent variables at their optimal values. Based on the ANOVA, the fitted model was confirmed as adequate for both dependent variables. The highest organoleptic score was obtained with the combination of temperature 110 °C, screw speed 480 rpm, moisture 18% and fish flour 20%.
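The second-order response-surface model mentioned above is an ordinary least-squares fit of a full quadratic in the coded variables. A sketch with two synthetic factors (the data and coefficients are invented for illustration, not the study's measurements):

```python
import numpy as np

# Quadratic response-surface model in two coded variables:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
def fit_quadratic(x1, x2, y):
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
x1 = rng.uniform(-2, 2, 50)           # coded factor levels
x2 = rng.uniform(-2, 2, 50)
true_beta = np.array([1.0, 0.5, -0.3, 0.2, 0.1, -0.4])
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
y = X @ true_beta                     # noiseless synthetic response
beta_hat = fit_quadratic(x1, x2, y)
```

With four factors the design matrix simply gains the extra linear, squared and cross-product columns; the central composite design exists precisely to make those columns estimable with few runs.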

  14. Development of a modified independent parallel reactions kinetic model and comparison with the distributed activation energy model for the pyrolysis of a wide variety of biomass fuels.

    PubMed

    Sfakiotakis, Stelios; Vamvuka, Despina

    2015-12-01

    The pyrolysis of six waste biomass samples was studied and the fuels were kinetically evaluated. A modified independent parallel reactions scheme (IPR) and a distributed activation energy model (DAEM) were developed, and their validity was assessed and compared by checking their accuracy in fitting the experimental results as well as their prediction capability under different experimental conditions. The pyrolysis experiments were carried out in a thermogravimetric analyzer, and a fitting procedure based on least squares minimization was performed simultaneously across different experimental conditions. A modification of the IPR model, in which the pre-exponential factor depends on heating rate, was shown to give better fit results for the same number of tuned kinetic parameters than the standard IPR model, and very good prediction results for stepwise experiments. The fit of the developed DAEM model to the experimental data was also very good. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    PubMed

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.

  16. Enabling Accessibility Through Model-Based User Interface Development.

    PubMed

    Ziegler, Daniel; Peissner, Matthias

    2017-01-01

    Adaptive user interfaces (AUIs) can increase the accessibility of interactive systems. They provide personalized display and interaction modes to fit individual user needs. Most AUI approaches rely on model-based development, which is considered relatively demanding. This paper explores strategies to make model-based development more attractive for mainstream developers.

  17. A nonlinear model of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country rich in natural resources, one of which is gold, which has become an important national commodity. This study was conducted to determine a model that fits gold production in Malaysia over the years 1995-2010 well. Five nonlinear models are considered: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit cumulative gold production in Malaysia, and the best model is then selected based on performance, measured by sum of squared errors, root mean squared error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. The Weibull model was found to significantly outperform the other models. To confirm this, the latest data were fitted to the model; once again, the Weibull model gave the lowest values on all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, which could be an important finding for Malaysia in planning its economic activities.
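Fitting a Weibull growth curve to cumulative production reduces to nonlinear least squares. A sketch on synthetic data (the functional form is the standard Weibull growth model; the coefficients are invented, not the paper's estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull growth curve for cumulative production:
# y(t) = a * (1 - exp(-(t/b)**c)), with asymptote a, scale b, shape c.
def weibull(t, a, b, c):
    return a * (1.0 - np.exp(-(t / b) ** c))

t = np.arange(1, 17)                    # years 1..16 (1995-2010)
y = weibull(t, 120.0, 6.0, 1.8)         # synthetic cumulative production
params, _ = curve_fit(weibull, t, y, p0=[100.0, 5.0, 1.5])
a_hat, b_hat, c_hat = params
rmse = float(np.sqrt(np.mean((weibull(t, *params) - y) ** 2)))
```

Comparing candidate models is then just a matter of swapping the model function (Logistic, Gompertz, Richards, ...) and comparing the resulting error measures, as the study does.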

  18. Hierarchical animal movement models for population-level inference

    USGS Publications Warehouse

    Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivan, Jacob S.

    2016-01-01

    New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.

  19. The AKARI IRC asteroid flux catalogue: updated diameters and albedos

    NASA Astrophysics Data System (ADS)

    Alí-Lagoa, V.; Müller, T. G.; Usui, F.; Hasegawa, S.

    2018-05-01

    The AKARI IRC all-sky survey provided more than twenty thousand thermal infrared observations of over five thousand asteroids. Diameters and albedos were obtained by fitting an empirically calibrated version of the standard thermal model to these data. After the publication of the flux catalogue in October 2016, our aim here is to present the AKARI IRC all-sky survey data and discuss valuable scientific applications in the field of small body physical properties studies. As an example, we update the catalogue of asteroid diameters and albedos based on AKARI using the near-Earth asteroid thermal model (NEATM). We fit the NEATM to derive asteroid diameters and, whenever possible, infrared beaming parameters. We fit groups of observations taken for the same object at different epochs of the survey separately, so we compute more than one diameter for approximately half of the catalogue. We obtained a total of 8097 diameters and albedos for 5170 asteroids, and we fitted the beaming parameter for almost two thousand of them. When it was not possible to fit the beaming parameter, we used a straight line fit to our sample's beaming parameter-versus-phase angle plot to set the default value for each fit individually instead of using a single average value. Our diameters agree with stellar-occultation-based diameters well within the accuracy expected for the model. They also match the previous AKARI-based catalogue at phase angles lower than 50°, but we find a systematic deviation at higher phase angles, at which near-Earth and Mars-crossing asteroids were observed. The AKARI IRC All-sky survey is an essential source of information about asteroids, especially the large ones, since it provides observations at different observation geometries, rotational coverages and aspect angles.
For example, by comparing in more detail a few asteroids for which dimensions were derived from occultations, we discuss how the multiple observations per object may already provide three-dimensional information about elongated objects even based on an idealised model like the NEATM. Finally, we enumerate additional expected applications for more complex models, especially in combination with other catalogues. Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A85
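    The default-beaming-parameter scheme described above can be sketched in a few lines: fit a straight line to the sample's beaming-parameter-versus-phase-angle points, then evaluate the line at each observation's phase angle. The data points and resulting coefficients below are invented for illustration, not AKARI values.

```python
import numpy as np

# Hypothetical sketch: fit a straight line to (phase angle, beaming parameter)
# pairs from objects where the beaming parameter could be derived, then use the
# line to assign a phase-angle-dependent default. Data are invented.
phase_deg = np.array([5.0, 10.0, 20.0, 35.0, 50.0, 65.0])
eta = np.array([0.95, 0.98, 1.05, 1.15, 1.26, 1.36])  # fitted beaming parameters

slope, intercept = np.polyfit(phase_deg, eta, 1)

def default_eta(alpha_deg):
    """Default beaming parameter for an observation at phase angle alpha_deg."""
    return slope * alpha_deg + intercept
```

A per-observation default like this preserves the observed trend of beaming parameter with phase angle, unlike a single sample-wide average.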

  20. The disconnected values model improves mental well-being and fitness in an employee wellness program.

    PubMed

    Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo

    2010-01-01

    This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.

  1. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    PubMed

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the performance of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model in predicting schistosomiasis infection rates in a population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and the NARNN model, the ARIMA-NARNN model had the lowest mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE), with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates in a population, which might have great application value for the prevention and control of schistosomiasis.
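    The three error metrics used to compare the models are standard; a minimal sketch (with placeholder series, not the Jiangsu data):

```python
import numpy as np

# The three comparison metrics from the abstract: mean square error, mean
# absolute error, and mean absolute percentage error. y_true/y_pred below are
# placeholder values for illustration.
def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

y_true = [2.0, 4.0, 5.0]
y_pred = [2.5, 3.5, 5.0]
```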

  2. The good, the bad and the dubious: VHELIBS, a validation helper for ligands and binding sites

    PubMed Central

    2013-01-01

    Background Many Protein Data Bank (PDB) users assume that the deposited structural models are of high quality but forget that these models are derived from the interpretation of experimental data. The accuracy of atom coordinates is not homogeneous between models or throughout the same model. To avoid basing a research project on a flawed model, we present a tool for assessing the quality of ligands and binding sites in crystallographic models from the PDB. Results The Validation HElper for LIgands and Binding Sites (VHELIBS) is software that aims to ease the validation of binding site and ligand coordinates for non-crystallographers (i.e., users with little or no crystallography knowledge). Using a convenient graphical user interface, it allows one to check how ligand and binding site coordinates fit to the electron density map. VHELIBS can use models from either the PDB or the PDB_REDO databank of re-refined and re-built crystallographic models. The user can specify threshold values for a series of properties related to the fit of coordinates to electron density (Real Space R, Real Space Correlation Coefficient and average occupancy are used by default). VHELIBS will automatically classify residues and ligands as Good, Dubious or Bad based on the specified limits. The user is also able to visually check the quality of the fit of residues and ligands to the electron density map and reclassify them if needed. Conclusions VHELIBS allows inexperienced users to examine the binding site and the ligand coordinates in relation to the experimental data. This is an important step to evaluate models for their fitness for drug discovery purposes such as structure-based pharmacophore development and protein-ligand docking experiments. PMID:23895374
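    The Good/Dubious/Bad classification can be sketched as a pair of threshold sets. The property names follow the defaults named above (Real Space R, Real Space Correlation Coefficient, average occupancy), but the numeric cut-offs here are assumptions for illustration, not VHELIBS defaults:

```python
# Sketch of threshold-based classification in the spirit of VHELIBS.
# Threshold values are invented; VHELIBS lets the user specify their own.
GOOD = {"rsr_max": 0.2, "rscc_min": 0.9, "occ_min": 1.0}
BAD = {"rsr_max": 0.4, "rscc_min": 0.8, "occ_min": 0.5}

def classify(rsr, rscc, occupancy):
    """Classify a residue or ligand from its fit-to-density properties."""
    if rsr <= GOOD["rsr_max"] and rscc >= GOOD["rscc_min"] and occupancy >= GOOD["occ_min"]:
        return "Good"
    if rsr > BAD["rsr_max"] or rscc < BAD["rscc_min"] or occupancy < BAD["occ_min"]:
        return "Bad"
    return "Dubious"
```

Anything that fails the Good thresholds but does not violate the Bad ones lands in the Dubious band, which is exactly where visual inspection and manual reclassification matter most.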

  3. OMFIT Tokamak Profile Data Fitting and Physics Analysis

    DOE PAGES

    Logan, N. C.; Grierson, B. A.; Haskey, S. R.; ...

    2018-01-22

    Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and simulations necessary for physics understanding.
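    The 2-D fitting idea (smooth in both radius and time, so the fitted evolution is jitter-free) can be sketched with SciPy's radial basis function interpolator. The synthetic profile below stands in for mapped tokamak measurements; this is not OMFITprofiles code:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch of 2-D profile fitting in (normalized radius, time) with radial basis
# functions. The "measurements" are a smooth invented profile, not real data.
rng = np.random.default_rng(0)
rho = rng.uniform(0.0, 1.0, 100)          # normalized radius of each sample
t = rng.uniform(0.0, 0.1, 100)            # time of each sample (s)
temp = 1.0 - rho**2 + 0.5 * t / 0.1       # underlying profile value (keV)

fit = RBFInterpolator(np.column_stack([rho, t]), temp, smoothing=0.0)

# Evaluate on a regular (rho, t) grid to get a regular time evolution.
grid_rho, grid_t = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 0.1, 5))
pts = np.column_stack([grid_rho.ravel(), grid_t.ravel()])
fitted = fit(pts).reshape(grid_rho.shape)
```

In practice a nonzero `smoothing` (or a spline with chosen knots) would regularize noisy channels rather than interpolating them exactly.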

  4. OMFIT Tokamak Profile Data Fitting and Physics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, N. C.; Grierson, B. A.; Haskey, S. R.

    Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and simulations necessary for physics understanding.

  5. Perceived sports competence mediates the relationship between childhood motor skill proficiency and adolescent physical activity and fitness: a longitudinal assessment

    PubMed Central

    Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R

    2008-01-01

    Background The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. Methods In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Results Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Conclusion Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth. PMID:18687148

  6. Trial-dependent psychometric functions accounting for perceptual learning in 2-AFC discrimination tasks.

    PubMed

    Kattner, Florian; Cochrane, Aaron; Green, C Shawn

    2017-09-01

    The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between basic theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous-time parametric model to data from two different perceptual learning tasks. In nearly every case, the fits derived from the continuous-time parametric model were of higher quality than those derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
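    A minimal sketch of a trial-dependent psychometric function: a 2-AFC logistic whose threshold decays exponentially with trial number, fitted here to noiseless synthetic proportions. The parameterization (exponential learning curve; names `th0`, `th_inf`, `tau`, `sigma`) is an illustrative assumption, not the authors' exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

# 2-AFC psychometric function whose threshold changes continuously with trial
# number, so nonstationarity (learning) is a fitted parameter (tau).
def p_correct(xt, th0, th_inf, tau, sigma):
    x, trial = xt
    threshold = th_inf + (th0 - th_inf) * np.exp(-trial / tau)
    return 0.5 + 0.5 / (1.0 + np.exp(-(x - threshold) / sigma))

x = np.tile(np.linspace(-3, 3, 13), 40)     # stimulus levels, repeated per trial
trial = np.repeat(np.arange(40.0), 13)      # trial index of each observation
true = (2.0, 0.5, 10.0, 0.8)                # th0, th_inf, tau, sigma
y = p_correct((x, trial), *true)            # noiseless synthetic data

popt, _ = curve_fit(p_correct, (x, trial), y, p0=(1.5, 0.3, 8.0, 1.0))
```

With real (binomial) response data one would maximize the likelihood rather than least squares, but the trial-continuous parameterization is the point being illustrated.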

  7. Starting participation in an employee fitness program: attitudes, social influence, and self-efficacy.

    PubMed

    Lechner, L; De Vries, H

    1995-11-01

    This article presents a study of the determinants of starting participation in an employee fitness program. Information from 488 employees, recruited from two worksites, was obtained. From these employees the determinants of participation were studied. A questionnaire based on two theoretical models was used. The Stages of Change model was used to measure the health behavior, consisting of precontemplation (no intention to participate), contemplation (considering participation), preparation (intending to participate within a short period), and action (participating in fitness). The possible determinants were measured according to the ASE model, including the attitude toward an employee fitness program, social influence, and self-efficacy expectations. Subjects in action stage were most convinced of the benefits of participation in the employee fitness program and of their own skills to participate in a fitness program. Subjects in precontemplation stage were least convinced of the advantages of participation and had the lowest self-efficacy scores. Subjects in action stage experienced the most social support to participate in the employee fitness program. Health education for employees within industrial fitness programs can be tailored toward their motivational stage. Promotional activities for industrial fitness programs should concentrate on persons in the precontemplation and contemplation stages, since people in these stages are insufficiently convinced of the advantages of a fitness program and expect many problems with regard to their ability to participate in the program.

  8. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    PubMed

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
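    The "two-stage" route discussed above is simply inverse-variance pooling of study-level estimates; a fixed-effect sketch with invented study results:

```python
import numpy as np

# Stage two of a two-stage meta-analysis: pool per-study estimates and standard
# errors by inverse-variance weighting (fixed-effect model). Values invented.
def pool_fixed(estimates, std_errors):
    est = np.asarray(estimates, float)
    w = 1.0 / np.asarray(std_errors, float) ** 2   # inverse-variance weights
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

pooled, se = pool_fixed([0.30, 0.10, 0.25], [0.10, 0.20, 0.10])
```

Under the same modelling assumptions, a one-stage model fitted to all the individual patient data gives approximately the same precision, which is the abstract's central point.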

  9. The role of social capital and community belongingness for exercise adherence: An exploratory study of the CrossFit gym model.

    PubMed

    Whiteman-Sandland, Jessica; Hawkins, Jemma; Clayton, Debbie

    2016-08-01

    This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.

  10. Double Trouble at High Density: Cross-Level Test of Resource-Related Adaptive Plasticity and Crowding-Related Fitness

    PubMed Central

    Gergs, André; Preuss, Thomas G.; Palmqvist, Annemette

    2014-01-01

    Population size is often regulated by negative feedback between population density and individual fitness. At high population densities, animals run into double trouble: they might concurrently suffer from overexploitation of resources and also from negative interference among individuals regardless of resource availability, referred to as crowding. Animals are able to adapt to resource shortages by exhibiting a repertoire of life history and physiological plasticities. In addition to resource-related plasticity, crowding might lead to reduced fitness, with consequences for individual life history. We explored how different mechanisms behind resource-related plasticity and crowding-related fitness act independently or together, using the water flea Daphnia magna as a case study. For testing hypotheses related to mechanisms of plasticity and crowding stress across different biological levels, we used an individual-based population model that is based on dynamic energy budget theory. Each of the hypotheses, represented by a sub-model, is based on specific assumptions on how the uptake and allocation of energy are altered under conditions of resource shortage or crowding. For cross-level testing of different hypotheses, we explored how well the sub-models fit individual level data and also how well they predict population dynamics under different conditions of resource availability. Only operating resource-related and crowding-related hypotheses together enabled accurate model predictions of D. magna population dynamics and size structure. Whereas this study showed that various mechanisms might play a role in the negative feedback between population density and individual life history, it also indicated that different density levels might instigate the onset of the different mechanisms. 
This study provides an example of how the integration of dynamic energy budget theory and individual-based modelling can facilitate the exploration of mechanisms behind the regulation of population size. Such understanding is important for assessment, management and the conservation of populations and thereby biodiversity in ecosystems. PMID:24626228

  11. Effect of test exercises and mask donning on measured respirator fit.

    PubMed

    Crutchfield, C D; Fairbank, E O; Greenstein, S L

    1999-12-01

    Quantitative respirator fit test protocols are typically defined by a series of fit test exercises. A rationale for the protocols that have been developed is generally not available. There also is little information available that describes the effect or effectiveness of the fit test exercises currently specified in respiratory protection standards. This study was designed to assess the relative impact of fit test exercises and mask donning on respirator fit as measured by a controlled negative pressure and an ambient aerosol fit test system. Multiple donnings of two different sizes of identical respirator models by each of 14 test subjects showed that donning affects respirator fit to a greater degree than fit test exercises. Currently specified fit test protocols emphasize test exercises, and the determination of fit is based on a single mask donning. A rationale for a modified fit test protocol based on fewer, more targeted test exercises and multiple mask donnings is presented. The modified protocol identified inadequately fitting respirators as effectively as the currently specified Occupational Safety and Health Administration (OSHA) quantitative fit test protocol. The controlled negative pressure system measured significantly (p < 0.0001) more respirator leakage than the ambient aerosol fit test system. The bend over fit test exercise was found to be predictive of poor respirator fit by both fit test systems. For the better fitting respirators, only the talking exercise generated aerosol fit factors that were significantly lower (p < 0.0001) than corresponding donning fit factors.
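    For reference, quantitative protocols in the OSHA style combine per-exercise fit factors into an overall fit factor as a harmonic mean, so poorly fitting exercises dominate the result; the values below are illustrative:

```python
# Overall quantitative fit factor as the harmonic mean of per-exercise fit
# factors (the combination rule used in OSHA-style QNFT protocols).
# Exercise values below are invented.
def overall_fit_factor(exercise_ffs):
    return len(exercise_ffs) / sum(1.0 / ff for ff in exercise_ffs)

ffs = [200.0, 150.0, 50.0, 400.0]   # per-exercise fit factors
```

Because the harmonic mean is pulled toward the worst exercise, a single poor donning or exercise (such as the bend-over exercise highlighted above) can drive the overall result.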

  12. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C language for the Macintosh operating system, taking advantage of its high-level user interface. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
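    The core fitting step (nonlinear least squares via Levenberg-Marquardt) can be sketched with SciPy. This is not PharmK code; the one-compartment IV-bolus model and parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# One-compartment IV-bolus pharmacokinetic model fitted by nonlinear least
# squares; for unbounded problems curve_fit uses Levenberg-Marquardt.
# Times, concentrations and parameters are synthetic.
def one_compartment(t, c0, k):
    return c0 * np.exp(-k * t)       # plasma concentration after IV bolus

t = np.linspace(0.5, 12.0, 12)        # sampling times (h)
c = one_compartment(t, 10.0, 0.35)    # noiseless synthetic concentrations

popt, _ = curve_fit(one_compartment, t, c, p0=(8.0, 0.2))
```

In a stripping-style workflow, the starting values `p0` would come from log-linear fits to the terminal phase rather than being guessed.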

  13. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, so the closure rank-deficiency problem did not arise. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, using the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when appropriate constraints and adjustable initial reagent concentrations were applied.
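    The kinetic model at the heart of such a fit can be sketched by integrating the rate equations for a second-order consecutive scheme A + B -> I -> P (second-order first step, first-order second step). Rate constants and initial concentrations below are invented; a real model-based fit would wrap this integration inside the NGL/M optimization loop:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Second-order consecutive scheme: A + B -(k1)-> I -(k2)-> P.
# Rate constants and initial concentrations are invented for illustration.
k1, k2 = 0.8, 0.2                     # L mol^-1 s^-1 and s^-1

def rhs(t, y):
    a, b, i, p = y
    r1 = k1 * a * b                   # second-order step A + B -> I
    r2 = k2 * i                       # first-order step  I -> P
    return [-r1, -r1, r1 - r2, r2]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1.2, 0.0, 0.0],
                rtol=1e-9, atol=1e-12)
a, b, i, p = sol.y[:, -1]             # concentrations at the final time
```

The resulting concentration profiles, combined with pure component spectra via Beer's law, generate the model data compared against the measurements in each objective function.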

  14. Nonbleeding adenomas: Evidence of systematic false-negative fecal immunochemical test results and their implications for screening effectiveness-A modeling study.

    PubMed

    van der Meulen, Miriam P; Lansdorp-Vogelaar, Iris; van Heijningen, Else-Mariëtte B; Kuipers, Ernst J; van Ballegooijen, Marjolein

    2016-06-01

    If some adenomas do not bleed over several years, they will cause systematic false-negative fecal immunochemical test (FIT) results. The long-term effectiveness of FIT screening has been estimated without accounting for such systematic false-negativity. There are now data with which to evaluate this issue. The authors developed one microsimulation model (MISCAN [MIcrosimulation SCreening ANalysis]-Colon) without systematic false-negative FIT results and one model that allowed a percentage of adenomas to be systematically missed in successive FIT screening rounds. Both variants were adjusted to reproduce the first-round findings of the Dutch CORERO FIT screening trial. The authors then compared simulated detection rates in the second screening round with those observed, and adjusted the simulated percentage of systematically missed adenomas to those data. Finally, the authors calculated the impact of systematic false-negative FIT results on the effectiveness of repeated FIT screening. The model without systematic false-negativity simulated higher detection rates in the second screening round than observed. The observed rates could be reproduced when assuming that FIT systematically missed 26% of advanced and 73% of nonadvanced adenomas. To reduce the false-positive rate in the second round to the observed level, the authors also had to assume that 30% of false-positive findings were systematically false-positive. Systematic false-negative FIT results limit the long-term reduction in colorectal cancer incidence (35.6% vs 40.9%) and mortality (55.2% vs 59.0%) achieved by biennial FIT screening in participants. The results of the current study provide convincing evidence, based on the combination of real-life and modeling data, that a percentage of adenomas are systematically missed by repeat FIT screening. This impairs the efficacy of FIT screening. Cancer 2016;122:1680-8. © 2016 American Cancer Society.
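    The effect of systematic (as opposed to independent) false-negativity on second-round detection can be seen with simple arithmetic. The per-round sensitivity and systematically missed fraction below are invented numbers, not MISCAN-Colon parameters:

```python
# Toy calculation: with per-round sensitivity s and fully independent misses,
# detection among round-1 misses in round 2 is again s. If a fraction
# f_systematic of adenomas is never detected, round-2 detection drops.
# Numbers are invented for illustration.
def round2_detection(s, f_systematic):
    # Composition of the round-1 misses: systematic + independent misses.
    missed = f_systematic + (1.0 - f_systematic) * (1.0 - s)
    systematic_share = f_systematic / missed
    return (1.0 - systematic_share) * s

s = 0.8
independent = round2_detection(s, 0.0)   # no systematic misses: equals s
systematic = round2_detection(s, 0.3)    # 30% never detected: lower
```

This is why a model with only independent false-negatives overpredicts second-round detection rates, as the abstract describes.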

  15. Statistical comparison of various interpolation algorithms for reconstructing regional grid ionospheric maps over China

    NASA Astrophysics Data System (ADS)

    Li, Min; Yuan, Yunbin; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao

    2018-07-01

    This paper presents a quantitative comparison of several widely used interpolation algorithms, i.e., Ordinary Kriging (OrK), Universal Kriging (UnK), planar fit and Inverse Distance Weighting (IDW), based on a grid-based single-shell ionosphere model over China. The experimental data were collected from the Crustal Movement Observation Network of China (CMONOC) and the International GNSS Service (IGS), covering days of year 60-90 in 2015. The quality of these interpolation algorithms was assessed by cross-validation in terms of both the ionospheric correction performance and Single-Frequency (SF) Precise Point Positioning (PPP) accuracy on an epoch-by-epoch basis. The results indicate that the interpolation models perform better at mid-latitudes than at low latitudes. For the China region, OrK and UnK perform better than the planar fit and IDW models for estimating ionospheric delay and positioning. In addition, the computational efficiencies of the IDW and planar fit models are better than those of OrK and UnK.
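    Of the four interpolators, IDW is the simplest to sketch; the station coordinates and TEC-like values below are invented:

```python
import numpy as np

# Minimal Inverse Distance Weighting (IDW) interpolation: a query point takes
# the distance-weighted average of station values. Coordinates and values are
# invented, not CMONOC/IGS data.
def idw(points, values, query, power=2.0, eps=1e-12):
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(query, float), axis=1)
    if d.min() < eps:                         # query coincides with a station
        return values[int(d.argmin())]
    w = 1.0 / d ** power
    return float(np.sum(w * np.asarray(values)) / np.sum(w))

pts = [(30.0, 110.0), (35.0, 115.0), (40.0, 120.0)]   # (lat, lon) of stations
vals = [10.0, 14.0, 20.0]                              # TEC-like values
```

Its cheapness relative to kriging, which must solve a linear system per query, is consistent with the efficiency ranking reported above.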

  16. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    PubMed

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.

  17. Development and evaluation of social cognitive measures related to adolescent physical activity.

    PubMed

    Dewar, Deborah L; Lubans, David Revalds; Morgan, Philip James; Plotnikoff, Ronald C

    2013-05-01

    This study aimed to develop and evaluate the construct validity and reliability of modernized social cognitive measures relating to physical activity behaviors in adolescents. An instrument was developed based on constructs from Bandura's Social Cognitive Theory and included the following scales: self-efficacy, situation (perceived physical environment), social support, behavioral strategies, and outcome expectations and expectancies. The questionnaire was administered in a sample of 171 adolescents (age = 13.6 ± 1.2 years, females = 61%). Confirmatory factor analysis was employed to examine model-fit for each scale using multiple indices, including chi-square index, comparative-fit index (CFI), goodness-of-fit index (GFI), and the root mean square error of approximation (RMSEA). Reliability properties were also examined (ICC and Cronbach's alpha). Each scale represented a statistically sound measure: fit indices indicated each model to be an adequate-to-exact fit to the data; internal consistency was acceptable to good (α = 0.63-0.79); rank order repeatability was strong (ICC = 0.82-0.91). Results support the validity and reliability of social cognitive scales relating to physical activity among adolescents. As such, the developed scales have utility for the identification of potential social cognitive correlates of youth physical activity, mediators of physical activity behavior changes and the testing of theoretical models based on Social Cognitive Theory.
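    Two of the fit indices named above have closed-form expressions in the model and baseline chi-square statistics; the formulas below are the standard ones, while the input values are invented, not the study's results:

```python
import math

# Standard formulas for two confirmatory-factor-analysis fit indices.
# Inputs (chi-square values, degrees of freedom, sample size) are invented.
def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: model (m) relative to baseline (b)."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den

# e.g. model chi2 = 48.3 on df = 24 with N = 171; baseline chi2 = 520 on df = 36
```

Conventional cut-offs (roughly RMSEA below 0.06-0.08 and CFI above 0.90-0.95) are what "adequate-to-exact fit" judgments are typically based on.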

  18. A critique of Rasch residual fit statistics.

    PubMed

    Karabatsos, G

    2000-01-01

    In test analysis involving the Rasch model, a large degree of importance is placed on the "objective" measurement of individual abilities and item difficulties. The degree to which the objectivity properties are attained, of course, depends on the degree to which the data fit the Rasch model. It is therefore important to utilize fit statistics that accurately and reliably detect the person-item response inconsistencies that threaten the measurement objectivity of persons and items. Given this argument, it is somewhat surprising that far more emphasis is placed on the objective measurement of persons and items than on the measurement quality of Rasch fit statistics. This paper provides a critical analysis of the residual fit statistics of the Rasch model, arguably the most often used fit statistics, in an effort to illustrate that the task of Rasch fit analysis is not as simple and straightforward as it appears to be. The faulty statistical properties of the residual fit statistics do not allow either a convenient or a straightforward approach to Rasch fit analysis. For instance, given a residual fit statistic, the use of a single minimum critical value for misfit diagnosis across different testing situations, where the situations vary in sample and test properties, leads to both the overdetection and underdetection of misfit. To improve this situation, it is argued that psychometricians need to implement residual-free Rasch fit statistics that are based on the number of Guttman response errors, or use indices that are statistically optimal in detecting measurement disturbances.
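    For concreteness, the residual fit statistics under critique, outfit (unweighted mean square) and infit (information-weighted mean square), can be computed for one dichotomous item as follows; the abilities, difficulty and responses are invented:

```python
import numpy as np

# Residual fit statistics for one item of a dichotomous Rasch model.
# Person abilities, item difficulty and responses are invented.
def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # person abilities
b = 0.2                                          # item difficulty
x = np.array([0, 1, 0, 1, 1], dtype=float)       # observed responses

p = rasch_p(theta, b)
var = p * (1.0 - p)                              # binomial variance per person
z2 = (x - p) ** 2 / var                          # squared standardized residuals

outfit = z2.mean()                               # unweighted mean square
infit = np.sum((x - p) ** 2) / np.sum(var)       # information-weighted mean square
```

The paper's point is that the sampling distributions of these statistics depend on sample and test properties, so a single critical value (such as the common 1.3) is unreliable across testing situations.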

  19. Videodensitometric Methods for Cardiac Output Measurements

    NASA Astrophysics Data System (ADS)

    Mischi, Massimo; Kalker, Ton; Korsten, Erik

    2003-12-01

    Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show accurate dilution curve fits and flow estimates, with determination coefficients larger than 0.95 and 0.99, respectively.
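    A hedged sketch of the curve-fitting step only: the paper's Local Density Random Walk model is not reproduced here; instead a log-normal curve, a common stand-in for indicator-dilution curves, is fitted to synthetic data, and the determination coefficient is computed as in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Log-normal indicator-dilution curve (illustrative substitute for LDRW).
def lognormal_idc(t, area, mu, sigma):
    return (area / (t * sigma * np.sqrt(2.0 * np.pi))
            * np.exp(-(np.log(t) - mu) ** 2 / (2.0 * sigma ** 2)))

t = np.linspace(0.5, 30.0, 200)                 # seconds after injection
rng = np.random.default_rng(1)
clean = lognormal_idc(t, 50.0, 2.0, 0.35)       # invented "true" curve
noisy = clean + rng.normal(0.0, 0.05, t.size)   # videodensity-like noise
popt, _ = curve_fit(lognormal_idc, t, noisy, p0=[40.0, 1.5, 0.5], maxfev=10000)

resid = noisy - lognormal_idc(t, *popt)
r2 = 1.0 - resid.var() / noisy.var()            # determination coefficient
```

    The fitted area under the curve is what a dilution method divides into the injected indicator dose to obtain flow.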

  20. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
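    A hedged sketch of the window-selection idea: here a BIC-style line-versus-quadratic comparison stands in for the objective-Bayes evidence the authors use, and the toy I-V curve, noise level and window grid are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
v = np.linspace(0.0, 0.6, 61)                       # voltage grid (toy)
i_true = 5.0 - 0.2 * v - 4.0e-3 * (np.exp(v / 0.08) - 1.0)  # toy I-V curve
i_meas = i_true + rng.normal(0.0, 0.002, v.size)    # noisy measurements

def poly_bic(vv, ii, deg):
    X = np.vander(vv, deg + 1)                      # columns: v^deg ... v, 1
    c, *_ = np.linalg.lstsq(X, ii, rcond=None)
    rss = float(np.sum((ii - X @ c) ** 2))
    n = vv.size
    return c, n * np.log(rss / n) + (deg + 1) * np.log(n)

# Grow the window from V=0; keep enlarging while a straight line is still
# preferred over a quadratic (i.e., no detectable model discrepancy).
good = []
for k in range(8, v.size + 1, 4):
    _, bic_line = poly_bic(v[:k], i_meas[:k], 1)
    _, bic_quad = poly_bic(v[:k], i_meas[:k], 2)
    if bic_line <= bic_quad:
        good.append(k)
best_k = max(good) if good else 8
c, _ = poly_bic(v[:best_k], i_meas[:best_k], 1)
isc = float(c[-1])                                  # intercept at V=0 -> Isc
```

    Larger windows shrink the regression uncertainty of the intercept but eventually incur curvature bias, which is exactly the model-discrepancy trade-off the abstract describes.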

  1. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  2. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
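    A toy sketch of the hint idea, with all quantities invented: an ordinary least-squares fitting error is augmented with a Kullback-Leibler term that pulls a fitted probability parameter toward a known distribution, and a coarse grid search stands in for the paper's EM-type optimizer.

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler distance between two discrete distributions.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def total_error(theta, x, y, hint_dist, weight=0.5):
    a, b, prob = theta
    fit_err = float(np.mean((y - a * x - b) ** 2))       # plain curve fitting
    model_dist = np.array([prob, 1.0 - prob])            # model-implied probs
    return fit_err + weight * kl(hint_dist, model_dist)  # hint-augmented error

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0                       # synthetic "market" data
hint = np.array([0.7, 0.3])             # known long-run distribution (hint)
best = min(((a, b, p) for a in np.linspace(1, 3, 21)
                      for b in np.linspace(0, 2, 21)
                      for p in np.linspace(0.05, 0.95, 19)),
           key=lambda th: total_error(th, x, y, hint))
```

    The augmented objective forces the extra parameter toward a value consistent with outside knowledge even though the curve-fitting term alone cannot identify it, which is the sense in which hints turn curve fitting into calibration.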

  3. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  4. A Didactic Presentation of Snijders's "l_z*" Index of Person Fit with Emphasis on Response Model Selection and Ability Estimation

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles; Beland, Sebastien

    2012-01-01

    This paper focuses on two likelihood-based indices of person fit, the index "l_z" and Snijders's modified index "l_z*". The first one is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample…

  5. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of neurosciences, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA) and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior in this particular application than the gradient following methods. This is likely to be the case in many further complex systems, as are often found in neuroscience.
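    The comparison above can be sketched on a toy multimodal error surface (the Rastrigin function), not the authors' 9-parameter visual-neuron model; the starting point and bounds below are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(p):
    # Classic multimodal test function; global minimum 0 at the origin.
    p = np.asarray(p)
    return float(10 * p.size + np.sum(p ** 2 - 10 * np.cos(2 * np.pi * p)))

rng = np.random.default_rng(3)
x0 = rng.uniform(-4.0, 4.0, 3)                      # a poor initial guess
gf = minimize(rastrigin, x0, method="BFGS")         # gradient following (GF)
ea = differential_evolution(rastrigin, [(-5.12, 5.12)] * 3,
                            seed=3, tol=1e-10, maxiter=2000)  # EA
```

    As in the abstract, the gradient follower converges quickly but typically into a local minimum near its starting point, while the evolutionary algorithm needs far more function evaluations yet reliably approaches the global minimum regardless of initialization.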

  6. Associations of Sexual Victimization, Depression, and Sexual Assertiveness with Unprotected Sex: A Test of the Multifaceted Model of HIV Risk Across Gender

    PubMed Central

    Morokoff, Patricia J.; Redding, Colleen A.; Harlow, Lisa L.; Cho, Sookhyun; Rossi, Joseph S.; Meier, Kathryn S.; Mayer, Kenneth H.; Koblin, Beryl; Brown-Peterside, Pamela

    2014-01-01

    This study examined whether the Multifaceted Model of HIV Risk (MMOHR) would predict unprotected sex based on predictors including gender, childhood sexual abuse (CSA), sexual victimization (SV), depression, and sexual assertiveness for condom use. A community-based sample of 473 heterosexually active men and women, aged 18–46 years completed survey measures of model variables. Gender predicted several variables significantly. A separate model for women demonstrated excellent fit, while the model for men demonstrated reasonable fit. Multiple sample model testing supported the use of MMOHR in both men and women, while simultaneously highlighting areas of gender difference. Prevention interventions should focus on sexual assertiveness, especially for CSA and SV survivors, as well as targeting depression, especially among men. PMID:25018617

  7. Using resource modelling to inform decision making and service planning: the case of colorectal cancer screening in Ireland

    PubMed Central

    2013-01-01

    Background Organised colorectal cancer screening is likely to be cost-effective, but cost-effectiveness results alone may not help policy makers to make decisions about programme feasibility or service providers to plan programme delivery. For these purposes, estimates of the impact on the health services of actually introducing screening in the target population would be helpful. However, these types of analyses are rarely reported. As an illustration of such an approach, we estimated annual health service resource requirements and health outcomes over the first decade of a population-based colorectal cancer screening programme in Ireland. Methods A Markov state-transition model of colorectal neoplasia natural history was used. Three core screening scenarios were considered: (a) flexible sigmoidoscopy (FSIG) once at age 60, (b) biennial guaiac-based faecal occult blood tests (gFOBT) at 55–74 years, and (c) biennial faecal immunochemical tests (FIT) at 55–74 years. Three alternative FIT roll-out scenarios were also investigated relating to age-restricted screening (55–64 years) and staggered age-based roll-out across the 55–74 age group. Parameter estimates were derived from literature review, existing screening programmes, and expert opinion. Results were expressed in relation to the 2008 population (4.4 million people, of whom 700,800 were aged 55–74). Results FIT-based screening would deliver the greatest health benefits, averting 164 colorectal cancer cases and 272 deaths in year 10 of the programme. Capacity would be required for 11,095-14,820 diagnostic and surveillance colonoscopies annually, compared to 381–1,053 with FSIG-based, and 967–1,300 with gFOBT-based, screening. With FIT, in year 10, these colonoscopies would result in 62 hospital admissions for abdominal bleeding, 27 bowel perforations and one death. Resource requirements for pathology, diagnostic radiology, radiotherapy and colorectal resection were highest for FIT. 
Estimates depended on screening uptake. Alternative FIT roll-out scenarios had lower resource requirements. Conclusions While FIT-based screening would quite quickly generate attractive health outcomes, it has heavy resource requirements. These could impact on the feasibility of a programme based on this screening modality. Staggered age-based roll-out would allow time to increase endoscopy capacity to meet programme requirements. Resource modelling of this type complements conventional cost-effectiveness analyses and can help inform policy making and service planning. PMID:23510135

  8. Postural effects on intracranial pressure: modeling and clinical evaluation.

    PubMed

    Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders

    2013-11-01

    The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology.
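    A minimal sketch of "model I" above: a hydrostatic indifference point (HIP) at a fixed distance from the ICP sensor makes pressure fall with the sine of the tilt angle. The HIP distance and supine ICP below are illustrative values, not the study's estimates.

```python
import numpy as np

RHO_CSF = 1007.0                 # kg/m^3, approximate CSF density
G = 9.81                         # m/s^2
PA_PER_MMHG = 133.322

def icp_hip_model(icp_supine_mmhg, tilt_deg, hip_distance_m=0.1):
    # Hydrostatic pressure drop between sensor and indifference point.
    drop_pa = RHO_CSF * G * hip_distance_m * np.sin(np.radians(tilt_deg))
    return icp_supine_mmhg - drop_pa / PA_PER_MMHG

angles = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 71.0])  # tilt angles used
pred = icp_hip_model(11.0, angles)
# Note: the abstract reports that this HIP model did NOT fit the measured
# data; the collapsible-venous-compartment model III was required.
```
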

  9. Otolith reading and multi-model inference for improved estimation of age and growth in the gilthead seabream Sparus aurata (L.)

    NASA Astrophysics Data System (ADS)

    Mercier, Lény; Panfili, Jacques; Paillon, Christelle; N'diaye, Awa; Mouillot, David; Darnaude, Audrey M.

    2011-05-01

    Accurate knowledge of fish age and growth is crucial for species conservation and management of exploited marine stocks. In exploited species, age estimation based on otolith reading is routinely used for building growth curves that are used to implement fishery management models. However, the universal fit of the von Bertalanffy growth function (VBGF) on data from commercial landings can lead to uncertainty in growth parameter inference, preventing accurate comparison of growth-based life-history traits between fish populations. In the present paper, we used a comprehensive annual sample of wild gilthead seabream (Sparus aurata L.) in the Gulf of Lions (France, NW Mediterranean) to test a methodology improving growth modelling for exploited fish populations. After validating the timing of otolith annual increment formation for all life stages, a comprehensive set of growth models (including VBGF) was fitted to the obtained age-length data, used as a whole or sub-divided between group 0 individuals and those coming from commercial landings (ages 1-6). Comparisons of growth model accuracy based on the Akaike Information Criterion allowed assessment of the best model for each dataset and, when no model correctly fitted the data, a multi-model inference (MMI) based on model averaging was carried out. The results provided evidence that growth parameters inferred with the VBGF must be used with high caution. Indeed, the VBGF turned out to be among the least accurate models for growth prediction irrespective of the dataset, and its fits to the whole population, the juvenile and the adult datasets provided different growth parameters. The best models for growth prediction were the Tanaka model, for group 0 juveniles, and the MMI, for the older fish, confirming that growth differs substantially between juveniles and adults. All asymptotic models failed to correctly describe the growth of adult S. aurata, probably because of the poor representation of old individuals in the dataset.
Multi-model inference associated with separate analysis of juveniles and adult fish is then advised to obtain objective estimations of growth parameters when sampling cannot be corrected towards older fish.
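    The AIC-based multi-model inference described above can be sketched on synthetic age-length data: fit two candidate growth curves, convert AIC differences to Akaike weights, and model-average the predictions. All parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def vbgf(t, linf, k, t0):
    # von Bertalanffy growth function.
    return linf * (1.0 - np.exp(-k * (t - t0)))

def gompertz(t, linf, k, ti):
    # Gompertz growth curve, a second candidate model.
    return linf * np.exp(-np.exp(-k * (t - ti)))

rng = np.random.default_rng(4)
age = np.repeat(np.arange(1, 7), 20).astype(float)          # ages 1-6
length = vbgf(age, 40.0, 0.4, -0.5) + rng.normal(0.0, 1.0, age.size)

aics, preds = [], []
for model, p0 in [(vbgf, [45.0, 0.3, 0.0]), (gompertz, [45.0, 0.5, 1.0])]:
    popt, _ = curve_fit(model, age, length, p0=p0, maxfev=10000)
    rss = float(np.sum((length - model(age, *popt)) ** 2))
    n, npar = age.size, len(popt)
    aics.append(n * np.log(rss / n) + 2 * npar)             # Gaussian AIC
    preds.append(model(age, *popt))

w = np.exp(-0.5 * (np.array(aics) - min(aics)))
w /= w.sum()                                  # Akaike weights
avg_pred = w[0] * preds[0] + w[1] * preds[1]  # model-averaged lengths
```
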

  10. Methods of comparing associative models and an application to retrospective revaluation.

    PubMed

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
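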

  11. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
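    A toy version of the local model fit: an exponentially decaying cosine is fitted to a one-second window and the fitted frequency gives the heart rate. The "PPG" segment below is synthetic, not CapnoBase data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decaying_cosine(t, a, d, f, phi):
    # Local model: amplitude a, decay d, frequency f (Hz), phase phi.
    return a * np.exp(-d * t) * np.cos(2.0 * np.pi * f * t + phi)

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)            # one-second analysis window
rng = np.random.default_rng(5)
sig = decaying_cosine(t, 1.0, 0.8, 1.2, 0.3) + rng.normal(0.0, 0.02, t.size)
popt, _ = curve_fit(decaying_cosine, t, sig, p0=[1.0, 1.0, 1.0, 0.0],
                    maxfev=10000)
bpm = abs(popt[2]) * 60.0                    # fitted frequency -> BPM
```

    Constraining the initial frequency guess to the physiological range, as the abstract describes, keeps the local fit in the right basin.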

  12. Study on residual discharge time of lead-acid battery based on fitting method

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Wangwang; Jin, Yueqiang; Wang, Shuying

    2017-05-01

    This paper applies curve fitting to the data of Problem C of the 2016 mathematical modelling contest. A model of the residual discharge time of a lead-acid battery under constant-current discharge at 20A,30A,…,100A is obtained, and a discharge-time model for discharge at an arbitrary constant current is presented. The mean relative error of the model is calculated to be about 3%, which shows that the model has high accuracy. This model can provide a basis for optimizing the adaptation of the power system to the electric vehicle.
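    The abstract does not give the fitted model's form, so the sketch below uses Peukert's law (t = C / I^k), a plausible form for discharge time versus constant current, with invented data points; a log-linearization reduces the fit to ordinary least squares.

```python
import numpy as np

currents = np.array([20., 30., 40., 50., 60., 70., 80., 90., 100.])  # amps
hours = 500.0 / currents ** 1.25        # synthetic "measured" discharge times

# Linearize Peukert's law: log t = log C - k log I, then least squares.
A = np.column_stack([np.ones_like(currents), -np.log(currents)])
(logC, k), *_ = np.linalg.lstsq(A, np.log(hours), rcond=None)
C = np.exp(logC)

def discharge_time(i_amps):
    # Model for residual discharge time at any constant current.
    return C / i_amps ** k

rel_err = np.abs(discharge_time(currents) - hours) / hours
```

    With noiseless synthetic data the fit is exact; on real discharge data the relative error would be nonzero, as the abstract's reported ~3% suggests.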

  13. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. 
Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
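    A sketch of the D-optimality-driven selection described above: greedily pick the record whose predictor row most increases det(X'WX), the Fisher information of a logistic model, with weights from a provisional coefficient guess. The predictors and provisional beta below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])  # predictors
beta0 = np.array([-2.0, 1.0, 0.5])           # provisional coefficients
p = 1.0 / (1.0 + np.exp(-X @ beta0))
w = p * (1.0 - p)                            # logistic information weights

def greedy_d_optimal(X, w, budget):
    chosen = []
    info = 1e-8 * np.eye(X.shape[1])         # ridge keeps det nonzero early
    for _ in range(budget):
        gains = [np.linalg.det(info + w[i] * np.outer(X[i], X[i]))
                 if i not in chosen else -np.inf
                 for i in range(X.shape[0])]
        best = int(np.argmax(gains))
        chosen.append(best)
        info += w[best] * np.outer(X[best], X[best])
    return chosen, info

chosen, info = greedy_d_optimal(X, w, budget=30)
```

    Only the `chosen` records would then be chart-reviewed and used to fit the final model, mirroring the DSCVR idea of spending the validation budget where the information gain is largest.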

  14. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares)+AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor under extrapolation; and the LS fitting residual sequence is non-linear, so it is unsuitable to build the AR model of the residuals to be forecasted from the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, restrictions are added to the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observations. Secondly, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the AR modelling object for the residual forecast. Calculation examples show that this solution effectively improves the short-term polar motion prediction accuracy of the LS+AR model. In addition, the comparison results against the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for the polar motion forecast in the 1-10 days, show that the forecast accuracy of the proposed model can reach the world level.
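    A toy baseline LS+AR pipeline (the plain scheme, not the authors' modified version): fit a deterministic trend-plus-periodic least-squares model, fit an AR(4) model to the LS residuals, then extrapolate both. The series, period and AR order are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(400, dtype=float)
period = 50.0
series = (0.01 * t + np.sin(2 * np.pi * t / period)
          + 0.3 * rng.standard_normal(400))      # synthetic observations

def ls_design(tt):
    # Trend + one periodic term, the deterministic LS part.
    return np.column_stack([np.ones_like(tt), tt,
                            np.sin(2 * np.pi * tt / period),
                            np.cos(2 * np.pi * tt / period)])

coef, *_ = np.linalg.lstsq(ls_design(t), series, rcond=None)
resid = series - ls_design(t) @ coef

p = 4                                            # AR order (arbitrary)
Y = resid[p:]
Z = np.column_stack([resid[p - j:-j] for j in range(1, p + 1)])  # lags 1..p
ar_coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)

hist = list(resid[-p:][::-1])                    # most recent residual first
future = []
for _ in range(20):                              # 20-step residual forecast
    nxt = float(np.dot(ar_coef, hist[:p]))
    future.append(nxt)
    hist.insert(0, nxt)
t_f = np.arange(400, 420, dtype=float)
forecast = ls_design(t_f) @ coef + np.array(future)
```

    The authors' modification replaces the AR training data with an interpolation residual sequence whose behaviour resembles the extrapolation residuals, precisely because the plain pipeline above trains the AR model on residuals that behave differently from those it must forecast.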

  15. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    NASA Astrophysics Data System (ADS)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
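    As an illustration of reading off the biologically meaningful parameters, the sketch below fits one candidate photosynthesis-temperature model (a Gaussian response curve; the paper compares twelve such equations) to synthetic data and recovers Topt and Pmax. All values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_pt(T, pmax, topt, width):
    # Photosynthetic rate as a Gaussian function of temperature.
    return pmax * np.exp(-((T - topt) / width) ** 2)

rng = np.random.default_rng(8)
T = np.linspace(15.0, 40.0, 30)                       # temperatures, deg C
P = gaussian_pt(T, 120.0, 31.0, 6.0) + rng.normal(0.0, 3.0, T.size)
(pmax, topt, width), _ = curve_fit(gaussian_pt, T, P, p0=[100.0, 28.0, 5.0])
```

    In a model whose parameters are Topt and Pmax directly, the fit itself yields the quantities of ecological interest; in other parameterisations they must be derived after fitting, which is the "how easily" criterion of the abstract.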

  16. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    PubMed

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  17. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    PubMed Central

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123

  18. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    PubMed

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics.
A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
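    The univariate CBM described above has a simple closed form that makes its properness visible. A sketch, with arbitrary illustrative parameter values (not fitted estimates): nondiseased ratings are N(0,1) and diseased ratings are the mixture (1-α)N(0,1) + αN(μ,1).

```python
import numpy as np
from math import erf, sqrt

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cbm_roc(mu, alpha, zetas):
    # Operating points swept over the decision threshold zeta.
    fpf = np.array([1.0 - phi(z) for z in zetas])
    tpf = np.array([(1.0 - alpha) * (1.0 - phi(z))
                    + alpha * (1.0 - phi(z - mu)) for z in zetas])
    return fpf, tpf

mu, alpha = 2.0, 0.7
z = np.linspace(-5.0, 8.0, 2000)
fpf, tpf = cbm_roc(mu, alpha, z)
# Trapezoidal AUC versus the closed form (1-a)/2 + a*Phi(mu/sqrt(2)).
auc = float(np.sum((fpf[:-1] - fpf[1:]) * (tpf[:-1] + tpf[1:]) / 2.0))
auc_closed = (1.0 - alpha) / 2.0 + alpha * phi(mu / sqrt(2.0))
```

    Since TPF - FPF = α(Φ(ζ) - Φ(ζ-μ)) ≥ 0 for μ > 0, the curve never drops below the chance diagonal, which is the properness the abstract emphasizes.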

  19. Modeling study of seated reach envelopes based on spherical harmonics with consideration of the difficulty ratings.

    PubMed

    Yu, Xiaozhi; Ren, Jindong; Zhang, Qian; Liu, Qun; Liu, Honghao

    2017-04-01

    Reach envelopes are very useful for the design and layout of controls. In building reach envelopes, one of the key problems is to represent the reach limits accurately and conveniently. Spherical harmonics have proved to be an accurate and convenient method for fitting reach-capability envelopes. However, further study is required on which components of spherical harmonics are needed to fit the envelope surfaces. For applications in the vehicle industry, an inevitable issue is to construct reach limit surfaces with consideration of the seating positions of the drivers, and it is desirable to use population envelopes rather than individual envelopes. However, it is relatively inconvenient to acquire reach envelopes via a test considering the seating positions of the drivers. In addition, the acquired envelopes are usually unsuitable for use with other vehicle models because they are dependent on the current cab packaging parameters. Therefore, it is of great significance to construct reach envelopes for real vehicle conditions based on individual capability data considering seating positions. Moreover, traditional reach envelopes provide little information regarding the assessment of reach difficulty. The application of reach envelopes will improve design quality by providing difficulty-rating information about reach operations. In this paper, using the laboratory data of seated reach with consideration of the subjective difficulty ratings, the method of modeling reach envelopes is studied based on spherical harmonics. The surface fitting using spherical harmonics is conducted for circumstances both with and without seat adjustments. For use with an adjustable seat, the seating position model is introduced to re-locate the test data. The surface fitting is conducted for both population and individual reach envelopes, as well as for boundary envelopes.
A comparison of the adjustable-seat envelopes with the SAE J287 control reach envelope shows that the latter lies near the middle difficulty level. It is also found that the ability of spherical-harmonics-based reach envelope models to express the shape of the reach limits depends both on the terms in the model expression and on the data used to fit the envelope surfaces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Modeling additive and non-additive effects in a hybrid population using genome-wide genotyping: prediction accuracy implications

    PubMed Central

    Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph

    2016-01-01

Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and to test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. The Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance, and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
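The marker-based relationship matrices at the heart of such models can be illustrated with a minimal sketch. The following computes a VanRaden-style additive genomic relationship matrix from a genotype matrix of 0/1/2 allele counts; the function name and the random toy data are illustrative assumptions, not the study's code.

```python
import numpy as np

def genomic_relationship(genotypes):
    """VanRaden-style additive genomic relationship matrix from an
    (individuals x markers) array of 0/1/2 allele counts."""
    p = genotypes.mean(axis=0) / 2.0        # estimated allele frequencies
    Z = genotypes - 2.0 * p                 # centre each marker column by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))     # VanRaden scaling constant
    return Z @ Z.T / denom

rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(6, 200)).astype(float)
G = genomic_relationship(geno)   # symmetric 6 x 6 relationship matrix
```

In a mixed model, G replaces the pedigree-based numerator relationship matrix as the covariance structure of the additive genetic effects.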

  1. Process-based models are required to manage ecological systems in a changing world

    Treesearch

K. Cuddington; M.-J. Fortin; L.R. Gerber; A. Hastings; A. Liebhold; M. O'Connor; C. Ray

    2013-01-01

    Several modeling approaches can be used to guide management decisions. However, some approaches are better fitted than others to address the problem of prediction under global change. Process-based models, which are based on a theoretical understanding of relevant ecological processes, provide a useful framework to incorporate specific responses to altered...

  2. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
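The MLE integration rule that such models build on can be written down compactly: the fused estimate weights each modality by its reliability (inverse variance). A minimal sketch with hypothetical function and parameter names, not the paper's implementation:

```python
def mle_integrate(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted fusion of an auditory and a visual estimate:
    the standard maximum-likelihood (inverse-variance weighting) rule.
    Returns the integrated mean and variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)   # auditory weight
    mu = w_a * mu_a + (1.0 - w_a) * mu_v                # fused estimate
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)             # reduced variance
    return mu, var

mu, var = mle_integrate(mu_a=0.0, var_a=1.0, mu_v=2.0, var_v=1.0)
```

With equal reliabilities the fused mean lies halfway between the cues and the fused variance is half of each single-cue variance; in the early MLE model this fusion happens on the continuous internal representation, before categorization.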

  3. Model of the final borehole geometry for helical laser drilling

    NASA Astrophysics Data System (ADS)

    Kroschel, Alexander; Michalowski, Andreas; Graf, Thomas

    2018-05-01

    A model for predicting the borehole geometry for laser drilling is presented based on the calculation of a surface of constant absorbed fluence. It is applicable to helical drilling of through-holes with ultrashort laser pulses. The threshold fluence describing the borehole surface is fitted for best agreement with experimental data in the form of cross-sections of through-holes of different shapes and sizes in stainless steel samples. The fitted value is similar to ablation threshold fluence values reported for laser ablation models.
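The idea of a borehole boundary defined by a constant absorbed-fluence threshold can be illustrated for the simplest case of a single Gaussian pulse, where the ablated radius follows from setting the fluence profile equal to the threshold. This is a textbook special case, not the paper's full helical-drilling model:

```python
import math

def ablation_radius(F0, F_th, w):
    """Radius at which a Gaussian fluence profile
    F(r) = F0 * exp(-2 r^2 / w^2) falls to the threshold fluence F_th.
    Returns 0 if the peak fluence never exceeds the threshold."""
    if F0 <= F_th:
        return 0.0
    return w * math.sqrt(math.log(F0 / F_th) / 2.0)

# peak fluence 4x threshold, 10 um beam radius
r = ablation_radius(F0=4.0, F_th=1.0, w=10e-6)
```

The full model generalizes this idea to the fluence accumulated over many pulses along a helical path, with the threshold value fitted to measured cross-sections.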

  4. In Search of Golden Rules: Comment on Hypothesis-Testing Approaches to Setting Cutoff Values for Fit Indexes and Dangers in Overgeneralizing Hu and Bentler's (1999) Findings

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Hau, Kit-Tai; Wen, Zhonglin

    2004-01-01

Goodness-of-fit (GOF) indexes provide "rules of thumb": recommended cutoff values for assessing fit in structural equation modeling. Hu and Bentler (1999) proposed a more rigorous approach to evaluating decision rules based on GOF indexes and, on this basis, proposed new and more stringent cutoff values for many indexes. This article discusses…

  5. Absolute Spectrophotometric Calibration to 1% from the FUV through the near-IR

    NASA Astrophysics Data System (ADS)

    Finley, David

    2005-07-01

We propose a significant improvement to the existing HST calibration. The current calibration is based on three primary DA white dwarf standards, GD 71, GD 153, and G 191-B2B. The standard fluxes are calculated using NLTE models, with effective temperatures and gravities that were derived from Balmer line fits using LTE models. We propose to improve the accuracy and internal consistency of the calibration by deriving corrected effective temperatures and gravities based on fitting the observed line profiles with updated NLTE models, and including the fit results from multiple STIS spectra, rather than the (usually) 1 or 2 ground-based spectra used previously. We will also determine the fluxes for 5 new, fainter primary or secondary standards, extending the standard V magnitude lower limit from 13.4 to 16.5, and extending the wavelength coverage from 0.1 to 2.5 micron. The goal is to achieve an overall flux accuracy of 1%, which will be needed, for example, for the upcoming supernova survey missions to measure the equation of state of the dark energy that is accelerating the expansion of the universe.

  6. Fitting a circular distribution based on nonnegative trigonometric sums for wind direction in Malaysia

    NASA Astrophysics Data System (ADS)

    Masseran, Nurulkamal; Razali, Ahmad Mahir; Ibrahim, Kamarulzaman; Zaharim, Azami; Sopian, Kamaruzzaman

    2015-02-01

Wind direction has a substantial effect on the environment and human lives. As examples, the wind direction influences the dispersion of particulate matter in the air and affects the construction of engineering structures, such as towers, bridges, and tall buildings. Therefore, a statistical analysis of the wind direction provides important information about the wind regime at a particular location. In addition, knowledge of the wind direction and wind speed can be used to derive information about the energy potential. This study investigated the characteristics of the wind regime of Mersing, Malaysia. A circular distribution based on Nonnegative Trigonometric Sums (NNTS) was fitted to a histogram of the average hourly wind direction data. The Newton-like manifold algorithm was used to estimate the parameters of each component of the NNTS model. Next, the suitability of each NNTS model was judged based on a graphical representation and Akaike's information criterion. The study found that an NNTS model with six or more components was able to fit the wind direction data for the Mersing station.
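The NNTS density itself is easy to evaluate once its complex coefficients are known: it is the squared modulus of a trigonometric sum, normalised to integrate to one over the circle. A sketch in the Fernandez-Duran form, with illustrative coefficients of our choosing:

```python
import numpy as np

def nnts_density(theta, c):
    """Nonnegative-trigonometric-sums circular density:
    f(theta) = |sum_k c_k exp(i k theta)|^2 / (2 pi),
    with the coefficient vector normalised so sum_k |c_k|^2 = 1."""
    c = np.asarray(c, dtype=complex)
    c = c / np.sqrt(np.sum(np.abs(c) ** 2))      # enforce the norm constraint
    k = np.arange(len(c))
    s = np.exp(1j * np.outer(theta, k)) @ c      # trigonometric sum
    return np.abs(s) ** 2 / (2.0 * np.pi)

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = nnts_density(theta, [1.0, 0.5 + 0.2j, 0.1])  # 3-component example
area = f.mean() * 2.0 * np.pi                    # Riemann sum over the circle
```

Because the density is a squared trigonometric sum it is nonnegative by construction, and the norm constraint makes it integrate to one; model selection then reduces to choosing the number of components, e.g. by AIC.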

  7. Soil mechanics: breaking ground.

    PubMed

    Einav, Itai

    2007-12-15

In soil mechanics, 'student's models' are classified as simple models that teach us unexplained elements of behaviour; an example is the Cam clay constitutive model of critical state soil mechanics (CSSM). 'Engineer's models' are models that elaborate the theory to fit more behavioural trends; this is usually done by adding fitting parameters to the student's models. Can currently unexplained behavioural trends of soil be explained without adding fitting parameters to CSSM models, by developing alternative student's models based on modern theories? Here I apply an alternative theory to CSSM, called 'breakage mechanics', and develop a simple student's model for sand. Its unique and distinctive feature is the use of an energy balance equation that connects grain size reduction to the consumption of energy, which enables us to predict how the grain size distribution (gsd) evolves, an unprecedented capability in constitutive modelling. With only four parameters, the model physically clarifies what CSSM cannot for sand: the dependency of yielding and the critical state on the initial gsd and void ratio.

  8. Lee-Carter state space modeling: Application to the Malaysia mortality data

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-06-01

This article presents an approach that formalizes the Lee-Carter (LC) model as a state space model. Maximum likelihood via the expectation-maximization (EM) algorithm was used to estimate the model. The methodology is applied to Malaysia's total population mortality data, modeled on age-specific death rates (ASDR) from 1971-2009. The fitted ASDR are compared to the actual observed values. The comparison shows that the fitted values from the LC-SS model and the original LC model are quite close. In addition, there is not much difference between the root mean squared error (RMSE) and Akaike information criterion (AIC) values of the two models. The LC-SS model estimated in this study can be extended for forecasting ASDR in Malaysia. The accuracy of the LC-SS model relative to the original LC model can then be further examined by verifying its forecasting power using an out-of-sample comparison.
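For contrast with the state space formulation, the original Lee-Carter model is commonly estimated by an SVD of the centred log death rates. A minimal sketch of that classical fit on exact synthetic data (the parameter values are illustrative, not Malaysian rates):

```python
import numpy as np

def fit_lee_carter(log_m):
    """Classical SVD estimation of the Lee-Carter model
    log m(x,t) = a_x + b_x * k_t, with the usual constraint sum(b) = 1.
    log_m: (ages x years) array of log age-specific death rates."""
    a = log_m.mean(axis=1)                              # age profile a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    scale = U[:, 0].sum()
    b = U[:, 0] / scale                                 # normalise sum(b) = 1
    k = s[0] * Vt[0] * scale                            # keeps b_x * k_t fixed
    return a, b, k

# synthetic log rates generated from known parameters (illustration only)
ages, years = 5, 8
a_true = np.log(np.linspace(0.001, 0.05, ages))
b_true = np.array([0.3, 0.25, 0.2, 0.15, 0.1])
k_true = np.linspace(4.0, -4.0, years)                  # sums to zero
log_m = a_true[:, None] + np.outer(b_true, k_true)
a, b, k = fit_lee_carter(log_m)
```

On exact rank-one data the SVD fit recovers the parameters; with real rates the rank-one term is only an approximation, and k_t is then typically re-estimated or forecast with a time-series model.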

  9. An Outcome-Based Action Study on Changes in Fitness, Blood Lipids, and Exercise Adherence, Using the Disconnected Values (Intervention) Model

    ERIC Educational Resources Information Center

    Anshel, Mark H.; Kang, Minsoo

    2007-01-01

    The authors' purpose in this action study was to examine the effect of a 10-week intervention, using the Disconnected Values Model (DVM), on changes in selected measures of fitness, blood lipids, and exercise adherence among 51 university faculty (10 men and 41 women) from a school in the southeastern United States. The DVM is an intervention…

  10. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso William

    This study was undertaken to develop a nonsurgical breast biopsy from Gd-DTPA Contrast Enhanced Magnetic Resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartmental Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: rate of Gd exiting the compartment, representing the extracellular space of a lesion; rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. 
Curve fitting used for PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was used on registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
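The per-voxel nonlinear fits can be sketched with a generic Levenberg-Marquardt fit of a simplified two-exponential enhancement curve. The model function and constants below are a stand-in of our own, not the study's exact Patlak or Brix implementation; scipy's curve_fit uses Levenberg-Marquardt for unconstrained problems.

```python
import numpy as np
from scipy.optimize import curve_fit

def enhancement(t, A, k_ep, k_el):
    """Simplified Brix-type enhancement curve: uptake minus washout."""
    return A * (np.exp(-k_el * t) - np.exp(-k_ep * t))

t = np.linspace(0.0, 10.0, 60)                 # minutes (illustrative)
true = (2.0, 1.5, 0.1)
rng = np.random.default_rng(1)
y = enhancement(t, *true) + 0.01 * rng.normal(size=t.size)  # noisy "voxel"

# Levenberg-Marquardt is the default for unconstrained curve_fit
popt, pcov = curve_fit(enhancement, t, y, p0=(1.0, 1.0, 0.05))
```

Run once per voxel, the fitted parameters form 3D parametric maps; the diagonal of pcov gives an estimate of per-parameter uncertainty.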

  11. A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2014-12-01

Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that these subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and the temperature dependence of these in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments than to fit disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
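The kind of leaf-level parameter fitting such a tool automates can be sketched for the Rubisco-limited case, where net assimilation is linear in Vcmax and Rd so an ordinary least-squares fit suffices. The kinetic constants below are illustrative values (roughly 25 degrees C), not taken from SCOPE or the subroutines named above:

```python
import numpy as np

# Illustrative FvCB constants in umol/mol (assumed, not from the abstract)
GAMMA_STAR = 42.75   # CO2 compensation point without day respiration
KM = 710.0           # effective Michaelis constant Kc * (1 + O / Ko)

def fit_vcmax(ci, a_net):
    """Least-squares fit of Vcmax and Rd in the Rubisco-limited leaf model
    A = Vcmax * (Ci - Gamma*) / (Ci + Km) - Rd, linear in both parameters."""
    f = (ci - GAMMA_STAR) / (ci + KM)
    X = np.column_stack([f, -np.ones_like(ci)])
    (vcmax, rd), *_ = np.linalg.lstsq(X, a_net, rcond=None)
    return vcmax, rd

# synthetic A-Ci "measurements" from known parameters
ci = np.linspace(50.0, 400.0, 20)
a_obs = 60.0 * (ci - GAMMA_STAR) / (ci + KM) - 1.5
vcmax, rd = fit_vcmax(ci, a_obs)
```

Fitting the complete subroutine, as the abstract advocates, replaces this closed-form step with a nonlinear optimizer wrapped around the full model, but the calibration logic is the same.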

  12. Application of an OCT data-based mathematical model of the foveal pit in Parkinson disease.

    PubMed

    Ding, Yin; Spund, Brian; Glazman, Sofya; Shrier, Eric M; Miri, Shahnaz; Selesnick, Ivan; Bodis-Wollner, Ivan

    2014-11-01

Spectral-domain optical coherence tomography (OCT) has shown remarkable utility in the study of retinal disease and has helped to characterize the fovea in Parkinson disease (PD) patients. We developed a detailed mathematical model based on raw OCT data to allow differentiation of the foveae of PD patients from those of healthy controls. Of the various models we tested, a difference of a Gaussian and a polynomial was found to have the best fit; the decision was based on mathematical evaluation of the fit of the model to the data of 45 control eyes versus 50 PD eyes. We compared the model parameters in the two groups using receiver-operating characteristic (ROC) analysis. A single parameter discriminated 70% of PD eyes from controls, while using seven of the eight parameters of the model allowed 76% to be discriminated. The future clinical utility of mathematical modeling in the study of diffuse neurodegenerative conditions that also affect the fovea is discussed.

  13. Blowout Jets: Hinode X-Ray Jets that Don't Fit the Standard Model

    NASA Technical Reports Server (NTRS)

    Moore, Ronald L.; Cirtain, Jonathan W.; Sterling, Alphonse C.; Falconer, David A.

    2010-01-01

Nearly half of all H-alpha macrospicules in polar coronal holes appear to be miniature filament eruptions. This suggests that there is a large class of X-ray jets in which the jet-base magnetic arcade undergoes a blowout eruption as in a CME, instead of remaining static as in most solar X-ray jets, the standard jets that fit the model advocated by Shibata. Along with a cartoon depicting the standard model, we present a cartoon depicting the signatures expected of blowout jets in coronal X-ray images. From Hinode/XRT movies and STEREO/EUVI snapshots in polar coronal holes, we present examples of (1) X-ray jets that fit the standard model, and (2) X-ray jets that do not fit the standard model but do have features appropriate for blowout jets. These features are (1) a flare arcade inside the jet-base arcade in addition to the small flare arcade (bright point) outside that standard jets have, (2) a filament of cool (T approximately 80,000 K) plasma that erupts from the core of the jet-base arcade, and (3) an extra jet strand that should not be made by the reconnection for standard jets but could be made by reconnection between the ambient unipolar open field and the opposite-polarity leg of the filament-carrying flux-rope core field of the erupting jet-base arcade. We therefore infer that these non-standard jets are blowout jets, jets made by miniature versions of the sheared-core-arcade eruptions that make CMEs.

  14. On Using Surrogates with Genetic Programming.

    PubMed

    Hildebrandt, Torsten; Branke, Jürgen

    2015-01-01

    One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
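The core idea, replacing the genotype by a phenotypic characterization and estimating fitness from the most similar already-evaluated individual, can be sketched as follows. The dispatching rules, sample situations and 1-nearest-neighbour choice are our illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def phenotype(rule, situations):
    """Characterize a dispatching rule by its decisions on a fixed set of
    sample situations: the index of the job it would pick first in each."""
    return np.array([int(np.argmin(rule(s))) for s in situations])

def surrogate_fitness(candidate_pheno, archive_phenos, archive_fitness):
    """1-NN surrogate: return the fitness of the fully evaluated individual
    whose phenotypic characterization is closest in Hamming distance."""
    d = [int((candidate_pheno != p).sum()) for p in archive_phenos]
    return archive_fitness[int(np.argmin(d))]

# two toy situations: each is a vector of job processing times
situations = [np.array([3.0, 1.0, 2.0]), np.array([0.5, 2.0, 1.0])]
spt = lambda s: s        # shortest processing time first
lpt = lambda s: -s       # longest processing time first
archive = [phenotype(spt, situations), phenotype(lpt, situations)]
fitnesses = np.array([10.0, 25.0])   # e.g. mean flowtime from full simulation

est = surrogate_fitness(phenotype(spt, situations), archive, fitnesses)
```

Because the phenotype is computed from a handful of cheap decisions rather than a full simulation, many GP offspring can be pre-screened by the surrogate and only the most promising ones fully evaluated.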

  15. An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach

    NASA Astrophysics Data System (ADS)

    Chu, A.; Ogata, Y.; Katsura, K.

    2010-12-01

For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), showing substantial improvements in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the entire region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data of Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian information criterion (ABIC) as an assessment method, we compare the maximum-likelihood results with zone divisions to those obtained from an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.

  16. Active Contours Driven by Multi-Feature Gaussian Distribution Fitting Energy with Application to Vessel Segmentation.

    PubMed

    Wang, Lei; Zhang, Huimao; He, Kan; Chang, Yan; Yang, Xiaodong

    2015-01-01

Active contour models are of great importance for image segmentation and can extract smooth and closed boundary contours of the desired objects with promising results. However, they cannot work well in the presence of intensity inhomogeneity. Hence, a novel region-based active contour model is proposed in this paper by taking image intensities and 'vesselness values' from local phase-based vesselness enhancement into account simultaneously to define a novel multi-feature Gaussian distribution fitting energy. This energy is then incorporated into a level set formulation with a regularization term for accurate segmentations. Experimental results based on the publicly available STructured Analysis of the Retina (STARE) data set demonstrate that our model is more accurate than some existing typical methods and can successfully segment most small vessels of varying width.

  17. Understanding Host-Switching by Ecological Fitting

    PubMed Central

    Araujo, Sabrina B. L.; Braga, Mariana Pires; Brooks, Daniel R.; Agosta, Salvatore J.; Hoberg, Eric P.; von Hartenthal, Francisco W.; Boeger, Walter A.

    2015-01-01

    Despite the fact that parasites are highly specialized with respect to their hosts, empirical evidence demonstrates that host switching rather than co-speciation is the dominant factor influencing the diversification of host-parasite associations. Ecological fitting in sloppy fitness space has been proposed as a mechanism allowing ecological specialists to host-switch readily. That proposal is tested herein using an individual-based model of host switching. The model considers a parasite species exposed to multiple host resources. Through time host range expansion can occur readily without the prior evolution of novel genetic capacities. It also produces non-linear variation in the size of the fitness space. The capacity for host colonization is strongly influenced by propagule pressure early in the process and by the size of the fitness space later. The simulations suggest that co-adaptation may be initiated by the temporary loss of less fit phenotypes. Further, parasites can persist for extended periods in sub-optimal hosts, and thus may colonize distantly related hosts by a "stepping-stone" process. PMID:26431199
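A drastically reduced sketch of such an individual-based host-switching model: parasites with a one-dimensional phenotype land on one of two host resources and survive with a probability that decays with distance from the host optimum (a "sloppy" Gaussian fitness space). All numbers, and the two-host simplification, are illustrative assumptions rather than the published model:

```python
import numpy as np

def simulate_host_switch(n_gen=200, pop=200, seed=4):
    """Toy individual-based model of ecological fitting on two hosts.
    Host 0 is ancestral; host 1 is novel. Returns the final phenotypes."""
    rng = np.random.default_rng(seed)
    hosts = np.array([0.0, 2.0])     # host phenotypic optima
    sigma = 1.2                      # width of the (sloppy) fitness space
    pheno = rng.normal(hosts[0], 0.3, size=pop)   # start adapted to host 0
    for _ in range(n_gen):
        h = rng.integers(0, 2, size=pheno.size)   # random host encounter
        surv_p = np.exp(-0.5 * ((pheno - hosts[h]) / sigma) ** 2)
        survivors = pheno[rng.random(pheno.size) < surv_p]
        if survivors.size == 0:
            return np.array([])
        # survivors reproduce with small mutation; population size capped
        pheno = rng.choice(survivors, size=pop) + rng.normal(0.0, 0.05, pop)
    return pheno

final = simulate_host_switch()
```

Widening sigma (a sloppier fitness space) lets lineages persist on, and eventually colonize, the novel host without any prior evolution of new capacities, which is the mechanism the abstract describes.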

  18. Fuzzy Analytic Hierarchy Process-based Chinese Resident Best Fitness Behavior Method Research.

    PubMed

    Wang, Dapeng; Zhang, Lan

    2015-01-01

With the explosive development of the Chinese economy and of science and technology, people's pursuit of health has become more and more intense, and sports fitness activities among Chinese residents have developed rapidly. However, fitness events differ in popularity and in their effects on the body's energy consumption. On this basis, the paper studies fitness behaviors and derives an exercise guide for Chinese residents' sports fitness behaviors, providing guidance for implementing the national fitness plan and making resident fitness more scientific. Starting from the perspective of energy consumption, the paper mainly adopts an empirical method: it determines the energy consumption of Chinese residents' favorite sports fitness events by observing the energy consumption of various fitness behaviors, and applies the fuzzy analytic hierarchy process to evaluate seven fitness events: bicycle riding, shadowboxing, swimming, rope skipping, jogging, running and aerobics. By calculating the memberships of the fuzzy rating model and comparing their sizes, it identifies the fitness behaviors that are more helpful for resident health, more effective and more popular. It concludes that swimming is the best exercise mode, with the highest membership. The memberships of running, rope skipping and shadowboxing are also relatively high. Residents should combine several of these fitness events according to their physical and living conditions to better achieve the purpose of fitness exercise.

  19. Twitter classification model: the ABC of two million fitness tweets.

    PubMed

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.
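Hashtag-based categorization of this kind reduces, at its simplest, to scanning each tweet for tracked tags and counting per app. A minimal sketch; the hashtag-to-app mapping is our illustrative assumption, not the project's actual tracking list:

```python
import re
from collections import Counter

# hypothetical mapping of tracked hashtags to fitness apps
CATEGORIES = {
    "#nikeplus": "Nike+", "#runkeeper": "RunKeeper",
    "#myfitnesspal": "MyFitnessPal", "#endomondo": "Endomondo",
    "#dailymile": "dailymile",
}

def classify(tweets):
    """Count tweets per fitness app by tracked hashtag, case-insensitively."""
    counts = Counter()
    for text in tweets:
        for tag in re.findall(r"#\w+", text.lower()):
            if tag in CATEGORIES:
                counts[CATEGORIES[tag]] += 1
    return counts

counts = classify([
    "5k done! #RunKeeper #feelinggood",
    "logged lunch with #MyFitnessPal",
    "morning run #runkeeper",
])
```

The project's full pipeline adds storage, deduplication and the grounded-theory category coding on top of this basic matching step.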

  20. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    PubMed

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis is also of great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at the patient's wrist and make diagnoses based on subjective personal experience. With the development of pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from the pulse waveform based on the discrete Fourier series (DFS). It regards the waveform as a signal composed of a series of sub-components represented by sine and cosine signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with fitting using a Gaussian mixture function, the fitting errors of the proposed method are smaller, which indicates that our method represents the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as auto-regression and Gaussian mixture model features. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms thus model the waveforms better and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
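Fitting a truncated Fourier series to a sampled waveform by least squares reduces to solving a linear system whose columns are sine and cosine basis functions. A minimal sketch on a synthetic periodic signal; the function names and test signal are ours, not the paper's:

```python
import numpy as np

def fit_dfs(t, y, order, period):
    """Least-squares fit of a truncated Fourier series
    y(t) ~ a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)], w = 2*pi/period.
    Returns the coefficient vector (the feature vector) and the fitted curve."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * w * t))
        cols.append(np.sin(k * w * t))
    X = np.column_stack(cols)                       # design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # least squares
    return coef, X @ coef

# synthetic "average pulse waveform": offset + two harmonics
t = np.linspace(0.0, 1.0, 200, endpoint=False)
y = 0.3 + np.sin(2.0 * np.pi * t) + 0.4 * np.cos(3.0 * 2.0 * np.pi * t)
coef, y_fit = fit_dfs(t, y, order=5, period=1.0)
```

The coefficient vector coef is exactly the kind of compact feature representation the abstract describes; on this noise-free harmonic signal the fit is exact, and the recovered coefficients match the generating amplitudes.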

  1. Modeling dust emission in the Magellanic Clouds with Spitzer and Herschel

    NASA Astrophysics Data System (ADS)

    Chastenet, Jérémy; Bot, Caroline; Gordon, Karl D.; Bocchio, Marco; Roman-Duval, Julia; Jones, Anthony P.; Ysard, Nathalie

    2017-05-01

Context. Dust modeling is crucial to infer dust properties and budgets for galaxy studies. However, there are systematic disparities between dust grain models that result in corresponding systematic differences in the inferred dust properties of galaxies. Quantifying these systematics requires a consistent fitting analysis. Aims: We compare the output dust parameters and assess the differences between two dust grain models, the DustEM model and THEMIS. In this study, we use a single fitting method applied to all the models to extract a coherent and unique statistical analysis. Methods: We fit the models to the dust emission seen by Spitzer and Herschel in the Small and Large Magellanic Clouds (SMC and LMC). The observations cover the infrared (IR) spectrum from a few microns to the sub-millimeter range. For each fitted pixel, we calculate the full n-D likelihood based on a previously described method. The free parameters are both environmental (U, the interstellar radiation field strength; αISRF, the power-law coefficient for a multi-U environment; Ω∗, the starlight strength) and intrinsic to the model (YI, the abundances of the grain species I; αsCM20, the coefficient in the small carbon grain size distribution). Results: Fractional residuals of five different sets of parameters show that fitting THEMIS reproduces the observations more accurately than the DustEM model. However, independent variations of the dust species show strong model dependencies. We find that the abundance of silicates can be constrained only to an upper limit and that the silicate/carbon ratio differs from that seen in our Galaxy. In the LMC, our fits result in dust masses slightly lower than those found in the literature, by a factor of less than 2. In the SMC, we find dust masses in agreement with previous studies.

  2. Efficient occupancy model-fitting for extensive citizen-science data.

    PubMed

    Dennis, Emily B; Morgan, Byron J T; Freeman, Stephen N; Ridout, Martin S; Brereton, Tom M; Fox, Richard; Powney, Gary D; Roy, David B

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium database, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species' range. In addition, we are able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen scientists.

  3. Efficient occupancy model-fitting for extensive citizen-science data

    PubMed Central

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium database, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition, we are able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen scientists. PMID:28328937
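    At its core, the classical occupancy model-fitting described above amounts to maximum-likelihood logistic regression of occupancy on environmental covariates. A minimal sketch on synthetic data (covariate, coefficients, and sample size are all hypothetical, not taken from the paper):

```python
# Illustrative sketch (not the authors' code): site-level occupancy probability
# modelled by logistic regression on one environmental covariate, fitted by
# classical maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
covariate = rng.normal(size=500)              # e.g. a habitat suitability score
true_beta = (-0.5, 1.2)                       # assumed intercept and slope
occupied = rng.random(500) < expit(true_beta[0] + true_beta[1] * covariate)

def neg_log_lik(beta):
    p = expit(beta[0] + beta[1] * covariate)
    return -np.sum(occupied * np.log(p) + (~occupied) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x)  # maximum-likelihood estimates, close to the assumed (-0.5, 1.2)
```

    Standard likelihood tools (AIC comparisons, likelihood-ratio tests) then apply directly, which is the advantage over the Bayesian random-effects fit the abstract contrasts with.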

  4. Left ventricle segmentation via two-layer level sets with circular shape constraint.

    PubMed

    Yang, Cong; Wu, Weiguo; Su, Yuanqi; Zhang, Shaoxiang

    2017-05-01

    This paper proposes a circular shape constraint and a novel two-layer level set method for the segmentation of the left ventricle (LV) from short-axis magnetic resonance images without training any shape models. Since the shape of the LV along the apex-base axis is close to a ring, we propose a circle fitting term in the level set framework to detect the endocardium. The circle fitting term penalizes deviation of the evolving contour from its fitted circle, and thereby copes well with common difficulties in LV segmentation, especially the presence of the outflow tract in basal slices and the intensity overlap between TPM and the myocardium. To extract the whole myocardium, the circle fitting term is incorporated into the two-layer level set method. The endocardium and epicardium are represented by two specified level contours of the level set function, which are evolved by an edge-based and a region-based active contour model, respectively. The proposed method has been quantitatively validated on the public data set from the MICCAI 2009 challenge on LV segmentation. Experimental results and comparisons with state-of-the-art methods demonstrate the accuracy and robustness of our method. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Carbon dioxide stripping in aquaculture -- part III: model verification

    USGS Publications Warehouse

    Colt, John; Watten, Barnaby; Pfeiffer, Tim

    2012-01-01

    Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear-regression method were evaluated against carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas phase enrichment remains to be determined. The one-parameter linear-regression model allowed C*CO2 to vary over the test, but it did not result in a better fit to the experimental data than the ASCE or fixed-C*CO2 assumptions.
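    The non-linear ASCE method referred to above fits an exponential approach of dissolved-gas concentration toward saturation. A hedged sketch on synthetic data (units, the saturation value C*, and the rate KLa below are illustrative only, not the paper's measurements):

```python
# Sketch of an ASCE-style nonlinear fit: concentration relaxes exponentially
# toward saturation C* at transfer rate KLa.  Synthetic stripping data.
import numpy as np
from scipy.optimize import curve_fit

def gas_transfer(t, c_star, c0, kla):
    return c_star + (c0 - c_star) * np.exp(-kla * t)

t = np.linspace(0, 3, 30)                     # time (h)
true = (0.5, 20.0, 1.2)                       # C* (mg/L), C0 (mg/L), KLa (1/h)
conc = gas_transfer(t, *true) + np.random.default_rng(1).normal(0, 0.05, t.size)

params, _ = curve_fit(gas_transfer, t, conc, p0=(1.0, 15.0, 1.0))
print(params)  # should recover roughly (0.5, 20.0, 1.2)
```

    The 2-point method, by contrast, solves for KLa from just two concentration measurements, which is why both approaches agree only while the exponential model itself holds (KLaCO2 below roughly 1.5/h in the paper's data).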

  6. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    ERIC Educational Resources Information Center

    Weiss, Brandi A.; Dardick, William

    2016-01-01

    This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify…
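    Entropy-based fuzziness measures of this kind are computed from the model's fitted probabilities. A sketch of one common normalized formulation (illustrative only; the article's exact statistic may differ):

```python
# Mean binary entropy of fitted logistic-regression probabilities, scaled to
# [0, 1]: 0 = perfectly crisp classification, 1 = maximal fuzziness.
import numpy as np

def normalized_entropy(p):
    """Average Shannon entropy (base 2) of predicted probabilities."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h.mean())

print(normalized_entropy([0.01, 0.99, 0.02]))  # near 0: confident fit
print(normalized_entropy([0.5, 0.5, 0.5]))     # 1.0: maximally fuzzy
```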

  7. A fast, model-independent method for cerebral cortical thickness estimation using MRI.

    PubMed

    Scott, M L J; Bromiley, P A; Thacker, N A; Hutchinson, C E; Jackson, A

    2009-04-01

    Several algorithms for measuring the cortical thickness in the human brain from MR image volumes have been described in the literature, the majority of which rely on fitting deformable models to the inner and outer cortical surfaces. However, the constraints applied during the model fitting process in order to enforce spherical topology and to fit the outer cortical surface in narrow sulci, where the cerebrospinal fluid (CSF) channel may be obscured by partial voluming, may introduce bias in some circumstances, and greatly increase the processor time required. In this paper we describe an alternative, voxel based technique that measures the cortical thickness using inversion recovery anatomical MR images. Grey matter, white matter and CSF are identified through segmentation, and edge detection is used to identify the boundaries between these tissues. The cortical thickness is then measured along the local 3D surface normal at every voxel on the inner cortical surface. The method was applied to 119 normal volunteers, and validated through extensive comparisons with published measurements of both cortical thickness and rate of thickness change with age. We conclude that the proposed technique is generally faster than deformable model-based alternatives, and free from the possibility of model bias, but suffers no reduction in accuracy. In particular, it will be applicable in data sets showing severe cortical atrophy, where thinning of the gyri leads to points of high curvature, and so the fitting of deformable models is problematic.

  8. Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.

    PubMed

    Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun

    2017-10-01

    To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  9. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
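    The three probabilities in the model above are linked by a simple identity: the unconditional probability of a large fire is the product of the ignition probability and the conditional probability of a large fire given ignition. A minimal worked example on hypothetical per-cell values (the numbers are invented, not from the paper):

```python
# Hypothetical daily probabilities for one 1 km^2-day grid cell.
p_ignition = 0.004            # P(fire occurrence)
p_large_given_ign = 0.05      # P(large fire | ignition)

# Unconditional probability of a large fire in that cell-day:
p_large = p_ignition * p_large_given_ign
print(round(p_large, 6))      # 0.0002
```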

  10. Goodness of fit of probability distributions for sightings as species approach extinction.

    PubMed

    Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael

    2009-04-01

    Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
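    A probability plot correlation coefficient (PPCC) check of the kind described above correlates ordered data with the theoretical quantiles of a hypothesized distribution; a higher r indicates better fit. A sketch on synthetic sighting data (the data and the distributions compared are illustrative, not the study's):

```python
# PPCC comparison: do hypothetical sighting dates look uniform or exponential?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sightings = rng.uniform(0, 150, size=200)    # hypothetical sighting years

(_, _), (_, _, r_unif) = stats.probplot(sightings, dist=stats.uniform)
(_, _), (_, _, r_expon) = stats.probplot(sightings, dist=stats.expon)
print(r_unif, r_expon)  # uniform should correlate better for these data
```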

  11. Forgetting in immediate serial recall: decay, temporal distinctiveness, or interference?

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-07-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate. Copyright (c) 2008 APA, all rights reserved.

  12. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    PubMed

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the streets network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipes junctions, and has been modified here to fit X-shaped crossroads. The results were compared with the results of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreements can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.
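    Kinematic-wave routing of the kind used above to move flow between street elements can be sketched with a simple explicit upwind scheme (this is a generic illustration under assumed celerity, grid, and inflow hydrograph, not the paper's implementation):

```python
# Explicit upwind scheme for kinematic-wave advection of discharge along a
# 1-D channel; the inflow hydrograph and all parameters are hypothetical.
import numpy as np

nx, nt = 50, 200
dx, dt, celerity = 10.0, 0.5, 2.0          # m, s, m/s  (CFL = c*dt/dx = 0.1)
q = np.zeros(nx)                           # discharge along the reach (m^3/s)
inflow = lambda t: 5.0 * np.exp(-((t - 30.0) / 10.0) ** 2)

for step in range(nt):
    q[1:] -= celerity * dt / dx * (q[1:] - q[:-1])   # upwind advection
    q[0] = inflow(step * dt)                         # upstream boundary
print(round(float(q.max()), 2))            # routed peak, attenuated below 5.0
```

    At a crossroads, the routed discharge would then be split among downstream branches by the conceptual junction scheme the abstract describes.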

  13. Whole Protein Native Fitness Potentials

    NASA Astrophysics Data System (ADS)

    Faraggi, Eshel; Kloczkowski, Andrzej

    2013-03-01

    Protein structure prediction can be separated into two tasks: sampling the configuration space of the protein chain, and assigning a fitness between these hypothetical models and the native structure of the protein. One of the more promising developments in this area is that of knowledge-based energy functions. However, standard approaches using pair-wise interactions have shown shortcomings, demonstrated by the superiority of multi-body potentials. These shortcomings arise because residue pair-wise interactions depend on other residues along the chain. We developed a method that uses whole-protein information filtered through machine learners to score protein models based on their likeness to native structures. For all models we calculated parameters associated with the distance to the solvent and with distances between residues. These parameters, in addition to energy estimates obtained using a four-body potential, DFIRE, and RWPlus, were used to train machine learners to predict the fitness of the models. Testing on CASP 9 targets showed that our method is superior to DFIRE, RWPlus, and the four-body potential, which are considered standards in the field.

  14. Separation mechanism of nortriptyline and amitriptyline in RPLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gritti, Fabrice; Guiochon, Georges A

    2005-08-01

    The single and the competitive equilibrium isotherms of nortriptyline and amitriptyline were acquired by frontal analysis (FA) on the C18-bonded Discovery column, using a 28/72 (v/v) mixture of acetonitrile and water buffered with phosphate (20 mM, pH 2.70). The adsorption energy distributions (AED) of each compound were calculated from the raw adsorption data. Both the fitting of the adsorption data using multi-linear regression analysis and the AEDs are consistent with a trimodal isotherm model. The single-component isotherm data fit well to the tri-Langmuir isotherm model. The extension to a competitive two-component tri-Langmuir isotherm model based on the best parameters of the single-component isotherms does not account well for the breakthrough curves nor for the overloaded band profiles measured for mixtures of nortriptyline and amitriptyline. However, it was possible to derive adjusted parameters of a competitive tri-Langmuir model based on the fitting of the adsorption data obtained for these mixtures. A very good agreement was then found between the calculated and the experimental overloaded band profiles of all the mixtures injected.

  15. Comparison of a layered slab and an atlas head model for Monte Carlo fitting of time-domain near-infrared spectroscopy data of the adult head

    PubMed Central

    Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.

    2014-01-01

    Abstract. Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject’s head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers. PMID:24407503

  16. DSM-5 and ICD-11 as competing models of PTSD in preadolescent children exposed to a natural disaster: assessing validity and co-occurring symptomatology

    PubMed Central

    La Greca, Annette M.; Danzi, BreAnne A.; Chan, Sherilynn F.

    2017-01-01

    ABSTRACT Background: Major revisions have been made to the DSM and ICD models of post-traumatic stress disorder (PTSD). However, it is not known whether these models fit children’s post-trauma responses, even though children are a vulnerable population following disasters. Objective: Using data from Hurricane Ike, we examined how well trauma-exposed children’s symptoms fit the DSM-IV, DSM-5 and ICD-11 models, and whether the models varied by gender. We also evaluated whether elevated symptoms of depression and anxiety characterized children meeting PTSD criteria based on DSM-5 and ICD-11. Method: Eight-months post-disaster, children (N = 327, 7–11 years) affected by Hurricane Ike completed measures of PTSD, anxiety and depression. Algorithms approximated a PTSD diagnosis based on DSM-5 and ICD-11 models. Results: Using confirmatory factor analysis, ICD-11 had the best-fitting model, followed by DSM-IV and DSM-5. The ICD-11 model also demonstrated strong measurement invariance across gender. Analyses revealed poor overlap between DSM-5 and ICD-11, although children meeting either set of criteria reported severe PTSD symptoms. Further, children who met PTSD criteria for DSM-5, but not for ICD-11, reported significantly higher levels of depression and general anxiety than children not meeting DSM-5 criteria. Conclusions: Findings support the parsimonious ICD-11 model of PTSD for trauma-exposed children, although adequate fit also was obtained for DSM-5. Use of only one model of PTSD, be it DSM-5 or ICD-11, will likely miss children with significant post-traumatic stress. DSM-5 may identify children with high levels of comorbid symptomatology, which may require additional clinical intervention. PMID:28451076

  17. DSM-5 and ICD-11 as competing models of PTSD in preadolescent children exposed to a natural disaster: assessing validity and co-occurring symptomatology.

    PubMed

    La Greca, Annette M; Danzi, BreAnne A; Chan, Sherilynn F

    2017-01-01

    Background : Major revisions have been made to the DSM and ICD models of post-traumatic stress disorder (PTSD). However, it is not known whether these models fit children's post-trauma responses, even though children are a vulnerable population following disasters. Objective : Using data from Hurricane Ike, we examined how well trauma-exposed children's symptoms fit the DSM-IV, DSM-5 and ICD-11 models, and whether the models varied by gender. We also evaluated whether elevated symptoms of depression and anxiety characterized children meeting PTSD criteria based on DSM-5 and ICD-11. Method : Eight-months post-disaster, children ( N  = 327, 7-11 years) affected by Hurricane Ike completed measures of PTSD, anxiety and depression. Algorithms approximated a PTSD diagnosis based on DSM-5 and ICD-11 models. Results : Using confirmatory factor analysis, ICD-11 had the best-fitting model, followed by DSM-IV and DSM-5. The ICD-11 model also demonstrated strong measurement invariance across gender. Analyses revealed poor overlap between DSM-5 and ICD-11, although children meeting either set of criteria reported severe PTSD symptoms. Further, children who met PTSD criteria for DSM-5, but not for ICD-11, reported significantly higher levels of depression and general anxiety than children not meeting DSM-5 criteria. Conclusions : Findings support the parsimonious ICD-11 model of PTSD for trauma-exposed children, although adequate fit also was obtained for DSM-5. Use of only one model of PTSD, be it DSM-5 or ICD-11, will likely miss children with significant post-traumatic stress. DSM-5 may identify children with high levels of comorbid symptomatology, which may require additional clinical intervention.

  18. [Radiance Simulation of BUV Hyperspectral Sensor on Multi Angle Observation, and Improvement to Initial Total Ozone Estimating Model of TOMS V8 Total Ozone Algorithm].

    PubMed

    Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun

    2015-11-01

    New hyperspectral sensors for total ozone detection are expected to be carried on geostationary platforms in the future, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving increasing attention. Sensors on geostationary satellites frequently acquire images at large observation angles, which places higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is well developed and widely used with low-orbit ozone-detecting sensors, but it still lacks accuracy at large observation geometries, so improving the accuracy of total ozone retrieval remains an urgent problem. Using the moderate-resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiance in the spectral region from 305 to 360 nm was simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles) and 26 standard profiles, and the correlation and trends between atmospheric total ozone and backscattered UV radiance were analyzed from the resulting data. On this basis, a modified initial total ozone estimation model for the TOMS V8 algorithm was constructed to improve the initial estimation accuracy at large observation geometries. The analysis shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises. At small solar zenith angles (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but increases with VZA at large SZA. Comparison of the two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both achieve high precision (R² > 0.90), with precision decreasing as SZA and VZA rise. In most cases the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential model. With increasing VZA or SZA the fitting precision gradually declines, and the decline is steeper at larger VZA or SZA; in addition, the precision exhibits a plateau over the small-SZA range. The modified initial total ozone estimation model (ln(I) vs. Ω) was established on the basis of the logarithmic fitting model and compared with the traditional estimation model (I vs. ln(Ω)). The RMSE of both models trends downward as total ozone rises: in the low total ozone region (175-275 DU) the RMSE is clearly higher than in the high region (425-525 DU), with an RMSE peak near 225 DU and a trough near 475 DU. With increasing VZA and SZA the RMSE of both initial estimation models rises overall, and the rise is more pronounced for ln(I) vs. Ω. The modified model outperforms the traditional model over the whole total ozone range (RMSE 0.087%-0.537% lower), especially in the low total ozone region and at large observation geometries. The traditional estimation model relies on the precision of the exponential fit, and the modified model on the precision of the logarithmic fit. The improved estimation accuracy of the modified initial total ozone model extends the application range of the TOMS V8 algorithm; for sensors carried on geostationary platforms, the modified model can help improve inversion accuracy over wide spatial and temporal ranges. This modified model can also support and inform future updates of the TOMS algorithm.
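    The two single-variable regression forms discussed above, ln(I) regressed on Ω versus I regressed on ln(Ω), can be compared with ordinary least squares. The sketch below uses synthetic radiance generated from an assumed exponential decay (the decay constant and noise level are illustrative, not the paper's values):

```python
# Comparing the residuals of the two regression forms on synthetic data.
import numpy as np

ozone = np.linspace(175, 525, 50)                 # total ozone Omega (DU)
rng = np.random.default_rng(7)
radiance = 3.0 * np.exp(-0.004 * ozone)           # hypothetical R317.5 decay
radiance *= 1 + rng.normal(0, 0.01, ozone.size)   # 1% multiplicative noise

# Form 1: ln(I) vs Omega (linear in log space).
b1, a1 = np.polyfit(ozone, np.log(radiance), 1)
resid_ln_i = radiance - np.exp(a1 + b1 * ozone)

# Form 2: I vs ln(Omega).
b2, a2 = np.polyfit(np.log(ozone), radiance, 1)
resid_ln_omega = radiance - (a2 + b2 * np.log(ozone))

print(np.std(resid_ln_i), np.std(resid_ln_omega))
```

    On data that truly decay exponentially with Ω, as assumed here, the ln(I)-vs-Ω form leaves the smaller residual; which form wins on real radiances is exactly what the RMSE comparison in the abstract measures.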

  19. Estimation of a super-resolved PSF for the data reduction of undersampled stellar observations. Deriving an accurate model for fitting photometry with Corot space telescope

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, L.; Auvergne, M.; Toublanc, D.; Rowe, J.; Kuschnig, R.; Matthews, J.

    2006-06-01

    Context: Fitting photometry algorithms can be very effective provided that an accurate model of the instrumental point spread function (PSF) is available. When high-precision time-resolved photometry is required, however, the use of point-source star images as empirical PSF models can be unsatisfactory, due to the limits in their spatial resolution. Theoretically derived models, on the other hand, are limited by the unavoidable adoption of simplifying hypotheses, while the use of analytical approximations is restricted to regularly shaped PSFs. Aims: This work investigates an innovative technique for space-based fitting photometry, based on the reconstruction of an empirical but properly resolved PSF. The aim is the exploitation of arbitrary star images, including those produced under intentional defocus. The cases of both MOST and COROT, the first space telescopes dedicated to time-resolved stellar photometry, are considered in the evaluation of the effectiveness and performance of the proposed methodology. Methods: PSF reconstruction is based on a set of star images, periodically acquired and presenting relative subpixel displacements due to motion of the acquisition system, in this case the jitter of the satellite attitude. Higher resolution is achieved through the solution of the inverse problem. The approach can be regarded as a special application of super-resolution techniques, though a specialised procedure is proposed to better meet the specificities of the PSF determination problem. The application of such a model to fitting photometry is illustrated by numerical simulations for COROT and on a complete set of observations from MOST. Results: We verify that, in both scenarios, significantly better resolved PSFs can be estimated, leading to corresponding improvements in photometric results. For COROT, subpixel reconstruction enabled the successful use of fitting algorithms despite its rather complex PSF profile, which could hardly be modelled otherwise. For MOST, whose direct-imaging PSF is closer to the ordinary, comparisons with other models and photometry techniques were carried out and confirmed the potential of PSF reconstruction in real observational conditions.

  20. Flow Channel Influence of a Collision-Based Piezoelectric Jetting Dispenser on Jet Performance

    PubMed Central

    Deng, Guiling; Li, Junhui; Duan, Ji’an

    2018-01-01

    To improve the jet performance of a bi-piezoelectric jet dispenser, mathematical and simulation models were established according to the operating principle. In order to improve the accuracy and reliability of the simulation, the viscosity of the fluid was fitted as a fifth-order function of shear rate based on rheological test data, and the needle displacement was fitted as a ninth-order function of time based on real-time displacement test data. The results show that jet performance is related to the diameter of the nozzle outlet and the cone angle of the nozzle, and the impacts of the flow channel structure were confirmed. The numerical simulation approach is validated by the test results for droplet volume. This work provides a reliable simulation platform for mechanical collision-based jet dispensing and a theoretical basis for micro jet valve design and improvement. PMID:29677140

  1. PARCS: A Safety Net Community-Based Fitness Center for Low-Income Adults

    PubMed Central

    Keith, NiCole; de Groot, Mary; Mi, Deming; Alexander, Kisha; Kaiser, Stephanie

    2015-01-01

    Background Physical activity (PA) and fitness are critical to maintaining health and avoiding chronic disease. Limited access to fitness facilities in low-income urban areas has been identified as a contributor to low PA participation and poor fitness. Objectives This research describes community-based fitness centers established for adults living in low-income, urban communities and characterizes a sample of its members. Methods The community identified a need for physical fitness opportunities to improve residents’ health. Three community high schools were host sites. Resources were combined to renovate and staff facilities, acquire equipment, and refer patients to exercise. The study sample included 170 members ≥ age 18yr who completed demographic, exercise self-efficacy, and quality of life surveys and a fitness evaluation. Neighborhood-level U.S. Census data were obtained for comparison. Results The community-based fitness centers resulted from university, public school, and hospital partnerships offering safe, accessible, and affordable exercise opportunities. The study sample mean BMI was 35 ± 7.6 (Class II obesity), mean age was 50yr ± 12.5, 66% were black, 72% were female, 66% completed some college or greater, and 71% had an annual household income < $25K and supported 2.2 dependents. Participants had moderate confidence for exercise participation and low fitness levels. When compared to census data, participants were representative of their communities. Conclusion This observational study reveals a need for affordable fitness centers for low-income adults. We demonstrate a model where communities and organizations strategically leverage resources to address disparities in physical fitness and health. PMID:27346764

  2. Improvements in prevalence trend fitting and incidence estimation in EPP 2013

    PubMed Central

    Brown, Tim; Bao, Le; Eaton, Jeffrey W.; Hogan, Daniel R.; Mahy, Mary; Marsh, Kimberly; Mathers, Bradley M.; Puckett, Robert

    2014-01-01

    Objective: Describe modifications to the latest version of the Joint United Nations Programme on AIDS (UNAIDS) Estimation and Projection Package component of Spectrum (EPP 2013) to improve prevalence fitting and incidence trend estimation in national epidemics and global estimates of HIV burden. Methods: Key changes made under the guidance of the UNAIDS Reference Group on Estimates, Modelling and Projections include: availability of a range of incidence calculation models and guidance for selecting a model; a shift to reporting the Bayesian median instead of the maximum likelihood estimate; procedures for comparison and validation against reported HIV and AIDS data; incorporation of national surveys as an integral part of the fitting and calibration procedure, allowing survey trends to inform the fit; improved antenatal clinic calibration procedures in countries without surveys; adjustment of national antiretroviral therapy reports used in the fitting to include only those aged 15–49 years; better estimates of mortality among people who inject drugs; and enhancements to speed fitting. Results: The revised models in EPP 2013 allow closer fits to observed prevalence trend data and reflect improving understanding of HIV epidemics and associated data. Conclusion: Spectrum and EPP continue to adapt to make better use of the existing data sources, incorporate new sources of information in their fitting and validation procedures, and correct for quantifiable biases in inputs as they are identified and understood. These adaptations provide countries with better calibrated estimates of incidence and prevalence, which increase epidemic understanding and provide a solid base for program and policy planning. PMID:25406747

  3. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defraene, Gilles, E-mail: gilles.defraene@uzleuven.be; Van den Bergh, Laura; Al-Mamgani, Abrahim

    2012-03-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman (LKB) and Relative Seriality (RS)) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77), versus 0.63 for high stool frequency and 0.68 for fecal incontinence. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D{sub 50}. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two end points.
Conclusions: Comparable prediction models were obtained with LKB, RS, and logistic NTCP models. Including clinical factors improved the predictive power of all models significantly.
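
The maximum-likelihood logistic fit and AUC comparison described above can be sketched in Python. This is a hypothetical illustration on simulated data, not the trial dataset or the authors' code; the logistic dose-response form, the dose range, and the effect size for the "previous abdominal surgery" factor are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort (hypothetical numbers, not the trial data): a dose summary
# in Gy and a binary clinical factor ("previous abdominal surgery").
n = 500
dose = rng.uniform(55.0, 80.0, n)
surgery = rng.integers(0, 2, n).astype(float)
true_logit = -14.0 + 0.18 * dose + 1.0 * surgery
toxicity = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic fit via Newton-Raphson; returns the
    fitted complication probability for each patient."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        grad = X.T @ (y - p)                       # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))

def auc(y, score):
    """Area under the ROC curve via the Wilcoxon rank-sum identity."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

p_dose = fit_logistic(dose[:, None], toxicity)
p_both = fit_logistic(np.column_stack([dose, surgery]), toxicity)
print(auc(toxicity, p_dose), auc(toxicity, p_both))
```

On data simulated this way, the two-factor fit should discriminate at least as well as the dose-only fit, mirroring the AUC gains the study reports when clinical factors are added.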

  4. Micromechanics based phenomenological damage modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muju, S.; Anderson, P.M.; Popelar, C.H.

    A model is developed for the study of process zone effects on dominant cracks. The proposed model is intended to bridge the gap between micromechanics-based and phenomenological models for the class of problems involving microcracking, transforming inclusions, etc. It is based on the representation of localized eigenstrains using dislocation dipoles. The eigenstrain (fitting strain) is represented as the strength (Burgers vector) of the dipole, which obeys a phenomenological constitutive relation.

  5. Modeling method of time sequence model based grey system theory and application proceedings

    NASA Astrophysics Data System (ADS)

    Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang

    2015-12-01

    This article presents a modeling method for the grey-system GM(1,1) model based on information reuse and grey system theory. The method not only markedly improves the fitting and predicting accuracy of the GM(1,1) model but also retains the conventional approach's merit of computational simplicity. On this basis, we give a syphilis trend forecasting method based on information reuse and the grey-system GM(1,1) model.
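
The GM(1,1) construction referenced above follows a standard recipe: accumulate the series, fit the whitened first-order equation by least squares, and difference back. The sketch below, with numpy, uses a hypothetical toy series and does not reproduce the paper's information-reuse refinement.

```python
import numpy as np

def gm11(x0, horizon=1):
    """Grey GM(1,1) model: accumulate the series (AGO), estimate the
    whitened equation dx1/dt + a*x1 = b by least squares, then difference
    back to get fitted and forecast values of the original series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background (mean generating) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)         # back to the original scale

series = [10.0, 10.8, 11.7, 12.6, 13.6]         # hypothetical yearly case counts
fit = gm11(series, horizon=2)
print(np.round(fit, 2))
```

For a near-exponential series like this one, the fitted values track the inputs closely and the two extra entries extend the trend, which is the sense in which GM(1,1) serves as a short-horizon trend forecaster.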

  6. Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies

    NASA Astrophysics Data System (ADS)

    Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.

    2017-12-01

    Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative for various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), none of the models fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best.
According to the Akaike information criterion, the pseudo-isothermal sphere emerges as the most favored model in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of increasing the M/L ratios on average by a factor of two, but these fits are inferior to the best-fitting dark matter model.
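
The Akaike-style ranking used above can be sketched in miniature. The code below assumes simple two-parameter parametrizations of the pseudo-isothermal and NFW circular-velocity profiles, fits each to a synthetic rotation curve by a coarse grid search over chi-square, and compares AIC = chi2 + 2k; the galaxy data, parameter ranges, and noise level are all invented for the illustration and none of this reproduces the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rotation curve: pseudo-isothermal "truth" plus Gaussian noise.
r = np.linspace(0.5, 20.0, 40)                  # radii in kpc
sigma = 2.0                                     # km/s velocity uncertainty
v_true = 120.0 * np.sqrt(1 - (2.0 / r) * np.arctan(r / 2.0))
v_obs = v_true + rng.normal(0.0, sigma, r.size)

def v_iso(r, v_inf, rc):
    """Pseudo-isothermal sphere circular velocity (asymptotic speed, core radius)."""
    return v_inf * np.sqrt(1 - (rc / r) * np.arctan(r / rc))

def v_nfw(r, vs, rs):
    """NFW circular velocity in a velocity-scale / scale-radius parametrization."""
    x = r / rs
    return vs * np.sqrt((np.log(1 + x) - x / (1 + x)) / x)

def grid_fit(model, grid1, grid2):
    """Minimize chi-square over a coarse parameter grid."""
    best = (np.inf, None)
    for p1 in grid1:
        for p2 in grid2:
            chi2 = np.sum(((v_obs - model(r, p1, p2)) / sigma) ** 2)
            if chi2 < best[0]:
                best = (chi2, (p1, p2))
    return best

chi2_iso, p_iso = grid_fit(v_iso, np.arange(100, 141, 1.0), np.arange(0.5, 5.01, 0.1))
chi2_nfw, p_nfw = grid_fit(v_nfw, np.arange(80, 201, 2.0), np.arange(1.0, 30.1, 0.5))

# AIC = chi2 + 2k for Gaussian errors; both halo models have k = 2 here.
aic_iso, aic_nfw = chi2_iso + 4.0, chi2_nfw + 4.0
print(p_iso, round(aic_iso, 1), round(aic_nfw, 1))
```

Because the synthetic data were drawn from the pseudo-isothermal profile, the AIC comparison should favor it here; on real curves the ranking is an empirical question, which is the point of the paper's model-by-model comparison.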

  7. Accuracy in breast shape alignment with 3D surface fitting algorithms.

    PubMed

    Riboldi, Marco; Gierga, David P; Chen, George T Y; Baroni, Guido

    2009-04-01

    Surface imaging is used in radiotherapy clinical practice for patient setup optimization and monitoring. Breast alignment is accomplished by searching for a tentative spatial correspondence between the reference and daily surface shape models. In this study, the authors quantify whole breast shape alignment by relying on texture features digitized on 3D surface models. Texture feature localization was validated through repeated measurements in a silicone breast phantom, mounted on a high-precision mechanical stage. Clinical investigations on breast shape alignment included 133 fractions in 18 patients treated with accelerated partial breast irradiation. The breast shape was detected with a 3D video-based surface imaging system with breathing compensation. An in-house algorithm for breast alignment, based on surface fitting constrained by nipple matching (constrained surface fitting), was applied. Results were compared with commercial software in which no constraints are utilized (unconstrained surface fitting). Texture feature localization was validated within 2 mm in each anatomical direction. Clinical data show that unconstrained surface fitting achieves adequate accuracy in most cases, though nipple mismatch is considerably higher than residual surface distances (3.9 mm vs 0.6 mm on average). Outliers beyond 1 cm can be experienced as the result of a degenerate surface fit, where unconstrained surface fitting is not sufficient to establish spatial correspondence. In the constrained surface fitting algorithm, average surface mismatch within 1 mm was obtained when nipple position was forced to match in the [1.5; 5] mm range. In conclusion, optimal results can be obtained by trading off the desired overall surface congruence vs matching of selected landmarks (constraint). Constrained surface fitting is put forward to represent an improvement in setup accuracy for those applications where whole breast positional reproducibility is an issue.
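
Rigid surface alignment of the kind compared above is commonly solved with a weighted Kabsch/Umeyama least-squares transform; up-weighting one landmark correspondence gives a rough sense of how a nipple-matching constraint can be folded into the fit. This is an illustrative sketch on synthetic points, not the in-house or commercial algorithm from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def rigid_align(P, Q, w):
    """Weighted least-squares rigid transform (Kabsch/Umeyama, no scaling)
    mapping points P onto corresponding points Q; large weights in w pull
    the fit toward matching those correspondences (the "constraint")."""
    w = w / w.sum()
    cp, cq = w @ P, w @ Q                         # weighted centroids
    H = (P - cp).T @ ((Q - cq) * w[:, None])      # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic "surface": random points plus one landmark (index 0, the nipple analogue).
P = rng.normal(size=(200, 3))
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -0.5, 0.3])         # rotated + translated copy

w = np.ones(len(P))
w[0] = 50.0                                       # heavily weight the landmark
R, t = rigid_align(P, Q, w)
aligned = P @ R.T + t
print(np.abs(aligned - Q).max())
```

With exact correspondences the recovered transform matches the applied one to machine precision; in practice the interesting behavior appears when surfaces deform, which is where the landmark weight trades overall surface congruence against landmark matching.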

  8. Tumor Control Probability Modeling for Stereotactic Body Radiation Therapy of Early-Stage Lung Cancer Using Multiple Bio-physical Models

    PubMed Central

    Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen

    2017-01-01

    Purpose To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Methods and Materials The clinical data of 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies were collected for SBRT of NSCLC in the literature. The TCP data were separated for Stage T1 and T2 tumors if possible, otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ2) method with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results The fits to the clinical data yield consistently large α/β ratios of about 20 Gy for all models investigated. The regrowth model, which accounts for tumor repopulation and heterogeneity, leads to a better fit to the data than the other five models, whose fits were indistinguishable from one another. Based on the fitted parameters, the models predict that T2 tumors require about 1 Gy additional physical dose at the isocenter per fraction (for ≤5 fractions) to achieve the same optimal TCP as T1 tumors. Conclusion This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC has a strong dependence on BED, with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
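
The BED figures quoted above come from the standard linear-quadratic relation BED = n·d·(1 + d/(α/β)). A one-line helper makes the ~90 Gy threshold concrete; the 3 × 18 Gy scheme below is a common lung SBRT example chosen for illustration, not a scheme taken from the paper.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta=20.0):
    """Biologically effective dose under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta)); alpha/beta ~ 20 Gy as fitted above."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# 3 fractions of 18 Gy: 54 * (1 + 18/20) = 102.6 Gy, above the ~90 Gy threshold.
print(bed(3, 18.0))  # 102.6
```

Note how the large fitted α/β of ~20 Gy damps the per-fraction bonus relative to the α/β = 10 Gy often assumed for tumors, which is why the BED-TCP curve here is driven mostly by total physical dose.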

  9. Fitting the Incidence Data from the City of Campinas, Brazil, Based on Dengue Transmission Modellings Considering Time-Dependent Entomological Parameters

    PubMed Central

    Yang, Hyun Mo; Boldrini, José Luiz; Fassoni, Artur César; Freitas, Luiz Fernando Souza; Gomez, Miller Ceron; de Lima, Karla Katerine Barboza; Andrade, Valmir Roberto; Freitas, André Ricardo Ribas

    2016-01-01

    Four time-dependent dengue transmission models are considered in order to fit the incidence data from the City of Campinas, Brazil, recorded from October 1st 1995 to September 30th 2012. The entomological parameters are allowed to depend on temperature and precipitation, while the carrying capacity and the hatching of eggs depend only on precipitation. The whole period of dengue incidence is split into four periods because the model is formulated for the circulation of a single serotype. Dengue transmission parameters from human to mosquito and from mosquito to human are fitted for each period. The time-varying partial and overall effective reproduction numbers are obtained to explain the incidence of dengue predicted by the models. PMID:27010654

  10. Cardiovascular fitness in late adolescent males and later risk of serious non-affective mental disorders: a prospective, population-based study.

    PubMed

    Nyberg, J; Henriksson, M; Åberg, M A I; Rosengren, A; Söderberg, M; Åberg, N D; Kuhn, H G; Waern, M

    2018-02-01

    Cardiovascular fitness in late adolescence is associated with future risk of depression. Relationships with other mental disorders need elucidation. This study investigated whether fitness in late adolescence is associated with future risk of serious non-affective mental disorders. Further, we examined how having an affected brother might impact the relationship. Prospective, population-based cohort study of 1 109 786 Swedish male conscripts with no history of mental illness, who underwent conscription examinations at age 18 between 1968 and 2005. Cardiovascular fitness was objectively measured at conscription using a bicycle ergometer test. During the follow-up (3-42 years), incident cases of serious non-affective mental disorders (schizophrenia and schizophrenia-like disorders, other psychotic disorders and neurotic, stress-related and somatoform disorders) were identified through the Swedish National Hospital Discharge Register. Cox proportional hazards models were used to assess the influence of cardiovascular fitness at conscription and risk of serious non-affective mental disorders later in life. Low fitness was associated with increased risk for schizophrenia and schizophrenia-like disorders [hazard ratio (HR) 1.44, 95% confidence interval (CI) 1.29-1.61], other psychotic disorders (HR 1.41, 95% CI 1.27-1.56), and neurotic or stress-related and somatoform disorders (HR 1.45, 95% CI 1.37-1.54). Relationships persisted in models that included illness in brothers. Lower fitness in late adolescent males is associated with increased risk of serious non-affective mental disorders in adulthood.

  11. Evaluating models of remember-know judgments: complexity, mimicry, and discriminability.

    PubMed

    Cohen, Andrew L; Rotello, Caren M; Macmillan, Neil A

    2008-10-01

    Remember-know judgments provide additional information in recognition memory tests, but the nature of this information and the attendant decision process are in dispute. Competing models have proposed that remember judgments reflect a sum of familiarity and recollective information (the one-dimensional model), are based on a difference between these strengths (STREAK), or are purely recollective (the dual-process model). A choice among these accounts is sometimes made by comparing the precision of their fits to data, but this strategy may be muddied by differences in model complexity: Some models that appear to provide good fits may simply be better able to mimic the data produced by other models. To evaluate this possibility, we simulated data with each of the models in each of three popular remember-know paradigms, then fit those data to each of the models. We found that the one-dimensional model is generally less complex than the others, but despite this handicap, it dominates the others as the best-fitting model. For both reasons, the one-dimensional model should be preferred. In addition, we found that some empirical paradigms are ill-suited for distinguishing among models. For example, data collected by soliciting remember/know/new judgments--that is, the trinary task--provide a particularly weak ground for distinguishing models. Additional tables and figures may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, at www.psychonomic.org/archive.

  12. Discounting of reward sequences: a test of competing formal models of hyperbolic discounting

    PubMed Central

    Zarr, Noah; Alexander, William H.; Brown, Joshua W.

    2014-01-01

    Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting remained elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. PMID:24639662
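
The competing valuation rules contrasted above can be written down directly. The sketch below assumes the textbook one-parameter hyperbolic form V = A/(1 + kt) and the Parallel model's summed-value rule for sequences; the amounts, delays, and k = 1 are illustrative choices, and the HDTD recursion itself is not reproduced here.

```python
def exponential_value(amount, delay, gamma=0.9):
    """Exponential discounting: V = A * gamma**t."""
    return amount * gamma ** delay

def hyperbolic_value(amount, delay, k=1.0):
    """One-parameter hyperbolic discounting: V = A / (1 + k*t)."""
    return amount / (1.0 + k * delay)

def parallel_sequence_value(amounts, delays, k=1.0):
    """Parallel model: a reward sequence is worth the sum of its individually
    hyperbolically discounted rewards (no interaction between rewards)."""
    return sum(hyperbolic_value(a, d, k) for a, d in zip(amounts, delays))

# Preference reversal, the signature of hyperbolic (not exponential) discounting:
# a smaller-sooner reward beats a larger-later one only when both are near.
print(hyperbolic_value(10, 1) > hyperbolic_value(15, 3))    # near: 5.0 vs 3.75
print(hyperbolic_value(10, 11) > hyperbolic_value(15, 13))  # far: ~0.83 vs ~1.07
print(parallel_sequence_value([10, 10, 10], [1, 2, 3]))
```

The design difference the study exploits is exactly the last function: under the Parallel rule sequence value is additive in its elements, while HDTD introduces a non-linear interaction between them, so three-reward sequences can tell the models apart.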

  13. What Goes Around Comes Around … Or Does It? Disrupting the Cycle of Traditional, Sport-Based Physical Education

    PubMed Central

    Ennis, Catherine D.

    2015-01-01

    As typically taught, sport-based, multiactivity approaches to physical education provide students with few opportunities to increase their skill, fitness, or understanding. Alternative curriculum models, such as Sport Education, Teaching Games for Understanding, and Fitness for Life, represent a second generation of models that build on strong statements of democratic, student-centered practice in physical education. In the What Goes Around section of the paper, I discuss the U.S. perspective on the origins of alternative physical education curriculum models introduced in the early and mid-20th century as a response to sport and exercise programs of the times. Today, with the help of physical educators, scholars are conducting research to test new curricular alternatives or prototypes to provide evidence-based support for these models. Yet, the multiactivity, sport-based curriculum continues to dominate in most U.S. physical education classes. I discuss reasons for this dogged persistence and propose reforms to disrupt this pervasive pattern in the future. PMID:25960937

  14. An adjoint-based method for a linear mechanically-coupled tumor model: application to estimate the spatial variation of murine glioma growth based on diffusion weighted magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.

    2018-06-01

    We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.

  15. Three-dimensional whole-brain perfusion quantification using pseudo-continuous arterial spin labeling MRI at multiple post-labeling delays: accounting for both arterial transit time and impulse response function.

    PubMed

    Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M

    2014-02-01

    Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. 
Copyright © 2013 John Wiley & Sons, Ltd.

  16. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

    PubMed Central

    Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun

    2016-01-01

    Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the Yaogan-24 remote sensing satellite's on-board attitude data processing is not accurate enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto map (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the quality of the observation data as well as the convergence and stability of the parameter estimation model. In addition, both the Euler angle and the quaternion could be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared with image geometric processing based on on-board attitude data, the accuracy of uncontrolled and relative geometric positioning can be increased by about 50%. PMID:27483287

  17. Accounting for seasonal patterns in syndromic surveillance data for outbreak detection.

    PubMed

    Burr, Tom; Graves, Todd; Klamann, Richard; Michalak, Sarah; Picard, Richard; Hengartner, Nicolas

    2006-12-04

    Syndromic surveillance (SS) can potentially contribute to outbreak detection capability by providing timely, novel data sources. One SS challenge is that some syndrome counts vary with season in a manner that is not identical from year to year. Our goal is to evaluate the impact of inconsistent seasonal effects on performance assessments (false and true positive rates) in the context of detecting anomalous counts in data that exhibit seasonal variation. To evaluate the impact of inconsistent seasonal effects, we injected synthetic outbreaks into real data and into data simulated from each of two models fit to the same real data. Using real respiratory syndrome counts collected in an emergency department from 2/1/94-5/31/03, we varied the length of training data from one to eight years, applied a sequential test to the forecast errors arising from each of eight forecasting methods, and evaluated their detection probabilities (DP) on the basis of 1000 injected synthetic outbreaks. We did the same for each of two corresponding simulated data sets. The less realistic, nonhierarchical model's simulated data set assumed that "one season fits all," meaning that each year's seasonal peak has the same onset, duration, and magnitude. The more realistic simulated data set used a hierarchical model to capture violation of the "one season fits all" assumption. This experiment demonstrated optimistic bias in DP estimates for some of the methods when data simulated from the nonhierarchical model were used for DP estimation, thus suggesting that at least for some real data sets and methods, it is not adequate to assume that "one season fits all." For the data we analyze, the "one season fits all" assumption is violated, and DP performance claims based on simulated data that assume it tend to be optimistic for all the forecast methods considered except moving average methods.
Moving average methods based on relatively short amounts of training data are competitive on all three data sets, and they are particularly competitive on the real data and on the data from the hierarchical model, the two data sets that violate the "one season fits all" assumption.
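
The advantage of short-window moving averages on drifting seasons can be illustrated with a toy detector. Everything below is a hypothetical sketch, not the authors' methods or the emergency department data: counts are Poisson around a seasonal baseline whose onset shifts each year, one synthetic outbreak is injected, and a trailing-window z-score flags anomalous days.

```python
import numpy as np

rng = np.random.default_rng(3)

# Baseline syndromic counts with a seasonal cycle whose timing drifts year to
# year ("one season fits all" is violated), plus one injected outbreak.
days = np.arange(3 * 365)
onset_shift = np.repeat([0, 20, -15], 365)      # per-year seasonal drift (days)
baseline = 20 + 10 * np.sin(2 * np.pi * (days - onset_shift) / 365)
counts = rng.poisson(baseline).astype(float)
counts[800:807] += 25                           # injected synthetic outbreak

def moving_average_alarms(y, window=28, threshold=3.0):
    """Flag days whose count exceeds the trailing-window mean by `threshold`
    standard deviations; a short window tracks drifting seasonal baselines."""
    alarms = []
    for t in range(window, len(y)):
        hist = y[t - window:t]
        z = (y[t] - hist.mean()) / max(hist.std(ddof=1), 1e-9)
        if z > threshold:
            alarms.append(t)
    return alarms

alarms = moving_average_alarms(counts)
print(any(800 <= t < 807 for t in alarms))
```

Because the trailing window only looks back 28 days, the detector adapts to whichever seasonal phase the current year is in, which is the intuition behind the moving-average methods' robustness in the study.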

  18. Attributes of top elite team-handball players.

    PubMed

    Massuça, Luís M; Fragoso, Isabel; Teles, Júlia

    2014-01-01

    Researchers in the field of excellence in sport performance are becoming increasingly focused on the study of sport-specific characteristics and requirements. In accordance with this, the purposes of this study were (a) to examine the morphologic, fitness, handball-specific skill, psychological, and "biosocial" differences between top elite and nontop elite team-handball players and (b) to investigate the extent to which they may be used to identify top elite team-handball players. One hundred sixty-seven adult male team-handball players were studied and divided into 2 groups: top elite (n = 41) and nontop elite (n = 126). Twenty-eight morphologic, 9 fitness, 1 handball-specific skill, 2 psychological, and 2 "biosocial" attributes were used. The top elite and nontop elite groups were compared for each variable of interest using Student's t-test, and 5 logistic regression analyses were performed with the athlete's performance group (top elite or nontop elite) as the dependent variable and the variables of each category as predictors. The results showed that (a) body mass, waist girth, radiale-dactylion length, midstylion-dactylion length, and absolute muscle mass (morphologic model); (b) 30-m sprint time, countermovement jump height and average power, abdominal strength, and the class of performance in the Yo-Yo Intermittent Endurance Test (fitness model); (c) offensive power (specific-skills model); (d) ego-based motivational orientation (psychological model); and (e) socioeconomic status and the energy spent (per week) in handball activity (biosocial model) contributed significantly (p < 0.05) to predicting the probability that an athlete is a top elite team-handball player. Moreover, the fitness model exhibited a higher percentage of correct classification (91.5%) than all the other models.
This study provided (a) the rationale to reduce the battery of tests for evaluation purposes and (b) an initial step toward building a multidisciplinary model to predict the probability that a handball athlete is a top elite player.

  19. Core overshoot and convection in δ Scuti and γ Doradus stars

    NASA Astrophysics Data System (ADS)

    Lovekin, Catherine; Guzik, Joyce A.

    2017-09-01

    The effects of rotation on pulsation in δ Scuti and γ Doradus stars are poorly understood. Stars in this mass range span the transition from convective envelopes to convective cores, and realistic models of convection are thus a key part of understanding these stars. In this work, we use 2D asteroseismic modelling of five stars observed with the Kepler spacecraft to provide constraints on the age, mass, rotation rate, and convective core overshoot. We use Period04 to calculate the frequencies based on short-cadence Kepler observations of five γ Doradus and δ Scuti stars. We fit these stars with rotating models calculated using MESA and adiabatic pulsation frequencies calculated with GYRE. Comparison of these models with the pulsation frequencies of three stars observed with Kepler allowed us to place constraints on the age, mass, and rotation rate of these stars. All frequencies not identified as possible combinations were compared to theoretical frequencies calculated using models including the effects of rotation and overshoot. The best-fitting models for all five stars are slowly rotating at the best-fitting age and have moderate convective core overshoot. We discuss the results of the frequency extraction and fitting process.

  20. Potential formulation of sleep dynamics

    NASA Astrophysics Data System (ADS)

    Phillips, A. J. K.; Robinson, P. A.

    2009-02-01

    A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.

  1. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis are presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared with channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward modeling and the inversion algorithm more efficient.
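
The compression step can be sketched with a plain SVD-based principal component projection: train empirical orthogonal functions on a set of spectra, then represent any new spectrum by a handful of scores. The channel count, training-set size, and number of retained components below are arbitrary stand-ins for illustration, not the actual hyperspectral sensor configuration or the authors' retrieval code.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training set: 300 spectra over 2000 channels driven by 3
# underlying atmospheric factors, plus small channel noise.
channels, n_train, k = 2000, 300, 20
basis = rng.normal(size=(3, channels))
radiances = rng.normal(size=(n_train, 3)) @ basis \
    + 0.01 * rng.normal(size=(n_train, channels))

mean = radiances.mean(axis=0)
_, _, Vt = np.linalg.svd(radiances - mean, full_matrices=False)
eofs = Vt[:k]                                   # leading principal components

def compress(spectrum):
    """Project a full channel spectrum onto k principal-component scores."""
    return eofs @ (spectrum - mean)

def reconstruct(scores):
    """Map scores back to channel space."""
    return mean + scores @ eofs

new_spectrum = rng.normal(size=3) @ basis       # unseen spectrum, same factors
scores = compress(new_spectrum)
err = np.abs(reconstruct(scores) - new_spectrum).max()
print(scores.shape, err)
```

A 2000-channel spectrum is represented by 20 numbers with negligible loss because the training variability lives in a low-dimensional subspace, which is the property the PCA-based forward model and inversion exploit.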

  2. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    PubMed

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. 
The results of our study can serve as a basis for any (Markov) model that requires the parameterization of transition probabilities when only summary KM plots are available.
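
The tutorial's final step, deriving per-cycle transition probabilities from a fitted survival function via tp(t) = 1 - S(t)/S(t - u), can be sketched in a few lines of Python. The log-logistic shape and scale values below are made up for illustration; they are not the BOLERO-2 estimates.

```python
import numpy as np

def loglogistic_survival(t, lam, gamma):
    # Log-logistic survivor function: S(t) = 1 / (1 + (lam * t)**gamma)
    return 1.0 / (1.0 + (lam * t) ** gamma)

def transition_probs(times, lam, gamma, cycle=1.0):
    # General formula for a time-dependent transition probability over one
    # model cycle of length `cycle`: tp(t) = 1 - S(t) / S(t - cycle)
    t = np.asarray(times, dtype=float)
    return 1.0 - (loglogistic_survival(t, lam, gamma)
                  / loglogistic_survival(t - cycle, lam, gamma))

# Hypothetical parameters for illustration only (not fitted to any trial data).
tp = transition_probs(np.arange(1, 25), lam=0.1, gamma=1.5)
```

Each element of `tp` is the probability of progressing during that cycle, conditional on being progression-free at its start, which is exactly what a Markov cohort model consumes.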

  3. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust, reproducible and user-independent solution for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
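
As a rough illustration of the grid parameter searching idea mentioned above, the sketch below fits a one-tissue compartment model to a synthetic time-activity curve by exhaustive grid search over (K1, k2). The input function, rate constants and grid are all hypothetical; this is a minimal sketch, not the authors' ML method.

```python
import numpy as np

def one_tissue_tac(t, K1, k2, cp):
    # One-tissue compartment model: C_t(t) = K1 * exp(-k2*t) convolved with the
    # plasma input C_p(t), evaluated with a discrete convolution on a uniform grid.
    dt = t[1] - t[0]
    return K1 * dt * np.convolve(np.exp(-k2 * t), cp)[: len(t)]

# Synthetic, noise-free example (hypothetical input function and rate constants).
t = np.linspace(0, 60, 241)                 # minutes
cp = t * np.exp(-0.5 * t)                   # toy plasma input function
target = one_tissue_tac(t, 0.5, 0.3, cp)    # "measured" tissue curve

# Grid search over (K1, k2), loosely mirroring a grid parameter search.
K1_grid = np.linspace(0.1, 1.0, 10)
k2_grid = np.linspace(0.1, 1.0, 10)
sse = np.array([[np.sum((one_tissue_tac(t, K1, k2, cp) - target) ** 2)
                 for k2 in k2_grid] for K1 in K1_grid])
i, j = np.unravel_index(np.argmin(sse), sse.shape)
best_K1, best_k2 = K1_grid[i], k2_grid[j]
```

Because the search evaluates a fixed grid rather than following noisy gradients, it is trivially parallelizable and fully reproducible, which is part of the appeal described in the abstract.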

  4. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust, reproducible and user-independent solution for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  5. Performance of DIMTEST-and NOHARM-Based Statistics for Testing Unidimensionality

    ERIC Educational Resources Information Center

    Finch, Holmes; Habing, Brian

    2007-01-01

    This Monte Carlo study compares the ability of the parametric bootstrap version of DIMTEST with three goodness-of-fit tests calculated from a fitted NOHARM model to detect violations of the assumption of unidimensionality in testing data. The effectiveness of the procedures was evaluated for different numbers of items, numbers of examinees,…

  6. Conducting a Community-Based Experiential-Learning Project to Address Youth Fitness

    ERIC Educational Resources Information Center

    Petersen, Jeffrey C.; Judge, Lawrence; Pierce, David A.

    2012-01-01

    There is a need within health, physical education, recreation, dance, and sport programs to increase community engagement via experiential learning. The Chase Charlie Races are presented in this article as a model pedagogical strategy to engage community youths and families in a training program and running event to help promote fitness. Key…

  7. Cross-National Comparisons of College Students' Attitudes toward Diet/Fitness Apps on Smartphones

    ERIC Educational Resources Information Center

    Cho, Jaehee; Lee, H. Erin; Quinlan, Margaret

    2017-01-01

    Objective: Based on the technology acceptance model (TAM), we explored the nationally-bounded roles of four predictors (subjective norms, entertainment, recordability, and networkability) in determining the TAM variables of perceived usefulness (PU), perceived ease of use (PEOU), and behavioral intention (BI) to use diet/fitness apps on…

  8. HDFITS: Porting the FITS data model to HDF5

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as formats for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases) and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS-formatted data can be presented transparently as an in-memory FITS equivalent.

  9. Two solar proton fluence models based on ground level enhancement observations

    NASA Astrophysics Data System (ADS)

    Raukunen, Osku; Vainio, Rami; Tylka, Allan J.; Dietrich, William F.; Jiggens, Piers; Heynderickx, Daniel; Dierckxsens, Mark; Crosby, Norma; Ganse, Urs; Siipola, Robert

    2018-01-01

Solar energetic particles (SEPs) constitute an important component of the radiation environment in interplanetary space. Accurate modeling of SEP events is crucial for the mitigation of radiation hazards in spacecraft design. In this study we present two new statistical models of high energy solar proton fluences based on ground level enhancement (GLE) observations during solar cycles 19-24. As the basis of our modeling, we utilize four-parameter double power law (Band function) fits to the integral GLE fluence spectra in rigidity. In the first model, the integral and differential fluences for protons with energies between 10 MeV and 1 GeV are calculated using the fits, and the distributions of the fluences at certain energies are modeled with an exponentially cut-off power law function. In the second model, we use a more advanced methodology: by investigating the distributions and relationships of the spectral fit parameters we find that they can be modeled as two independent and two dependent variables. Therefore, instead of modeling the fluences separately at different energies, we can model the shape of the fluence spectrum. We present examples of modeling results and show that the two methodologies agree well except for a short mission duration (1 year) at low confidence level. We also show that there is a reasonable agreement between our models and three well-known solar proton models (JPL, ESP and SEPEM), despite the differences in both the modeling methodologies and the data used to construct the models.
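
One common parameterization of a four-parameter double power law in rigidity is sketched below: a power law with exponential rollover below the break, joined continuously and smoothly to a pure power law above it. The parameter values are hypothetical, not fitted GLE spectra.

```python
import numpy as np

def band_fluence(R, c, g1, g2, R0):
    # Double power law with exponential rollover ("Band function") in rigidity
    # R [GV]: below the break R_b = (g2 - g1) * R0 the spectrum is a cut-off
    # power law; above it, a power law with the prefactor chosen so that the
    # two branches join with matching value and slope at R_b.
    R = np.asarray(R, dtype=float)
    Rb = (g2 - g1) * R0
    low = c * R ** (-g1) * np.exp(-R / R0)
    high = c * R ** (-g2) * Rb ** (g2 - g1) * np.exp(-(g2 - g1))
    return np.where(R <= Rb, low, high)

# Hypothetical parameter values for illustration only.
spec = band_fluence(np.linspace(0.5, 10.0, 200), c=1e6, g1=1.0, g2=3.0, R0=1.0)
```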

  10. Calibration data Analysis Package (CAP): An IDL based widget application for analysis of X-ray calibration data

    NASA Astrophysics Data System (ADS)

    Vaishali, S.; Narendranath, S.; Sreekumar, P.

An IDL (Interactive Data Language) based widget application, developed for the calibration of the C1XS instrument (Narendranath et al., 2010) on Chandrayaan-1, has been modified to provide a generic package for the analysis of data from X-ray detectors. The package supports files in ASCII as well as FITS format. Data can be fitted with a list of inbuilt functions to derive the spectral redistribution function (SRF). We have incorporated functions such as `HYPERMET' (Philips & Marlow 1976), including non-Gaussian components in the SRF such as the low energy tail, low energy shelf and escape peak. In addition, users can incorporate additional models which may be required to describe detector-specific features. Spectral fits use the routine `mpfit', which implements the Levenberg-Marquardt least-squares fitting method. The SRF derived from this tool can be fed into an accompanying program to generate a redistribution matrix file (RMF) compatible with the X-ray spectral analysis package XSPEC. The tool provides a user-friendly interface helpful to beginners, while also providing transparency and advanced features for experts.
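
A minimal sketch of this kind of SRF fit, with SciPy standing in for IDL's `mpfit`: a Gaussian photopeak plus an erfc shelf (one of the HYPERMET-style non-Gaussian components) is fitted to a synthetic, noise-free spectrum with Levenberg-Marquardt least squares. All parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def srf(x, amp, mu, sigma, shelf):
    # Minimal HYPERMET-like response: Gaussian photopeak plus a low-energy
    # shelf modelled with the complementary error function.
    gauss = amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    step = shelf * 0.5 * erfc((x - mu) / (np.sqrt(2.0) * sigma))
    return gauss + step

# Noise-free synthetic spectrum with hypothetical parameters.
x = np.linspace(0.0, 10.0, 401)
true = (100.0, 5.0, 0.5, 2.0)
y = srf(x, *true)

# Levenberg-Marquardt least squares (curve_fit's default method without bounds).
popt, _ = curve_fit(srf, x, y, p0=(80.0, 4.5, 0.7, 1.0))
```

In a real workflow, further terms (exponential tail, escape peak) would be added to `srf`, and the fitted parameters would feed the RMF generation step.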

  11. Bayesian Evaluation of Dynamical Soil Carbon Models Using Soil Carbon Flux Data

    NASA Astrophysics Data System (ADS)

    Xie, H. W.; Romero-Olivares, A.; Guindani, M.; Allison, S. D.

    2017-12-01

2016 was Earth's hottest year in the modern temperature record and the third consecutive record-breaking year. As the planet continues to warm, temperature-induced changes in respiration rates of soil microbes could reduce the amount of carbon sequestered in the soil organic carbon (SOC) pool, one of the largest terrestrial stores of carbon. This would accelerate temperature increases. In order to predict the future size of the SOC pool, mathematical soil carbon models (SCMs) describing interactions between the biosphere and atmosphere are needed. SCMs must be validated before they can be chosen for predictive use. In this study, we check two SCMs, called CON and AWB, for consistency with observed data using Bayesian goodness-of-fit testing that can also be used in the future to compare other models. We compare the fit of the models to longitudinal soil respiration data from a meta-analysis of soil heating experiments using a family of Bayesian goodness-of-fit metrics called information criteria (ICs), including the Widely Applicable Information Criterion (WAIC), the Leave-One-Out Information Criterion (LOOIC), and the Log Pseudo Marginal Likelihood (LPML). These ICs take the entire posterior distribution into account, rather than just one outputted model fit line. A lower WAIC and LOOIC and a larger LPML indicate a better fit. We compare AWB and CON with fixed steady state model pool sizes. At equivalent SOC, dissolved organic carbon, and microbial pool sizes, CON always outperforms AWB quantitatively by all three ICs used. AWB monotonically improves in fit as we reduce the SOC steady state pool size while fixing all other pool sizes, and the same is almost true for CON. The AWB model with the lowest SOC is the best performing AWB model, while the CON model with the second lowest SOC is the best performing model. We observe that AWB displays more changes in slope sign and qualitatively more adaptive dynamics, which prevents AWB from being fully ruled out for predictive use, but based on the ICs, CON is clearly the superior model for fitting the data. Hence, we demonstrate that Bayesian goodness-of-fit testing with information criteria helps us rigorously determine the consistency of models with data. Models that demonstrate their consistency with multiple data sets under our approach can then be selected for further refinement.
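
For concreteness, WAIC can be computed directly from a matrix of pointwise posterior log-likelihoods. The sketch below (plain NumPy, not the authors' code) follows the standard definition WAIC = -2(lppd - p_waic), so lower is better, matching the convention in the abstract.

```python
import numpy as np

def waic(loglik):
    # loglik: array of shape (S, N) holding the pointwise log-likelihood of N
    # observations under S posterior draws.
    # lppd  = sum over observations of log(mean over draws of the likelihood),
    # p_waic = sum over observations of the variance of the log-likelihood.
    m = loglik.max(axis=0)                  # shift for numerical stability
    lppd = np.sum(m + np.log(np.mean(np.exp(loglik - m), axis=0)))
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# Toy example: 100 posterior draws, 10 observations, constant log-likelihood,
# so lppd = 10 * (-1.3) and p_waic = 0.
waic_flat = waic(np.full((100, 10), -1.3))
```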

  12. Discrete stochastic analogs of Erlang epidemic models.

    PubMed

    Getz, Wayne M; Dougherty, Eric R

    2018-12-01

Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter itself is at one end of the spectrum of Erlang SE[Formula: see text]I[Formula: see text]R models with [Formula: see text] concatenated E compartments and [Formula: see text] concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE[Formula: see text]I[Formula: see text]R models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges, but sacrifices best-fit likelihood scores by at least 7%.
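
A minimal deterministic sketch of a discrete-time SE(m)I(n)R analog: chained E and I stages are advanced by fixed per-step probabilities, which gives Erlang-like residence times in the exposed and infectious classes. This is a simplification of the time-to-go formulation, and all parameter values are hypothetical.

```python
import numpy as np

def step(S, E, I, R, beta, pE, pI, N):
    # One discrete time step: new infections move S -> E[0]; each E (or I)
    # stage advances to the next with probability pE (or pI) per step; the
    # final I stage feeds R. Deterministic (expected-value) flows.
    new_inf = min(beta * S * I.sum() / N, S)
    outE = pE * E
    outI = pI * I
    S = S - new_inf
    E = E - outE
    E[0] += new_inf
    E[1:] += outE[:-1]
    I = I - outI
    I[0] += outE[-1]
    I[1:] += outI[:-1]
    R = R + outI[-1]
    return S, E, I, R

# Hypothetical setup: 3 exposed stages, 2 infectious stages, one initial case.
N = 1000.0
S, E, I, R = N - 1.0, np.zeros(3), np.array([1.0, 0.0]), 0.0
for _ in range(200):
    S, E, I, R = step(S, E, I, R, beta=0.4, pE=0.3, pI=0.25, N=N)
```

A stochastic extension would replace the fractional flows with binomial draws using the same per-step probabilities.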

  13. A Multi-year Multi-passband CCD Photometric Study of the W UMa Binary EQ Tauri

    NASA Astrophysics Data System (ADS)

    Alton, K. B.

    2009-12-01

A revised ephemeris and updated orbital period for EQ Tau have been determined from newly acquired (2007-2009) CCD-derived photometric data. A Roche-type model based on the Wilson-Devinney code produced simultaneous theoretical fits of light curve data in three passbands by invoking cold spots on the primary component. These new model fits, along with similar light curve data for EQ Tau collected during the previous six seasons (2000-2006), provided a rare opportunity to follow the seasonal appearance of star spots on a W UMa binary system over nine consecutive years. Fixed values for q, Ω1,2, T1, T2, and i based upon the mean of eleven separately determined model fits produced for this system are hereafter proposed for future light curve modeling of EQ Tau. With the exception of the 2001 season all other light curves produced since then required a spotted solution to address the flux asymmetry exhibited by this binary system at Max I and Max II. At least one cold spot on the primary appears in seven out of twelve light curves for EQ Tau produced over the last nine years, whereas in six instances two cold spots on the primary star were invoked to improve the model fit. Solutions using a hot spot were less common and involved positioning a single spot on the primary constituent during the 2001-2002, 2002-2003, and 2005-2006 seasons.

  14. Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levinson, Zachary; Verduijn, Erik; Wood, Obert R.

    2016-04-01

Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.

  15. Associations of physical fitness and academic performance among schoolchildren.

    PubMed

    Van Dusen, Duncan P; Kelder, Steven H; Kohl, Harold W; Ranjit, Nalini; Perry, Cheryl L

    2011-12-01

Public schools provide opportunities for physical activity and fitness surveillance, but are evaluated and funded based on students' academic performance, not their physical fitness. Empirical research evaluating the connections between fitness and academic performance is needed to justify curriculum allocations to physical activity programs. Analyses were based on a convenience sample of 254,743 individually matched standardized academic (TAKS™) and fitness (FITNESSGRAM®) test records of students, grades 3-11, collected by 13 Texas school districts. We categorized fitness results in quintiles by age and gender and used mixed effects regression models to compare the academic performance of the top and bottom fitness groups for each test. All fitness variables except body mass index (BMI) showed significant, positive associations with academic performance after adjustment for socio-demographic covariates, with standardized mean difference effect sizes ranging from .07 to .34. Cardiovascular fitness showed the largest interquintile difference in TAKS score (32-75 points), followed by curl-ups. Additional adjustment for BMI and curl-ups showed dose-response associations between cardiovascular fitness and academic scores (p < .001 for both genders and outcomes). Analysis of BMI demonstrated limited, nonlinear association with academic performance after socio-demographic and fitness adjustments. Fitness was strongly and significantly related to academic performance. Cardiovascular fitness showed a dose-response association with academic performance independent of other socio-demographic and fitness variables. The association appears to peak in late middle to early high school. We recommend that policymakers consider physical education (PE) mandates in middle and high school, school administrators consider increasing PE time, and PE practitioners emphasize cardiovascular fitness. © 2011, American School Health Association.

  16. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  17. Protofit: A program for determining surface protonation constants from titration data

    NASA Astrophysics Data System (ADS)

    Turner, Benjamin F.; Fein, Jeremy B.

    2006-11-01

    Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
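
The core quantity above, the buffering intensity Qads* defined as the instantaneous slope of the reduced titration curve, can be sketched as a numerical derivative. The reduced curve below is synthetic and purely illustrative; real data would first be reduced by subtracting the protons needed to change the solution pH, as the abstract describes.

```python
import numpy as np

def buffering_intensity(pH, q_ads):
    # Q_ads*(pH): the derivative, with respect to pH, of protons exchanged
    # between solution and surface per unit mass of adsorbent (the reduced
    # titration curve). np.gradient uses central differences in the interior.
    return np.gradient(q_ads, pH)

# Synthetic reduced titration curve (hypothetical, linear for illustration):
# protons released per kg of adsorbent as pH is raised.
pH = np.linspace(3.0, 10.0, 71)
q_ads = 1.0 - 0.2 * pH
q_star = buffering_intensity(pH, q_ads)
```

In ProtoFit's scheme, a surface complexation model would then be optimized by minimizing the (variance-weighted) sum of squares between this curve and a simulated one.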

  18. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    PubMed

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to the increase of injury frequency. 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on injury events were used to fit modified Poisson regression and negative binomial regression models, and the risk factors leading to increased unintentional injury frequency among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model showed over-dispersion (P < 0.0001) based on the Lagrange multiplier test. Therefore, the over-dispersed data were fitted better by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, father working outside of the hometown, the guardian's education level being above junior high school, and smoking might result in higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and this model could give a more accurate interpretation of relevant factors affecting the frequency of injury.
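
The over-dispersion diagnosis that motivates moving from plain Poisson to modified Poisson or negative binomial regression can be illustrated with a simple Pearson dispersion statistic. This is a rough stand-in for the Lagrange multiplier test used in the paper, and the counts are invented.

```python
import numpy as np

def pearson_dispersion(y, mu=None):
    # Pearson chi-square statistic divided by its degrees of freedom, for an
    # intercept-only Poisson fit (mu = sample mean). Values well above 1
    # suggest Var(Y) > E(Y), i.e. the Poisson assumption is violated and a
    # modified Poisson or negative binomial model is more appropriate.
    y = np.asarray(y, dtype=float)
    if mu is None:
        mu = y.mean()
    return np.sum((y - mu) ** 2 / mu) / (len(y) - 1)

# Hypothetical injury counts: a few heavy repeaters inflate the variance.
counts = np.array([0, 0, 0, 0, 1, 1, 2, 9, 11, 14])
phi = pearson_dispersion(counts)
```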

  19. Probabilistic Models for Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.
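
The notion of a worst-case environment at a user-specified confidence level can be illustrated crudely with a per-energy percentile taken across observed event spectra. The actual models first fit parametric spectral forms and then parameterize the confidence-level dependence; the numbers below are invented.

```python
import numpy as np

# Each row: one SPE's event-integrated fluence spectrum on a common energy grid
# (hypothetical values, e.g. protons / cm^2 / MeV at three energies).
spectra = np.array([
    [1e9, 1e7, 1e5],
    [5e9, 8e7, 9e5],
    [2e9, 3e7, 2e5],
    [8e9, 2e8, 3e6],
])

# Environment not exceeded at a given confidence level: the per-energy
# percentile across the set of observed events.
worst_90 = np.percentile(spectra, 90, axis=0)
median = np.percentile(spectra, 50, axis=0)
```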

  20. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit).

    PubMed

    Yusof, Maryati Mohd; Kuljis, Jasna; Papazafeiropoulou, Anastasia; Stergioulas, Lampros K

    2008-06-01

    The realization of Health Information Systems (HIS) requires rigorous evaluation that addresses technology, human and organization issues. Our review indicates that current evaluation methods evaluate different aspects of HIS and they can be improved upon. A new evaluation framework, human, organization and technology-fit (HOT-fit) was developed after having conducted a critical appraisal of the findings of existing HIS evaluation studies. HOT-fit builds on previous models of IS evaluation--in particular, the IS Success Model and the IT-Organization Fit Model. This paper introduces the new framework for HIS evaluation that incorporates comprehensive dimensions and measures of HIS and provides a technological, human and organizational fit. Literature review on HIS and IS evaluation studies and pilot testing of developed framework. The framework was used to evaluate a Fundus Imaging System (FIS) of a primary care organization in the UK. The case study was conducted through observation, interview and document analysis. The main findings show that having the right user attitude and skills base together with good leadership, IT-friendly environment and good communication can have positive influence on the system adoption. Comprehensive, specific evaluation factors, dimensions and measures in the new framework (HOT-fit) are applicable in HIS evaluation. The use of such a framework is argued to be useful not only for comprehensive evaluation of the particular FIS system under investigation, but potentially also for any Health Information System in general.

  1. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
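
A fast and frugal tree of the kind described, with sequential cue checks and early exits, can be written as plain nested conditionals. The cue names and cut-offs below are invented for illustration; they are not the fitted RuleFit rules from the study.

```python
def fast_frugal_tree(cues):
    # Hypothetical fast-and-frugal tree: cues are evaluated sequentially and
    # each check can trigger an exit, so a prediction typically requires only
    # 2-4 cue evaluations rather than all available predictors.
    if cues["baseline_severity"] >= 20:
        return "unfavorable"
    if cues["duration_months"] >= 12 and cues["age_onset"] <= 18:
        return "unfavorable"
    return "favorable"

pred = fast_frugal_tree(
    {"baseline_severity": 25, "duration_months": 3, "age_onset": 30}
)
```

The sequential structure is what makes such trees practical at the bedside: most cases exit after the first cue, and no case needs more than three.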

  2. Recognition ROCS Are Curvilinear--Or Are They? On Premature Arguments against the Two-High-Threshold Model of Recognition

    ERIC Educational Resources Information Center

    Broder, Arndt; Schutz, Julia

    2009-01-01

    Recent reviews of recognition receiver operating characteristics (ROCs) claim that their curvilinear shape rules out threshold models of recognition. However, the shape of ROCs based on confidence ratings is not diagnostic to refute threshold models, whereas ROCs based on experimental bias manipulations are. Also, fitting predicted frequencies to…

  3. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective at streamlining the process than related approaches.

  4. Relative risk for HIV in India - An estimate using conditional auto-regressive models with Bayesian approach.

    PubMed

    Kandhasamy, Chandrasekaran; Ghosh, Kaushik

    2017-02-01

Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, percentage of infected attendees in antenatal clinics, and percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices and incorporates all available covariate information. We use the R and WinBUGS software packages to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using the distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model and the states are then classified into the risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy which focuses on the female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Representing uncertainty in objective functions: extension to include the influence of serial correlation

    NASA Astrophysics Data System (ADS)

    Croke, B. F.

    2008-12-01

The role of performance indicators is to give an accurate indication of the fit between a model and the system being modelled. As all measurements have an associated uncertainty (determining the significance that should be given to the measurement), performance indicators should take into account uncertainties in the observed quantities being modelled as well as in the model predictions (due to uncertainties in inputs, model parameters and model structure). In the presence of significant uncertainty in observed and modelled output of a system, failure to adequately account for variations in the uncertainties means that the objective function only gives a measure of how well the model fits the observations, not how well the model fits the system being modelled. Since in most cases the interest lies in fitting the system response, it is vital that the objective function(s) be designed to account for these uncertainties. Most objective functions (e.g. those based on the sum of squared residuals) assume homoscedastic uncertainties. If the model contribution to the variations in residuals can be ignored, then transformations (e.g. Box-Cox) can be used to remove (or at least significantly reduce) heteroscedasticity. An alternative which is more generally applicable is to explicitly represent the uncertainties in the observed and modelled values in the objective function. Previous work on this topic addressed the modifications to standard objective functions (Nash-Sutcliffe efficiency, RMSE, chi-squared, coefficient of determination) using the optimal weighted averaging approach. This paper extends that previous work by addressing the issue of serial correlation. A form for an objective function that includes serial correlation will be presented, and the impact on model fit discussed.
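
One way to fold serial correlation into an uncertainty-aware objective function is generalized least squares with an AR(1) error covariance. The sketch below illustrates that idea, not the paper's exact formulation; with rho = 0 it reduces to the usual inverse-variance weighted sum of squared residuals.

```python
import numpy as np

def correlated_sse(obs, sim, sigma, rho):
    # Generalized least-squares objective r^T C^{-1} r, where the error
    # covariance follows an AR(1) structure: C_ij = sigma_i * sigma_j * rho**|i-j|.
    r = np.asarray(obs, dtype=float) - np.asarray(sim, dtype=float)
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), r.shape)
    idx = np.arange(len(r))
    C = np.outer(sigma, sigma) * rho ** np.abs(idx[:, None] - idx[None, :])
    return float(r @ np.linalg.solve(C, r))

# Hypothetical observed and simulated series with unit observation uncertainty.
obs = np.array([1.0, 2.0, 4.0, 3.0])
sim = np.array([1.5, 2.0, 3.0, 3.5])
```

Down-weighting serially correlated residuals in this way prevents a run of correlated errors from being counted as many independent pieces of evidence against the model.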

  6. Tests of a habitat suitability model for black-capped chickadees

    USGS Publications Warehouse

    Schroeder, Richard L.

    1990-01-01

    The black-capped chickadee (Parus atricapillus) Habitat Suitability Index (HSI) model provides a quantitative rating of the capability of a habitat to support breeding, based on measures related to food and nest site availability. The model assumption that tree canopy volume can be predicted from measures of tree height and canopy closure was tested using data from foliage volume studies conducted in the riparian cottonwood habitat along the South Platte River in Colorado. Least absolute deviations (LAD) regression showed that canopy cover and overstory tree height yielded volume predictions significantly lower than volume estimated by more direct methods. Revisions to these model relations resulted in improved predictions of foliage volume. The relation between the HSI and estimates of black-capped chickadee population densities was examined using LAD regression for both the original model and the model with the foliage volume revisions. Residuals from these models were compared to residuals from both a zero slope model and an ideal model. The fit model for the original HSI differed significantly from the ideal model, whereas the fit model for the revised HSI did not differ significantly from the ideal model. However, neither the fit model for the original HSI nor the fit model for the revised HSI differed significantly from a model with a zero slope. Although further testing of the revised model is needed, its use is recommended for more realistic estimates of tree canopy volume and habitat suitability.
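
    The least absolute deviations regression referenced here can be sketched in a few lines; the data, starting values, and optimizer choice below are illustrative assumptions, not the study's analysis.

```python
import numpy as np
from scipy.optimize import minimize

def lad_fit(x, y):
    """Least absolute deviations (L1) regression y ~ a + b*x, started from
    the least-squares solution and refined with Nelder-Mead (the L1 loss
    is convex but not differentiable everywhere)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b0, a0 = np.polyfit(x, y, 1)  # OLS slope, intercept as a start
    res = minimize(lambda p: np.abs(y - p[0] - p[1] * x).sum(),
                   x0=[a0, b0], method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8})
    return res.x  # (intercept, slope)

# Nine collinear points (y = 1 + 2x) plus one gross outlier: unlike OLS,
# the LAD line stays with the collinear majority.
x = np.arange(10.0)
y = 1.0 + 2.0 * x
y[9] = 100.0
intercept, slope = lad_fit(x, y)
```

    This robustness to outlying observations is the usual motivation for choosing LAD over ordinary least squares in habitat-density data.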

  7. Statistical distributions of extreme dry spell in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Jemain, Abdul Aziz

    2010-11-01

    Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell events are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording period extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides a robust estimation of the parameters. The goodness-of-fit (GOF) between empirical data and theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
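
    The L-moments fitting step can be illustrated with sample L-moments computed from probability-weighted moments, followed by Hosking's closed-form approximation for the GEV shape parameter. The four-point data set is a toy example, and the sign convention for the shape parameter k follows Hosking (k > 0 means a bounded upper tail), not scipy's.

```python
import numpy as np
from math import gamma

def sample_lmoments(data):
    """First two sample L-moments (l1, l2) and L-skewness t3, computed
    from unbiased probability-weighted moments b0, b1, b2."""
    x = np.sort(np.asarray(data, float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

def gev_from_lmoments(l1, l2, t3):
    """GEV (location, scale, shape k) via Hosking's rational approximation
    for k from the L-skewness t3."""
    c = 2.0 / (3.0 + t3) - np.log(2.0) / np.log(3.0)
    k = 7.8590 * c + 2.9554 * c**2
    scale = l2 * k / ((1.0 - 2.0**(-k)) * gamma(1.0 + k))
    loc = l1 - scale * (1.0 - gamma(1.0 + k)) / k
    return loc, scale, k

l1, l2, t3 = sample_lmoments([1.0, 2.0, 3.0, 4.0])  # l1 = 2.5, l2 = 5/6
loc, scale, k = gev_from_lmoments(l1, l2, t3)
```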

  8. Concept and evaluation of food craving: unidimensional scales based on the Trait and the State Food Craving Questionnaire.

    PubMed

    Maranhão, Mara Fernandes; Estella, Nara Mendes; Cogo-Moreira, Hugo; Schmidt, Ulrike; Campbell, Iain C; Claudino, Angélica Medeiros

    2018-01-01

    "Craving" is a motivational state that promotes an intense desire related to consummatory behaviors. Despite growing interest in the concept of food craving, there is a lack of available instruments to assess it in Brazilian Portuguese. The objectives were to translate and adapt the Trait and the State Food Craving Questionnaire (FCQ-T and FCQ-S) to Brazilian Portuguese and to evaluate the psychometric properties of these versions.The FCQ-T and FCQ-S were translated and adapted to Brazilian Portuguese and administered to students at the Federal University of São Paulo. Both questionnaires in their original models were examined considering different estimators (frequentist and bayesian). The goodness of fit underlying the items from both scales was assessed through the following fit indices: χ2, WRMR residual, comparative fit index, Tucker-Lewis index and RMSEA. Data from 314 participants were included in the analyses. Poor fit indices were obtained for both of the original questionnaires regardless of the estimator used and original structural model. Thus, three eating disorder experts reviewed the content of the instruments and selected the items which were considered to assess the core aspects of the craving construct. The new and reduced models (questionnaires) generated good fit indices. Our abbreviated versions of FCQ-S and FCQ-T considerably diverge from the conceptual framework of the original questionnaires. Based on the results of this study, we propose a possible alternative, i.e., to assess craving for food as a unidimensional construct.

  9. Modeling metal binding to soils: the role of natural organic matter.

    PubMed

    Gustafsson, Jon Petter; Pechová, Pavlina; Berggren, Dan

    2003-06-15

    The use of mechanistically based models to simulate the solution concentrations of heavy metals in soils is complicated by the presence of different sorbents that may bind metals. In this study, the binding of Zn, Pb, Cu, and Cd by 14 different Swedish soil samples was investigated. For 10 of the soils, it was found that the Stockholm Humic Model (SHM) was able to describe the acid-base characteristics, when using the concentrations of "active" humic substances and Al as fitting parameters. Two additional soils could be modeled when ion exchange to clay was also considered, using a component additivity approach. For dissolved Zn, Cd, Ca, and Mg reasonable model fits were produced when the metal-humic complexation parameters were identical for the 12 soils modeled. However, poor fits were obtained for Pb and Cu in Aquept B horizons. In two of the soil suspensions, the Lund A and Romfartuna Bhs, the calculated speciation agreed well with results obtained by using cation-exchange membranes. The results suggest that organic matter is an important sorbent for metals in many surface horizons of soils in temperate and boreal climates, and the necessity of properly accounting for the competition from Al in simulations of dissolved metal concentrations is stressed.

  10. A multiplicative process for generating a beta-like survival function with application to the UK 2016 EU referendum results

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Kaufmann, Eric; Levene, Mark; Loizou, George

    Human dynamics and sociophysics suggest statistical models that may explain and provide us with better insight into social phenomena. Contextual and selection effects tend to produce extreme values in the tails of rank-ordered distributions of both census data and district-level election outcomes. Models that account for this nonlinearity generally outperform linear models. Fitting nonlinear functions based on rank-ordering census and election data therefore improves the fit of aggregate voting models. This may help improve ecological inference, as well as election forecasting in majoritarian systems. We propose a generative multiplicative decrease model that gives rise to a rank-order distribution and facilitates the analysis of the recent UK EU referendum results. We supply empirical evidence that the beta-like survival function, which can be generated directly from our model, is a close fit to the referendum results, and also may have predictive value when covariate data are available.

  11. Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.

    PubMed

    Smith, Everett V; Ying, Yuping; Brown, Scott W

    2012-01-01

    In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model-expected responses, and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of the three BAMS subscales, a two-latent-class solution was identified as best fitting the mixed Rasch rating scale model. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed, as well as the implications of this study.
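
    The CAIC used here for class enumeration is a one-line formula. The log-likelihoods and parameter counts below are hypothetical, chosen only to show how the stiffer penalty can favor a smaller number of latent classes.

```python
import math

def caic(log_likelihood, n_params, n_obs):
    """Consistent AIC: -2*logL + k*(ln N + 1); a stiffer penalty than AIC,
    so extra latent classes must buy a real likelihood improvement."""
    return -2.0 * log_likelihood + n_params * (math.log(n_obs) + 1.0)

# Hypothetical class enumeration (log-likelihood, parameter count) for
# 1-3 latent classes fitted to N = 500 respondents:
fits = {1: (-2100.0, 10), 2: (-2000.0, 21), 3: (-1995.0, 32)}
caics = {k: caic(ll, p, 500) for k, (ll, p) in fits.items()}
best_classes = min(caics, key=caics.get)  # the 2-class solution wins here
```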

  12. Measurement and Modeling of Respiration Rate of Tomato (Cultivar Roma) for Modified Atmosphere Storage.

    PubMed

    Kandasamy, Palani; Moitra, Ranabir; Mukherjee, Souti

    2015-01-01

    Experiments were conducted to determine the respiration rate of tomato at 10, 20 and 30 °C using a closed respiration system. Oxygen depletion and carbon dioxide accumulation in the system containing tomato were monitored. Respiration rate was found to decrease with increasing CO2 and decreasing O2 concentration. A Michaelis-Menten-type model based on enzyme kinetics was evaluated using the experimental data generated for predicting the respiration rate. The model parameters obtained from the respiration rate at different O2 and CO2 concentration levels were used to fit the model against the storage temperatures. The fitting was fair (R2 = 0.923 to 0.970) when the respiration rate was expressed as a function of O2 concentration. Since the inhibition constant for CO2 concentration tended towards negative values, the model was modified as a function of O2 concentration only. The modified model was fitted to the experimental data and showed good agreement (R2 = 0.998) with the experimentally estimated respiration rate.
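
    The simplified O2-only Michaelis-Menten model can be fitted with ordinary nonlinear least squares. The sketch below uses synthetic, noiseless data with assumed parameter values (vm = 25, km = 5), not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def respiration_rate(o2, vm, km):
    """Simplified Michaelis-Menten form R = Vm*[O2] / (Km + [O2]),
    i.e. the O2-only model with the CO2 inhibition term dropped."""
    return vm * o2 / (km + o2)

# Synthetic, noiseless "measurements" with assumed true parameters:
o2 = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 21.0])  # % O2
rates = respiration_rate(o2, 25.0, 5.0)           # vm = 25, km = 5

# Nonlinear least-squares fit should recover the assumed parameters:
(vm_hat, km_hat), _ = curve_fit(respiration_rate, o2, rates, p0=[10.0, 1.0])
```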

  13. An astronomer's guide to period searching

    NASA Astrophysics Data System (ADS)

    Schwarzenberg-Czerny, A.

    2003-03-01

    We concentrate on analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on classical statistical principles of Fisher and his successors. Except for discussion of the resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on problems of fitting a time series with a model curve. According to their statistical content we divide the issues into several sections, consisting of: (ii) statistical numerical aspects of model fitting, (iii) evaluation of fitted models as hypotheses testing, (iv) the role of the orthogonal models in signal detection, (v) conditions for equivalence of periodograms, and (vi) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluation of the performance of periodograms and in the quantitative design of large variability surveys.
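
    Period searching on unevenly sampled data, viewed as least-squares fitting of a sinusoid model at each trial frequency, is what the Lomb-Scargle periodogram computes. A minimal sketch with simulated irregular sampling (scipy's `lombscargle` works in angular frequency):

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled sinusoid: the periodogram peak marks the angular
# frequency whose sinusoid model best fits the series in least squares.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))  # irregular sampling times
omega0 = 2.0 * np.pi * 0.2                 # true angular frequency
y = np.sin(omega0 * t)

omegas = np.linspace(0.05, 3.0, 2000)      # trial angular frequencies
power = lombscargle(t, y, omegas)
omega_peak = omegas[np.argmax(power)]      # should land near omega0
```

    Irregular sampling suppresses the strict aliases a regular grid would produce, which is why the global peak is recoverable here.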

  14. Analysis of Mining-Induced Subsidence Prediction by Exponent Knothe Model Combined with Insar and Leveling

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Liguo; Tang, Yixian; Zhang, Hong

    2018-04-01

    In this paper, the principle of the exponent Knothe model is introduced in detail, and the variation of mining subsidence with time is analysed based on the formulas for subsidence, subsidence velocity and subsidence acceleration. In this study, five scenes of radar images and six levelling measurements were collected to extract ground deformation characteristics in one coal mining area. The unknown parameters of the exponent Knothe model were then estimated by combining levelling data with deformation information along the line of sight obtained by the InSAR technique. Comparing the fitting and prediction results obtained from combined InSAR and levelling with those obtained from levelling alone showed that the accuracy of the combined approach was clearly better. Therefore, InSAR measurements can significantly improve the fitting and prediction accuracy of the exponent Knothe model.

  15. Movement rules for individual-based models of stream fish

    Treesearch

    Steven F. Railsback; Roland H. Lamberson; Bret C. Harvey; Walter E. Duffy

    1999-01-01

    Abstract - Spatially explicit individual-based models (IBMs) use movement rules to determine when an animal departs its current location and to determine its movement destination; these rules are therefore critical to accurate simulations. Movement rules typically define some measure of how an individual's expected fitness varies among locations, under the...

  16. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    DOE PAGES

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...

    2014-02-24

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. In conclusion, it may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.

  17. Dimensionality of the 9-item Utrecht Work Engagement Scale revisited: A Bayesian structural equation modeling approach.

    PubMed

    Fong, Ted C T; Ho, Rainbow T H

    2015-01-01

    The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.

  18. Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.

    PubMed

    Ding, Cherng G; Jane, Ten-Der

    2012-09-01

    In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capabilities of modeling the change over time for latent constructs, measured by multiple indicators; embedding LGM into a larger latent variable model; incorporating measurement models for latent predictors; better assessing model fit; and greater flexibility in specifying error covariance structures. The strengths of PROC CALIS come at the cost of technical coding work, which needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with the data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.

  19. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by just a single-peak function. For the purpose of representing the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.
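
    A minimal version of fitting a multi-peak load PDF and rating it with the paper's three judging criteria can be sketched with a two-component Gaussian mixture; the component shapes and parameter values below are illustrative assumptions, not the wheel-loader data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_peak_mixture(x, a1, m1, s1, a2, m2, s2):
    """Two-component mixture for a load PDF with two distribution centers."""
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

# Noiseless bimodal "histogram" with assumed components:
x = np.linspace(0.0, 10.0, 200)
y = two_peak_mixture(x, 1.0, 3.0, 0.8, 0.6, 7.0, 1.2)
p_hat, _ = curve_fit(two_peak_mixture, x, y,
                     p0=[0.8, 2.5, 1.0, 0.5, 7.5, 1.0])

# The three judging criteria named in the abstract:
resid = y - two_peak_mixture(x, *p_hat)
r2 = 1.0 - float(np.sum(resid**2) / np.sum((y - y.mean()) ** 2))
mse = float(np.mean(resid**2))
max_dev = float(np.max(np.abs(resid)))
```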

  20. Do Men and Women Need to Be Screened Differently with Fecal Immunochemical Testing? A Cost-Effectiveness Analysis.

    PubMed

    Meulen, Miriam P van der; Kapidzic, Atija; Leerdam, Monique E van; van der Steen, Alex; Kuipers, Ernst J; Spaander, Manon C W; de Koning, Harry J; Hol, Lieke; Lansdorp-Vogelaar, Iris

    2017-08-01

    Background: Several studies suggest that test characteristics for the fecal immunochemical test (FIT) differ by gender, triggering a debate on whether men and women should be screened differently. We used the microsimulation model MISCAN-Colon to evaluate whether screening stratified by gender is cost-effective. Methods: We estimated gender-specific FIT characteristics based on first-round positivity and detection rates observed in a FIT screening pilot (CORERO-1). Subsequently, we used the model to estimate harms, benefits, and costs of 480 gender-specific FIT screening strategies and compared them with uniform screening. Results: Biennial FIT screening from ages 50 to 75 was less effective in women than men [35.7 vs. 49.0 quality-adjusted life years (QALY) gained, respectively] at higher costs (€42,161 vs. -€5,471, respectively). However, the incremental QALYs gained and costs of annual screening compared with biennial screening were more similar for both genders (8.7 QALYs gained and €26,394 for women vs. 6.7 QALYs gained and €20,863 for men). Considering all evaluated screening strategies, optimal gender-based screening yielded at most 7% more QALYs gained than optimal uniform screening and even resulted in equal costs and QALYs gained from a willingness-to-pay threshold of €1,300. Conclusions: FIT screening is less effective in women, but the incremental cost-effectiveness is similar in men and women. Consequently, screening stratified by gender is not more cost-effective than uniform FIT screening. Impact: Our conclusions support the current policy of uniform FIT screening. Cancer Epidemiol Biomarkers Prev; 26(8); 1328-36. ©2017 AACR . ©2017 American Association for Cancer Research.

  1. Efficient simultaneous reverse Monte Carlo modeling of pair-distribution functions and extended x-ray-absorption fine structure spectra of crystalline disordered materials.

    PubMed

    Németh, Károly; Chapman, Karena W; Balasubramanian, Mahalingam; Shyam, Badri; Chupas, Peter J; Heald, Steve M; Newville, Matt; Klingler, Robert J; Winans, Randall E; Almer, Jonathan D; Sandi, Giselle; Srajer, George

    2012-02-21

    An efficient implementation of simultaneous reverse Monte Carlo (RMC) modeling of pair distribution function (PDF) and EXAFS spectra is reported. This implementation is an extension of the technique established by Krayzman et al. [J. Appl. Cryst. 42, 867 (2009)] in the sense that it enables simultaneous real-space fitting of x-ray PDF with accurate treatment of the Q-dependence of the scattering cross-sections and EXAFS with multiple photoelectron scattering included. The extension also allows for atom swaps during EXAFS fits, thereby enabling modeling of the effects of chemical disorder, such as migrating atoms and vacancies. Significant acceleration of EXAFS computation is achieved via discretization of effective path lengths and subsequent reduction of operation counts. The validity and accuracy of the approach is illustrated on small atomic clusters and on 5500-9000 atom models of bcc-Fe and α-Fe2O3. The accuracy gains of combined simultaneous EXAFS and PDF fits are pointed out against PDF-only and EXAFS-only RMC fits. Our modeling approach may be widely used in PDF- and EXAFS-based investigations of disordered materials. © 2012 American Institute of Physics.
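
    The reverse Monte Carlo idea, proposing random atomic moves and accepting them according to the change in a chi-square misfit, can be shown with a deliberately crude one-dimensional toy (a pair-distance histogram standing in for the PDF, and no EXAFS term). Nothing here reproduces the paper's implementation.

```python
import numpy as np

def pair_histogram(positions, bins):
    """Histogram of all pair distances -- a crude 1-D stand-in for a PDF."""
    d = np.abs(positions[:, None] - positions[None, :])
    return np.histogram(d[np.triu_indices(len(positions), k=1)], bins=bins)[0]

def rmc_fit(start, target_hist, bins, steps=2000, step_size=0.1,
            sigma=0.5, seed=1):
    """Minimal reverse Monte Carlo: propose single-atom moves, accept if the
    chi-square misfit drops, else with Metropolis probability
    exp(-dchi2/sigma).  A full RMC/EXAFS code would sum several misfit
    terms (PDF + EXAFS) here instead of one."""
    rng = np.random.default_rng(seed)
    pos = start.copy()
    chi2 = float(((pair_histogram(pos, bins) - target_hist) ** 2).sum())
    for _ in range(steps):
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] += rng.normal(0.0, step_size)
        chi2_new = float(((pair_histogram(trial, bins) - target_hist) ** 2).sum())
        if chi2_new <= chi2 or rng.random() < np.exp(-(chi2_new - chi2) / sigma):
            pos, chi2 = trial, chi2_new
    return pos, chi2

bins = np.linspace(0.0, 10.0, 21)
reference = np.random.default_rng(0).uniform(0.0, 10.0, 30)  # "true" structure
target = pair_histogram(reference, bins)
start = np.random.default_rng(2).uniform(0.0, 10.0, 30)      # random start
chi2_start = float(((pair_histogram(start, bins) - target) ** 2).sum())
_, chi2_end = rmc_fit(start, target, bins)
```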

  2. Network approaches for expert decisions in sports.

    PubMed

    Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus

    2012-04-01

    This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neuronal network model and an accumulator model in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall, the classes of models performed about equally well. Early fixations were thereby particularly predictive for choices. We conclude that the analyses of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provide perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Modeling of boldine alkaloid adsorption onto pure and propyl-sulfonic acid-modified mesoporous silicas. A comparative study.

    PubMed

    Geszke-Moritz, Małgorzata; Moritz, Michał

    2016-12-01

    The present study deals with the adsorption of boldine onto pure and propyl-sulfonic acid-functionalized SBA-15, SBA-16 and mesocellular foam (MCF) materials. Siliceous adsorbents were characterized by nitrogen sorption analysis, transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier-transform infrared (FT-IR) spectroscopy and thermogravimetric analysis. The equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Redlich-Peterson, and Temkin isotherms. Moreover, the Dubinin-Radushkevich and Dubinin-Astakhov isotherm models based on the Polanyi adsorption potential were employed. The latter was calculated using two alternative formulas, including the solubility-normalized (S-model) and the empirical C-model. In order to find the best-fit isotherm, both linear regression and nonlinear fitting analysis were carried out. The Dubinin-Astakhov (S-model) isotherm revealed the best fit to the experimental points for adsorption of boldine onto pure mesoporous materials using both linear and nonlinear fitting analysis. Meanwhile, the process of boldine sorption onto modified silicas was best described by the Langmuir and Temkin isotherms using linear regression and nonlinear fitting analysis, respectively. The values of adsorption energy (below 8 kJ/mol) indicate the physical nature of boldine adsorption onto unmodified silicas, whereas ionic interactions seem to be the main force of alkaloid adsorption onto functionalized sorbents (energy of adsorption above 8 kJ/mol). Copyright © 2016 Elsevier B.V. All rights reserved.
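
    The comparison of linear regression versus nonlinear fitting for an isotherm can be sketched with the Langmuir model; on noiseless synthetic data with assumed parameters (qmax = 80, KL = 0.3), both routes recover the same values, which is where the two approaches diverge only once noise enters.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Noiseless isotherm with assumed parameters qmax = 80, KL = 0.3:
ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
qe = langmuir(ce, 80.0, 0.3)

# Linear regression on the linearized form Ce/qe = Ce/qmax + 1/(KL*qmax):
slope, intercept = np.polyfit(ce, ce / qe, 1)
qmax_lin, kl_lin = 1.0 / slope, slope / intercept

# Direct nonlinear fit of the same data:
(qmax_nl, kl_nl), _ = curve_fit(langmuir, ce, qe, p0=[50.0, 0.1])
```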

  4. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning

    PubMed Central

    Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.

    2015-01-01

    We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905

  5. z'-BAND GROUND-BASED DETECTION OF THE SECONDARY ECLIPSE OF WASP-19b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burton, J. R.; Watson, C. A.; Pollacco, D.

    2012-08-01

    We present the ground-based detection of the secondary eclipse of the transiting exoplanet WASP-19b. The observations were made in the Sloan z' band using the ULTRACAM triple-beam CCD camera mounted on the New Technology Telescope. The measurement shows a 0.088% ± 0.019% eclipse depth, matching previous predictions based on H- and K-band measurements. We discuss in detail our approach to the removal of errors arising due to systematics in the data set, in addition to fitting a model transit to our data. This fit returns an eclipse center, T0, of 2455578.7676 HJD, consistent with a circular orbit. Our measurement of the secondary eclipse depth is also compared to model atmospheres of WASP-19b and is found to be consistent with previous measurements at longer wavelengths for the model atmospheres we investigated.

  6. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
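
    The MRR idea, augmenting a parametric fit with a portion of a nonparametric fit to its residuals, can be sketched as below. The Nadaraya-Watson smoother, bandwidth, and mixing parameter are illustrative simplifications, not the estimator of Mays, Birch, and Starnes.

```python
import numpy as np

def kernel_smooth(x, y, x_eval, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel (the locally
    parametric / nonparametric part of the fit, in its simplest form)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def model_robust_fit(x, y, degree=1, lam=0.5, bandwidth=0.5):
    """MRR-style fit: parametric fit plus lam times a nonparametric fit
    to the parametric residuals (lam in [0, 1] is the mixing weight)."""
    parametric = np.polyval(np.polyfit(x, y, degree), x)
    correction = kernel_smooth(x, y - parametric, x, bandwidth)
    return parametric + lam * correction

# A curved response that a straight line misses; the residual correction
# recovers most of the structure the parametric model leaves behind.
x = np.linspace(0.0, 2.0 * np.pi, 60)
y = np.sin(x)
sse_parametric = float(((y - model_robust_fit(x, y, lam=0.0)) ** 2).sum())
sse_mrr = float(((y - model_robust_fit(x, y, lam=1.0)) ** 2).sum())
```

    This mirrors the article's point: the mixing weight reduces reliance on the predetermined parametric form when that form is misspecified.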

  7. Testing the psychometric properties of the Environmental Attitudes Inventory on undergraduate students in the Arab context: A test-retest approach.

    PubMed

    AlMenhali, Entesar Ali; Khalid, Khalizani; Iyanna, Shilpa

    2018-01-01

    The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and the model fit was discussed for both the studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. Factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with a good construct validity and internal consistency.

  9. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples.

    PubMed

    Abma, Femke I; Bültmann, Ute; Amick III, Benjamin C; Arends, Iris; Dorland, Heleen F; Flach, Peter A; van der Klink, Jac J L; van de Ven, Hardy A; Bjørner, Jakob Bue

    2017-09-09

    Objective The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with mixed clinical conditions and job types to evaluate the comparability of the scale structure. Methods Confirmatory factor and multi-group analyses were conducted in six cross-sectional working samples (total N = 2433) to evaluate and compare a five-factor model structure of the WRFQ (work scheduling demands, output demands, physical demands, mental and social demands, and flexibility demands). Model fit indices were calculated based on RMSEA ≤ 0.08 and CFI ≥ 0.95. After fitting the five-factor model, the multidimensional structure of the instrument was evaluated across samples using a second order factor model. Results The factor structure was robust across samples and a multi-group model had adequate fit (RMSEA = 0.063, CFI = 0.972). In sample-specific analyses, minor modifications were necessary in three samples (final RMSEA 0.055-0.080, final CFI between 0.955 and 0.989). Applying the previous first order specifications, a second order factor model had adequate fit in all samples. Conclusion A five-factor model of the WRFQ showed consistent structural validity across samples. A second order factor model showed adequate fit, but the second order factor loadings varied across samples. Therefore subscale scores are recommended to compare across different clinical and working samples.

  10. The ROC Toolbox: A toolbox for analyzing receiver-operating characteristics derived from confidence ratings.

    PubMed

    Koen, Joshua D; Barrett, Frederick S; Harlow, Iain M; Yonelinas, Andrew P

    2017-08-01

    Signal-detection theory, and the analysis of receiver-operating characteristics (ROCs), has played a critical role in the development of theories of episodic memory and perception. The purpose of the current paper is to present the ROC Toolbox. This toolbox is a set of functions written in the Matlab programming language that can be used to fit various common signal detection models to ROC data obtained from confidence rating experiments. The goals for developing the ROC Toolbox were to create a tool (1) that is easy to use and easy for researchers to implement with their own data, (2) that can flexibly define models based on varying study parameters, such as the number of response options (e.g., confidence ratings) and experimental conditions, and (3) that provides optimization routines (e.g., maximum likelihood estimation) to obtain parameter estimates and numerous goodness-of-fit measures. The ROC Toolbox allows for various different confidence scales and currently includes the models commonly used in recognition memory and perception: (1) the unequal variance signal detection (UVSD) model, (2) the dual process signal detection (DPSD) model, and (3) the mixture signal detection (MSD) model. For each model fit to a given data set, the ROC Toolbox plots summary information about the best-fitting model parameters and various goodness-of-fit measures. Here, we present an overview of the ROC Toolbox, illustrate how it can be used to input and analyse real data, and finish with a brief discussion of features that can be added to the toolbox.
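    As background to what such toolboxes operate on: the ROC points in a confidence-rating experiment are the cumulative hit and false-alarm rates obtained by progressively relaxing the response criterion. A minimal numpy sketch (this is not part of the ROC Toolbox, which is written in Matlab; all counts below are hypothetical):

    ```python
    import numpy as np

    def roc_points(target_counts, lure_counts):
        """Cumulative hit and false-alarm rates from confidence-rating counts.

        Counts are ordered from the most confident 'old' response to the
        most confident 'new' response, as in a typical recognition test.
        """
        target_counts = np.asarray(target_counts, dtype=float)
        lure_counts = np.asarray(lure_counts, dtype=float)
        hits = np.cumsum(target_counts) / target_counts.sum()
        fas = np.cumsum(lure_counts) / lure_counts.sum()
        return fas, hits

    # Hypothetical counts for a 6-point confidence scale
    fas, hits = roc_points([60, 30, 20, 15, 10, 5], [5, 10, 15, 20, 30, 60])
    print(np.round(fas, 3))
    print(np.round(hits, 3))
    ```

    Each (fas, hits) pair is one point on the ROC; model fitting then amounts to finding signal-detection parameters whose predicted curve passes closest to these points.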

  11. Level-Specific Evaluation of Model Fit in Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Ryu, Ehri; West, Stephen G.

    2009-01-01

    In multilevel structural equation modeling, the "standard" approach to evaluating the goodness of model fit has a potential limitation in detecting the lack of fit at the higher level. Level-specific model fit evaluation can address this limitation and is more informative in locating the source of lack of model fit. We proposed level-specific test…

  12. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposed a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
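    The fit-then-model-the-residuals scheme described above can be sketched as follows. This is not the authors' implementation: the synthetic data, the assumed cycle length, and the use of a simple AR(1) fit in place of a full ARIMA model are all illustrative simplifications.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(200.0)          # hypothetical epochs
    period = 48.0                 # assumed cycle length in samples (illustrative)
    # Synthetic clock-bias series: quadratic trend + cycle + correlated noise
    truth = 1e-4 + 2e-6 * t + 1e-9 * t**2 + 5e-7 * np.sin(2 * np.pi * t / period)
    y = truth + np.cumsum(rng.normal(0, 1e-8, t.size))

    # 1) Quadratic polynomial with one periodic term, fitted by least squares
    A = np.column_stack([np.ones_like(t), t, t**2,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef

    # 2) AR(1) fit to the residuals (a stand-in for the paper's ARIMA step)
    phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

    # 3) Combined one-step-ahead prediction: deterministic part + AR residual
    t_next = t[-1] + 1
    a_next = np.array([1.0, t_next, t_next**2,
                       np.sin(2 * np.pi * t_next / period),
                       np.cos(2 * np.pi * t_next / period)])
    pred = a_next @ coef + phi * resid[-1]
    print(pred)
    ```

    The point of the decomposition is that the polynomial-plus-periodic part captures the deterministic behaviour, leaving a residual series that a time-series model can extrapolate.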

  13. Lunar gravitational field estimation and the effects of mismodeling upon lunar satellite orbit prediction. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Davis, John H.

    1993-01-01

    Lunar spherical harmonic gravity coefficients are estimated from simulated observations of a near-circular low altitude polar orbiter disturbed by lunar mascons. Lunar gravity sensing missions using earth-based nearside observations with and without satellite-based far-side observations are simulated and least squares maximum likelihood estimates are developed for spherical harmonic expansion fit models. Simulations and parameter estimations are performed by a modified version of the Smithsonian Astrophysical Observatory's Planetary Ephemeris Program. Two different lunar spacecraft mission phases are simulated to evaluate the estimated fit models. Results for predicting state covariances one orbit ahead are presented along with the state errors resulting from the mismodeled gravity field. The position errors from planning a lunar landing maneuver with a mismodeled gravity field are also presented. These simulations clearly demonstrate the need to include observations of satellite motion over the far side in estimating the lunar gravity field. The simulations also illustrate that the eighth degree and order expansions used in the simulated fits were unable to adequately model lunar mascons.

  14. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    PubMed

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto-regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero-mean signal. Then, a modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are performed to verify its effectiveness, and the filtering results are analyzed with the Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, improving on them by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
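    A heavily simplified illustration of the underlying idea, using an AR model as the state equation of a Kalman filter: below, a scalar filter with an assumed AR(1) signal model is applied to a synthetic noisy gyro output. The paper uses an improved AR(3) model and an adaptive Sage-Husa filter; the AR order, noise levels and data here are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    phi, q, r = 0.95, 1e-4, 1e-2     # assumed AR coefficient, process/measurement noise
    x_true = np.zeros(n)
    for k in range(1, n):
        x_true[k] = phi * x_true[k - 1] + rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r), n)   # noisy gyro output

    # Scalar Kalman filter with the AR(1) model as the state equation
    x_hat, p = 0.0, 1.0
    est = np.empty(n)
    for k in range(n):
        # Predict: propagate state and variance through the AR model
        x_pred = phi * x_hat
        p_pred = phi * p * phi + q
        # Update: blend prediction and measurement by the Kalman gain
        gain = p_pred / (p_pred + r)
        x_hat = x_pred + gain * (z[k] - x_pred)
        p = (1 - gain) * p_pred
        est[k] = x_hat

    print(np.std(z - x_true), np.std(est - x_true))
    ```

    The filtered error standard deviation is substantially smaller than that of the raw measurements; the Sage-Husa variant additionally re-estimates q and r online, which this sketch omits.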

  15. Extracting Fitness Relationships and Oncogenic Patterns among Driver Genes in Cancer.

    PubMed

    Zhang, Xindong; Gao, Lin; Jia, Songwei

    2017-12-25

    Driver mutation provides fitness advantage to cancer cells, the accumulation of which increases the fitness of cancer cells and accelerates cancer progression. This work seeks to extract patterns accumulated by driver genes ("fitness relationships") in tumorigenesis. We introduce a network-based method for extracting the fitness relationships of driver genes by modeling the network properties of the "fitness" of cancer cells. Colon adenocarcinoma (COAD) and skin cutaneous malignant melanoma (SKCM) are employed as case studies. Consistent results derived from different background networks suggest the reliability of the identified fitness relationships. Additionally co-occurrence analysis and pathway analysis reveal the functional significance of the fitness relationships with signaling transduction. In addition, a subset of driver genes called the "fitness core" is recognized for each case. Further analyses indicate the functional importance of the fitness core in carcinogenesis, and provide potential therapeutic opportunities in medicinal intervention. Fitness relationships characterize the functional continuity among driver genes in carcinogenesis, and suggest new insights in understanding the oncogenic mechanisms of cancers, as well as providing guiding information for medicinal intervention.

  16. Research on a Community-based Platform for Promoting Health and Physical Fitness in the Elderly Community

    PubMed Central

    Tsai, Tsai-Hsuan; Wong, Alice May-Kuen; Hsu, Chien-Lung; Tseng, Kevin C.

    2013-01-01

    This study aims to assess the acceptability of a fitness testing platform (iFit) for installation in an assisted living community with the aim of promoting fitness and slowing the onset of frailty. The iFit platform provides a means of testing the Bureau of Health Promotion's mandated health assessment items for the elderly (including flexibility tests, grip strength tests, balance tests, and reaction time tests) and integrates wireless remote sensors in a game-like environment to capture and store subject response data, thus providing individuals in elderly care contexts with a greater awareness of their own physical condition. In this study, we specifically evaluated users' intention to use the iFit using a technology acceptance model (TAM). A total of 101 elderly subjects (27 males and 74 females) were recruited. A survey was conducted to measure technology acceptance, to verify that the platform could be used as intended to promote fitness among the elderly. Results indicate that perceived usefulness, perceived ease of use and usage attitude positively impact behavioral intention to use the platform. The iFit platform can offer user-friendly solutions for community-based fitness care and monitoring of elderly subjects. In summary, iFit was shaped by three key drivers, discussed as follows: risk factors among the frail elderly, mechanisms for slowing the advance of frailty, and technology acceptance and support for promoting physical fitness. PMID:23460859

  17. Cost-effectiveness of the faecal immunochemical test at a range of positivity thresholds compared with the guaiac faecal occult blood test in the NHS Bowel Cancer Screening Programme in England

    PubMed Central

    Halloran, Stephen

    2017-01-01

    Objectives Through the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP), men and women in England aged between 60 and 74 years are invited for colorectal cancer (CRC) screening every 2 years using the guaiac faecal occult blood test (gFOBT). The aim of this analysis was to estimate the cost–utility of the faecal immunochemical test for haemoglobin (FIT) compared with gFOBT for a cohort beginning screening aged 60 years at a range of FIT positivity thresholds. Design We constructed a cohort-based Markov state transition model of CRC disease progression and screening. Screening uptake, detection, adverse event, mortality and cost data were taken from BCSP data and national sources, including a recent large pilot study of FIT screening in the BCSP. Results Our results suggest that FIT is cost-effective compared with gFOBT at all thresholds, resulting in cost savings and quality-adjusted life years (QALYs) gained over a lifetime time horizon. FIT was cost-saving (p<0.001) and resulted in QALY gains of 0.014 (95% CI 0.012 to 0.017) at the base case threshold of 180 µg Hb/g faeces. Greater health gains and cost savings were achieved as the FIT threshold was decreased due to savings in cancer management costs. However, at lower thresholds, FIT was also associated with more colonoscopies (increasing from 32 additional colonoscopies per 1000 people invited for screening for FIT 180 µg Hb/g faeces to 421 additional colonoscopies per 1000 people invited for screening for FIT 20 µg Hb/g faeces over a 40-year time horizon). Parameter uncertainty had limited impact on the conclusions. Conclusions This is the first published economic analysis of FIT screening in England using data directly comparing FIT with gFOBT in the NHS BCSP. These results for a cohort starting screening aged 60 years suggest that FIT is highly cost-effective at all thresholds considered. Further modelling is needed to estimate economic outcomes for screening across all age cohorts simultaneously. PMID:29079605
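    The cohort-based Markov state transition approach used above can be illustrated with a toy model: a cohort distribution is propagated through a transition matrix each cycle while discounted QALYs accumulate. The states, transition probabilities, utilities and discount rate below are invented for illustration and bear no relation to the BCSP model's actual inputs.

    ```python
    import numpy as np

    # States: 0 = well, 1 = CRC, 2 = dead (all values hypothetical)
    P = np.array([[0.97, 0.02, 0.01],    # annual transition probabilities
                  [0.00, 0.85, 0.15],
                  [0.00, 0.00, 1.00]])
    utility = np.array([0.85, 0.60, 0.0])   # QALY weight per state per cycle
    discount = 0.035                         # annual discount rate

    cohort = np.array([1.0, 0.0, 0.0])      # everyone starts in the 'well' state
    qalys = 0.0
    for year in range(40):                  # 40-year time horizon, as in the analysis
        qalys += cohort @ utility / (1 + discount) ** year
        cohort = cohort @ P                 # advance the cohort one cycle
    print(round(qalys, 2))
    ```

    A cost-utility comparison then runs the same cohort machinery under two screening strategies and compares discounted costs and QALYs.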

  18. Equal Area Logistic Estimation for Item Response Theory

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li

    2009-08-01

    Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal area based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting data by logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
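    For reference, the two-parameter logistic item response function that such algorithms estimate is P(θ) = 1 / (1 + exp(-a(θ - b))), with discrimination a and difficulty b. A minimal sketch (parameter values are illustrative):

    ```python
    import numpy as np

    def irf_2pl(theta, a, b):
        """Two-parameter logistic item response function P(correct | theta)."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)
    p = irf_2pl(theta, a=1.2, b=0.5)   # illustrative discrimination and difficulty
    print(np.round(p, 3))
    ```

    Estimation, whether by the usual likelihood methods or by the equal-area approach the paper proposes, searches for the (a, b) pair whose curve best matches the empirical proportions correct at each ability level.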

  19. Evaluation of marginal/internal fit of chrome-cobalt crowns: Direct laser metal sintering versus computer-aided design and computer-aided manufacturing.

    PubMed

    Gunsoy, S; Ulusoy, M

    2016-01-01

    The purpose of this study was to evaluate the internal and marginal fit of chrome-cobalt (Co-Cr) crowns fabricated with laser sintering, computer-aided design and computer-aided manufacturing (CAD/CAM), and conventional methods. Polyamide master and working models were designed and fabricated. The models were initially designed with a software application for three-dimensional (3D) CAD (Maya, Autodesk Inc.), and all models were produced by a 3D printer (EOSINT P380 SLS, EOS). A total of 128 single-unit Co-Cr fixed dental prostheses were fabricated with four different techniques: the conventional lost-wax method, milled wax with the lost-wax method (MWLW), direct laser metal sintering (DLMS), and milled Co-Cr (MCo-Cr). The cement film thickness of the marginal and internal gaps was measured by an observer using a stereomicroscope after taking digital photos at ×24 magnification. The best fit rates, according to the means and standard deviations of all measurements (in μm), were obtained with DLMS in both premolar (65.84) and molar (58.38) models. A significant difference was found between DLMS and the other fabrication techniques (P < 0.05). No significant difference was found between MCo-Cr and MWLW in either premolar or molar models (P > 0.05). Based on the results, DLMS was the best-fitting fabrication technique for single crowns. The best fit was found at the margin; the largest gap was found occlusally. All groups were within the clinically acceptable misfit range.

  20. The association between anthropometric measures and lung function in a population-based study of Canadian adults.

    PubMed

    Rowe, A; Hernandez, P; Kuhle, S; Kirkland, S

    2017-10-01

    Decreased lung function has health impacts beyond diagnosable lung disease. It is therefore important to understand the factors that may influence even small changes in lung function, including obesity, physical fitness and physical activity. The aim of this study was to determine the anthropometric measure most useful in examining the association with lung function and to determine how physical activity and physical fitness influence this association. The current study used cross-sectional data on 4662 adults aged 40-79 years from the Canadian Health Measures Survey Cycles 1 and 2. Linear regression models were used to examine the association between the anthropometric and lung function measures (forced expiratory volume in 1 s [FEV1] and forced vital capacity [FVC]); R² values were compared among models. Physical fitness and physical activity terms were added to the models and potential confounding was assessed. Models using sum of 5 skinfolds and waist circumference consistently had the highest R² values for FEV1 and FVC, while models using body mass index consistently had among the lowest R² values for FEV1 and FVC, for both men and women. Physical activity and physical fitness were confounders of the relationships between waist circumference and the lung function measures. Waist circumference remained a significant predictor of FVC but not FEV1 after adjustment for physical activity or physical fitness. Waist circumference is an important predictor of lung function. Physical activity and physical fitness should be considered as potential confounders of the relationship between anthropometric measures and lung function. Copyright © 2017. Published by Elsevier Ltd.

  1. Hot Dust in Panchromatic SED Fitting: Identification of Active Galactic Nuclei and Improved Galaxy Properties

    NASA Astrophysics Data System (ADS)

    Leja, Joel; Johnson, Benjamin D.; Conroy, Charlie; van Dokkum, Pieter

    2018-02-01

    Forward modeling of the full galaxy SED is a powerful technique, providing self-consistent constraints on stellar ages, dust properties, and metallicities. However, the accuracy of these results is contingent on the accuracy of the model. One significant source of uncertainty is the contribution of obscured AGN, as they are relatively common and can produce substantial mid-IR (MIR) emission. Here we include emission from dusty AGN tori in the Prospector SED-fitting framework, and fit the UV–IR broadband photometry of 129 nearby galaxies. We find that 10% of the fitted galaxies host an AGN contributing >10% of the observed galaxy MIR luminosity. We demonstrate the necessity of this AGN component in the following ways. First, we compare observed spectral features to spectral features predicted from our model fit to the photometry. We find that the AGN component greatly improves predictions for observed Hα and Hβ luminosities, as well as mid-infrared Akari and Spitzer/IRS spectra. Second, we show that inclusion of the AGN component changes stellar ages and SFRs by up to a factor of 10, and dust attenuations by up to a factor of 2.5. Finally, we show that the strength of our model AGN component correlates with independent AGN indicators, suggesting that these galaxies truly host AGN. Notably, only 46% of the SED-detected AGN would be detected with a simple MIR color selection. Based on these results, we conclude that SED models which fit MIR data without AGN components are vulnerable to substantial bias in their derived parameters.

  2. Local air temperature tolerance: a sensible basis for estimating climate variability

    NASA Astrophysics Data System (ADS)

    Kärner, Olavi; Post, Piia

    2016-11-01

    The customary representation of climate using sample moments is generally biased due to the noticeably nonstationary behaviour of many climate series. In this study, we introduce a moment-free climate representation based on a statistical model fitted to a long-term daily air temperature anomaly series. This model allows us to separate the climate and weather scale variability in the series. As a result, the climate scale can be characterized using the mean annual cycle of series and local air temperature tolerance, where the latter is computed using the fitted model. The representation of weather scale variability is specified using the frequency and the range of outliers based on the tolerance. The scheme is illustrated using five long-term air temperature records observed by different European meteorological stations.

  3. Paper-cutting operations using scissors in Drury's law tasks.

    PubMed

    Yamanaka, Shota; Miyashita, Homei

    2018-05-01

    Human performance modeling is a core topic in ergonomics. In addition to deriving models, it is important to verify the kinds of tasks that can be modeled. Drury's law is promising for path tracking tasks such as navigating a path with pens or driving a car. We conducted an experiment based on the observation that paper-cutting tasks using scissors resemble such tasks. The results showed that cutting arc-like paths (1/4 of a circle) showed an excellent fit with Drury's law (R² > 0.98), whereas cutting linear paths showed a worse fit (R² > 0.87). Since linear paths yielded better fits when path amplitudes were divided (R² > 0.99 for all amplitudes), we discuss the characteristics of paper-cutting operations using scissors. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  5. State-based versus reward-based motivation in younger and older adults.

    PubMed

    Worthy, Darrell A; Cooper, Jessica A; Byrne, Kaileigh A; Gorlick, Marissa A; Maddox, W Todd

    2014-12-01

    Recent decision-making work has focused on a distinction between a habitual, model-free neural system that is motivated toward actions that lead directly to reward and a more computationally demanding goal-directed, model-based system that is motivated toward actions that improve one's future state. In this article, we examine how aging affects motivation toward reward-based versus state-based decision making. Participants performed tasks in which one type of option provided larger immediate rewards but the alternative type of option led to larger rewards on future trials, or improvements in state. We predicted that older adults would show a reduced preference for choices that led to improvements in state and a greater preference for choices that maximized immediate reward. We also predicted that fits from a hybrid reinforcement-learning model would indicate greater model-based strategy use in younger than in older adults. In line with these predictions, older adults selected the options that maximized reward more often than did younger adults in three of the four tasks, and modeling results suggested reduced model-based strategy use. In the task where older adults showed similar behavior to younger adults, our model-fitting results suggested that this was due to the utilization of a win-stay-lose-shift heuristic rather than a more complex model-based strategy. Additionally, within older adults, we found that model-based strategy use was positively correlated with memory measures from our neuropsychological test battery. We suggest that this shift from state-based to reward-based motivation may be due to age related declines in the neural structures needed for more computationally demanding model-based decision making.

  6. X-ray spectroscopy of the super soft source RXJ0925.7-475

    NASA Technical Reports Server (NTRS)

    Ebisawa, Ken; Asai, Kazumi; Dotani, Tadayasu; Mukai, Koji; Smale, Alan

    1996-01-01

    The super soft source (SSS) RXJ 0925.7-475 was observed with the Advanced Satellite for Cosmology and Astrophysics (ASCA) solid state spectrometer and its energy spectrum was analyzed. A simple black body model does not fit the data, and several absorption edges of ionized heavy elements are required. Without the addition of absorption edges, the best-fit black body radius and the estimated bolometric luminosity are 6800 (d/1 kpc) km and 1.2 × 10^37 (d/1 kpc)^2 erg/s, respectively. The introduction of absorption edges significantly reduces the best-fit radius and luminosity to 140 (d/1 kpc) km and 6 × 10^34 (d/1 kpc)^2 erg/s, respectively. This suggests that the estimation of the emission region size and luminosity of SSS based on the black body model fit to the observed data is not reliable.

  7. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    NASA Technical Reports Server (NTRS)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume, and clearance between the suit and body surface at reduced time and cost.

  8. How do physicians become medical experts? A test of three competing theories: distinct domains, independent influence and encapsulation models.

    PubMed

    Violato, Claudio; Gao, Hong; O'Brien, Mary Claire; Grier, David; Shen, E

    2018-05-01

    The distinction between basic sciences and clinical knowledge, which has led to a theoretical debate on how medical expertise is developed, has implications for medical school and lifelong medical education. This longitudinal, population-based observational study was conducted to test the fit of three theories-knowledge encapsulation, independent influence, distinct domains-of the development of medical expertise employing structural equation modelling. Data were collected from 548 physicians (292 men-53.3%; 256 women-46.7%; mean age = 24.2 years on admission) who had graduated from medical school 2009-2014. They included (1) admissions data of undergraduate grade point average and Medical College Admission Test sub-test scores, (2) course performance data from years 1, 2, and 3 of medical school, and (3) performance on the NBME exams (i.e., Step 1, Step 2 CK, and Step 3). Statistical fit indices (Goodness of Fit Index-GFI; standardized root mean squared residual-SRMR; root mean squared error of approximation-RMSEA) and comparative fit [Formula: see text] of three theories of cognitive development of medical expertise were used to assess model fit. There is support for the knowledge encapsulation three-factor model of clinical competency (GFI = 0.973, SRMR = 0.043, RMSEA = 0.063), which had superior fit indices to both the independent influence and distinct domains theories ([Formula: see text] vs [Formula: see text] [[Formula: see text

  9. The critical role of uncertainty in projections of hydrological extremes

    NASA Astrophysics Data System (ADS)

    Meresa, Hadush K.; Romanowicz, Renata J.

    2017-08-01

    This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes, whilst for the high-flow extremes higher uncertainty is observed from climate models than from hydrological parameter and distribution fit uncertainties. This implies that ignoring any one of the three uncertainty sources may pose a great risk to future adaptation to hydrological extremes and to water resource planning and management.
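    As a simplified illustration of the extreme-value step, the sketch below derives return levels from a series of annual maxima. The study fits a full GEV; here a method-of-moments Gumbel fit (the GEV with zero shape parameter) is used on synthetic data to keep the example self-contained, so the distribution choice and all numbers are assumptions.

    ```python
    import numpy as np

    EULER = 0.5772156649015329   # Euler-Mascheroni constant

    def gumbel_return_levels(annual_maxima, return_periods):
        """Method-of-moments Gumbel fit and return levels.

        Gumbel: mean = mu + gamma*beta, var = pi^2 beta^2 / 6,
        so x_T = mu - beta * ln(-ln(1 - 1/T)) for return period T.
        """
        x = np.asarray(annual_maxima, dtype=float)
        beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi
        mu = x.mean() - EULER * beta
        T = np.asarray(return_periods, dtype=float)
        return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

    rng = np.random.default_rng(42)
    maxima = rng.gumbel(loc=100.0, scale=20.0, size=60)   # synthetic annual peak flows
    levels = gumbel_return_levels(maxima, [2, 10, 50, 100])
    print(np.round(levels, 1))
    ```

    The distribution-fit uncertainty discussed in the paper arises because mu and beta (and the GEV shape) are themselves estimated from a short record, so the return levels inherit sampling error.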

  10. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
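    The key computational point above, that a cost function quadratic in the fitting parameters is minimized by linear algebra alone, can be illustrated generically. The design matrix below is random, not the paper's neighbor-correlation/sequence-affinity model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical design: each column is one background predictor
    # (standing in for, e.g., a neighboring-feature intensity or an
    # affinity term from a nearest-neighbor sequence model)
    X = rng.normal(size=(200, 4))
    w_true = np.array([0.5, -1.0, 2.0, 0.25])
    y = X @ w_true + rng.normal(0, 0.1, 200)

    # A cost quadratic in the parameters, C(w) = ||y - Xw||^2, is minimized
    # by the normal equations X^T X w = X^T y -- pure linear algebra,
    # no iterative optimization required.
    w_fit = np.linalg.solve(X.T @ X, X.T @ y)
    print(np.round(w_fit, 2))
    ```

    This is what makes such background-estimation algorithms fast: the fit cost is a single linear solve regardless of how many effects are encoded in the columns of X.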

  11. Reconciling the local void with the CMB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadathur, Seshadri; Sarkar, Subir

    2011-03-15

    In the standard cosmological model, the dimming of distant Type Ia supernovae is explained by invoking the existence of repulsive ''dark energy'' which is causing the Hubble expansion to accelerate. However, this may be an artifact of interpreting the data in an (oversimplified) homogeneous model universe. In the simplest inhomogeneous model which fits the SNe Ia Hubble diagram without dark energy, we are located close to the center of a void modeled by a Lemaitre-Tolman-Bondi metric. It has been claimed that such models cannot fit the cosmic microwave background (CMB) and other cosmological data. This is, however, based on the assumption of a scale-free spectrum for the primordial density perturbation. An alternative physically motivated form for the spectrum enables a good fit to both SNe Ia (Constitution/Union2) and CMB (WMAP 7-yr) data, and to the locally measured Hubble parameter. Constraints from baryon acoustic oscillations and primordial nucleosynthesis are also satisfied.

  12. Attention-deficit/hyperactivity disorder dimensionality: the reliable 'g' and the elusive 's' dimensions.

    PubMed

    Wagner, Flávia; Martel, Michelle M; Cogo-Moreira, Hugo; Maia, Carlos Renato Moreira; Pan, Pedro Mario; Rohde, Luis Augusto; Salum, Giovanni Abrahão

    2016-01-01

    The best structural model for attention-deficit/hyperactivity disorder (ADHD) symptoms remains a matter of debate. The objective of this study is to test the fit and factor reliability of competing models of the dimensional structure of ADHD symptoms in a sample of randomly selected and high-risk children and pre-adolescents from Brazil. Our sample comprised 2512 children aged 6-12 years from 57 schools in Brazil. The ADHD symptoms were assessed using parent report on the development and well-being assessment (DAWBA). Fit indexes from confirmatory factor analysis were used to test unidimensional, correlated, and bifactor models of ADHD, the latter including "g" ADHD and "s" symptom domain factors. Reliability of all models was measured with omega coefficients. A bifactor model with one general factor and three specific factors (inattention, hyperactivity, impulsivity) exhibited the best fit to the data, according to fit indices, as well as the most consistent factor loadings. However, based on omega reliability statistics, the specific inattention, hyperactivity, and impulsivity dimensions provided very little reliable information after accounting for the reliable general ADHD factor. Our study presents some psychometric evidence that ADHD specific ("s") factors might be unreliable after taking common ("g" factor) variance into account. These results are in accordance with the lack of longitudinal stability among subtypes, the absence of dimension-specific molecular genetic findings and non-specific effects of treatment strategies. Therefore, researchers and clinicians might most effectively rely on the "g" ADHD to characterize ADHD dimensional phenotype, based on currently available symptom items.
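    The omega reliability coefficients mentioned above can be computed directly from standardized factor loadings. A sketch under the usual orthogonal bifactor assumptions, with invented loadings (not the study's estimates):

    ```python
    import numpy as np

    def omega_hierarchical(general_loadings, specific_loadings, uniquenesses):
        """Share of total-score variance attributable to the general factor alone."""
        g = np.asarray(general_loadings, dtype=float).sum() ** 2
        s = sum(np.asarray(ls, dtype=float).sum() ** 2 for ls in specific_loadings)
        psi = np.asarray(uniquenesses, dtype=float).sum()
        return g / (g + s + psi)

    # Hypothetical standardized loadings for a 6-item bifactor model:
    # one general factor plus two specific factors of three items each.
    g_load = np.array([0.70, 0.65, 0.60, 0.70, 0.55, 0.60])
    s_load = [np.array([0.30, 0.25, 0.20]), np.array([0.20, 0.30, 0.25])]
    uniq = 1.0 - g_load**2 - np.concatenate(s_load)**2   # item unique variances

    omega_h = omega_hierarchical(g_load, s_load, uniq)
    print(round(omega_h, 3))
    ```

    When omega hierarchical is high while the analogous subscale-specific coefficients are low, the specific factors carry little reliable variance beyond the general factor, which is the pattern the study reports for ADHD.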

  13. Comparison of thawing and freezing dark energy parametrizations

    NASA Astrophysics Data System (ADS)

    Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.

    2016-05-01

Dark energy equation of state w(z) parametrizations with two parameters and a given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models, but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features, such as crossing of the phantom divide line, an incorrect w(z=0), an incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance D_L(z), based on either thawing or freezing quintessence models, and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important either to use both convex and concave parametrizations and select the one with the best χ2, or to use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n, which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
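As an illustration of the proposed family, here is a minimal sketch (our own code, not the authors'; names are hypothetical) of the parametrization w(z) = w0 + wa (z/(1+z))^n, which reduces to the CPL form at n = 1:

```python
def w(z, w0, wa, n=1.0):
    """Dark-energy equation of state w(z) = w0 + wa * (z/(1+z))**n.

    n = 1 recovers the Chevallier-Polarski-Linder (CPL) form; larger n
    interpolates toward the other (thawing/freezing) behaviour.
    """
    return w0 + wa * (z / (1.0 + z)) ** n

print(w(0.0, -1.0, 0.3))        # at z = 0 the value is w0 for any n
print(w(1.0, -1.0, 0.3, n=2))   # steeper suppression of wa at low z
```

Note that for any n the present-day value is w(0) = w0, so the family differs only in how quickly the wa term switches on with redshift.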

  14. Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: The Henry Ford ExercIse Testing (FIT) project.

    PubMed

    Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif

    2017-01-01

Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data from 32,555 patients who were free of any known coronary artery disease or heart failure, who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009, and who had a complete 5-year follow-up. By the end of the fifth year, 5,099 of those patients had developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an ensemble-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of class imbalance on the constructed model was handled by the Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model classifier was improved by the ensemble machine learning approach using the Vote method with three decision-tree classifiers (Naïve Bayes Tree, Random Forest, and Logistic Model Tree), achieving high prediction accuracy (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
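The class-imbalance step described above can be sketched as a minimal SMOTE-style interpolation: each synthetic minority sample is placed on the segment between a minority point and one of its k nearest minority-class neighbours. This is an illustrative reimplementation, not the study's pipeline; the `smote` helper and its parameters are our own:

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Minimal SMOTE sketch: interpolate between a minority sample and one
    of its k nearest minority-class neighbours (samples are lists of floats)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance, excluding x itself
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        u = rng.random()  # interpolation factor in [0, 1)
        out.append([a + u * (b - a) for a, b in zip(x, nb)])
    return out
```

Because every synthetic point is a convex combination of two real minority points, the oversampled class stays inside the region the minority data already occupy.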

  15. Comparison of cadmium hydroxide nanowires and silver nanoparticles loaded on activated carbon as new adsorbents for efficient removal of Sunset yellow: Kinetics and equilibrium study.

    PubMed

    Ghaedi, Mehrorang

    2012-08-01

Adsorption of Sunset yellow (SY) onto cadmium hydroxide nanowires loaded on activated carbon (Cd(OH)(2)-NW-AC) and silver nanoparticles loaded on activated carbon (Ag-NP-AC) was investigated. The effects of pH, contact time, amount of adsorbent, initial dye concentration, agitation speed and temperature on Sunset yellow removal by both adsorbents were studied. Following the optimization of variables, the experimental data were fitted to conventional isotherm models such as Langmuir, Freundlich, Temkin and Dubinin-Radushkevich (D-R). Based on the linear regression coefficient R(2), the Langmuir isotherm was found to be the best-fitting isotherm model, and the maximum monolayer adsorption capacities calculated from this model for Cd(OH)(2)-NW-AC and Ag-NP-AC were 76.9 and 37.03 mg g(-1) at room temperature, respectively. The fitting of the time dependency of SY adsorption onto both adsorbents shows the applicability of a pseudo-second-order kinetic model for the interpretation of the kinetic data. Thermodynamic parameters such as enthalpy, entropy, activation energy, sticking probability, and Gibbs free energy changes were also calculated. It was found that the sorption of SY over Cd(OH)(2)-NW-AC and Ag-NP-AC was spontaneous and endothermic in nature. The efficiency of the adsorbents was also investigated using real effluents, and more than 95% SY removal was observed for both adsorbents. Copyright © 2012 Elsevier B.V. All rights reserved.
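The Langmuir capacity reported above is typically obtained from the linearised form Ce/qe = Ce/qmax + 1/(KL·qmax), fitted by ordinary least squares. A minimal sketch (our own illustration, not the paper's code; units and names are hypothetical):

```python
def fit_langmuir(Ce, qe):
    """Fit the linearised Langmuir isotherm Ce/qe = Ce/qmax + 1/(KL*qmax)
    by ordinary least squares; returns (qmax, KL)."""
    x, y = Ce, [c / q for c, q in zip(Ce, qe)]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    qmax = 1.0 / slope            # slope = 1/qmax
    KL = slope / intercept        # intercept = 1/(KL*qmax) = slope/KL
    return qmax, KL
```

On exact Langmuir data the linearisation is an exact straight line, so the fit recovers qmax and KL to machine precision; on real data, R(2) of this line is the criterion the abstract refers to.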

  16. Longitudinal factorial invariance of the PedsQL 4.0 Generic Core Scales child self-report Version: one year prospective evidence from the California State Children's Health Insurance Program (SCHIP).

    PubMed

    Varni, James W; Limbers, Christine A; Newman, Daniel A; Seid, Michael

    2008-11-01

The measurement of health-related quality of life (HRQOL) in pediatric medicine and health services research has grown significantly over the past decade. The paradigm shift toward patient-reported outcomes (PROs) has provided the opportunity to emphasize the value and critical need for pediatric patient self-report. In order for changes in HRQOL/PRO outcomes to be meaningful over time, it is essential to demonstrate longitudinal factorial invariance. This study examined the longitudinal factor structure of the PedsQL 4.0 Generic Core Scales over a one-year period for child self-report (ages 5-17) in 2,887 children from a statewide evaluation of the California State Children's Health Insurance Program (SCHIP), utilizing a structural equation modeling framework. Specifying four- and five-factor measurement models, longitudinal structural equation modeling was used to compare factor structures over a one-year interval on the PedsQL 4.0 Generic Core Scales. While the four-factor conceptually derived measurement model for the PedsQL 4.0 Generic Core Scales produced an acceptable fit, the five-factor empirically derived measurement model from the initial field test of the PedsQL 4.0 Generic Core Scales produced a marginally superior fit in comparison to the four-factor model. For the five-factor measurement model, the best-fitting model, strict factorial invariance of the PedsQL 4.0 Generic Core Scales across the two measurement occasions was supported by the stability of the comparative fit index between the unconstrained and constrained models, and by several additional indices of practical fit, including the root mean squared error of approximation, the non-normed fit index, and the parsimony normed fit index. The findings support an equivalent factor structure on the PedsQL 4.0 Generic Core Scales over time. Based on these data, it can be concluded that over a one-year period children in our study interpreted items on the PedsQL 4.0 Generic Core Scales in a similar manner.

  17. Simulation and fitting of complex reaction network TPR: The key is the objective function

    DOE PAGES

    Savara, Aditya Ashi

    2016-07-07

A method has been developed for finding improved fits during simulation and fitting of data from complex reaction network temperature programmed reactions (CRN-TPR). It was found that simulation and fitting of CRN-TPR presents additional challenges relative to simulation and fitting of simpler TPR systems. The method used here can enable checking the plausibility of proposed chemical mechanisms and kinetic models. The most important finding was that, when choosing an objective function, an objective function based on integrated production provides more utility in finding improved fits than an objective function based on the rate of production. The response surface produced by using the integrated production is monotonic, suppresses effects from experimental noise, requires fewer points to capture the response behavior, and can be simulated numerically with smaller errors. For CRN-TPR, there is increased importance (relative to simple reaction network TPR) in resolving peaks prior to fitting, as well as in weighting experimental data points. Using an implicit ordinary differential equation solver was found to be inadequate for simulating CRN-TPR. Lastly, the method employed here was capable of attaining improved fits in simulation and fitting of CRN-TPR when starting with a postulated mechanism and physically realistic initial guesses for the kinetic parameters.

  18. Pre-processing by data augmentation for improved ellipse fitting.

    PubMed

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of the eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in a real-world application of 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.

  19. Stability of INFIT and OUTFIT Compared to Simulated Estimates in Applied Setting.

    PubMed

    Hodge, Kari J; Morgan, Grant B

Residual-based fit statistics are commonly used as an indication of the extent to which item response data fit the Rasch model. Fit statistic estimates are influenced by sample size, and rule-of-thumb cutoffs may result in incorrect conclusions about the extent to which the model fits the data. Estimates obtained in this analysis were compared to 250 simulated data sets to examine the stability of the estimates. All INFIT estimates were within the rule-of-thumb range of 0.7 to 1.3. However, only 82% of the INFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' INFIT distributions using this 95% confidence-like interval, an 18-percentage-point difference in items that were classified as acceptable. Forty-eight percent of OUTFIT estimates fell within the 0.7 to 1.3 rule-of-thumb range, whereas 34% of OUTFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' OUTFIT distributions, a 13-percentage-point difference in items that were classified as acceptable. When using the rule-of-thumb ranges for fit estimates, the magnitude of misfit was smaller than with the 95% confidence interval of the simulated distribution. The findings indicate that the use of confidence intervals as critical values for fit statistics leads to different model-data fit conclusions than traditional rule-of-thumb critical values.
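The simulation-based cutoff described above amounts to taking the 2.5th and 97.5th percentiles of an item's simulated fit-statistic distribution and flagging observed estimates outside that interval. A minimal sketch (our own illustration; function names are hypothetical):

```python
def simulation_interval(simulated, lo=2.5, hi=97.5):
    """95% confidence-like interval from a simulated null distribution of a
    fit statistic (e.g. INFIT/OUTFIT), via linear percentile interpolation."""
    s = sorted(simulated)
    def pct(p):
        k = (len(s) - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, len(s) - 1)
        return s[f] + (k - f) * (s[c] - s[f])
    return pct(lo), pct(hi)

def flag_misfit(estimate, simulated):
    """True if the observed estimate falls outside the simulated interval."""
    lo, hi = simulation_interval(simulated)
    return not (lo <= estimate <= hi)
```

Unlike a fixed 0.7-1.3 range, the interval adapts to sample size, since the simulated distributions narrow as the number of respondents grows.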

  20. MR-MOOSE: an advanced SED-fitting tool for heterogeneous multi-wavelength data sets

    NASA Astrophysics Data System (ADS)

    Drouart, G.; Falkendal, T.

    2018-07-01

We present the public release of MR-MOOSE, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous data set (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, MR-MOOSE handles upper limits during the fitting process in a continuous way, allowing models to become gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength data sets, with fully customisable filter/model databases. The complete control retained by the user is one advantage, avoiding the traditional problems related to the `black box' effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of PYTHON and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated data sets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA, and VLA data) in the context of extragalactic SED fitting makes MR-MOOSE a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.

  1. Physical fitness predicts technical-tactical and time-motion profile in simulated Judo and Brazilian Jiu-Jitsu matches.

    PubMed

    Coswig, Victor S; Gentil, Paulo; Bueno, João C A; Follmer, Bruno; Marques, Vitor A; Del Vecchio, Fabrício B

    2018-01-01

Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) impose elevated physical fitness demands owing to their high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness to combat performance variables in Judo and BJJ. The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, at the second, simulated fights were performed for later notational analysis. The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis on a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variance in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach up to 44% and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high-intensity actions in Judo. In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explaining time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights.

  2. Closed-loop model identification of cooperative manipulators holding deformable objects

    NASA Astrophysics Data System (ADS)

    Alkathiri, A. A.; Akmeliawati, R.; Azlan, N. Z.

    2017-11-01

This paper presents system identification to obtain closed-loop models of two cooperative manipulators that hold deformable objects. The system works on the master-slave principle: one of the manipulators is position-controlled through encoder feedback, while a force sensor gives feedback to the other, force-controlled manipulator. Using the closed-loop input and output data, the closed-loop models, which are useful for model-based control design, are estimated. The criteria for model validation are a 95% fit between the measured and simulated output of the estimated models and residual analysis. The results show that for position and force control respectively, the fits are 95.73% and 95.88%.
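The "fit" percentages quoted above are, in system-identification practice, commonly the normalised RMSE fit, fit = 100·(1 − ‖y − ŷ‖ / ‖y − ȳ‖), where 100% means the simulated output matches the measured output exactly. A sketch under that assumption (our own code, not the paper's):

```python
from math import sqrt

def fit_percent(y, yhat):
    """Normalised RMSE fit, 100*(1 - ||y - yhat|| / ||y - mean(y)||),
    as commonly reported by system-identification tools.
    100 = perfect match; 0 = no better than predicting the mean."""
    ybar = sum(y) / len(y)
    num = sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)))
    den = sqrt(sum((a - ybar) ** 2 for a in y))
    return 100.0 * (1.0 - num / den)
```

Against this metric, a model that simply outputs the signal mean scores 0%, which makes the reported 95.73% and 95.88% easy to interpret.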

  3. Stationary and non-stationary extreme value modeling of extreme temperature in Malaysia

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Salleh, Nur Hanim Mohd; Kassim, Suraiya

    2014-09-01

    Extreme annual temperature of eighteen stations in Malaysia is fitted to the Generalized Extreme Value distribution. Stationary and non-stationary models with trend are considered for each station and the Likelihood Ratio test is used to determine the best-fitting model. Results show that three out of eighteen stations i.e. Bayan Lepas, Labuan and Subang favor a model which is linear in the location parameter. A hierarchical cluster analysis is employed to investigate the existence of similar behavior among the stations. Three distinct clusters are found in which one of them consists of the stations that favor the non-stationary model. T-year estimated return levels of the extreme temperature are provided based on the chosen models.
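The likelihood-ratio model choice described above reduces to comparing 2·(ll_trend − ll_stationary) with the chi-squared critical value for the single extra (linear trend) parameter. A minimal sketch (our own illustration; the 5% critical value for 1 degree of freedom is 3.841):

```python
def lr_test(loglik_stationary, loglik_trend, crit=3.841):
    """Likelihood-ratio test between nested GEV models; the trend model adds
    one parameter, so 2*(ll1 - ll0) is compared to the chi-squared(1 df)
    5% critical value. Returns (statistic, reject_stationary)."""
    stat = 2.0 * (loglik_trend - loglik_stationary)
    return stat, stat > crit
```

A station such as Bayan Lepas would be one where the test statistic exceeds the critical value, so the non-stationary (trend-in-location) model is preferred.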

  4. The Political Economy of Interlibrary Organizations: Two Case Studies.

    ERIC Educational Resources Information Center

    Townley, Charles T.

    J. Kenneth Benson's political economy model for interlibrary cooperation identifies linkages and describes interactions between the environment, the interlibrary organization, and member libraries. A tentative general model for interlibrary organizations based on the Benson model was developed, and the fit of this adjusted model to the realities…

  5. Improved Solar-Radiation-Pressure Models for GPS Satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Yoaz; Kuang, Da

    2006-01-01

    A report describes a series of computational models conceived as an improvement over prior models for determining effects of solar-radiation pressure on orbits of Global Positioning System (GPS) satellites. These models are based on fitting coefficients of Fourier functions of Sun-spacecraft- Earth angles to observed spacecraft orbital motions.

  6. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    USDA-ARS?s Scientific Manuscript database

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
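Weighted least squares replaces the ordinary normal equations with weighted ones, so that more reliable sorption measurements (e.g. those with smaller variance) pull the fitted line harder. A minimal closed-form sketch for a straight line y = a + b·x (our own illustration, not the manuscript's code):

```python
def wls_line(x, y, w):
    """Weighted least squares fit of y = a + b*x with weights w
    (e.g. w_i = 1/variance_i), via the closed-form normal equations."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y)) / \
        sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    a = yb - b * xb
    return a, b
```

Setting all weights equal recovers ordinary least squares; on exact linear data any positive weights recover the true line.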

  7. Model-free estimation of the psychometric function

    PubMed Central

    Żychaluk, Kamila; Foster, David H.

    2009-01-01

    A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
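The local linear method advocated above can be sketched as a kernel-weighted straight-line fit around each stimulus level, evaluated at that level (our own illustration; in practice the bandwidth h would be chosen by cross-validation, as the abstract notes):

```python
from math import exp

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y|x] at x0 with a Gaussian kernel of
    bandwidth h: a weighted least-squares line fitted around x0,
    evaluated at x0."""
    w = [exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    b = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y)) / sxx
    return yb + b * (x0 - xb)
```

A useful property: on data that are exactly linear, the local linear estimate is exact for any bandwidth, which is why the method has low bias near the flanks of a psychometric function where a constant (kernel-mean) smoother would flatten the slope.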

  8. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    NASA Astrophysics Data System (ADS)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  9. Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks

    PubMed Central

    Richter, Philipp; Toledano-Ayala, Manuel

    2015-01-01

Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automatized. Gaussian process regression has been applied to overcome this issue, with promising results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
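The "readily available" model the paper scrutinizes, a zero-mean GP with a squared-exponential covariance, can be sketched from scratch as follows (an illustrative one-dimensional toy, not the study's code; the hyperparameters here are arbitrary assumptions):

```python
from math import exp

def sq_exp(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(a,b) = sf^2 * exp(-(a-b)^2 / (2*ell^2))."""
    return sf * sf * exp(-((a - b) ** 2) / (2.0 * ell * ell))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xtr, ytr, xte, ell=1.0, sf=1.0, noise=1e-3):
    """Zero-mean GP regression posterior mean at test inputs xte:
    mean(x*) = k(x*, X) @ (K + noise*I)^-1 @ y."""
    K = [[sq_exp(a, b, ell, sf) + (noise if i == j else 0.0)
          for j, b in enumerate(xtr)] for i, a in enumerate(xtr)]
    alpha = solve(K, ytr)
    return [sum(sq_exp(xs, a, ell, sf) * al for a, al in zip(xtr, alpha))
            for xs in xte]
```

The paper's point is that this default (zero mean, squared-exponential kernel) should itself be assessed, e.g. via residuals, rather than adopted wholesale for radio-map interpolation.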

  10. Accelerated pharmacokinetic map determination for dynamic contrast enhanced MRI using frequency-domain based Tofts model.

    PubMed

    Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam

    2014-01-01

Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed with the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high-resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the curve-fitting process used to obtain the pharmacokinetic parameters. The results show that, using the frequency-domain approach, curve fitting is computationally more efficient than the time-domain approach.
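The speed-up rests on the convolution theorem: the standard Tofts convolution Ct(t) = Ktrans · (Cp ⊛ e^(−kep·t)) becomes a pointwise product in the frequency domain. A sketch using a deliberately naive DFT for clarity (our own illustration, not the authors' implementation; in practice an FFT would be used):

```python
from cmath import exp as cexp, pi
from math import exp

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (for illustration only)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(xk * cexp(2j * pi * s * j * k / n) for k, xk in enumerate(x))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def tofts_ct(cp, ktrans, kep, dt):
    """Standard Tofts model Ct = Ktrans * (Cp convolved with exp(-kep*t)),
    evaluated in the frequency domain: zero-pad, multiply DFTs, invert."""
    n = len(cp)
    m = 2 * n  # zero-pad to avoid circular wrap-around
    kern = [exp(-kep * i * dt) * dt for i in range(n)]
    a = dft(cp + [0.0] * (m - n))
    b = dft(kern + [0.0] * (m - n))
    ct = dft([ai * bi for ai, bi in zip(a, b)], inverse=True)[:n]
    return [ktrans * v.real for v in ct]
```

With an FFT the product step costs O(n log n) per candidate (Ktrans, kep) pair instead of the O(n^2) of direct time-domain convolution, which is where the reported efficiency gain comes from.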

  11. Thermodynamic assessment of Ag–Cu–In

    DOE PAGES

    Muzzillo, Christopher P.; Anderson, Tim

    2018-01-16

The Ag-Cu-In thermodynamic material system is of interest for brazing alloys and chalcopyrite thin-film photovoltaics. To advance these applications, Ag-Cu-In was assessed and a Calphad model was developed. Binary Ag-Cu and Cu-In parameters were taken from previous assessments, while Ag-In was re-assessed. Structure-based models were employed for β-bcc(A2)-Ag3In, γ-Ag9In4, and AgIn2 to obtain a good fit to enthalpy, phase boundary, and invariant reaction data for Ag-In. Ternary Ag-Cu-In parameters were optimized to achieve an excellent fit to activity, enthalpy, and extensive phase equilibrium data. Relative to the previous Ag-Cu-In assessment, the fit was improved while fewer parameters were used.

  13. Assessment of a Business-to-Consumer (B2C) model for Telemonitoring patients with Chronic Heart Failure (CHF).

    PubMed

    Grustam, Andrija S; Vrijhoef, Hubertus J M; Koymans, Ron; Hukal, Philipp; Severens, Johan L

    2017-10-11

    The purpose of this study is to assess the Business-to-Consumer (B2C) model for telemonitoring patients with Chronic Heart Failure (CHF) by analysing the value it creates, both for organizations or ventures that provide telemonitoring services based on it, and for society. The business model assessment was based on the following categories: caveats, venture type, six-factor alignment, strategic market assessment, financial viability, valuation analysis, sustainability, societal impact, and technology assessment. The venture valuation was performed for three jurisdictions (countries) - Singapore, the Netherlands and the United States - in order to show the opportunities in a small, medium-sized, and large country (i.e. population). The business model assessment revealed that B2C telemonitoring is viable and profitable in the Innovating in Healthcare Framework. Analysis of the ecosystem revealed an average-to-excellent fit with the six factors. The structure and financing fit was average, public policy and technology alignment was good, while consumer alignment and accountability fit was deemed excellent. The financial prognosis revealed that the venture is viable and profitable in Singapore and the Netherlands but not in the United States due to relatively high salary inputs. The B2C model in telemonitoring CHF potentially creates value for patients, shareholders of the service provider, and society. However, the validity of the results could be improved, for instance by using a peer-reviewed framework, a systematic literature search, case-based cost/efficiency inputs, and varied scenario inputs.

  14. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, as are the climatological daily mean and the daily standard deviation, which are used for bias correction of daily climate model data. Sample size issues in either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key points: more robust estimates of daily climate characteristics; a statistical fitting approach; based on a perfect model approach. PMID:26042192

  15. 2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT

    NASA Astrophysics Data System (ADS)

    Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.

    2018-01-01

    We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
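The Markov Chain Monte Carlo machinery behind such Bayesian tilted-ring fits can be illustrated with a minimal random-walk Metropolis sampler (a one-parameter toy of our own, not 2DBAT itself):

```python
import random
from math import exp

def metropolis(loglike, x0, n_steps, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler with a flat prior: propose a
    Gaussian step, accept with probability min(1, exp(delta log-likelihood))."""
    rng = random.Random(seed)
    x, ll = x0, loglike(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        llp = loglike(xp)
        if rng.random() < exp(min(0.0, llp - ll)):
            x, ll = xp, llp
        samples.append(x)
    return samples
```

The practical advantage the abstract points to is that such sampling only needs broad parameter ranges, not careful starting guesses, and it maps out multimodal posteriors that a chi-squared minimizer would get stuck in.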

  16. Examining the "WorkFORCE"™ Assessment for Job Fit and Core Capabilities of "FACETS"™. Research Report. ETS RR-14-32

    ERIC Educational Resources Information Center

    Naemi, Bobby; Seybert, Jacob; Robbins, Steven; Kyllonen, Patrick

    2014-01-01

This report introduces the "WorkFORCE"™ Assessment for Job Fit, a personality assessment utilizing the "FACETS"™ core capability, which is based on innovations in forced-choice assessment and computer adaptive testing. The instrument is derived from the five-factor model (FFM) of personality and encompasses a broad spectrum of…

  17. Fuel consumption modeling in support of ATM environmental decision-making

    DOT National Transportation Integrated Search

    2009-07-01

The FAA has recently updated the airport terminal area fuel consumption methods used in its environmental models. These methods are based on fitting manufacturers' fuel consumption data to empirical equations. The new fuel consumption metho...

  18. GRace: a MATLAB-based application for fitting the discrimination-association model.

    PubMed

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  19. A goodness-of-fit test for capture-recapture model M(t) under closure

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. The test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t); it is partitioned into informative components and uses chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
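The general idea of a chi-square goodness-of-fit test on capture-history counts can be sketched as follows (a generic Pearson chi-square illustration, not the partitioned test of Stanley and Burnham; the counts and the number of fitted parameters are invented):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical observed counts of capture histories and the expected counts
# under a fitted model M(t) (all values invented for illustration).
observed = np.array([42, 18, 25, 15])
expected = np.array([40.0, 20.0, 24.0, 16.0])

stat = np.sum((observed - expected) ** 2 / expected)   # Pearson X^2
df = observed.size - 1 - 2   # cells - 1 - number of fitted parameters (assumed 2)
p_value = chi2.sf(stat, df)
print(stat, p_value)
```

A large p-value indicates no evidence of lack of fit; the partitioned version sums component statistics of this form over informative subsets of histories.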

  20. ElEvoHI: A Novel CME Prediction Tool for Heliospheric Imaging Combining an Elliptical Front with Drag-based Model Fitting

    NASA Astrophysics Data System (ADS)

    Rollett, T.; Möstl, C.; Isavnin, A.; Davies, J. A.; Kubicka, M.; Amerstorfer, U. V.; Harrison, R. A.

    2016-06-01

In this study, we present a new method for forecasting arrival times and speeds of coronal mass ejections (CMEs) at any location in the inner heliosphere. This new approach enables the adoption of a highly flexible geometrical shape for the CME front with an adjustable CME angular width and an adjustable radius of curvature of its leading edge, i.e., the assumed geometry is elliptical. Using, as input, Solar TErrestrial RElations Observatory (STEREO) heliospheric imager (HI) observations, a new elliptic conversion (ElCon) method is introduced and combined with the use of drag-based model (DBM) fitting to quantify the deceleration or acceleration experienced by CMEs during propagation. The result is then used as input for the Ellipse Evolution Model (ElEvo). Together, ElCon, DBM fitting, and ElEvo form the novel ElEvoHI forecasting utility. To demonstrate the applicability of ElEvoHI, we forecast the arrival times and speeds of 21 CMEs remotely observed from STEREO/HI and compare them to in situ arrival times and speeds at 1 AU. Compared to the commonly used STEREO/HI fitting techniques (Fixed-ϕ, Harmonic Mean, and Self-similar Expansion fitting), ElEvoHI improves the arrival time forecast by about 2 to ±6.5 hr and the arrival speed forecast by ≈ 250 to ±53 km s⁻¹, depending on the ellipse aspect ratio assumed. In particular, the remarkable improvement of the arrival speed prediction is potentially beneficial for predicting geomagnetic storm strength at Earth.
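The drag-based model (DBM) component has a well-known analytic solution for CME distance and speed under a quadratic drag force; a minimal arrival-time sketch follows (the drag parameter, solar wind speed, and initial conditions are typical assumed values, not taken from the paper):

```python
import numpy as np
from scipy.optimize import brentq

AU = 1.496e8                     # km
gamma = 1.0e-7                   # drag parameter [1/km] (assumed typical value)
w = 400.0                        # ambient solar wind speed [km/s]
v0 = 1000.0                      # initial CME speed [km/s]
r0 = 20 * 6.957e5                # initial distance: 20 solar radii, in km

def r_t(t):
    """Heliocentric distance after t seconds (DBM analytic solution, v0 > w)."""
    dv = v0 - w
    return r0 + w * t + np.log(1 + gamma * dv * t) / gamma

def v_t(t):
    """CME speed after t seconds."""
    dv = v0 - w
    return w + dv / (1 + gamma * dv * t)

# Arrival time at 1 AU: solve r(t) = 1 AU
t_arr = brentq(lambda t: r_t(t) - AU, 1.0, 1e6)
print(t_arr / 3600.0, v_t(t_arr))
```

With these assumed inputs the CME decelerates toward the wind speed, arriving at 1 AU after roughly three days; in ElEvoHI the drag parameter and initial speed are instead fitted to the HI time-elongation track.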

  1. New microscale constitutive model of human trabecular bone based on depth sensing indentation technique.

    PubMed

    Pawlikowski, Marek; Jankowski, Krzysztof; Skalski, Konstanty

    2018-05-30

A new constitutive model for human trabecular bone is presented in the present study. As the model is based on indentation tests performed on single trabeculae, it is formulated at the microscale. The constitutive law takes into account the non-linear viscoelasticity of the tissue. The elastic response is described by the hyperelastic Mooney-Rivlin model, while the viscoelastic effects are considered by means of the hereditary integral, in which stress depends on both time and strain. The material constants in the constitutive equation are identified on the basis of stress relaxation tests and indentation tests using a curve-fitting procedure. The constitutive model is implemented into the finite element package Abaqus® by means of a UMAT subroutine. The curve-fitting error is low and the viscoelastic behaviour of the tissue predicted by the proposed constitutive model corresponds well to the realistic response of the trabecular bone. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Driving-forces model on individual behavior in scenarios considering moving threat agents

    NASA Astrophysics Data System (ADS)

    Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia

    2017-09-01

The individual behavior model is a contributory factor to improve the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks carried out by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model to scenarios that include moving threat agents. An experiment was conducted to validate the key components of the model. The model is then compared with the advanced Elliptical Specification II social force model by calculating the fitting errors between the simulated and experimental trajectories, and is applied to simulate a specific scenario. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.
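A social-force-style update of the kind the driving-forces model extends can be sketched for a single agent fleeing a threat (a toy illustration, not the paper's seven-hypothesis model; all coefficients are invented):

```python
import numpy as np

def step(pos, vel, threat_pos, exit_pos, dt=0.1, tau=0.5, v0=1.5, A=3.0, B=1.0):
    """One Euler step: driving force toward the exit plus exponential
    repulsion from the threat agent (hypothetical coefficients)."""
    e = exit_pos - pos
    e = e / np.linalg.norm(e)                    # desired direction: toward exit
    f_drive = (v0 * e - vel) / tau               # relaxation to desired velocity
    d = pos - threat_pos
    dist = np.linalg.norm(d)
    f_threat = A * np.exp(-dist / B) * d / dist  # repulsion decays with distance
    vel = vel + (f_drive + f_threat) * dt
    return pos + vel * dt, vel

pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
threat, exit_ = np.array([-2.0, 0.0]), np.array([10.0, 0.0])
for _ in range(100):
    pos, vel = step(pos, vel, threat, exit_)
print(pos)
```

The paper's contribution lies in replacing and validating the force terms for moving threat agents; the update loop structure is the same.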

  3. Increasing students' physical activity during school physical education: rationale and protocol for the SELF-FIT cluster randomized controlled trial.

    PubMed

    Ha, Amy S; Lonsdale, Chris; Lubans, David R; Ng, Johan Y Y

    2017-07-11

The Self-determined Exercise and Learning For FITness (SELF-FIT) is a multi-component school-based intervention based on tenets of self-determination theory. SELF-FIT aims to increase students' moderate-to-vigorous physical activity (MVPA) during physical education lessons and enhance their autonomous motivation towards fitness activities. Using a cluster randomized controlled trial, we aim to examine the effects of the intervention on students' MVPA during school physical education. Secondary 2 students (approximately aged 14 years) from 26 classes in 26 different schools will be recruited. After baseline assessments, students will be randomized into either the experimental group or a wait-list control group using matched-pair randomization. Teachers allocated to the experimental group will attend two half-day workshops and deliver the SELF-FIT intervention for 8 weeks. The main intervention components include training teachers to teach in more need-supportive ways and conducting fitness exercises using a fitness dice with interchangeable faces. Other motivational components, such as playing music during classes, are also included. The primary outcome of the trial is students' MVPA during PE lessons. Secondary outcomes include students' leisure-time MVPA, perceived need support from teachers, need satisfaction, autonomous motivation towards physical education, intention to engage in physical activity, psychological well-being, and health-related fitness (cardiorespiratory and muscular fitness). Quantitative data will be analyzed using multilevel modeling approaches. Focus group interviews will also be conducted to assess students' perceptions of the intervention. The SELF-FIT intervention has been designed to improve students' health and well-being by using high-intensity activities in classes delivered by teachers trained to be need supportive. If successful, scalable interventions based on SELF-FIT could be applied in physical education at large. The trial is registered at the Australia New Zealand Clinical Trial Registry (Trial ID: ACTRN12615000633583; date of registration: 18 June 2015).

  4. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models

    PubMed Central

Manikis, Georgios C.; Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.; Papanikolaou, Nikolaos

    2017-01-01

Purpose The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential Gaussian and non-Gaussian models, in diffusion weighted imaging of rectal cancer. Material and methods Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted-R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. Conclusion No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161

  5. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models.

    PubMed

    Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos

    2017-01-01

The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential Gaussian and non-Gaussian models, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted-R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
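The AIC-based comparison of mono- and bi-exponential signal models can be sketched on synthetic data (the b-values follow the protocol above; the tissue parameters, noise level, and RSS-based AIC formula are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 25, 50, 100, 500, 1000, 2000], float)  # s/mm^2, per the study

def mono(b, s0, d):
    return s0 * np.exp(-b * d)

def bi(b, s0, f, d_fast, d_slow):
    return s0 * (f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow))

# Synthetic two-compartment signal with a little noise (parameters invented)
rng = np.random.default_rng(1)
signal = bi(b, 1.0, 0.3, 0.01, 0.001) + rng.normal(0, 0.005, b.size)

def aic(y, yhat, k):
    """AIC from residual sum of squares for least-squares fits."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

p_mono, _ = curve_fit(mono, b, signal, p0=[1.0, 0.002])
p_bi, _ = curve_fit(bi, b, signal, p0=[1.0, 0.3, 0.01, 0.001], maxfev=10000)

aic_mono = aic(signal, mono(b, *p_mono), 2)
aic_bi = aic(signal, bi(b, *p_bi), 4)
print(aic_mono, aic_bi)
```

With genuinely bi-exponential data the extra parameters pay for themselves and the bi-exponential model wins on AIC; on near-mono-exponential voxels the penalty term instead favors the simpler model, which is the per-pixel decision the study reports.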

  6. Cost-effectiveness of population-based screening for colorectal cancer: a comparison of guaiac-based faecal occult blood testing, faecal immunochemical testing and flexible sigmoidoscopy

    PubMed Central

    Sharp, L; Tilson, L; Whyte, S; O'Ceilleachair, A; Walsh, C; Usher, C; Tappenden, P; Chilcott, J; Staines, A; Barry, M; Comber, H

    2012-01-01

    Background: Several colorectal cancer-screening tests are available, but it is uncertain which provides the best balance of risks and benefits within a screening programme. We evaluated cost-effectiveness of a population-based screening programme in Ireland based on (i) biennial guaiac-based faecal occult blood testing (gFOBT) at ages 55–74, with reflex faecal immunochemical testing (FIT); (ii) biennial FIT at ages 55–74; and (iii) once-only flexible sigmoidoscopy (FSIG) at age 60. Methods: A state-transition model was used to estimate costs and outcomes for each screening scenario vs no screening. A third party payer perspective was adopted. Probabilistic sensitivity analyses were undertaken. Results: All scenarios would be considered highly cost-effective compared with no screening. The lowest incremental cost-effectiveness ratio (ICER vs no screening €589 per quality-adjusted life-year (QALY) gained) was found for FSIG, followed by FIT (€1696) and gFOBT (€4428); gFOBT was dominated. Compared with FSIG, FIT was associated with greater gains in QALYs and reductions in lifetime cancer incidence and mortality, but was more costly, required considerably more colonoscopies and resulted in more complications. Results were robust to variations in parameter estimates. Conclusion: Population-based screening based on FIT is expected to result in greater health gains than a policy of gFOBT (with reflex FIT) or once-only FSIG, but would require significantly more colonoscopy resources and result in more individuals experiencing adverse effects. Weighing these advantages and disadvantages presents a considerable challenge to policy makers. PMID:22343624
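The incremental cost-effectiveness ratio (ICER) computation underlying these comparisons is straightforward; a minimal sketch follows (the per-person costs and QALYs are invented, chosen only so the FSIG-vs-no-screening figure lands near the abstract's ≈€589/QALY):

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio versus a comparator (EUR per QALY)."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# Hypothetical per-person discounted costs (EUR) and QALYs
no_screen = (1000.0, 14.000)
fsig      = (1059.0, 14.100)   # once-only flexible sigmoidoscopy
fit       = (1475.0, 14.280)   # biennial FIT

print(icer(fsig[0], fsig[1], *no_screen))   # FSIG vs no screening
print(icer(fit[0], fit[1], *fsig))          # incremental ICER of FIT vs FSIG
```

A strategy is "dominated" when another strategy is both cheaper and yields more QALYs, in which case the ICER against it is not meaningful.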

  7. Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.; Taylor, Aaron B.

    2009-01-01

    Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…

  8. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within the linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated over all directions. The integration of orientation and QM linear response calculations together makes the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that QM calculation is only needed once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since ESP is directly fitted, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.

  9. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. Feed-in tariff structure development for photovoltaic electricity and the associated benefits for the Kingdom of Bahrain

    NASA Astrophysics Data System (ADS)

    Haji, Shaker; Durazi, Amal; Al-Alawi, Yaser

    2018-05-01

    In this study, the feed-in tariff (FIT) scheme was considered to facilitate an effective introduction of renewable energy in the Kingdom of Bahrain. An economic model was developed for the estimation of feasible FIT rates for photovoltaic (PV) electricity on a residential scale. The calculations of FIT rates were based mainly on the local solar radiation, the cost of a grid-connected PV system, the operation and maintenance cost, and the provided financial support. The net present value and internal rate of return methods were selected for model evaluation with the guide of simple payback period to determine the cost of energy and feasible FIT rates under several scenarios involving different capital rebate percentages, loan down payment percentages, and PV system costs. Moreover, to capitalise on the FIT benefits, its impact on the stakeholders beyond the households was investigated in terms of natural gas savings, emissions cutback, job creation, and PV-electricity contribution towards the energy demand growth. The study recommended the introduction of the FIT scheme in the Kingdom of Bahrain due to its considerable benefits through a setup where each household would purchase the PV system through a loan, with the government and the electricity customers sharing the FIT cost.
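The net present value and internal rate of return methods used for the FIT evaluation can be sketched as follows (the cash flows, currency, and 20-year project horizon are invented for illustration):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0 (the investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-9):
    """Internal rate of return via bisection (assumes a single sign change
    in NPV over the bracket, true for an investment followed by revenues)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical residential PV project: 5000 (currency units) system cost,
# then 20 years of FIT revenue net of O&M (all figures invented).
flows = [-5000.0] + [450.0] * 20
print(npv(0.05, flows))        # positive NPV at a 5% discount rate
print(irr(flows))              # break-even discount rate
```

A FIT rate is "feasible" in this framing when the resulting revenue stream makes NPV non-negative at the household's required rate of return.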

  11. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  12. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  13. Three-dimensional deformable-model-based localization and recognition of road vehicles.

    PubMed

    Zhang, Zhaoxiang; Tan, Tieniu; Huang, Kaiqi; Wang, Yunhong

    2012-01-01

We address the problem of model-based object recognition. Our aim is to localize and recognize road vehicles from monocular images or videos in calibrated traffic scenes. A 3-D deformable vehicle model with 12 shape parameters is set up as prior information, and its pose is determined by three parameters, which are its position on the ground plane and its orientation about the vertical axis under ground-plane constraints. An efficient local gradient-based method is proposed to evaluate the fitness between the projection of the vehicle model and image data, which is combined into a novel evolutionary computing framework to estimate the 12 shape parameters and three pose parameters by iterative evolution. The recovery of pose parameters achieves vehicle localization, whereas the shape parameters are used for vehicle recognition. Numerous experiments are conducted in this paper to demonstrate the performance of our approach. It is shown that the local gradient-based method can accurately and efficiently evaluate the fitness between the projection of the vehicle model and the image data. The evolutionary computing framework is effective for vehicles of different types and poses and is robust to various kinds of occlusion.

  14. Model for macroevolutionary dynamics.

    PubMed

    Maruvka, Yosef E; Shnerb, Nadav M; Kessler, David A; Ricklefs, Robert E

    2013-07-02

    The highly skewed distribution of species among genera, although challenging to macroevolutionists, provides an opportunity to understand the dynamics of diversification, including species formation, extinction, and morphological evolution. Early models were based on either the work by Yule [Yule GU (1925) Philos Trans R Soc Lond B Biol Sci 213:21-87], which neglects extinction, or a simple birth-death (speciation-extinction) process. Here, we extend the more recent development of a generic, neutral speciation-extinction (of species)-origination (of genera; SEO) model for macroevolutionary dynamics of taxon diversification. Simulations show that deviations from the homogeneity assumptions in the model can be detected in species-per-genus distributions. The SEO model fits observed species-per-genus distributions well for class-to-kingdom-sized taxonomic groups. The model's predictions for the appearance times (the time of the first existing species) of the taxonomic groups also approximately match estimates based on molecular inference and fossil records. Unlike estimates based on analyses of phylogenetic reconstruction, fitted extinction rates for large clades are close to speciation rates, consistent with high rates of species turnover and the relatively slow change in diversity observed in the fossil record. Finally, the SEO model generally supports the consistency of generic boundaries based on morphological differences between species and provides a comparator for rates of lineage splitting and morphological evolution.
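The flavour of a neutral speciation-extinction-origination (SEO) process can be conveyed with a toy event-driven simulation (not the authors' model or rates; all rate constants and the restart rule are invented for illustration):

```python
import random

random.seed(42)

def simulate_seo(steps=20000, lam=1.0, mu=0.9, nu=0.05):
    """Toy SEO process: each event is a speciation (new species added to a
    genus chosen in proportion to its size, rate ~ lam per species), an
    extinction (species removed, rate ~ mu per species), or an origination
    (new genus founded with one species, rate ~ nu)."""
    genera = [1]                        # species count per genus
    for _ in range(steps):
        n = sum(genera)
        total = lam * n + mu * n + nu
        u = random.uniform(0, total)
        if u < lam * n:                 # speciation
            i = random.choices(range(len(genera)), weights=genera)[0]
            genera[i] += 1
        elif u < (lam + mu) * n:        # extinction
            i = random.choices(range(len(genera)), weights=genera)[0]
            genera[i] -= 1
            if genera[i] == 0:
                genera.pop(i)
            if not genera:              # restart if the whole clade dies out
                genera = [1]
        else:                           # origination of a new genus
            genera.append(1)
    return genera

g = simulate_seo()
print(len(g), sum(g), max(g))
```

Because early-founded genera accumulate species preferentially, the resulting species-per-genus distribution is highly skewed, which is the empirical pattern the SEO model is fitted to.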

  15. Assessing Chinese coach drivers' fitness to drive: The development of a toolkit based on cognition measurements.

    PubMed

    Wang, Huarong; Mo, Xian; Wang, Ying; Liu, Ruixue; Qiu, Peiyu; Dai, Jiajun

    2016-10-01

Road traffic accidents resulting in group deaths and injuries are often related to coach drivers' inappropriate operations and behaviors. Thus, the evaluation of coach drivers' fitness to drive is an important measure for improving the safety of public transportation. Previous related research focused on drivers' age and health condition; comprehensive studies of commercial drivers' cognitive capacities are limited. This study developed a toolkit consisting of nine cognition measurements across driver perception/sensation, attention, and reaction. A total of 1413 licensed coach drivers in Jiangsu Province, China were investigated and tested. Results indicated that drivers with an accident history within three years performed overwhelmingly worse (p<0.001) on dark adaptation, dynamic visual acuity, depth perception, attention concentration, and attention span, and significantly worse (p<0.05) on reaction to complex tasks, compared with drivers with clear accident records. These findings supported that, in the assessment of fitness to drive, cognitive capacities are sensitive to the detection of drivers with accident proneness. We first developed a simple evaluation model based on the percentile distribution of all single measurements, which defined the normal range of "fit-to-drive" by eliminating a 5% tail of each measurement. A comprehensive evaluation model was later constructed based on kernel principal component analysis, in which the eliminated 5% tail was calculated from an integrated index. Methods for categorizing qualified, good, and excellent coach drivers and criteria for evaluating and training Chinese coach drivers' fitness to drive were also proposed. Copyright © 2015 Elsevier Ltd. All rights reserved.
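The simple percentile-based evaluation model, which flags the worst 5% tail of each measurement as outside the "fit-to-drive" range, can be sketched as follows (the score distribution is invented; lower scores are assumed better):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical reaction-time scores for 1000 drivers (ms; invented data).
scores = rng.normal(500, 50, 1000)
cutoff = np.percentile(scores, 95)      # the upper 5% tail is eliminated

def fit_to_drive(score):
    """A driver passes this single measurement if not in the worst 5% tail."""
    return score <= cutoff

flagged = int(np.sum(scores > cutoff))
print(cutoff, flagged)
```

The comprehensive model in the study applies the same 5% rule, but to an integrated index built with kernel principal component analysis rather than to each raw measurement separately.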

  16. Mastoid Cavity Dimensions and Shape: Method of Measurement and Virtual Fitting of Implantable Devices

    PubMed Central

    Handzel, Ophir; Wang, Haobing; Fiering, Jason; Borenstein, Jeffrey T.; Mescher, Mark J.; Leary Swan, Erin E.; Murphy, Brian A.; Chen, Zhiqiang; Peppi, Marcello; Sewell, William F.; Kujawa, Sharon G.; McKenna, Michael J.

    2009-01-01

    Temporal bone implants can be used to electrically stimulate the auditory nerve, to amplify sound, to deliver drugs to the inner ear and potentially for other future applications. The implants require storage space and access to the middle or inner ears. The most acceptable space is the cavity created by a canal wall up mastoidectomy. Detailed knowledge of the available space for implantation and pathways to access the middle and inner ears is necessary for the design of implants and successful implantation. Based on temporal bone CT scans a method for three-dimensional reconstruction of a virtual canal wall up mastoidectomy space is described. Using Amira® software the area to be removed during such surgery is marked on axial CT slices, and a three-dimensional model of that space is created. The average volume of 31 reconstructed models is 12.6 cm3 with standard deviation of 3.69 cm3, ranging from 7.97 to 23.25 cm3. Critical distances were measured directly from the model and their averages were calculated: height 3.69 cm, depth 2.43 cm, length above the external auditory canal (EAC) 4.45 cm and length posterior to EAC 3.16 cm. These linear measurements did not correlate well with volume measurements. The shape of the models was variable to a significant extent making the prediction of successful implantation for a given design based on linear and volumetric measurement unreliable. Hence, to assure successful implantation, preoperative assessment should include a virtual fitting of an implant into the intended storage space. The above-mentioned three-dimensional models were exported from Amira to a Solidworks application where virtual fitting was performed. Our results are compared to other temporal bone implant virtual fitting studies. Virtual fitting has been suggested for other human applications. PMID:19372649

  17. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models.

    PubMed

    Andersson, Therese M L; Dickman, Paul W; Eloranta, Sandra; Lambert, Paul C

    2011-06-22

    When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. The new method gives similar results to a standard cure model when the latter is reliable, and a better fit when the standard cure model gives biased estimates. Cure models within the framework of flexible parametric models enable cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible with parametric cure models. © 2011 Andersson et al; licensee BioMed Central Ltd.
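The mixture cure idea described above can be illustrated with a deliberately simple parametric special case: S(t) = pi + (1 - pi)exp(-lambda*t), fitted by maximum likelihood to simulated, administratively censored data. Note that the paper's actual method replaces this fixed exponential form with flexible spline-based hazards; all names and numbers below are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, d):
    # Mixture cure model: S(t) = pi + (1 - pi) * exp(-lam * t);
    # event density for the "uncured": f(t) = (1 - pi) * lam * exp(-lam * t).
    pi, lam = params
    S = pi + (1 - pi) * np.exp(-lam * t)
    f = (1 - pi) * lam * np.exp(-lam * t)
    return -np.sum(d * np.log(f) + (1 - d) * np.log(S))

rng = np.random.default_rng(7)
n, cure_frac, lam_true = 2000, 0.4, 0.5
cured = rng.random(n) < cure_frac
t_event = rng.exponential(1 / lam_true, n)
t_event[cured] = np.inf                 # "statistically cured": no excess event
t = np.minimum(t_event, 10.0)           # administrative censoring at t = 10
d = (t_event < 10.0).astype(float)      # event indicator

res = minimize(neg_loglik, x0=[0.5, 1.0], args=(t, d),
               bounds=[(1e-3, 0.999), (1e-3, None)])
pi_hat, lam_hat = res.x                 # estimated cure proportion and rate
```

With ample follow-up relative to 1/lambda, the cure proportion and the survival of the "uncured" are both recoverable from censored data in this toy setting.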

  18. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models

    PubMed Central

    2011-01-01

    Background When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Methods Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. Results We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. The new method gives similar results to a standard cure model when the latter is reliable, and a better fit when the standard cure model gives biased estimates. Conclusions Cure models within the framework of flexible parametric models enable cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible with parametric cure models. PMID:21696598

  19. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
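The "conditioning" operation highlighted in this abstract is, at its core, the standard conditional-Gaussian formula applied per mixture component, with the component weights re-scaled by their likelihood at the known values. A minimal NumPy sketch of that idea follows; the function name and interface are illustrative, not XDGMM's actual API.

```python
import numpy as np

def condition_gmm(weights, means, covs, known_idx, known_vals):
    # Condition a Gaussian mixture on known values of a subset of
    # dimensions: apply the conditional-Gaussian mean/covariance formulas
    # to each component, and re-weight each component by its likelihood
    # at the known values.
    known_idx = np.asarray(known_idx)
    free_idx = np.setdiff1d(np.arange(means.shape[1]), known_idx)
    new_w, new_mu, new_cov = [], [], []
    for w, mu, S in zip(weights, means, covs):
        S_kk = S[np.ix_(known_idx, known_idx)]
        S_fk = S[np.ix_(free_idx, known_idx)]
        S_ff = S[np.ix_(free_idx, free_idx)]
        diff = known_vals - mu[known_idx]
        sol = np.linalg.solve(S_kk, diff)
        new_mu.append(mu[free_idx] + S_fk @ sol)
        new_cov.append(S_ff - S_fk @ np.linalg.solve(S_kk, S_fk.T))
        # Gaussian likelihood of the known values under this component
        norm = np.exp(-0.5 * diff @ sol) / np.sqrt(
            (2 * np.pi) ** len(known_idx) * np.linalg.det(S_kk))
        new_w.append(w * norm)
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)
```

Sampling from the conditional mixture returned here is what lets a fitted model predict, say, supernova parameters given observed host-galaxy properties.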

  20. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  1. Investigating different approaches to develop informative priors in hierarchical Bayesian safety performance functions.

    PubMed

    Yu, Rongjie; Abdel-Aty, Mohamed

    2013-07-01

    The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information about the independent variables can be included in the inference procedure. However, few studies have discussed how to formulate informative priors for the independent variables or evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches for developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors were tested with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal). The deviance information criterion (DIC), R-square values, and coefficients of variation of the estimates were used as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracy. Furthermore, informative priors for the inverse dispersion parameter were also introduced and tested. The effects of the different types of informative priors on model estimation and goodness-of-fit were compared, and conclusions drawn. Finally, based on the results, recommendations for future research topics and study applications are made. Copyright © 2013 Elsevier Ltd. All rights reserved.
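The two-stage Bayesian updating of informative priors mentioned above can be illustrated in a heavily simplified conjugate setting for a single Poisson rate; the paper's hierarchical Poisson-gamma safety performance functions are substantially richer than this sketch, and the counts below are invented.

```python
import numpy as np

def gamma_poisson_update(alpha, beta, counts):
    # Conjugate update: a Gamma(alpha, beta) prior on a Poisson rate,
    # combined with observed counts, yields a
    # Gamma(alpha + sum(counts), beta + n) posterior.
    counts = np.asarray(counts)
    return alpha + counts.sum(), beta + counts.size

# Stage 1: turn historical data into an informative prior (toy counts).
a1, b1 = gamma_poisson_update(1.0, 1.0, [3, 5, 4, 2])
# Stage 2: update that informative prior with current data.
a2, b2 = gamma_poisson_update(a1, b1, [4, 6])
post_mean = a2 / b2   # posterior mean of the Poisson rate
```

The point of the two-stage scheme is visible here: the stage-1 posterior becomes the stage-2 prior, so historical data pulls the final estimate toward plausible values and stabilises it.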

  2. Variation and Grey GM(1, 1) Prediction of Melting Peak Temperature of Polypropylene During Ultraviolet Radiation Aging

    NASA Astrophysics Data System (ADS)

    Chen, K.; Y Zhang, T.; Zhang, F.; Zhang, Z. R.

    2017-12-01

    Grey system theory takes as its research object uncertain systems whose information is partly known and partly unknown; it extracts useful information from the known part and thereby reveals the system's underlying pattern of variation. To investigate the applicability of this data-driven modelling method to fitting and predicting the melting peak temperature (Tm) of polypropylene (PP) during ultraviolet radiation aging, the Tm of homo-polypropylene after different ultraviolet radiation exposure times, measured by differential scanning calorimetry, was fitted and predicted with a grey GM(1, 1) model based on grey system theory. The results show that the Tm of PP declines as aging time increases, and the fitting and prediction equation obtained from the GM(1, 1) model is Tm = 166.567472exp(-0.00012t). The fit of this equation is excellent, and the maximum relative error between predicted and actual Tm values is 0.32%. Grey system theory requires little original data, offers high prediction accuracy, and can be used to predict the aging behaviour of PP.
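For reference, the GM(1,1) procedure applied in this record is short enough to sketch in full: accumulate the series, regress the original values on the mean-generated background of the accumulation, and invert the whitened solution. The sequence below uses toy values, not the paper's Tm data.

```python
import numpy as np

def gm11_fit(x0):
    # Fit a grey GM(1,1) model to a short data sequence and return a
    # function predicting the original series at index k.
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating sequence
    z = 0.5 * (x1[1:] + x1[:-1])            # mean-generated background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development/control coefs
    def predict(k):
        # Predicted original value at index k (k = 0 is the first point).
        k = np.asarray(k, dtype=float)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return np.where(k == 0, x0[0], x1_hat - x1_prev)
    return predict

# Toy declining sequence (hypothetical values, not the paper's measurements)
pred = gm11_fit([166.5, 166.3, 166.1, 165.9, 165.7])
```

As the abstract notes, the method needs only a handful of points, which is exactly the regime where conventional regression is data-starved.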

  3. Adaptive evolutionary walks require neutral intermediates in RNA fitness landscapes.

    PubMed

    Rendel, Mark D

    2011-01-01

    In RNA fitness landscapes with interconnected networks of neutral mutations, neutral precursor mutations can play an important role in making epistatic adaptive mutant combinations accessible. I use an exhaustively surveyed fitness landscape model based on short-sequence RNA genotypes (and their secondary structure phenotypes) to calculate the minimum rate at which mutants initially appearing as neutral are incorporated into an adaptive evolutionary walk. I show, first, that incorporating neutral mutations significantly increases the number of point mutations in a given evolutionary walk compared to estimates from previous adaptive walk models. Second, incorporating neutral mutants into such a walk significantly increases the final fitness encountered on that walk; indeed, evolutionary walks including neutral steps often reach the global optimum in this model. Third, and perhaps most importantly, evolutionary paths of this kind are often extremely winding and can undergo multiple mutations at a given sequence position within a single walk; the potential of these winding paths to mislead phylogenetic reconstruction is briefly considered. Copyright © 2010 Elsevier Inc. All rights reserved.
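A toy illustration of the central claim: on a landscape with a neutral ridge, a strictly uphill walk stalls on a plateau, while a walk that also accepts fitness-neutral one-mutant neighbours can escape it. Bit strings stand in for RNA genotypes here, and the fitness values are invented.

```python
import random

def adaptive_walk(fitness, start, allow_neutral, max_steps=100, seed=0):
    # Greedy walk on a bit-string landscape: always take a random fitter
    # one-mutant neighbour; optionally accept a fitness-neutral neighbour
    # when no fitter one exists.
    rng = random.Random(seed)
    g = start
    for _ in range(max_steps):
        nbrs = [g[:i] + ('1' if g[i] == '0' else '0') + g[i + 1:]
                for i in range(len(g))]
        better = [n for n in nbrs if fitness[n] > fitness[g]]
        neutral = [n for n in nbrs if fitness[n] == fitness[g]]
        if better:
            g = rng.choice(better)
        elif allow_neutral and neutral:
            g = rng.choice(neutral)   # neutral step keeps the walk moving
        else:
            break                     # local optimum: a strict walk stalls
    return g

# Invented landscape with a neutral ridge from '000' to the optimum '111'
fit = {'000': 1, '001': 1, '010': 0, '011': 2,
       '100': 0, '101': 0, '110': 0, '111': 3}
```

Starting from '000', the strict walk never leaves its starting genotype, while the neutral-step walk reaches the global optimum via '001' and '011', mirroring the paper's second result.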

  4. Beyond the audiogram: application of models of auditory fitness for duty to assess communication in the real world.

    PubMed

    Dubno, Judy R

    2018-05-01

    This manuscript provides a Commentary on a paper published in the current issue of the International Journal of Audiology and the companion paper published in Ear and Hearing by Soli et al. These papers report background, rationale and results of a novel modelling approach to assess "auditory fitness for duty," or an individual's ability to perform hearing-critical tasks related to their job, based on their likelihood of effective speech communication in the listening environment in which the task is performed.

  5. Absolute Spectrophotometric Calibration to 1% from the FUV through the near-IR

    NASA Astrophysics Data System (ADS)

    Finley, David

    2006-07-01

    We are requesting additional support to complete the work now being carried out under the Cycle 14 archive program, HST-AR-10654. The most critical component of that effort is an accurate determination of the STIS spectrometer LSF, so that we may correctly model the infill of the Balmer line cores by light redistributed from the wings and adjacent continuum. That is the essential input for obtaining accurate and unbiased effective temperatures and gravities, and hence calibrated fluxes, via line profile fitting of the WD calibration standards. To evaluate the published STIS LSF, we investigated the spectral images of the calibration targets, yielding several significant results: (a) the STIS LSF varies significantly; (b) existing observation-based spectroscopic LSFs or imaging PSFs are inadequate for deriving suitable spectroscopic LSFs; (c) accounting for the PSF/LSF variability will improve spectrophotometric accuracy; (d) the LSFs used for model fits must be consistent with the extraction process details; and (e) TinyTim-generated PSFs, with some modifications, provide the most suitable basis for producing the required LSFs that are tailored to each individual spectral observation. Based on our current (greatly improved) state of knowledge of the instrumental effects, we are now requesting additional support to complete the work needed to generate correct LSFs, and then carry out the analyses that were the subject of the original proposal. Our goal is the same: to produce a significant improvement to the existing HST calibration. The current calibration is based on three primary DA white dwarf standards, GD 71, GD 153, and G 191-B2B. The standard fluxes are calculated using NLTE models, with effective temperatures and gravities that were derived from Balmer line fits using LTE models. We propose to improve the accuracy and internal consistency of the calibration by deriving corrected effective temperatures and gravities based on fitting the observed line profiles with updated NLTE models, and including the fit results from multiple STIS spectra, rather than the (usually) 1 or 2 ground-based spectra used previously. We will also determine the fluxes for 5 new, fainter primary or secondary standards, extending the standard V magnitude lower limit from 13.4 to 16.5, and extending the wavelength coverage from 0.1 to 2.5 micron. The goal is to achieve an overall flux accuracy of 1%, which will be needed, for example, for the upcoming supernova survey missions to measure the equation of state of the dark energy that is accelerating the expansion of the universe.

  6. Patterns of ecosystem services supply across farm properties: Implications for ecosystem services-based policy incentives.

    PubMed

    Nahuelhual, Laura; Benra, Felipe; Laterra, Pedro; Marin, Sandra; Arriagada, Rodrigo; Jullian, Cristobal

    2018-09-01

    In developing countries, the protection of biodiversity and ecosystem services (ES) rests in the hands of millions of small landowners who coexist with large properties, in a reality of highly unequal land distribution. Guiding the effective allocation of ES-based incentives in such contexts requires researchers and practitioners to tackle a largely overlooked question: for a given targeted area, will a single large farm or several small ones provide the most ES supply? The answer to this question has important implications for conservation planning and rural development alike, which transcend efficiency to involve equity issues. We address this question by proposing and testing ES supply-area relations (ESSARs) around three basic hypothesized models, characterized by constant (model 1), increasing (model 2), and decreasing (model 3) increments of ES supply per unit of area, or ES "productivity". Data to explore ESSARs came from 3384 private landholdings located in southern Chile, ranging from 0.5 ha to over 30,000 ha, and indicators of four ES (forage, timber, recreation opportunities, and water supply). Forage provision best fit model 3, which suggests that targeting several small farms to provide this ES should be a preferred choice, as compared to a single large farm. Timber provision best fit model 2, suggesting that in this case targeting a single large farm would be a more effective choice. Recreation opportunities best fit model 1, which indicates that several small farms or a single large farm of comparable size would be equally effective in delivering this ES. Water provision fit model 1 or model 2 depending on the study site. The results corroborate that ES provision is not independent of property area, and therefore understanding ESSARs is a necessary condition for setting conservation incentives that are both efficient (delivering the highest conservation outcome at the least cost) and fair for landowners. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. [Effects of different rootstocks on the weak light tolerance ability of summer black grape based on 4 photo-response models].

    PubMed

    Han, Xiao; Wang, Hai Bo; Wang, Xiao di; Shi, Xiang Bin; Wang, Bao Liang; Zheng, Xiao Cui; Wang, Zhi Qiang; Liu, Feng Zhi

    2017-10-01

    The photo-response curves of 11 rootstock-scion combinations (summer black/Beta, summer black/1103P, summer black/101-14, summer black/3309C, summer black/140Ru, summer black/5C, summer black/5BB, summer black/420A, summer black/SO4, summer black/Kangzhen No.1 and summer black/Huapu No.1) were fitted with the rectangular hyperbola model, the non-rectangular hyperbola model, the modified rectangular hyperbola model and the exponential model, respectively, and the differences in fit were analyzed in terms of the coefficient of determination, light compensation point, light saturation point, initial quantum efficiency, maximum photosynthetic rate and dark respiration rate. The results showed that the coefficients of determination of all four models were above 0.98, and there was no obvious difference in the fitted values of the light compensation point among the four models. The modified rectangular hyperbola model fitted the light saturation point, apparent quantum yield, maximum photosynthetic rate and dark respiration rate best, and had the minimum value of the Akaike information criterion (AIC); it was therefore the best of the four models. Clustering analysis indicated that the summer black/SO4 and summer black/420A combinations had a low light compensation point, high apparent quantum yield and low dark respiration rate among the 11 rootstock-scion combinations, suggesting that these two combinations use weak light more efficiently owing to their lower respiratory consumption and higher weak light tolerance. The Topsis comparison method ranked summer black/SO4 and summer black/420A as No. 1 and No. 2, respectively, in weak light tolerance, consistent with the cluster analysis. Consequently, summer black has the highest weak light tolerance when grafted on 420A or SO4, which could be the most suitable rootstock-scion combinations for protected cultivation.
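The light-response model families named above are all straightforward to fit with nonlinear least squares. The sketch below uses synthetic data and scipy's curve_fit; the rectangular hyperbola is fitted, and the modified form (whose extra factor allows a post-saturation decline) is shown alongside for comparison. All parameter values are hypothetical, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def rect_hyperbola(I, alpha, Pmax, Rd):
    # Rectangular hyperbola light-response model.
    return alpha * I * Pmax / (alpha * I + Pmax) - Rd

def mod_rect_hyperbola(I, alpha, beta, gamma, Rd):
    # Modified rectangular hyperbola; the (1 - beta*I)/(1 + gamma*I)
    # factor lets the curve decline beyond light saturation.
    return alpha * (1 - beta * I) / (1 + gamma * I) * I - Rd

# Synthetic photo-response data (hypothetical values)
I = np.array([0, 50, 100, 200, 400, 800, 1200, 1600], dtype=float)
Pn = rect_hyperbola(I, 0.05, 20.0, 1.5) \
     + np.random.default_rng(1).normal(0, 0.1, I.size)

popt, _ = curve_fit(rect_hyperbola, I, Pn, p0=[0.05, 20.0, 1.0])
alpha_hat, Pmax_hat, Rd_hat = popt
```

From the fitted parameters, the dark respiration rate is read off directly (Rd) and the light compensation and saturation points can be solved from the fitted curve, which is how the per-model comparisons in the abstract are computed.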

  8. Examining the Latent Structure of the Delis-Kaplan Executive Function System.

    PubMed

    Karr, Justin E; Hofer, Scott M; Iverson, Grant L; Garcia-Barrera, Mauricio A

    2018-05-04

    The current study aimed to determine whether the Delis-Kaplan Executive Function System (D-KEFS) taps into three executive function factors (inhibition, shifting, fluency) and to assess the relationship between these factors and tests of executive-related constructs less often measured in latent variable research: reasoning, abstraction, and problem solving. Participants included 425 adults from the D-KEFS standardization sample (20-49 years old; 50.1% female; 70.1% White). Eight alternative measurement models were compared based on model fit, with test scores assigned a priori to three factors: inhibition (Color-Word Interference, Tower), shifting (Trail Making, Sorting, Design Fluency), and fluency (Verbal/Design Fluency). The Twenty Questions, Word Context, and Proverb Tests were predicted in separate structural models. The three-factor model fit the data well (CFI = 0.938; RMSEA = 0.047), although a two-factor model, with shifting and fluency merged, fit similarly well (CFI = 0.929; RMSEA = 0.048). A bifactor model fit best (CFI = 0.977; RMSEA = 0.032) and explained the most variance in shifting indicators, but rarely converged among 5,000 bootstrapped samples. When the three first-order factors simultaneously predicted the criterion variables, only shifting was uniquely predictive (p < .05; R2 = 0.246-0.408). The bifactor significantly predicted all three criterion variables (p < .001; R2 = 0.141-0.242). Results supported a three-factor D-KEFS model (i.e., inhibition, shifting, and fluency), although shifting and fluency were highly related (r = 0.696). The bifactor showed superior fit, but converged less often than other models. Shifting best predicted tests of reasoning, abstraction, and problem solving. These findings support the validity of D-KEFS scores for measuring executive-related constructs and provide a framework through which clinicians can interpret D-KEFS results.

  9. Statistical Compression of Wind Speed Data

    NASA Astrophysics Data System (ADS)

    Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.

    2017-12-01

    In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of the seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into 9 regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model that relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.

  10. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model, for which the effective sample size is much smaller. Also, when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other, level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  11. Rasch analysis of the UK Functional Assessment Measure in patients with complex disability after stroke.

    PubMed

    Medvedev, Oleg N; Turner-Stokes, Lynne; Ashford, Stephen; Siegert, Richard J

    2018-02-28

    To determine whether the UK Functional Assessment Measure (UK FIM+FAM) fits the Rasch model in stroke patients with complex disability and, if so, to derive a conversion table of Rasch-transformed interval level scores. The sample included a UK multicentre cohort of 1,318 patients admitted for specialist rehabilitation following a stroke. Rasch analysis was conducted for the 30-item scale including 3 domains of items measuring physical, communication and psychosocial functions. The fit of items to the Rasch model was examined using 3 different analytical approaches referred to as "pathways". The best fit was achieved in the pathway where responses from motor, communication and psychosocial domains were summarized into 3 super-items and where some items were split because of differential item functioning (DIF) relative to left and right hemisphere location (χ2 (10) = 14.48, p = 0.15). Re-scoring of items showing disordered thresholds did not significantly improve the overall model fit. The UK FIM+FAM with domain super-items satisfies expectations of the unidimensional Rasch model without the need for re-scoring. A conversion table was produced to convert the total scale scores into interval-level data based on person estimates of the Rasch model. The clinical benefits of interval-transformed scores require further evaluation.

  12. Estimation of retinal vessel caliber using model fitting and random forests

    NASA Astrophysics Data System (ADS)

    Araújo, Teresa; Mendonça, Ana Maria; Campilho, Aurélio

    2017-03-01

    Retinal vessel caliber changes are associated with several major diseases, such as diabetes and hypertension. These caliber changes can be evaluated using eye fundus images. However, the clinical assessment is tiresome and prone to errors, motivating the development of automatic methods. An automatic method based on vessel cross-section intensity profile model fitting for the estimation of vessel caliber in retinal images is herein proposed. First, vessels are segmented from the image, vessel centerlines are detected and individual segments are extracted and smoothed. Intensity profiles are extracted perpendicularly to the vessel, and the profile lengths are determined. Then, model fitting is applied to the smoothed profiles. A novel parametric model (DoG-L7) is used, consisting of a Difference-of-Gaussians multiplied by a line, which is able to describe profile asymmetry. Finally, the parameters of the best-fit model are used for determining the vessel width through regression using ensembles of bagged regression trees with random sampling of the predictors (random forests). The method is evaluated on the REVIEW public dataset. A precision close to that of the human observers is achieved, outperforming other state-of-the-art methods. The method is robust and reliable for width estimation in images with pathologies and artifacts, with performance independent of the range of diameters.
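The profile-fitting step can be sketched with a simplified, six-parameter analogue of the paper's seven-parameter DoG-L7 model (the line's intercept is fixed to 1 here to avoid an overall scale degeneracy), fitted to a synthetic noiseless cross-section profile. Everything below is illustrative, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def dog_line(x, a1, s1, a2, s2, m, mu):
    # Difference-of-Gaussians multiplied by a line: the line factor
    # (1 + m*x) lets the profile be asymmetric about the center mu.
    dog = (a1 * np.exp(-(x - mu) ** 2 / (2 * s1 ** 2))
           - a2 * np.exp(-(x - mu) ** 2 / (2 * s2 ** 2)))
    return dog * (1 + m * x)

# Synthetic cross-section profile with known parameters
x = np.linspace(-15, 15, 61)
y = dog_line(x, 1.0, 3.0, 0.5, 6.0, 0.01, 0.0)

# Fit from a nearby starting point, as in per-profile model fitting
popt, _ = curve_fit(dog_line, x, y, p0=[0.9, 2.5, 0.4, 5.0, 0.0, 0.5])
```

In the paper's pipeline the fitted parameters of each profile would then be fed to a random-forest regressor to produce the final width estimate, rather than being read off directly.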

  13. The effect of noise-induced variance on parameter recovery from reaction times.

    PubMed

    Vadillo, Miguel A; Garaizar, Pablo

    2016-03-31

    Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fit to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
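Ex-Gaussian parameter recovery of the kind simulated here is easy to reproduce at small scale with scipy's exponnorm distribution (the ex-Gaussian, parameterised by shape K = tau/sigma): sample RT-like data, inject bounded uniform "technical" noise, and refit by maximum likelihood. The parameter values are typical RT magnitudes in milliseconds, not the paper's settings.

```python
import numpy as np
from scipy.stats import exponnorm

# Hypothetical but RT-like ex-Gaussian parameters, in milliseconds
mu, sigma, tau = 400.0, 40.0, 100.0
rts = exponnorm.rvs(K=tau / sigma, loc=mu, scale=sigma,
                    size=5000, random_state=42)

# Inject bounded uniform "technical" noise of up to +/- 10 ms
noisy = rts + np.random.default_rng(42).uniform(-10, 10, rts.size)

# Refit by maximum likelihood and recover the parameters
K_hat, mu_hat, sigma_hat = exponnorm.fit(noisy)
tau_hat = K_hat * sigma_hat
```

Because the injected uniform noise adds only about 33 ms² of variance against sigma² = 1600 ms², the recovered mu, sigma and tau stay close to their generating values, consistent with the paper's conclusion that typical technical noise barely disturbs model-fitting analyses.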

  14. A corkscrew model for dynamin constriction.

    PubMed

    Mears, Jason A; Ray, Pampa; Hinshaw, Jenny E

    2007-10-01

    Numerous vesiculation processes throughout the eukaryotic cell are dependent on the protein dynamin, a large GTPase that constricts lipid bilayers. We have combined X-ray crystallography and cryo-electron microscopy (cryo-EM) data to generate a coherent model of dynamin-mediated membrane constriction. GTPase and pleckstrin homology domains of dynamin were fit to cryo-EM structures of human dynamin helices bound to lipid in nonconstricted and constricted states. Proteolysis and immunogold labeling experiments confirm the topology of dynamin domains predicted from the helical arrays. Based on the fitting, an observed twisting motion of the GTPase, middle, and GTPase effector domains coincides with conformational changes determined by cryo-EM. We propose a corkscrew model for dynamin constriction based on these motions and predict regions of sequence important for dynamin function as potential targets for future mutagenic and structural studies.

  15. VizieR Online Data Catalog: Galaxy stellar mass assembly (Cousin+, 2015)

    NASA Astrophysics Data System (ADS)

    Cousin, M.; Lagache, G.; Bethermin, M.; Blaizot, J.; Guiderdoni, B.

    2014-11-01

    There are five FITS files corresponding to the different models: - m0 : model without any regulation process - m1 : reference model (Okamoto et al., 2008MNRAS.390..920O, photo-ionization prescription) - m2 : the Okamoto et al. (2008MNRAS.390..920O) photo-ionization prescription is replaced by the Gnedin (2000ApJ...542..535G) prescription - m3 : SN ejecta processes are based on the Somerville et al. (2008MNRAS.391..481S) model - m4 : model with the ad hoc no-star-forming-gas modification For each model: - galaxy properties are listed in eGalICS_m*.readme - data are saved in eGalICS_m*.fits All data FITS files are compatible with the TOPCAT software available at: http://www.star.bris.ac.uk/~mbt/topcat/ If you use data associated with the eGalICS semi-analytic model, please cite the following papers: * Cousin et al.: "Galaxy stellar mass assembly: the difficulty to match observations and semi-analytical predictions" (2015A&A...575A..32C) * Cousin et al.: "Toward a new modelling of gas flows in a semi-analytical model of galaxy formation and evolution" (2015A&A...575A..33C) (11 data files).

  16. Latent Factor Structure of DSM-5 Posttraumatic Stress Disorder

    PubMed Central

    Gentes, Emily; Dennis, Paul A.; Kimbrel, Nathan A.; Kirby, Angela C.; Hair, Lauren P.; Beckham, Jean C.; Calhoun, Patrick S.

    2015-01-01

    The current study examined the latent factor structure of posttraumatic stress disorder (PTSD) based on DSM-5 criteria in a sample of participants (N = 374) recruited for studies on trauma and health. Confirmatory factor analyses (CFA) were used to compare the fit of the previous 3-factor DSM-IV model of PTSD to the 4-factor model specified in DSM-5 as well as to a competing 4-factor “dysphoria” model (Simms, Watson, & Doebbeling, 2002) and a 5-factor (Elhai et al., 2011) model of PTSD. Results indicated that the Elhai 5-factor model (re-experiencing, active avoidance, emotional numbing, dysphoric arousal, anxious arousal) provided the best fit to the data, although substantial support was demonstrated for the DSM-5 4-factor model. Low factor loadings were noted for two of the symptoms in the DSM-5 model (psychogenic amnesia and reckless/self-destructive behavior), which raises questions regarding the adequacy of fit of these symptoms with other core features of the disorder. Overall, the findings from the present research suggest the DSM-5 model of PTSD is a significant improvement over the previous DSM-IV model of PTSD. PMID:26366290

  17. Model-based estimates of long-term persistence of induced HPV antibodies: a flexible subject-specific approach.

    PubMed

    Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián

    2013-01-01

    In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points where the individual titers are below the threshold value for protection. This article focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this effect, derived in a data-driven fashion. Initially, model selection was done from among the second- and first-order fractional polynomials on the one hand and from the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that it fits best over the long run, and hence, caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
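
The model comparison described above (a first-order fractional polynomial against its power-law special case, ranked by Akaike's information criterion) can be sketched as follows. The titer decay curve, power set, and noise level here are illustrative assumptions, not the trial's data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic antibody-titer decay (illustrative only): the true curve is a
# first-order fractional polynomial in time.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 60.0, 40)                    # months since vaccination
y = 5.0 + 2.0 * t**(-0.5) + rng.normal(0.0, 0.05, t.size)
n = t.size

def gaussian_aic(rss, n, k):
    """AIC for least squares: n*log(rss/n) + 2*(k+1), k mean parameters."""
    return n * np.log(rss / n) + 2 * (k + 1)

# First-order fractional polynomial y = b0 + b1*t^p, with p searched over
# the conventional power set (p = 0 read as log t, Royston-Altman style)
best_p, best_aic = None, np.inf
for p in (-2, -1, -0.5, 0, 0.5, 1, 2, 3):
    tp = np.log(t) if p == 0 else t**p
    X = np.column_stack([np.ones(n), tp])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    a = gaussian_aic(rss, n, 2)
    if a < best_aic:
        best_p, best_aic = p, a

# Power-law model y = a*t^(-b), fitted by nonlinear least squares
def power_law(t, a, b):
    return a * t**(-b)

popt, _ = curve_fit(power_law, t, y, p0=[5.0, 0.1])
rss_pl = float(np.sum((y - power_law(t, *popt)) ** 2))
aic_pl = gaussian_aic(rss_pl, n, 2)
```

With data generated from a fractional polynomial, the fractional-polynomial fit attains the lower (better) AIC, mirroring the abstract's finding over the observation period.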

  18. Evolutionary search for new high-k dielectric materials: methodology and applications to hafnia-based oxides.

    PubMed

    Zeng, Qingfeng; Oganov, Artem R; Lyakhov, Andriy O; Xie, Congwei; Zhang, Xiaodong; Zhang, Jin; Zhu, Qiang; Wei, Bingqing; Grigorenko, Ilya; Zhang, Litong; Cheng, Laifei

    2014-02-01

    High-k dielectric materials are important as gate oxides in microelectronics and as potential dielectrics for capacitors. In order to enable computational discovery of novel high-k dielectric materials, we propose a fitness model (energy storage density) that includes the dielectric constant, bandgap, and intrinsic breakdown field. This model, used as a fitness function in conjunction with first-principles calculations and the global optimization evolutionary algorithm USPEX, efficiently leads to practically important results. We found a number of high-fitness structures of SiO2 and HfO2, some of which correspond to known phases and some of which are new. The results allow us to propose characteristics (genes) common to high-fitness structures--these are the coordination polyhedra and their degree of distortion. Our variable-composition searches in the HfO2-SiO2 system uncovered several high-fitness states. This hybrid algorithm opens up a new avenue for discovering novel high-k dielectrics with both fixed and variable compositions, and will speed up the process of materials discovery.

  19. AssignFit: a program for simultaneous assignment and structure refinement from solid-state NMR spectra

    PubMed Central

    Tian, Ye; Schwieters, Charles D.; Opella, Stanley J.; Marassi, Francesca M.

    2011-01-01

    AssignFit is a computer program developed within the XPLOR-NIH package for the assignment of dipolar coupling (DC) and chemical shift anisotropy (CSA) restraints derived from the solid-state NMR spectra of protein samples with uniaxial order. The method is based on minimizing the difference between experimentally observed solid-state NMR spectra and the frequencies back calculated from a structural model. Starting with a structural model and a set of DC and CSA restraints grouped only by amino acid type, as would be obtained by selective isotopic labeling, AssignFit generates all of the possible assignment permutations and calculates the corresponding atomic coordinates oriented in the alignment frame, together with the associated set of NMR frequencies, which are then compared with the experimental data for best fit. Incorporation of AssignFit in a simulated annealing refinement cycle provides an approach for simultaneous assignment and structure refinement (SASR) of proteins from solid-state NMR orientation restraints. The methods are demonstrated with data from two integral membrane proteins, one α-helical and one β-barrel, embedded in phospholipid bilayer membranes. PMID:22036904

  20. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits from including spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient comparison of different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity based on different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates the global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models against diverse criteria pertaining to goodness-of-fit, cross-validation, and evaluation on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was more complex because it borrows information from neighboring years; this addition of parameters yielded a significant advantage in posterior deviance and thus benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture with traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance. 
For the cross-validation comparison of predictive accuracy, the linear time trend model was judged best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data used for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimates, normal predictions, and model replicates. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. These results indicate the mediocre performance of the linear trend when random effects are excluded from evaluation; this may be due to the flexible mixture space-time interaction, which can efficiently absorb residual variability that escapes the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as they generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at model fit transfer to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random-effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
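
The LPML criterion used above for cross-validation can be computed from per-observation conditional predictive ordinates (CPO). A minimal sketch, with stand-in uniform draws in place of a real MCMC run:

```python
import numpy as np

def lpml(per_obs_likelihoods):
    """
    Log pseudo marginal likelihood from an (M draws x n obs) matrix of
    per-observation likelihood values L_i^(m) evaluated at posterior draws.
    CPO_i is the harmonic mean of L_i^(m) over draws; LPML = sum_i log CPO_i.
    """
    L = np.asarray(per_obs_likelihoods, dtype=float)
    cpo = 1.0 / np.mean(1.0 / L, axis=0)
    return float(np.sum(np.log(cpo)))

# Two hypothetical "models": one assigns uniformly higher per-observation
# likelihoods, so it should earn the higher (better) LPML
rng = np.random.default_rng(1)
lpml_good = lpml(rng.uniform(0.4, 0.6, size=(500, 50)))
lpml_poor = lpml(rng.uniform(0.1, 0.3, size=(500, 50)))
```

Higher LPML indicates better out-of-sample predictive fit, which is the sense in which the linear time trend model was judged best above.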

  1. [The compatibility of housing needs and housing conditions and its impact on experiencing attachment to a district].

    PubMed

    Hieber, A; Oswald, F; Wahl, H-W; Mollenkopf, H

    2005-08-01

    Based on the "complementary-congruence model" of person-environment (p-e) fit, this study focuses on housing in old age as an interaction between housing needs and housing conditions in urban settings. The research aims are: (1) To establish a set of housing-related p-e fit indices based on the relationship between environmental needs and existing conditions in different physical and social domains, and to describe housing among elders aged 51-80 years and in different urban districts with these indices. The study distinguishes between basic, higher-order and social needs relating to housing; (2) To explain outdoor place attachment as an indicator for quality of life in different urban districts with a set of predictors including these person-environment fit indices. Data were drawn from telephone-based interviews with 365 older adults (51-80 years) who were questioned about individual housing needs and housing conditions. Results revealed higher p-e fit scores in the domains of higher-order and social housing needs and conditions in the districts which were considered to be more pleasant but had poor access to the city and to public transportation. In contrast, age was more important in explaining differences in the domain of basic housing needs and conditions with higher p-e fit scores among older participants in all settings. In explaining outdoor place attachment, the fit between basic and social housing needs and conditions was important, but the higher-order fit did not play a role.

  2. Bak-Sneppen model: Local equilibrium and critical value.

    PubMed

    Fraiman, Daniel

    2018-04-01

    The Bak-Sneppen (BS) model is a very simple model that exhibits all the richness of self-organized criticality theory. At the thermodynamic limit, the BS model converges to a situation where all particles have a fitness that is uniformly distributed between a critical value p_{c} and 1. The p_{c} value is unknown, as are the variables that influence and determine this value. Here we study the BS model in the case in which the lowest fitness particle interacts with an arbitrary even number of m nearest neighbors. We show that p_{c} verifies a simple local equilibrium relation. Based on this relation, we can determine bounds for p_{c} of the BS model and exact results for some BS-like models. Finally, we show how transformations of the original BS model can be done without altering the model's complex dynamics.
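
A minimal simulation of the dynamics described above (the nearest-neighbour case, m = 2) shows how a low quantile of the stationary fitness values approximates p_{c}; lattice size, step counts, and the quantile level are illustrative choices, not the paper's setup:

```python
import numpy as np

def bak_sneppen(n_sites=100, burn_in=100_000, n_snapshots=500, thin=100,
                m=2, seed=0):
    """
    Bak-Sneppen model on a ring: at each step the lowest-fitness site and
    its m nearest neighbours (m/2 on each side) receive fresh uniform(0,1)
    fitness values. Returns fitness pooled over post-burn-in snapshots.
    """
    rng = np.random.default_rng(seed)
    f = rng.random(n_sites)
    offsets = np.arange(-(m // 2), m // 2 + 1)
    def step():
        i = int(np.argmin(f))
        f[(i + offsets) % n_sites] = rng.random(offsets.size)
    for _ in range(burn_in):
        step()
    pooled = []
    for _ in range(n_snapshots):
        for _ in range(thin):
            step()
        pooled.append(f.copy())
    return np.concatenate(pooled)

fitness = bak_sneppen()
# In the stationary state almost all fitness lies above p_c (~0.667 for
# m = 2), so a low quantile of the pooled values roughly locates p_c.
p_c_estimate = float(np.quantile(fitness, 0.05))
```

The pooled histogram is close to uniform on (p_c, 1) with a small finite-size tail below p_c, consistent with the thermodynamic-limit picture in the abstract.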

  3. Bak-Sneppen model: Local equilibrium and critical value

    NASA Astrophysics Data System (ADS)

    Fraiman, Daniel

    2018-04-01

    The Bak-Sneppen (BS) model is a very simple model that exhibits all the richness of self-organized criticality theory. At the thermodynamic limit, the BS model converges to a situation where all particles have a fitness that is uniformly distributed between a critical value pc and 1. The pc value is unknown, as are the variables that influence and determine this value. Here we study the BS model in the case in which the lowest fitness particle interacts with an arbitrary even number of m nearest neighbors. We show that pc verifies a simple local equilibrium relation. Based on this relation, we can determine bounds for pc of the BS model and exact results for some BS-like models. Finally, we show how transformations of the original BS model can be done without altering the model's complex dynamics.

  4. The dynamics of life stressors and depressive symptoms in early adolescence: a test of six theoretical models.

    PubMed

    Clements, Margaret; Aber, J Lawrence; Seidman, Edward

    2008-01-01

    Structural equation modeling was used to compare 6 competing theoretically based psychosocial models of the longitudinal association between life stressors and depressive symptoms in a sample of early adolescents (N= 907; 40% Hispanic, 32% Black, and 19% White; mean age at Time 1 = 11.4 years). Only two models fit the data, both of which included paths modeling the effect of depressive symptoms on stressors recall: The mood-congruent cognitive bias model included only depressive symptoms to life stressors paths (DS-->S), whereas the fully transactional model included paths representing both the DS-->S and stressors to depressive symptoms (S-->DS) effects. Social causation models and the stress generation model did not fit the data. Findings demonstrate the importance of accounting for mood-congruent cognitive bias in stressors-depressive symptoms investigations.

  5. A New Search Paradigm for Correlated Neutrino Emission from Discrete GRBs using Antarctic Cherenkov Telescopes in the Swift Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamatikos, Michael; Band, David L.; JCA/UMBC, Baltimore, MD 21250

    2006-05-19

    We describe the theoretical modeling and analysis techniques associated with a preliminary search for correlated neutrino emission from GRB980703a, which triggered the Burst and Transient Source Experiment (BATSE GRB trigger 6891), using archived data from the Antarctic Muon and Neutrino Detector Array (AMANDA-B10). Under the assumption of associated hadronic acceleration, the expected observed neutrino energy flux is directly derived, based upon confronting the fireball phenomenology with the discrete set of observed electromagnetic parameters of GRB980703a, gleaned from ground-based and satellite observations, for four models, corrected for oscillations. Models 1 and 2, based upon spectral analysis featuring a prompt photon energy fit to the Band function, utilize an observed spectroscopic redshift, for isotropic and anisotropic emission geometry, respectively. Model 3 is based upon averaged burst parameters, assuming isotropic emission. Model 4, based upon a Band fit, features an estimated redshift from the lag-luminosity relation, with isotropic emission. Consistent with our AMANDA-II analysis of GRB030329, which resulted in a flux upper limit of ~0.150 GeV/cm2/s for model 1, we find differences in excess of an order of magnitude in the response of AMANDA-B10 among the various models for GRB980703a. Implications for future searches in the era of Swift and IceCube are discussed.

  6. An Activity-Based Non-Linear Regression Model of Sopite Syndrome and its Effects on Crew Performance in High-Speed Vessel Operations

    DTIC Science & Technology

    2009-03-01

    Figure 26. l1fit of Mirror Tracer Model.

  7. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
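
The idea above of pooling a series of independent contingency-table chi-square components can be sketched as follows; the 2x2 tables are invented stand-ins for tables formed from capture histories (e.g. recaptured-later vs. not, split by prior history):

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Hypothetical 2x2 tables -- invented counts, purely to show the pooling
tables = [
    np.array([[30, 12], [25, 14]]),
    np.array([[18,  9], [22, 11]]),
    np.array([[40, 20], [35, 30]]),
]

# Each table contributes an independent chi-square component; under the
# null hypothesis the components sum to a chi-square with the summed df.
stat = sum(float(chi2_contingency(t, correction=False)[0]) for t in tables)
dof = sum((t.shape[0] - 1) * (t.shape[1] - 1) for t in tables)
p_value = float(chi2.sf(stat, dof))
```

A small p-value would flag departures from the Jolly-Seber assumptions; the hypergeometric conditioning in the paper reduces each component to exactly this contingency-table form.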

  8. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    PubMed

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
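
The linearized approximation for parameter uncertainties described above can be sketched as cov(theta) ~ s^2 (J^T W J)^-1. The two-parameter linear model and noise level below are illustrative, not the paper's respiratory models:

```python
import numpy as np

def linearized_uncertainties(J, residuals, weights):
    """
    One-standard-error parameter uncertainties from the first-order
    (linearized) approximation cov(theta) ~ s^2 * (J^T W J)^-1 used to
    predict joint confidence regions in weighted least squares.
    """
    J = np.asarray(J, float)
    r = np.asarray(residuals, float)
    w = np.asarray(weights, float)
    n, k = J.shape
    s2 = float(r @ (w * r)) / (n - k)          # weighted residual variance
    cov = s2 * np.linalg.inv(J.T @ (w[:, None] * J))
    return np.sqrt(np.diag(cov))

# Illustrative two-parameter model on a 0.125-4 Hz frequency grid
rng = np.random.default_rng(2)
f_hz = np.linspace(0.125, 4.0, 50)
y = 2.0 + 0.5 * f_hz + rng.normal(0.0, 0.1, f_hz.size)
J = np.column_stack([np.ones_like(f_hz), f_hz])   # Jacobian of a linear model
beta, *_ = np.linalg.lstsq(J, y, rcond=None)
se = linearized_uncertainties(J, y - J @ beta, np.ones_like(f_hz))
```

For a nonlinear impedance model, J would hold the partial derivatives of the model with respect to each parameter evaluated at the fit, and the same formula applies; shrinking the frequency range shrinks the information in J^T W J and inflates these uncertainties, which is the trade-off the abstract quantifies.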

  9. Experimental study of water desorption isotherms and thin-layer convective drying kinetics of bay laurel leaves

    NASA Astrophysics Data System (ADS)

    Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed

    2016-12-01

    The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were measured at three temperature levels (50, 60 and 70 °C) and at water activities ranging from 0.057 to 0.88, using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. The Kuhn model offered the best fit to the experimental moisture isotherms over the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidities of 5, 15, 30 and 45 % and air velocities of 1, 1.5 and 2 m/s. A non-linear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; R2 and χ2 were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. The Midilli-Kucuk model was the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
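
The Midilli-Kucuk model named above can be fitted with a Levenberg-Marquardt procedure, e.g. via scipy.optimize.curve_fit; the moisture-ratio data below are synthetic stand-ins for the measured drying curves:

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli_kucuk(t, a, k, n, b):
    """Midilli-Kucuk thin-layer drying model: MR = a*exp(-k*t^n) + b*t."""
    return a * np.exp(-k * t**n) + b * t

# Synthetic moisture-ratio data (illustrative, not the paper's measurements)
t = np.linspace(1.0, 300.0, 30)                 # drying time, min
rng = np.random.default_rng(3)
mr = midilli_kucuk(t, 1.0, 0.02, 1.1, -1e-4) + rng.normal(0.0, 0.01, t.size)

# Levenberg-Marquardt fit, as in the abstract
popt, _ = curve_fit(midilli_kucuk, t, mr, p0=[1.0, 0.02, 1.0, 0.0],
                    method="lm")
fitted = midilli_kucuk(t, *popt)
r2 = 1.0 - np.sum((mr - fitted) ** 2) / np.sum((mr - mr.mean()) ** 2)
```

Ranking candidate models by R2 (and reduced chi-square) across all temperature/humidity/velocity runs is how the Midilli-Kucuk form was selected in the study.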

  10. Fit Assessment of N95 Filtering-Facepiece Respirators in the U.S. Centers for Disease Control and Prevention Strategic National Stockpile.

    PubMed

    Bergman, Michael; Zhuang, Ziqing; Brochu, Elizabeth; Palmiero, Andrew

    National Institute for Occupational Safety and Health (NIOSH)-approved N95 filtering-facepiece respirators (FFR) are currently stockpiled by the U.S. Centers for Disease Control and Prevention (CDC) for emergency deployment to healthcare facilities in the event of a widespread emergency such as an influenza pandemic. This study assessed the fit of N95 FFRs purchased for the CDC Strategic National Stockpile. The study addresses the question of whether the fit achieved by specific respirator sizes relates to facial size categories as defined by two NIOSH fit test panels. Fit test data were analyzed from 229 test subjects who performed a nine-donning fit test on seven N95 FFR models using a quantitative fit test protocol. An initial respirator model selection process was used to determine if the subject could achieve an adequate fit on a particular model; subjects then tested the adequately fitting model for the nine-donning fit test. Only data for models which provided an adequate initial fit (through the model selection process) for a subject were analyzed for this study. For the nine-donning fit test, six of the seven respirator models accommodated the fit of subjects (as indicated by geometric mean fit factor > 100) for not only the intended NIOSH bivariate and PCA panel sizes corresponding to the respirator size, but also for other panel sizes which were tested for each model. The model which showed poor performance may not be accurately represented because only two subjects passed the initial selection criteria to use this model. Findings are supportive of the current selection of facial dimensions for the new NIOSH panels. The various FFR models selected for the CDC Strategic National Stockpile provide a range of sizing options to fit a variety of facial sizes.
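
The adequate-fit criterion above (geometric mean fit factor > 100) is computed from the individual donning fit factors; the nine values below are hypothetical:

```python
import numpy as np

def geometric_mean_fit_factor(fit_factors):
    """Geometric mean of individual fit factors: exp(mean(log ff))."""
    ff = np.asarray(fit_factors, dtype=float)
    return float(np.exp(np.log(ff).mean()))

# Hypothetical fit factors from nine donnings of one subject/model pair
donnings = [250, 180, 90, 310, 200, 150, 120, 400, 220]
gm = geometric_mean_fit_factor(donnings)
adequate = gm > 100          # the study's criterion for accommodating fit
```

The geometric mean is the conventional summary here because fit factors are ratios and roughly log-normally distributed, so a single poor donning (90 above) lowers but does not dominate the summary.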

  11. Fit Assessment of N95 Filtering-Facepiece Respirators in the U.S. Centers for Disease Control and Prevention Strategic National Stockpile

    PubMed Central

    Bergman, Michael; Zhuang, Ziqing; Brochu, Elizabeth; Palmiero, Andrew

    2016-01-01

    National Institute for Occupational Safety and Health (NIOSH)-approved N95 filtering-facepiece respirators (FFR) are currently stockpiled by the U.S. Centers for Disease Control and Prevention (CDC) for emergency deployment to healthcare facilities in the event of a widespread emergency such as an influenza pandemic. This study assessed the fit of N95 FFRs purchased for the CDC Strategic National Stockpile. The study addresses the question of whether the fit achieved by specific respirator sizes relates to facial size categories as defined by two NIOSH fit test panels. Fit test data were analyzed from 229 test subjects who performed a nine-donning fit test on seven N95 FFR models using a quantitative fit test protocol. An initial respirator model selection process was used to determine if the subject could achieve an adequate fit on a particular model; subjects then tested the adequately fitting model for the nine-donning fit test. Only data for models which provided an adequate initial fit (through the model selection process) for a subject were analyzed for this study. For the nine-donning fit test, six of the seven respirator models accommodated the fit of subjects (as indicated by geometric mean fit factor > 100) for not only the intended NIOSH bivariate and PCA panel sizes corresponding to the respirator size, but also for other panel sizes which were tested for each model. The model which showed poor performance may not be accurately represented because only two subjects passed the initial selection criteria to use this model. Findings are supportive of the current selection of facial dimensions for the new NIOSH panels. The various FFR models selected for the CDC Strategic National Stockpile provide a range of sizing options to fit a variety of facial sizes. PMID:26877587

  12. The role of area-level deprivation and gender in participation in population-based faecal immunochemical test (FIT) colorectal cancer screening.

    PubMed

    Clarke, Nicholas; McNamara, Deirdre; Kearney, Patricia M; O'Morain, Colm A; Shearer, Nikki; Sharp, Linda

    2016-12-01

    This study aimed to investigate the effects of sex and deprivation on participation in a population-based faecal immunochemical test (FIT) colorectal cancer screening programme. The study population included 9785 individuals invited to participate in two rounds of a population-based biennial FIT-based screening programme, in a relatively deprived area of Dublin, Ireland. Explanatory variables included in the analysis were sex, deprivation category of area of residence and age (at end of screening). The primary outcome variable modelled was participation status in both rounds combined (with "participation" defined as having taken part in either or both rounds of screening). Poisson regression with a log link and robust error variance was used to estimate relative risks (RR) for participation. As a sensitivity analysis, data were stratified by screening round. In both the univariable and multivariable models deprivation was strongly associated with participation. Increasing affluence was associated with higher participation; participation was 26% higher in people resident in the most affluent compared to the most deprived areas (multivariable RR=1.26: 95% CI 1.21-1.30). Participation was significantly lower in males (multivariable RR=0.96: 95%CI 0.95-0.97) and generally increased with increasing age (trend per age group, multivariable RR=1.02: 95%CI, 1.01-1.02). No significant interactions between the explanatory variables were found. The effects of deprivation and sex were similar by screening round. Deprivation and male gender are independently associated with lower uptake of population-based FIT colorectal cancer screening, even in a relatively deprived setting. Development of evidence-based interventions to increase uptake in these disadvantaged groups is urgently required. Copyright © 2016. Published by Elsevier Inc.
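
Poisson regression with a log link and robust (sandwich) error variance, as used above to estimate relative risks of participation, can be sketched with a hand-rolled IRLS fit; the simulated data mirror the reported effect size (RR ~ 1.26) but are otherwise invented:

```python
import numpy as np

def poisson_log_link(X, y, n_iter=25):
    """Poisson regression with log link, fitted by IRLS (minimal sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu            # working response
        XtW = X.T * mu                          # W = diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

def robust_se(X, y, beta):
    """Sandwich (robust) standard errors, needed when y is binary."""
    mu = np.exp(X @ beta)
    bread = np.linalg.inv((X.T * mu) @ X)
    meat = (X.T * (y - mu) ** 2) @ X
    return np.sqrt(np.diag(bread @ meat @ bread))

# Hypothetical screening data: binary participation, binary affluence
rng = np.random.default_rng(5)
n = 5000
affluent = rng.integers(0, 2, n)
took_part = rng.binomial(1, 0.40 * 1.26**affluent).astype(float)

X = np.column_stack([np.ones(n), affluent])
beta = poisson_log_link(X, took_part)
rr = float(np.exp(beta[1]))                      # relative risk of participation
se = robust_se(X, took_part, beta)
```

Fitting a Poisson model to a binary outcome yields relative risks directly (unlike logistic regression's odds ratios), but its variance is misspecified, which is why the robust error variance is required.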

  13. Lebedev acceleration and comparison of different photometric models in the inversion of lightcurves for asteroids

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao

    2018-04-01

    In the lightcurve inversion process where asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched, the numerical calculations of the synthetic photometric brightness based on different shape models are frequently implemented. Lebedev quadrature is an efficient method to numerically calculate the surface integral on the unit sphere. By transforming the surface integral on the Cellinoid shape model to that on the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab codes of the lightcurve inversion process based on the Cellinoid shape model are available on Github for free downloading. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from the radiative transfer theory, the Hapke model can describe the light reflectance behaviors from the viewpoint of physics, while there are also many empirical models in numerical applications. Numerical simulations are implemented for the comparison of the Hapke model with the other three numerical models, including the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions can fit well with the synthetic lightcurves generated based on the Hapke model; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve the numerical efficiency and derive similar results to those of the Hapke model.
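
A crude numerical version of the disk-integrated brightness computation discussed above can be sketched with a plain latitude-longitude midpoint rule on a unit sphere (a simplified stand-in for the Lebedev scheme and Cellinoid shape of the paper), comparing the Lommel-Seeliger and Lambert scattering laws:

```python
import numpy as np

def disk_brightness(law, sun_dir, obs_dir, n=200):
    """
    Integrate facet brightness over a unit sphere for a scattering law
    law(mu0, mu), where mu0, mu are cosines of incidence and emission.
    Only facets both illuminated and visible contribute; the extra factor
    mu is the projected (foreshortened) facet area seen by the observer.
    """
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(2 * n) + 0.5) * np.pi / n
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    normals = np.stack([np.sin(th) * np.cos(ph),
                        np.sin(th) * np.sin(ph),
                        np.cos(th)], axis=-1)
    dA = np.sin(th) * (np.pi / n) ** 2           # midpoint-rule area element
    mu0, mu = normals @ sun_dir, normals @ obs_dir
    vis = (mu0 > 0) & (mu > 0)
    return float(np.sum(law(mu0, mu) * mu * dA * vis))

# Standard scattering laws (reflectance as a function of mu0, mu)
lommel_seeliger = lambda mu0, mu: mu0 / (np.abs(mu0 + mu) + 1e-9)
lambert = lambda mu0, mu: mu0

e = np.array([0.0, 0.0, 1.0])                   # zero phase angle geometry
b_lambert = disk_brightness(lambert, e, e)      # analytic value: 2*pi/3
b_ls = disk_brightness(lommel_seeliger, e, e)   # analytic value: pi/2
```

At zero phase the two laws have closed-form disk-integrated values, which makes a convenient correctness check for the quadrature before moving to arbitrary shapes and geometries.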

  14. Comparison of field theory models of interest rates with market data

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Srikant, Marakani

    2004-03-01

    We calibrate and test various variants of field theory models of the interest rate with data from Eurodollar futures. Models based on psychological factors are seen to provide the best fit to the market. We make a model independent determination of the volatility function of the forward rates from market data.

  15. IRT Model Selection Methods for Dichotomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.

    2007-01-01

    Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…

  16. Dynamic soft tissue deformation estimation based on energy analysis

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Yao, Bin

    2016-10-01

    Needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions, so it is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft-tissue surface deformation is investigated on the basis of continuum mechanics: an energy-based method is applied to the dynamic process of needle insertion, and the volume of a cone is used to quantitatively approximate the deformation on the surface of the soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, and an image-based method for measuring the depth and radius of the soft-tissue surface deformations is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is established from the law of conservation of energy, with the volume of tissue deformation obtained from the image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation; the root mean squared errors between the fitted model and the experimental data are 0.61 and 0.25 at velocities of 2.50 mm/s and 5.00 mm/s, respectively. 
The estimated parameters of the soft-tissue surface deformations prove useful for compensating the needle-targeting error in rigid needle insertion procedures, especially for percutaneous needle insertion into organs.

  17. Improving the Validity of Activity of Daily Living Dependency Risk Assessment

    PubMed Central

    Clark, Daniel O.; Stump, Timothy E.; Tu, Wanzhu; Miller, Douglas K.

    2015-01-01

    Objectives Efforts to prevent activity of daily living (ADL) dependency may be improved through models that assess older adults’ dependency risk. We evaluated whether cognition and gait speed measures improve the predictive validity of interview-based models. Method Participants were 8,095 self-respondents in the 2006 Health and Retirement Survey who were aged 65 years or over and independent in five ADLs. Incident ADL dependency was determined from the 2008 interview. Models were developed using random 2/3rd cohorts and validated in the remaining 1/3rd. Results Compared to a c-statistic of 0.79 in the best interview model, the model including cognitive measures had c-statistics of 0.82 and 0.80 while the best fitting gait speed model had c-statistics of 0.83 and 0.79 in the development and validation cohorts, respectively. Conclusion Two relatively brief models, one that requires an in-person assessment and one that does not, had excellent validity for predicting incident ADL dependency but did not significantly improve the predictive validity of the best fitting interview-based models. PMID:24652867
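
The c-statistic used above to compare models is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (equivalently, the ROC AUC). A minimal sketch with hypothetical risks:

```python
import numpy as np

def c_statistic(risk, event):
    """
    Concordance statistic: fraction of case/non-case pairs in which the
    case has the higher predicted risk, counting ties as 1/2.
    """
    risk = np.asarray(risk, dtype=float)
    event = np.asarray(event, dtype=bool)
    cases, controls = risk[event], risk[~event]
    diff = cases[:, None] - controls[None, :]    # all case-vs-control pairs
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size)

# Hypothetical predicted ADL-dependency risks and observed outcomes
risk = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
event = [1, 1, 0, 1, 0, 0, 1, 0]
c = c_statistic(risk, event)
```

A value of 0.5 means the model discriminates no better than chance; the 0.79-0.83 range reported above indicates strong discrimination for all of the compared models.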

  18. RBF kernel based support vector regression to estimate the blood volume and heart rate responses during hemodialysis.

    PubMed

    Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H

    2009-01-01

    This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses of 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of relative blood volume (RBV) change with time, as well as percentage change in HR with respect to RBV, were obtained. The ε-insensitive loss function was used for SVR modeling. The design parameters, comprising the capacity (C), the insensitivity region (ε) and the RBF kernel parameter (σ), were selected by a grid search, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data with a k-fold cross-validation technique. Linear regression was also applied to fit the curves, and the AMSE was calculated for comparison with SVR. For the model of RBV against time, SVR gave a lower AMSE for both training (AMSE=1.5) and testing data (AMSE=1.4) than linear regression (AMSE=1.8 and 1.5). SVR also provided a better fit for HR against RBV for both training and testing data (AMSE=15.8 and 16.4) compared with linear regression (AMSE=25.2 and 20.1).
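A sketch of this design-parameter selection, with scikit-learn's SVR standing in for the authors' implementation; the dialysis-like curve, grid values and noise level are invented stand-ins for the patient data:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

# Synthetic stand-in for a relative blood volume (RBV) curve over a session.
rng = np.random.default_rng(0)
t = np.linspace(0, 240, 120).reshape(-1, 1)              # minutes
rbv = -4 * (1 - np.exp(-t.ravel() / 60)) + rng.normal(0, 0.3, t.shape[0])

# Grid search over C, epsilon and the RBF width, scored by k-fold MSE,
# mirroring the paper's selection of (C, epsilon, sigma).
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "epsilon": [0.05, 0.1, 0.5], "gamma": [0.001, 0.01, 0.1]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_mean_squared_error",
)
grid.fit(t, rbv)
print(grid.best_params_, -grid.best_score_)   # chosen model and its AMSE analogue
```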

  19. Development of the Internet addiction scale based on the Internet Gaming Disorder criteria suggested in DSM-5.

    PubMed

    Cho, Hyun; Kwon, Min; Choi, Ji-Hye; Lee, Sang-Kyu; Choi, Jung Seok; Choi, Sam-Wook; Kim, Dai-Jin

    2014-09-01

    This study was conducted to develop and validate a standardized self-diagnostic Internet addiction (IA) scale based on the diagnostic criteria for Internet Gaming Disorder (IGD) in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Items based on the IGD diagnostic criteria were developed from items of previous Internet addiction scales. Data were collected from a community sample and divided into two sets, on which confirmatory factor analysis (CFA) was performed in two rounds: the model was modified after discussion with professionals based on the first CFA results, after which the second CFA was performed. Internal consistency reliability was generally good. Items showing significantly low item-total correlations within their factor were excluded, and after the first CFA some factors and items were removed, leaving seven factors and 26 items for the final model. The second CFA showed good general factor loadings, squared multiple correlations (SMC) and model fit. The model fit of the final model was good, but some factors were very highly correlated; it is recommended that these factors be refined through further studies. Copyright © 2014. Published by Elsevier Ltd.

  20. Physical Fitness and Aortic Stiffness Explain the Reduced Cognitive Performance Associated with Increasing Age in Older People.

    PubMed

    Kennedy, Greg; Meyer, Denny; Hardman, Roy J; Macpherson, Helen; Scholey, Andrew B; Pipingas, Andrew

    2018-01-01

    Greater physical fitness is associated with reduced rates of cognitive decline in older people; however, the mechanisms by which this occurs are still unclear. One potential mechanism is aortic stiffness, with increased stiffness resulting in higher pulsatile pressures reaching the brain and possibly causing progressive micro-damage. There is limited evidence that those who regularly exercise may have lower aortic stiffness. We investigated whether greater fitness and lower aortic stiffness predict better cognitive performance in older people and, if so, whether aortic stiffness mediates the relationship between fitness and cognition. Residents of independent living facilities, aged 60-90, participated in the study (N = 102). Primary measures included a computerized cognitive assessment battery, pulse wave velocity analysis to measure aortic stiffness, and the Six-Minute Walk test to assess fitness. Based on hierarchical regression analyses, structural equation modelling was used to test the mediation hypothesis. Both fitness and aortic stiffness independently predicted Spatial Working Memory (SWM) performance; however, no mediating relationship was found. Additionally, the derived structural equation model shows that, in conjunction with BMI and sex, fitness and aortic stiffness explain 33% of the overall variation in SWM, with age no longer directly predicting any variation. Greater fitness and lower aortic stiffness both independently predict better SWM in older people, and the strong effect of age on cognitive performance is fully mediated by fitness and aortic stiffness. This suggests that addressing both physical fitness and aortic stiffness may be important for reducing the rate of age-associated cognitive decline.

  1. Simultaneous fits in ISIS on the example of GRO J1008-57

    NASA Astrophysics Data System (ADS)

    Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern

    2015-04-01

    Parallel computing and steadily increasing computation speed have enabled a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly in the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow, in principle, fitting data from multiple datasets individually, the syntax used in these tools is often not well suited to this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
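The core idea of a simultaneous fit, a shared physical parameter constrained by several datasets at once, can be sketched outside ISIS with scipy; the power-law form and all numbers below are illustrative, not the GRO J1008-57 model:

```python
import numpy as np
from scipy.optimize import least_squares

# Two synthetic "spectra" of the same source that share a physical slope
# but differ in normalisation.
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 50)
y1 = 3.0 * x ** -1.7 * rng.normal(1, 0.02, x.size)
y2 = 5.0 * x ** -1.7 * rng.normal(1, 0.02, x.size)

def residuals(p):
    slope, n1, n2 = p   # one shared slope, one normalisation per dataset
    return np.concatenate([y1 - n1 * x ** slope, y2 - n2 * x ** slope])

# Stacking both residual vectors makes the slope a single fit parameter
# constrained by both datasets jointly.
fit = least_squares(residuals, x0=[-1.0, 1.0, 1.0])
print(fit.x)
```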

  2. Is the effect of person-organisation fit on turnover intention mediated by job satisfaction? A survey of community health workers in China

    PubMed Central

    Yan, Fei; Wang, Wei; Li, Guohong

    2017-01-01

    Objectives Person-organisation fit (P-O fit) is a predictor of work attitude. However, in the area of human resources for health, the literature on P-O fit is quite limited, and it is unclear whether P-O fit affects turnover intention directly or indirectly. This study examines the mediating effect of job satisfaction on the relationship between P-O fit and turnover intention, based on data from China. Design and methods This is a cross-sectional survey of community health workers (CHWs) in China in 2013. A questionnaire on P-O fit, job satisfaction and turnover intention was developed, and its validity and reliability were assessed. Multiple regression and structural equation modelling were used to examine the relationships among P-O fit, job satisfaction and turnover intention. Setting and participants Multistage sampling was applied. In total, 656 valid questionnaire responses were collected from CHWs in four provincial regions in China, namely Shanghai, Shaanxi, Shandong and Anhui. Results P-O fit was directly related to job satisfaction (standardised β 0.246) and inversely related to turnover intention (standardised β −0.186). In the mediation model, the total effect of P-O fit on turnover intention was −0.186 (p<0.001); the direct effect was −0.094 (p<0.01); and the indirect effect via job satisfaction was −0.092 (p<0.001). Conclusions The effect of P-O fit on turnover intention was partially mediated by job satisfaction. It is suggested that more work attitude variables and different dimensions of P-O fit be taken into account to examine the complete mechanism of person-organisation interaction. Indirect measures of P-O fit should be encouraged in practice to enhance the work attitudes of health workers. PMID:28399513
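The partial-mediation decomposition reported in this record can be checked directly: in this framework the direct and indirect effects sum to the total effect.

```python
# Effect decomposition from the abstract (standardised coefficients):
direct = -0.094    # P-O fit -> turnover intention, net of job satisfaction
indirect = -0.092  # P-O fit -> job satisfaction -> turnover intention
total = direct + indirect
print(total)       # matches the reported total effect of -0.186
```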

  3. Evolution of the Marine Officer Fitness Report: A Multivariate Analysis

    DTIC Science & Technology

    This thesis explores the evaluation behavior of United States Marine Corps (USMC) Reporting Seniors (RSs) from 2010 to 2017. Using fitness report ... RSs evaluate the performance of subordinate active component unrestricted officer MROs over time. I estimate logistic regression models of the ... lowest. However, these correlations indicating the effects of race matching on FITREP evaluations narrow in significance when performance-based factors ...

  4. An Item Fit Statistic Based on Pseudocounts from the Generalized Graded Unfolding Model: A Preliminary Report.

    ERIC Educational Resources Information Center

    Roberts, James S.

    Stone and colleagues (C. Stone, R. Ankenman, S. Lane, and M. Liu, 1993; C. Stone, R. Mislevy and J. Mazzeo, 1994; C. Stone, 2000) have proposed a fit index that explicitly accounts for the measurement error inherent in an estimated theta value, here denoted χ²_i*. The elements of this statistic are natural…

  5. A Case-Based Exploration of Task/Technology Fit in a Knowledge Management Context

    DTIC Science & Technology

    2008-03-01

    have a difficult time articulating to others. Researchers who subscribe to the constructionist perspective view knowledge as an inherently social ... Acceptance Model With Task-Technology Fit Constructs. Information & Management, 36, 9-21. Dooley, D. (2001). Social Research Methods (4th ed.). Upper ... L. (2006). Social Research Methods: Qualitative and Quantitative Approaches (6 ed.). Boston: Pearson Education, Inc. Nonaka, I. (1994). A Dynamic

  6. Participation in fitness-related activities of an incentive-based health promotion program and hospital costs: a retrospective longitudinal study.

    PubMed

    Patel, Deepak; Lambert, Estelle V; da Silva, Roseanne; Greyling, Mike; Kolbe-Alexander, Tracy; Noach, Adam; Conradie, Jaco; Nossel, Craig; Borresen, Jill; Gaziano, Thomas

    2011-01-01

    A retrospective, longitudinal study examined changes in participation in fitness-related activities and hospital claims over 5 years amongst members of an incentivized health promotion program offered by a private health insurer. A 3-year retrospective observational analysis measured gym visits and participation in documented fitness-related activities, the probability of hospital admission, and the associated costs of admission. The setting was a South African private health plan, Discovery Health, and the Vitality health promotion program. Participants were 304,054 adult members of the Discovery medical plan, 192,467 of whom registered for the health promotion program and 111,587 of whom did not. Members were incentivized for fitness-related activities on the basis of the frequency of gym visits. Measures were changes in electronically documented gym visits and registered participation in fitness-related activities over 3 years, and measures of association between changes in participation (years 1-3) and the subsequent probability and costs of hospital admission (years 4-5). Hospital admissions and associated costs are based on claims extracted from the health insurer database. The probability of a claim was modeled using linear logistic regression, and the costs of claims were examined using general linear models. Propensity scores were estimated, including age, gender, registration for chronic disease benefits, plan type, and the presence of a claim during the transition period, and these were used as covariates in the final model. There was a significant decrease in the prevalence of inactive members (76% to 68%) over 5 years. Members who remained highly active (years 1-3) had a lower probability (p < .05) of hospital admission in years 4 to 5 (20.7%) than those who remained inactive (22.2%). The odds of admission were 13% lower for two additional gym visits per week (odds ratio, .87; 95% confidence interval [CI], .801-.949).
We observed an increase in fitness-related activities over time amongst members of this incentive-based health promotion program, which was associated with a lower probability of hospital admission and lower hospital costs in the subsequent 2 years. Copyright © 2011 by American Journal of Health Promotion, Inc.

  7. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity cannot properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones in which the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted to the experimental data by maximum likelihood, and model validation is performed by goodness-of-fit tests. The method is tested on two databases collected with the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest to the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
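A minimal sketch of the zone-wise approach: maximum-likelihood fitting of a skew-normal PDF to SMBG-like errors, followed by a goodness-of-fit check. The data and parameters below are simulated, not the OTU2/BCN estimates:

```python
import numpy as np
from scipy import stats

# Synthetic relative errors with mild right skew, a stand-in for the
# zone-2 (constant-SD relative error) data in the paper.
rng = np.random.default_rng(2)
errors = stats.skewnorm.rvs(a=3.0, loc=-2.0, scale=5.0, size=2000,
                            random_state=rng)

# Maximum-likelihood fit of the skew-normal PDF, then a goodness-of-fit
# test on the fitted distribution.
a, loc, scale = stats.skewnorm.fit(errors)
ks = stats.kstest(errors, "skewnorm", args=(a, loc, scale))
print(a, loc, scale, ks.pvalue)
```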

  8. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  9. Assessment of corneal properties based on statistical modeling of OCT speckle.

    PubMed

    Jesus, Danilo A; Iskander, D Robert

    2017-01-01

    A new approach to assessing the properties of the corneal micro-structure in vivo, based on statistical modeling of the speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for both short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be used to model corneal speckle in OCT in vivo, providing complementary quantitative information where the micro-structure of corneal tissue is of essence.
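The model-comparison step can be sketched as follows: fit candidate distributions by maximum likelihood and rank them by AIC. The data here are simulated, not OCT speckle, and the parameter count assumes the location is held at zero:

```python
import numpy as np
from scipy import stats

# Simulated speckle-like amplitudes; real OCT data would replace this.
rng = np.random.default_rng(3)
data = stats.gengamma.rvs(a=2.0, c=1.5, scale=1.0, size=3000, random_state=rng)

# Maximum-likelihood fits; AIC = 2k - 2*log-likelihood, lower is better.
params_gg = stats.gengamma.fit(data, floc=0)        # shapes a, c and scale free
ll_gg = np.sum(stats.gengamma.logpdf(data, *params_gg))
params_n = stats.norm.fit(data)
ll_n = np.sum(stats.norm.logpdf(data, *params_n))
aic_gg = 2 * 3 - 2 * ll_gg
aic_n = 2 * 2 - 2 * ll_n
print(aic_gg < aic_n)   # the generalized Gamma should win on AIC here
```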

  10. Modeling the Earth's magnetospheric magnetic field confined within a realistic magnetopause

    NASA Technical Reports Server (NTRS)

    Tsyganenko, N. A.

    1995-01-01

    Empirical data-based models of the magnetospheric magnetic field have been widely used during recent years. However, the existing models (Tsyganenko, 1987, 1989a) have three serious deficiencies: (1) an unstable de facto magnetopause, (2) a crude parametrization by the K(sub p) index, and (3) inaccuracies in the equatorial magnetotail B(sub z) values. This paper describes a new approach to the problem; the essential new features are (1) a realistic shape and size of the magnetopause, based on fits to a large number of observed crossings (allowing a parametrization by the solar wind pressure), (2) fully controlled shielding of the magnetic field produced by all magnetospheric current systems, (3) new flexible representations for the tail and ring currents, and (4) a new directional criterion for fitting the model field to spacecraft data, providing improved accuracy for field line mapping. Results are presented from initial efforts to create models assembled from these modules and calibrated against spacecraft data sets.

  11. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion, and wrongly prescribed error levels can lead to over- or under-fitting of the data; yet commonly used models of measurement error are relatively simplistic. With heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in the previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that groups measurements by electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and yields superior inversions and uncertainty estimates in synthetic examples. It is robust because it groups errors according to the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data-weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
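A sketch of the proposed error-model structure, a per-group baseline plus a linear term in transfer resistance, fitted by ordinary least squares. The groupings, offsets and noise level below are invented for illustration:

```python
import numpy as np

# Synthetic reciprocal-error data: error grows linearly with transfer
# resistance, with a different baseline per electrode grouping.
rng = np.random.default_rng(4)
R = rng.uniform(1, 100, 300)          # transfer resistances
group = rng.integers(0, 3, 300)       # grouping by electrode number
offsets = np.array([0.05, 0.15, 0.30])
err = offsets[group] + 0.02 * R + rng.normal(0, 0.05, 300)

# Fit err = a_g + b * R: one intercept per group, one shared slope,
# via least squares on an indicator design matrix.
X = np.column_stack([(group == g).astype(float) for g in range(3)] + [R])
coef, *_ = np.linalg.lstsq(X, err, rcond=None)
print(coef)   # three group intercepts, then the shared slope
```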

  12. Three-dimensional simulation of human teeth and its application in dental education and research.

    PubMed

    Koopaie, Maryam; Kolahdouz, Sajad

    2016-01-01

    Background: A comprehensive database comprising the geometry and properties of human teeth is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental e-learning and dental research. Methods: Cross-sectional images were used to build the three-dimensional model of the teeth; CT scan images were used in the first method. The spacing between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image with Matlab (R2009b) as the image-processing software. The images were then transferred to Solidworks 2015. The tooth border curve was fitted with B-spline curves using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on a surface-fitting technique. This surface was meshed in Meshlab-v132, and the surface was optimized using a remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presents a methodology for communication between CT scan images and finite element and training software, through which modeling and simulation of the teeth were performed. Cross-sectional images were used for modeling, and according to the findings, the cost and time were reduced compared with other studies. Conclusion: The three-dimensional model method presented in this study facilitates the learning of dental students and dentists. Based on the proposed three-dimensional model, designing and manufacturing implants and dental prostheses are possible.
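The boundary-fitting step can be sketched with scipy's least-squares B-spline routines; the "tooth margin" below is a synthetic noisy closed curve, not CT data:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# A synthetic closed boundary standing in for the hard-tissue margin
# extracted from one cross-sectional image.
rng = np.random.default_rng(5)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 5 + 0.5 * np.cos(3 * theta) + rng.normal(0, 0.05, theta.size)
x, y = r * np.cos(theta), r * np.sin(theta)

# Least-squares B-spline fit of the boundary; per=1 closes the curve
# and s sets the smoothing/fidelity trade-off.
tck, u = splprep([x, y], s=2.0, per=1)
xs, ys = splev(np.linspace(0, 1, 400), tck)
print(len(xs), len(ys))
```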

  14. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products is that it neglects the temporal correlation of GNSS observations. By analysing appropriately detrended time series of observation residuals from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be adequately described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is freely available from MATLAB® Central (the open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. First, AR, MA, and ARMA models are computed up to a user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error, which characterises the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes) and Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of automated ARMA identification using the toolbox ARMASA. For this purpose, a representative database is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of the underlying true processes increase with increasing sample size and decrease with rising model complexity.
For large sample sizes, the true underlying processes are correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to their asymptotic values than those of high-order ARMA processes.

  15. Analysis of Asymmetry by a Slide-Vector.

    ERIC Educational Resources Information Center

    Zielman, Berrie; Heiser, Willem J.

    1993-01-01

    An algorithm based on the majorization theory of J. de Leeuw and W. J. Heiser is presented for fitting the slide-vector model. It views the model as a constrained version of the unfolding model. A three-way variant is proposed, and two examples from market structure analysis are presented. (SLD)

  16. Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.

    PubMed

    Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J

    2010-12-01

    Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. 
By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km² hexagons), can increase the relevance of habitat models to multispecies conservation planning. Journal compilation © 2010 Society for Conservation Biology. No claim to original US government works.

  17. A minimum stochastic model evaluating the interplay between population density and drift for species coexistence

    NASA Astrophysics Data System (ADS)

    Guariento, Rafael Dettogni; Caliman, Adriano

    2017-02-01

    Despite general acknowledgment of the roles of niche and stochastic processes in community dynamics, species' relative abundances may have different effects on coexistence patterns under each perspective. In this study, we explore a minimal probabilistic stochastic model to determine how populations' relative and total abundances relate to species' chances of outcompeting each other and to their persistence in time (i.e., unstable coexistence). Our model focuses on the effects of drift (i.e., random sampling of recruitment) under different scenarios of selection (i.e., fitness differences between species). Our results show that, taking into account stochasticity in demographic properties and the conservation of individuals in closed communities (the zero-sum assumption), initial population abundance can strongly influence species' chances of outcompeting each other, despite fitness inequalities between populations, and can also influence the period over which these species coexist. A system's carrying capacity can play an important role in species coexistence by exacerbating fitness inequalities and affecting the length of the coexistence period. Overall, the simple stochastic formulation used in this study demonstrates that initial population abundances can act as an equalizing mechanism, reducing fitness inequalities, which can favor species coexistence and even make less fit species more likely to outcompete better-fitted species, and thus to dominate ecological communities in the absence of niche mechanisms. Although our model is restricted to a pair of interacting species, and the overall conclusions are already predicted by the Neutral Theory of Biodiversity, our main objective was to derive a model that explicitly shows the functional relationship between population densities and the odds of community mono-dominance.
Overall, our study provides a straightforward account of how a stochastic process (i.e., drift) may modify the outcome expected from selection alone (i.e., fitness inequalities among species) and the resulting pattern of unstable coexistence among species.
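The zero-sum drift-plus-selection setup described above resembles a Moran process. A minimal sketch; the Moran rule is our stand-in for the authors' formulation, and the community size, abundances and fitness values are invented:

```python
import numpy as np

# Zero-sum drift model for two species: fixed community size, a fitness
# inequality, and random sampling of births and deaths.
def fixation_frequency(n_a, size, fitness_a, trials=500, seed=7):
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        a = n_a
        while 0 < a < size:
            # Fitness-weighted birth, uniform death: total abundance conserved.
            p_birth_a = fitness_a * a / (fitness_a * a + (size - a))
            a += (rng.random() < p_birth_a) - (rng.random() < a / size)
        wins += (a == size)
    return wins / trials

# An abundant species usually fixes even when it is the less fit one:
# initial abundance acts as an equalising mechanism.
print(fixation_frequency(n_a=27, size=30, fitness_a=0.95))
```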

  18. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery in megavolt photon beam radiotherapy (RT) has been used effectively to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components, and includes a scatter kernel, an off-axis ratio map, transmission values and penumbra kernels for the beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on the fitted model parameters were calculated from simulated measurements, and a sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. To obtain the maximum uncertainty, the maximally deviating measurement sets were propagated through the fitting procedure and the model; the overall uncertainty was assessed using all simulated measurements. Validation of the prediction model against the TPS and the film showed good agreement, with on average 90.8% and 90.5% of pixels, respectively, passing a (2%, 2 mm) global gamma analysis with a low-dose threshold of 10%. The maximum and overall uncertainty of the model depend on the type of clinical plan used as input, and the results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically, and its uncertainties can be taken into account.
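The (2%, 2 mm) global gamma criterion can be sketched in a simplified 1D form; this is a toy implementation, not the clinical software, and the profiles are synthetic:

```python
import numpy as np

# Toy 1D global gamma analysis: for each reference point above the
# low-dose cutoff, gamma is the minimum combined dose-difference /
# distance-to-agreement metric over the evaluated profile.
def gamma_pass_rate(ref, evl, spacing_mm, dose_tol=0.02, dist_tol=2.0,
                    cutoff=0.1):
    x = np.arange(ref.size) * spacing_mm
    dmax = ref.max()
    passed = []
    for i in np.where(ref >= cutoff * dmax)[0]:
        dd = (evl - ref[i]) / (dose_tol * dmax)   # global dose-difference term
        dx = (x - x[i]) / dist_tol                # distance term
        passed.append(np.min(np.hypot(dd, dx)) <= 1.0)
    return float(np.mean(passed))

ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 200.0)
evl = np.roll(ref, 1) * 1.01                      # 1 mm shift, 1% rescaling
print(gamma_pass_rate(ref, evl, spacing_mm=1.0))
```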

  19. In silico modelling of directed evolution: Implications for experimental design and stepwise evolution.

    PubMed

    Wedge, David C; Rowe, William; Kell, Douglas B; Knowles, Joshua

    2009-03-07

We model the process of directed evolution (DE) in silico using genetic algorithms. Making use of the NK fitness landscape model, we analyse the effects of mutation rate, crossover and selection pressure on the performance of DE. A range of values of K, the epistatic interaction of the landscape, is considered, and high- and low-throughput modes of evolution are compared. Our findings suggest that for runs of around ten generations, as is typical in DE, there is little difference between the way in which DE needs to be configured in the high- and low-throughput regimes, nor across different degrees of landscape epistasis. In all cases, a high selection pressure (but not an extreme one) combined with a moderately high mutation rate works best, while crossover provides some benefit, but only on the less rugged landscapes. These genetic algorithms were also compared with a "model-based approach" from the literature, which uses sequential fixing of the problem parameters based on fitting a linear model. Overall, we find that purely evolutionary techniques fare better than model-based approaches across all but the smoothest landscapes.
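An NK-landscape experiment of this kind can be sketched with a tiny genetic algorithm. Everything below (function names, population size, tournament size of 3, mutation rate) is an illustrative assumption rather than the authors' configuration, and crossover is omitted for brevity; locus i's fitness contribution depends on its own bit and its K neighbours on a circular genome.

```python
import random

def make_nk_landscape(n, k, rng):
    """One random lookup table per locus, keyed by (k+1)-bit neighbourhoods."""
    keys = [tuple((x >> b) & 1 for b in range(k + 1)) for x in range(2 ** (k + 1))]
    return [{bits: rng.random() for bits in keys} for _ in range(n)]

def nk_fitness(genome, tables, k):
    """Mean of per-locus contributions; lies in [0, 1)."""
    n = len(genome)
    return sum(tables[i][tuple(genome[(i + j) % n] for j in range(k + 1))]
               for i in range(n)) / n

def evolve(n=20, k=2, pop_size=30, generations=10, mut_rate=0.05, seed=1):
    """Short DE-style run: tournament selection plus per-bit mutation."""
    rng = random.Random(seed)
    tables = make_nk_landscape(n, k, rng)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(nk_fitness(g, tables, k) for g in pop)
    for _ in range(generations):
        # tournament of 3: high but not extreme selection pressure
        parents = [max(rng.sample(pop, 3), key=lambda g: nk_fitness(g, tables, k))
                   for _ in range(pop_size)]
        # moderately high per-bit mutation rate
        pop = [[1 - b if rng.random() < mut_rate else b for b in g]
               for g in parents]
        best = max(best, max(nk_fitness(g, tables, k) for g in pop))
    return best
```

Raising k makes the landscape more rugged and the short run correspondingly harder, which is the axis the paper varies.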

  20. N* resonances from KΛ amplitudes in sliced bins in energy

    NASA Astrophysics Data System (ADS)

    Anisovich, A. V.; Burkert, V.; Hadžimehmedović, M.; Ireland, D. G.; Klempt, E.; Nikonov, V. A.; Omerović, R.; Sarantsev, A. V.; Stahov, J.; Švarc, A.; Thoma, U.

    2017-12-01

    The two reactions γ p→ K+Λ and π- p→ K0Λ are analyzed to determine the leading photoproduction multipoles and the pion-induced partial wave amplitudes in slices of the invariant mass. The multipoles and the partial-wave amplitudes are simultaneously fitted in a multichannel Laurent+Pietarinen model (L+P model), which determines the poles in the complex energy plane on the second Riemann sheet close to the physical axes. The results from the L+P fit are compared with the results of an energy-dependent fit based on the Bonn-Gatchina (BnGa) approach. The study confirms the existence of several poles due to nucleon resonances in the region at about 1.9 GeV with quantum numbers JP = 1/2+, 3/2+, 1/2-, 3/2-, 5/2-.

  1. A nonparametric smoothing method for assessing GEE models with longitudinal binary data.

    PubMed

    Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu

    2008-09-30

    Studies involving longitudinal binary responses are widely applied in the health and biomedical sciences research and frequently analyzed by generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on the nonparametric smoothing approach for assessing the adequacy of GEE fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are discussed by simulation studies. The testing procedure is demonstrated by two real data. Copyright (c) 2008 John Wiley & Sons, Ltd.

  2. A metabolism-based whole lake eutrophication model to estimate the magnitude and time scales of the effects of restoration in Upper Klamath Lake, south-central Oregon

    USGS Publications Warehouse

    Wherry, Susan A.; Wood, Tamara M.

    2018-04-27

A whole lake eutrophication (WLE) model approach for phosphorus and cyanobacterial biomass in Upper Klamath Lake, south-central Oregon, is presented here. The model is a successor to a previous model developed to inform a Total Maximum Daily Load (TMDL) for phosphorus in the lake, but is based on net primary production (NPP), which can be calculated from dissolved oxygen, rather than scaling up a small-scale description of cyanobacterial growth and respiration rates. This phase 3 WLE model is a refinement of the proof-of-concept developed in phase 2, which was the first attempt to use NPP to simulate cyanobacteria in the TMDL model. The calibration of the calculated NPP WLE model was successful, with performance metrics indicating a good fit to calibration data, and the calculated NPP WLE model was able to simulate mid-season bloom decreases, a feature that previous models could not reproduce. In order to use the model to simulate future scenarios based on phosphorus load reduction, a multivariate regression model was created to simulate NPP as a function of the model state variables (phosphorus and chlorophyll a) and measured meteorological and temperature model inputs. The NPP time series was split into a low- and high-frequency component using wavelet analysis, and regression models were fit to the components separately, with moderate success. The regression models for NPP were incorporated in the WLE model, referred to as the “scenario” WLE (SWLE), and the fit statistics for phosphorus during the calibration period were mostly unchanged. The fit statistics for chlorophyll a, however, were degraded. These statistics are still an improvement over prior models, and indicate that the SWLE is appropriate for long-term predictions even though it misses some of the seasonal variations in chlorophyll a. The complete whole lake SWLE model, with multivariate regression to predict NPP, was used to make long-term simulations of the response to 10-, 20-, and 40-percent reductions in tributary nutrient loads. The long-term mean water column concentration of total phosphorus was reduced by 9, 18, and 36 percent, respectively, in response to these load reductions. The long-term water column chlorophyll a concentration was reduced by 4, 13, and 44 percent, respectively. The adjustment to a new equilibrium between the water column and sediments occurred over about 30 years.
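The split of the NPP series into low- and high-frequency components can be illustrated with a centred moving average standing in for the paper's wavelet analysis (the function `split_frequency`, the window length, and the edge-padding choice are all hypothetical simplifications). The key property preserved is exact reconstruction: the two components sum back to the original series, so separate regressions on each cannot lose information at the decomposition step.

```python
def split_frequency(series, window=5):
    """Split a series into a low-frequency trend (centred moving average,
    edges padded by repeating the end values) and a high-frequency
    residual, such that low[t] + high[t] == series[t] for every t."""
    h = window // 2
    padded = [series[0]] * h + list(series) + [series[-1]] * h
    low = [sum(padded[i:i + window]) / window for i in range(len(series))]
    high = [s - l for s, l in zip(series, low)]
    return low, high
```

A wavelet decomposition gives the same additivity with better frequency localisation; the moving average is only the simplest stand-in.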

  3. Three-dimensional quantitative structure-activity relationship (3D QSAR) and pharmacophore elucidation of tetrahydropyran derivatives as serotonin and norepinephrine transporter inhibitors

    NASA Astrophysics Data System (ADS)

    Kharkar, Prashant S.; Reith, Maarten E. A.; Dutta, Aloke K.

    2008-01-01

    Three-dimensional quantitative structure-activity relationship (3D QSAR) using comparative molecular field analysis (CoMFA) was performed on a series of substituted tetrahydropyran (THP) derivatives possessing serotonin (SERT) and norepinephrine (NET) transporter inhibitory activities. The study aimed to rationalize the potency of these inhibitors for SERT and NET as well as the observed selectivity differences for NET over SERT. The dataset consisted of 29 molecules, of which 23 molecules were used as the training set for deriving CoMFA models for SERT and NET uptake inhibitory activities. Superimpositions were performed using atom-based fitting and 3-point pharmacophore-based alignment. Two charge calculation methods, Gasteiger-Hückel and semiempirical PM3, were tried. Both alignment methods were analyzed in terms of their predictive abilities and produced comparable results with high internal and external predictivities. The models obtained using the 3-point pharmacophore-based alignment outperformed the models with atom-based fitting in terms of relevant statistics and interpretability of the generated contour maps. Steric fields dominated electrostatic fields in terms of contribution. The selectivity analysis (NET over SERT), though yielded models with good internal predictivity, showed very poor external test set predictions. The analysis was repeated with 24 molecules after systematically excluding so-called outliers (5 out of 29) from the model derivation process. The resulting CoMFA model using the atom-based fitting exhibited good statistics and was able to explain most of the selectivity (NET over SERT)-discriminating factors. The presence of -OH substituent on the THP ring was found to be one of the most important factors governing the NET selectivity over SERT. Thus, a 4-point NET-selective pharmacophore, after introducing this newly found H-bond donor/acceptor feature in addition to the initial 3-point pharmacophore, was proposed.

  4. Systematic evaluation of a time-domain Monte Carlo fitting routine to estimate the adult brain optical properties

    NASA Astrophysics Data System (ADS)

    Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.

    2013-03-01

Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte Carlo-simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches offer the additional advantage of being adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performance of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only moderately improved the results over the homogeneous one.
On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.

  5. Risk factors and short-term projections for serotype-1 poliomyelitis incidence in Pakistan: A spatiotemporal analysis.

    PubMed

    Molodecky, Natalie A; Blake, Isobel M; O'Reilly, Kathleen M; Wadood, Mufti Zubair; Safdar, Rana M; Wesolowski, Amy; Buckee, Caroline O; Bandyopadhyay, Ananda S; Okayasu, Hiromasa; Grassly, Nicholas C

    2017-06-01

Pakistan currently poses a substantial challenge to global polio eradication, having contributed to 73% of reported poliomyelitis in 2015 and 54% in 2016. A better understanding of the risk factors and movement patterns that contribute to poliovirus transmission across Pakistan would support evidence-based planning for mass vaccination campaigns. We fit mixed-effects logistic regression models to routine surveillance data recording the presence of poliomyelitis associated with wild-type 1 poliovirus in districts of Pakistan over 6-month intervals between 2010 and 2016. To accurately capture the force of infection (FOI) between districts, we compared 6 models of population movement (adjacency, gravity, radiation, radiation based on population density, radiation based on travel times, and mobile-phone based). We used the best-fitting model (based on the Akaike Information Criterion [AIC]) to produce 6-month forecasts of poliomyelitis incidence. The odds of observing poliomyelitis decreased with improved routine or supplementary (campaign) immunisation coverage (multivariable odds ratio [OR] = 0.75, 95% confidence interval [CI] 0.67-0.84; and OR = 0.75, 95% CI 0.66-0.85, respectively, for each 10% increase in coverage) and increased with a higher rate of reporting non-polio acute flaccid paralysis (AFP) (OR = 1.13, 95% CI 1.02-1.26 for a 1-unit increase in non-polio AFP per 100,000 persons aged <15 years). Estimated movement of poliovirus-infected individuals was associated with the incidence of poliomyelitis, with the radiation model of movement providing the best fit to the data. Six-month forecasts of poliomyelitis incidence by district for 2013-2016 showed good predictive ability (area under the curve range: 0.76-0.98). However, although the best-fitting movement model (radiation) was a significant determinant of poliomyelitis incidence, it did not improve the predictive ability of the multivariable model.
Overall, the risk of polio cases in Pakistan was predicted to decline between July-December 2016 and January-June 2017. The accuracy of the model may be limited by the small number of AFP cases in some districts. Spatiotemporal variation in immunisation performance and population movement patterns are important determinants of historical poliomyelitis incidence in Pakistan; however, movement dynamics were less influential in predicting future cases, at a time when the polio map is shrinking. Results from the regression models we present are being used to help plan vaccination campaigns and transit vaccination strategies in Pakistan.
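The radiation model of population movement, which gave the best fit here, has a parameter-free closed form: the expected flux from district i to district j is T_ij = T_i · m_i·n_j / ((m_i + s_ij)(m_i + n_j + s_ij)), where m_i and n_j are the source and destination populations, T_i is the total number of trips out of i, and s_ij is the population inside the circle of radius r_ij around i, excluding both endpoints. The sketch below uses made-up district names, coordinates, and populations purely for illustration.

```python
import math

def radiation_flux(src, dst, coords, pops, trips_out):
    """Expected flux src -> dst under the radiation model.

    coords: dict name -> (x, y); pops: dict name -> population;
    trips_out: total trips leaving src (T_i)."""
    def dist(a, b):
        return math.hypot(coords[a][0] - coords[b][0],
                          coords[a][1] - coords[b][1])
    r = dist(src, dst)
    m, n = pops[src], pops[dst]
    # s_ij: population within radius r of the source, excluding endpoints
    s = sum(pops[k] for k in pops
            if k not in (src, dst) and dist(src, k) <= r)
    return trips_out * m * n / ((m + s) * (m + n + s))
```

A characteristic property, unlike the gravity model, is that intervening population opportunities between the two districts absorb flux.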

  7. Comparison of Two Methods for Calculating the Frictional Properties of Articular Cartilage Using a Simple Pendulum and Intact Mouse Knee Joints

    PubMed Central

    Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.

    2009-01-01

    In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ that will have applications investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
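The two analysis methods amount to fitting a linear (Stanton-type) and an exponential decay law to the sequence of peak amplitudes of the pendulum. A stdlib-only sketch of that comparison follows; the function names and the synthetic amplitude data are illustrative, not the study's measurements, and the exponential model is linearised by fitting a line to the log-amplitudes.

```python
import math

def linfit(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def compare_decay_models(amplitudes):
    """Fit linear and exponential amplitude-decay models to successive
    peak amplitudes; return (linear RMSE, exponential RMSE)."""
    n = list(range(len(amplitudes)))
    a0, b0 = linfit(n, amplitudes)                          # linear decay
    la, lb = linfit(n, [math.log(a) for a in amplitudes])   # exp. decay
    lin_pred = [a0 + b0 * i for i in n]
    exp_pred = [math.exp(la + lb * i) for i in n]
    return rmse(amplitudes, lin_pred), rmse(amplitudes, exp_pred)
```

When the true decay is exponential (viscous damping dominates), the exponential model's RMSE is much smaller, which mirrors the fit-quality comparison reported in the abstract.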

  8. Physical fitness predicts technical-tactical and time-motion profile in simulated Judo and Brazilian Jiu-Jitsu matches

    PubMed Central

    Gentil, Paulo; Bueno, João C.A.; Follmer, Bruno; Marques, Vitor A.; Del Vecchio, Fabrício B.

    2018-01-01

    Background Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) present elevated physical fitness demands from the high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. Methods The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, in the second, simulated fights were performed for later notational analysis. Results The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis in a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variances in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. Discussion In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explain time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights. PMID:29844991

  9. Cost effectiveness and projected national impact of colorectal cancer screening in France.

    PubMed

    Hassan, C; Benamouzig, R; Spada, C; Ponchon, T; Zullo, A; Saurin, J C; Costamagna, G

    2011-09-01

    Colorectal cancer (CRC) is a major cause of morbidity and mortality in France. Only scanty data on cost-effectiveness of CRC screening in Europe are available, generating uncertainty over its efficiency. Although immunochemical fecal tests (FIT) and guaiac-based fecal occult blood tests (g-FOBT) have been shown to be cost-effective in France, cost-effectiveness of endoscopic screening has not yet been addressed. Cost-effectiveness of screening strategies using colonoscopy, flexible sigmoidoscopy, second-generation colon capsule endoscopy (CCE), FIT and g-FOBT were compared using a Markov model. A 40 % adherence rate was assumed for all strategies. Colonoscopy costs included anesthesiologist assistance. Incremental cost-effectiveness ratios (ICERs) were calculated. Probabilistic and value-of-information analyses were used to estimate the expected benefit of future research. A third-payer perspective was adopted. In the reference case analysis, FIT repeated every year was the most cost-effective strategy, with an ICER of €48165 per life-year gained vs. FIT every 2 years, which was the next most cost-effective strategy. Although CCE every 5 years was as effective as FIT 1-year, it was not a cost-effective alternative. Colonoscopy repeated every 10 years was substantially more costly, and slightly less effective than FIT 1-year. When projecting the model outputs onto the French population, the least (g-FOBT 2-years) and most (FIT 1-year) effective strategies reduced the absolute number of annual CRC deaths from 16037 to 12916 and 11217, respectively, resulting in an annual additional cost of €26 million and €347 million, respectively. Probabilistic sensitivity analysis demonstrated that FIT 1-year was the optimal choice in 20% of the simulated scenarios, whereas sigmoidoscopy 5-years, colonoscopy, and FIT 2-years were the optimal choices in 40%, 26%, and 14%, respectively. 
A screening program based on FIT 1-year appeared to be the most cost-effective approach for CRC screening in France. However, a substantial uncertainty over this choice is still present. © Georg Thieme Verlag KG Stuttgart · New York.
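Incremental cost-effectiveness ratios (ICERs) of the kind reported here are computed by ordering strategies by effectiveness, discarding dominated options, and dividing incremental cost by incremental effect along the frontier. A simplified sketch follows; the strategy names echo the abstract but the costs and life-years are invented for illustration, and extended (weak) dominance is not handled.

```python
def icer_table(strategies):
    """ICERs for non-dominated strategies.

    strategies: list of (name, cost, life_years).
    Returns [(name, icer_vs_next_less_effective), ...]."""
    ranked = sorted(strategies, key=lambda s: s[2])  # by effectiveness
    frontier = []
    for name, cost, eff in ranked:
        # drop any less-effective strategy that costs at least as much
        while frontier and frontier[-1][1] >= cost:
            frontier.pop()
        frontier.append((name, cost, eff))
    return [(cur[0], (cur[1] - prev[1]) / (cur[2] - prev[2]))
            for prev, cur in zip(frontier, frontier[1:])]
```

In the abstract's terms, a strategy like CCE 5-years that is no more effective than FIT 1-year but costlier would be removed from the frontier before any ICER is quoted.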

  10. Effect of the SOS response on the mean fitness of unicellular populations: a quasispecies approach.

    PubMed

    Kama, Amit; Tannenbaum, Emmanuel

    2010-11-30

The goal of this paper is to develop a mathematical model that analyzes the selective advantage of the SOS response in unicellular organisms. To this end, this paper develops a quasispecies model that incorporates the SOS response. We consider a unicellular, asexually replicating population of organisms, whose genomes consist of a single, double-stranded DNA molecule, i.e. one chromosome. We assume that repair of post-replication mismatched base-pairs occurs with a given probability, and that the SOS response is triggered when the total number of mismatched base-pairs reaches a threshold. We further assume that the per-mismatch SOS elimination rate is characterized by a first-order rate constant. For a single-fitness-peak landscape where the master genome can sustain a limited number of mismatches and remain viable, this model is analytically solvable in the limit of infinite sequence length. The results, which are confirmed by stochastic simulations, indicate that the SOS response does indeed confer a fitness advantage to a population, provided that it is only activated when DNA damage is so extensive that a cell will die if it does not attempt to repair its DNA.

  11. Total Force Fitness in units part 1: military demand-resource model.

    PubMed

    Bates, Mark J; Fallesen, Jon J; Huey, Wesley S; Packard, Gary A; Ryan, Diane M; Burke, C Shawn; Smith, David G; Watola, Daniel J; Pinder, Evette D; Yosick, Todd M; Estrada, Armando X; Crepeau, Loring; Bowles, Stephen V

    2013-11-01

The military unit is a critical center of gravity in the military's efforts to enhance resilience and the health of the force. The purpose of this article is to augment the military's Total Force Fitness (TFF) guidance with a framework of TFF in units. The framework is based on a Military Demand-Resource model that highlights the dynamic interactions across demands, resources, and outcomes. A joint team of subject-matter experts identified key variables representing unit fitness demands, resources, and outcomes. The resulting framework informs and supports leaders, support agencies, and enterprise efforts to strengthen TFF in units by (1) identifying TFF unit variables aligned with current evidence and operational practices, (2) standardizing communication about TFF in units across the Department of Defense enterprise in a variety of military organizational contexts, (3) improving current resources, including evidence-based actions for leaders, (4) identifying and addressing gaps, and (5) directing future research for enhancing TFF in units. These goals are intended to inform and enhance Service efforts to develop Service-specific TFF models, as well as provide the conceptual foundation for a follow-on article about TFF metrics for units. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  12. Influence of physical fitness on cardio-metabolic risk factors in European children. The IDEFICS study.

    PubMed

    Zaqout, M; Michels, N; Bammann, K; Ahrens, W; Sprengeler, O; Molnar, D; Hadjigeorgiou, C; Eiben, G; Konstabel, K; Russo, P; Jiménez-Pavón, D; Moreno, L A; De Henauw, S

    2016-07-01

    The aim of the study was to assess the associations of individual and combined physical fitness components with single and clustering of cardio-metabolic risk factors in children. This 2-year longitudinal study included a total of 1635 European children aged 6-11 years. The test battery included cardio-respiratory fitness (20-m shuttle run test), upper-limb strength (handgrip test), lower-limb strength (standing long jump test), balance (flamingo test), flexibility (back-saver sit-and-reach) and speed (40-m sprint test). Metabolic risk was assessed through z-score standardization using four components: waist circumference, blood pressure (systolic and diastolic), blood lipids (triglycerides and high-density lipoprotein) and insulin resistance (homeostasis model assessment). Mixed model regression analyses were adjusted for sex, age, parental education, sugar and fat intake, and body mass index. Physical fitness was inversely associated with clustered metabolic risk (P<0.001). All coefficients showed a higher clustered metabolic risk with lower physical fitness, except for upper-limb strength (β=0.057; P=0.002) where the opposite association was found. Cardio-respiratory fitness (β=-0.124; P<0.001) and lower-limb strength (β=-0.076; P=0.002) were the most important longitudinal determinants. The effects of cardio-respiratory fitness were even independent of the amount of vigorous-to-moderate activity (β=-0.059; P=0.029). Among all the metabolic risk components, blood pressure seemed not well predicted by physical fitness, while waist circumference, blood lipids and insulin resistance all seemed significantly predicted by physical fitness. Poor physical fitness in children is associated with the development of cardio-metabolic risk factors. Based on our results, this risk might be modified by improving mainly cardio-respiratory fitness and lower-limb muscular strength.
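The clustered metabolic risk score described above is a sum of z-standardised components. A minimal sketch of that standardisation follows; the component names and values are hypothetical, and a protective component such as HDL cholesterol would need its sign inverted before summing, as would any fitness variable where higher raw values mean lower risk.

```python
import statistics

def clustered_risk(components):
    """components: dict mapping risk-factor name -> list of per-child values.
    Each component is z-standardised across children and the z-scores are
    summed per child, giving a clustered metabolic risk score."""
    names = list(components)
    n = len(components[names[0]])
    z = {}
    for name in names:
        vals = components[name]
        mu, sd = statistics.mean(vals), statistics.stdev(vals)
        z[name] = [(v - mu) / sd for v in vals]
    # higher summed z-score = higher clustered metabolic risk
    return [sum(z[name][i] for name in names) for i in range(n)]
```

By construction each z-column averages to zero, so the scores are centred on the sample mean and only relative position carries information.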

  13. Determining the turnover time of groundwater systems with the aid of environmental tracers. 1. Models and their applicability

    NASA Astrophysics Data System (ADS)

    Małoszewski, P.; Zuber, A.

    1982-06-01

Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fits than the simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fit as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, serious errors may arise from neglecting the different bicarbonate contents in particular water components.
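The exponential-piston flow model (EPM) can be written as a transit-time distribution g(t) = (η/T)·exp(-ηt/T + η - 1) for t ≥ T(1 - 1/η) and zero earlier, where T is the turnover time and η the piston-flow fraction parameter; the tracer output is the input convolved with g(t), attenuated by radioactive decay. The sketch below follows this standard EPM form, but the function names, parameter values, and discretisation are illustrative assumptions.

```python
import math

def epm_weights(eta, T, dt, t_max):
    """Discretised EPM transit-time distribution (weights summing to ~1).

    g(t) = (eta/T) * exp(-eta*t/T + eta - 1) for t >= T*(1 - 1/eta)."""
    w, t = [], 0.0
    while t <= t_max:
        if t >= T * (1.0 - 1.0 / eta):
            w.append((eta / T) * math.exp(-eta * t / T + eta - 1.0) * dt)
        else:
            w.append(0.0)  # piston-flow delay: no transit shorter than this
        t += dt
    return w

def convolve_input(c_in, weights, lam=0.0, dt=1.0):
    """Output concentration at the last time step:
    C_out(t) = sum over tau of C_in(t - tau) * g(tau) * exp(-lam*tau),
    with lam the tracer's decay constant (0 for a stable tracer)."""
    t_idx = len(c_in) - 1
    return sum(c_in[t_idx - i] * w * math.exp(-lam * i * dt)
               for i, w in enumerate(weights) if i <= t_idx)
```

For a constant input and a stable tracer, the output equals the input once the weights are fully covered, which is a convenient sanity check on the discretisation.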

  14. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques to fit NTCP models to dose-response data for establishing dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spread in the data was obtained and compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using three approaches: the covariance matrix, the jackknife method, and direct evaluation of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of the bundle of curves produced by parameter sets lying within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP.
It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
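
The conclusion that covariance-matrix approximations can give misleading confidence limits invites a small numerical check. The sketch below uses an invented logistic dose-response curve as a stand-in for the critical-volume model, with made-up dose levels and sample sizes; it fits a primary dataset, then compares covariance-matrix standard errors against the "real" spread obtained from Monte Carlo secondary datasets, mirroring the study design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def ntcp(dose, d50, gamma50):
    # Simple logistic dose-response curve (a stand-in for the CV model)
    return 1.0 / (1.0 + np.exp(-4.0 * gamma50 * (dose / d50 - 1.0)))

def neg_log_lik(params, dose, resp):
    p = np.clip(ntcp(dose, *params), 1e-9, 1.0 - 1e-9)
    return -np.sum(resp * np.log(p) + (1.0 - resp) * np.log(1.0 - p))

# Primary dataset generated from known "true" parameters (d50=55 Gy, gamma50=1.5)
dose = np.repeat(np.linspace(30.0, 80.0, 11), 30)
resp = (rng.random(dose.size) < ntcp(dose, 55.0, 1.5)).astype(float)

fit = minimize(neg_log_lik, x0=np.array([50.0, 1.0]), args=(dose, resp),
               method="Nelder-Mead")
d50_hat, gamma_hat = fit.x

# Method 1: covariance-matrix standard errors from a finite-difference Hessian
def num_hessian(f, x, eps=1e-3):
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * eps * eps)
    return H

H = num_hessian(lambda p: neg_log_lik(p, dose, resp), fit.x)
se_cov = np.sqrt(np.diag(np.linalg.inv(H)))

# Method 2: the "real" spread from Monte Carlo secondary datasets
refits = []
for _ in range(200):
    resp_mc = (rng.random(dose.size) < ntcp(dose, *fit.x)).astype(float)
    refits.append(minimize(neg_log_lik, fit.x, args=(dose, resp_mc),
                           method="Nelder-Mead").x)
se_mc = np.std(refits, axis=0)
```

Comparing `se_cov` against `se_mc` is the essence of the study's test; for strongly non-quadratic likelihood landscapes the two can diverge markedly.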

  15. Alcohol consumption and cardiorespiratory fitness in five population-based studies.

    PubMed

    Baumeister, Sebastian E; Finger, Jonas D; Gläser, Sven; Dörr, Marcus; Markus, Marcello Rp; Ewert, Ralf; Felix, Stephan B; Grabe, Hans-Jörgen; Bahls, Martin; Mensink, Gert Bm; Völzke, Henry; Piontek, Katharina; Leitzmann, Michael F

    2018-01-01

Background Poor cardiorespiratory fitness is a risk factor for cardiovascular morbidity. Alcohol consumption contributes substantially to the burden of disease, but its association with cardiorespiratory fitness is not well described. We examined associations between average alcohol consumption, heavy episodic drinking and cardiorespiratory fitness. Design Cross-sectional analysis of population-based random samples. Methods We analysed data from five independent population-based studies (Study of Health in Pomerania (2008-2012); German Health Interview and Examination Survey (2008-2011); US National Health and Nutrition Examination Survey (NHANES) 1999-2000; NHANES 2001-2002; NHANES 2003-2004) including 7358 men and women aged 20-85 years, free of lung disease or asthma. Cardiorespiratory fitness, quantified by peak oxygen uptake, was assessed using exercise testing. Information regarding average alcohol consumption (ethanol in grams per day (g/d)) and heavy episodic drinking (5+ or 6+ drinks/occasion) was obtained from self-reports. Fractional polynomial regression models were used to determine the best-fitting dose-response relationship. Results Average alcohol consumption displayed an inverted U-type relation with peak oxygen uptake (p-value < 0.0001), after adjustment for age, sex, education, smoking and physical activity. Compared to individuals consuming 10 g/d (moderate consumption), current abstainers and individuals consuming 50 and 60 g/d had significantly lower peak oxygen uptake values (ml/kg/min) (β coefficients = -1.90, β = -0.06, β = -0.31, respectively). Heavy episodic drinking was not associated with peak oxygen uptake. Conclusions Across multiple adult population-based samples, moderate drinkers displayed better fitness than current abstainers and individuals with higher average alcohol consumption.
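
First-degree fractional polynomial (FP1) regression, as used in the study, can be sketched as a grid search over a conventional power set, choosing the power with the smallest residual sum of squares. The data below are invented (the real study used measured intake and peak oxygen uptake), and only the FP1 step is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented inverted-U-shaped relation between intake (g/d) and peak VO2;
# x starts above zero so that logs and negative powers are defined
x = rng.uniform(0.5, 60.0, 400)
y = 30.0 + 4.0 * np.log(x) - 0.08 * x + rng.normal(0.0, 1.5, x.size)

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)  # conventional FP1 power set

def fp_term(x, p):
    # By convention the power p = 0 denotes log(x)
    return np.log(x) if p == 0 else x ** float(p)

rss = {}
for p in POWERS:
    X = np.column_stack([np.ones_like(x), fp_term(x, p)])
    _, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss[p] = float(resid[0])

best_power = min(rss, key=rss.get)  # power with the smallest residual sum of squares
```

Full fractional polynomial software also searches two-term (FP2) combinations; the single-term grid above is only the first rung of that ladder.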

  16. A comparison of methods of fitting several models to nutritional response data.

    PubMed

    Vedenov, D; Pesti, G M

    2008-02-01

    A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting the models usually requires sophisticated statistical software and training to use it. An alternative tool for fitting nutritional response models was developed by using widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparisons of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken line (ascending linear and quadratic segments), saturation kinetics, 4-parameter logistics, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
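
The ascending broken-line model named above can be fitted with standard nonlinear least squares. This is a minimal sketch on invented nutrient-response data, not the NRM.xls implementation or PROC NLIN:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def broken_line(x, plateau, slope, breakpoint):
    # Ascending linear segment that levels off at the breakpoint
    return plateau + slope * np.minimum(x - breakpoint, 0.0)

# Invented dose-response data: gain rises with nutrient level, then plateaus
x = np.linspace(0.1, 1.2, 24)
y = broken_line(x, 100.0, 80.0, 0.7) + rng.normal(0.0, 2.0, x.size)

popt, pcov = curve_fit(broken_line, x, y, p0=(90.0, 50.0, 0.5))
plateau_hat, slope_hat, break_hat = popt
```

The breakpoint estimate is the quantity of practical interest (the nutrient requirement); the other models the abstract lists differ only in the function handed to the fitting routine.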

  17. Protonation of Different Goethite Surfaces - Unified Models for NaNO3 and NaCl Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutzenkirchen, Johannes; Boily, Jean F.; Gunneriusson, Lars

    2008-01-01

Acid-base titration data for two goethite samples in sodium nitrate and sodium chloride media are discussed. The data are modelled based on various surface complexation models in the framework of the MUlti SIte Complexation (MUSIC) model. Various assumptions with respect to the goethite morphology are considered in determining the site density of the surface functional groups. The differences in goodness of fit among the various model applications are not statistically significant. More importantly, various published assumptions with respect to the goethite morphology (i.e. the contributions of different crystal planes and their repercussions on the “overall” site densities of the various surface functional groups) do not significantly affect the final model parameters. The simultaneous fit of the chloride and nitrate data results in electrolyte binding constants, which are applicable over a wide range of electrolyte concentrations including mixtures of chloride and nitrate. Model parameters for the high surface area goethite sample are in excellent agreement with parameters that were independently obtained by another group on different goethite titration data sets.

  18. Evaluating a health behaviour model for persons with and without an intellectual disability.

    PubMed

    Brehmer-Rinderer, B; Zigrovic, L; Weber, G

    2014-06-01

Based on the idea of the Common Sense Model of Illness Representations by Leventhal as well as Lohaus's concepts of health and illness, a health behaviour model was designed to explain health behaviours applied by persons with intellectual disabilities (ID). The key proposal of this model is that the way someone understands the concepts of health, illness and disability influences the way they perceive themselves and which behavioural approaches they take towards them. To test this model and explain health differences between the general population and persons with ID, 230 people with ID and a comparative sample of 533 persons without ID were included in this Austrian study. Data were collected on general socio-demographics, personal perceptions of illness and disability, perceptions of oneself and health-related behaviours. Psychometric analysis of the instruments used showed that they were valid and reliable and hence can provide a valuable tool for studying health-related issues in persons with and without ID. With respect to the testing of the suggested health model, two latent variables were defined in accordance with the theory. The general model fit was evaluated by calculating different absolute and descriptive fit indices. Most indices indicated an acceptable model fit for all samples. This study presents the first attempt to explore the systematic differences in health behaviour between people with and without ID based on a suggested health model. Limitations of the study as well as implications for practice and future research are discussed. © 2013 MENCAP and International Association for the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  19. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  20. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.

  1. Adaptive and non-adaptive models of depression: A comparison using register data on antidepressant medication during divorce

    PubMed Central

    Rosenström, Tom; Fawcett, Tim W.; Higginson, Andrew D.; Metsä-Simola, Niina; Hagen, Edward H.; Houston, Alasdair I.; Martikainen, Pekka

    2017-01-01

Divorce is associated with an increased probability of a depressive episode, but the causation of events remains unclear. Adaptive models of depression propose that depression is a social strategy in part, whereas non-adaptive models tend to propose a diathesis-stress mechanism. We compare an adaptive evolutionary model of depression to three alternative non-adaptive models with respect to their ability to explain the temporal pattern of depression around the time of divorce. Register-based data (304,112 individuals drawn from a random sample of 11% of Finnish people) on antidepressant purchases is used as a proxy for depression. This proxy affords an unprecedented temporal resolution (3-monthly prevalence estimates over 10 years) without any bias from non-compliance, and it can be linked with underlying episodes via a statistical model. The evolutionary-adaptation model (all time periods with risk of divorce are depressogenic) was the best quantitative description of the data. The non-adaptive stress-relief model (period before divorce is depressogenic and period afterwards is not) provided the second best quantitative description of the data. The peak-stress model (periods before and after divorce can be depressogenic) fit the data less well, and the stress-induction model (period following divorce is depressogenic and the preceding period is not) did not fit the data at all. The evolutionary model was the most detailed mechanistic description of the divorce-depression link among the models, and the best fit in terms of predicted curvature; thus, it offers the most rigorous hypotheses for further study. The stress-relief model also fit very well and was the best model in a sensitivity analysis, encouraging development of more mechanistic models for that hypothesis. PMID:28614385

  2. Adaptive and non-adaptive models of depression: A comparison using register data on antidepressant medication during divorce.

    PubMed

    Rosenström, Tom; Fawcett, Tim W; Higginson, Andrew D; Metsä-Simola, Niina; Hagen, Edward H; Houston, Alasdair I; Martikainen, Pekka

    2017-01-01

Divorce is associated with an increased probability of a depressive episode, but the causation of events remains unclear. Adaptive models of depression propose that depression is a social strategy in part, whereas non-adaptive models tend to propose a diathesis-stress mechanism. We compare an adaptive evolutionary model of depression to three alternative non-adaptive models with respect to their ability to explain the temporal pattern of depression around the time of divorce. Register-based data (304,112 individuals drawn from a random sample of 11% of Finnish people) on antidepressant purchases is used as a proxy for depression. This proxy affords an unprecedented temporal resolution (3-monthly prevalence estimates over 10 years) without any bias from non-compliance, and it can be linked with underlying episodes via a statistical model. The evolutionary-adaptation model (all time periods with risk of divorce are depressogenic) was the best quantitative description of the data. The non-adaptive stress-relief model (period before divorce is depressogenic and period afterwards is not) provided the second best quantitative description of the data. The peak-stress model (periods before and after divorce can be depressogenic) fit the data less well, and the stress-induction model (period following divorce is depressogenic and the preceding period is not) did not fit the data at all. The evolutionary model was the most detailed mechanistic description of the divorce-depression link among the models, and the best fit in terms of predicted curvature; thus, it offers the most rigorous hypotheses for further study. The stress-relief model also fit very well and was the best model in a sensitivity analysis, encouraging development of more mechanistic models for that hypothesis.

  3. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    ERIC Educational Resources Information Center

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  4. Specification, testing, and interpretation of gene-by-measured-environment interaction models in the presence of gene-environment correlation

    PubMed Central

    Rathouz, Paul J.; Van Hulle, Carol A.; Lee Rodgers, Joseph; Waldman, Irwin D.; Lahey, Benjamin B.

    2009-01-01

Purcell (2002) proposed a bivariate biometric model for testing and quantifying the interaction between latent genetic influences and measured environments in the presence of gene-environment correlation. Purcell’s model extends the Cholesky model to include gene-environment interaction. We examine a number of closely-related alternative models that do not involve gene-environment interaction but which may fit the data as well as Purcell’s model. Because failure to consider these alternatives could lead to spurious detection of gene-environment interaction, we propose alternative models for testing gene-environment interaction in the presence of gene-environment correlation, including one based on the correlated factors model. In addition, we note mathematical errors in the calculation of effect size via variance components in Purcell’s model. We propose a statistical method for deriving and interpreting variance decompositions that are true to the fitted model. PMID:18293078

  5. The use of random forests in modelling short-term air pollution effects based on traffic and meteorological conditions: A case study in Wrocław.

    PubMed

    Kamińska, Joanna A

    2018-07-01

Random forests, an advanced data mining method, are used here to model the regression relationships between concentrations of the pollutants NO2, NOx and PM2.5, and nine variables describing meteorological conditions, temporal conditions and traffic flow. The study was based on hourly values of wind speed, wind direction, temperature, air pressure and relative humidity, temporal variables, and finally traffic flow, in the two years 2015 and 2016. An air quality measurement station was selected on a main road, located a short distance (40 m) from a large intersection equipped with a traffic flow measurement system. Nine different time subsets were defined, based among other things on the climatic conditions in Wrocław. An analysis was made of the fit of models created for those subsets, and of the importance of the predictors. Both the fit and the importance of particular predictors were found to be dependent on season. The best fit was obtained for models created for the six-month warm season (April-September) and for the summer season (June-August). The most important explanatory variable in the models of concentrations of nitrogen oxides was traffic flow, while in the case of PM2.5 the most important were meteorological conditions, in particular temperature, wind speed and wind direction. Temporal variables (except for month in the case of PM2.5) were found to have no significant effect on the concentrations of the studied pollutants. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
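
The article's non-iterative successive-integration algorithm is not reproduced here, but the bi-exponential pulse model it fits can be sketched with an ordinary iterative least-squares fit for reference. Pulse parameters, noise level and time units below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, amp, t0, tau_rise, tau_decay):
    # Bi-exponential detector pulse: zero before the arrival time t0
    dt = np.clip(t - t0, 0.0, None)
    return amp * (np.exp(-dt / tau_decay) - np.exp(-dt / tau_rise))

rng = np.random.default_rng(4)
t = np.arange(0.0, 500.0, 1.0)   # sample index (arbitrary time units)
data = pulse(t, 100.0, 50.0, 5.0, 80.0) + rng.normal(0.0, 1.0, t.size)

popt, _ = curve_fit(pulse, t, data, p0=(80.0, 45.0, 3.0, 60.0),
                    bounds=([1.0, 0.0, 0.5, 1.0], [1e4, 200.0, 50.0, 500.0]))
amp_hat, t0_hat, rise_hat, decay_hat = popt
```

Recovering `amp_hat` and `t0_hat` per pulse is what allows piled-up events to be kept rather than rejected; the successive-integration scheme obtains comparable estimates without iteration.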

  7. Automatic selection of arterial input function using tri-exponential models

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David

    2009-02-01

Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6% +/- 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
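
A hedged sketch of the second stage only: fit the tri-exponential model to each candidate time-course and keep the best fit. The candidate curves, parameter values and noise levels are invented (one clean arterial curve, one corrupted as if by partial-volume effects):

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, m1, a2, m2, a3, m3):
    # Tri-exponential AIF model: sum of three decaying exponentials
    return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

rng = np.random.default_rng(5)
t = np.arange(0.0, 10.0, 0.1)   # minutes
true = (5.0, 2.0, 3.0, 0.3, 1.0, 0.02)

# Candidate 0: clean arterial curve; candidate 1: heavily corrupted curve
candidates = [
    tri_exp(t, *true) + rng.normal(0.0, 0.05, t.size),
    tri_exp(t, *true) + rng.normal(0.0, 0.60, t.size),
]

p0 = (4.0, 1.5, 2.0, 0.2, 1.0, 0.01)
rmse = []
for c in candidates:
    popt, _ = curve_fit(tri_exp, t, c, p0=p0, bounds=(0.0, np.inf),
                        maxfev=20000)
    rmse.append(float(np.sqrt(np.mean((c - tri_exp(t, *popt)) ** 2))))

best = int(np.argmin(rmse))   # the candidate the model fits best is selected
```

Sums of exponentials are notoriously ill-conditioned to fit, which is why well-separated starting rates and non-negativity bounds are used here; the paper's Levenberg-Marquardt setup addresses the same issue.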

  8. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
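
The exact minimum γ-divergence estimator is not reproduced here; the sketch below only illustrates the weighting idea the abstract describes, by iterating weighted logistic fits in which each observation is weighted by its model likelihood raised to γ, so suspected mislabels are down-weighted. The data, the value of γ and the fixed-point iteration are all illustrative assumptions, not the authors' algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Invented data with 5% of labels flipped (mislabeling)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 2.0])
y = (rng.random(n) < sigmoid(X @ beta_true)).astype(float)
flip = rng.random(n) < 0.05
y[flip] = 1.0 - y[flip]

def weighted_nll(beta, w):
    p = np.clip(sigmoid(X @ beta), 1e-9, 1.0 - 1e-9)
    return -np.sum(w * (y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

# Plain maximum likelihood for reference (attenuated by the mislabels)
beta_mle = minimize(weighted_nll, np.zeros(2), args=(np.ones(n),)).x

# Down-weight points the current fit deems unlikely: w_i = f(y_i|x_i; beta)^gamma
gamma = 0.5
beta_rob = beta_mle.copy()
for _ in range(20):
    f = np.where(y == 1.0, sigmoid(X @ beta_rob), 1.0 - sigmoid(X @ beta_rob))
    beta_rob = minimize(weighted_nll, beta_rob, args=(f ** gamma,)).x
```

Flipped labels receive small weights because the model assigns them low likelihood, which reduces the attenuation of the slope estimate; the paper's estimator achieves this with an automatically bias-corrected estimating equation rather than this naive reweighting loop.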

  9. An energy budget agent-based model of earthworm populations and its application to study the effects of pesticides

    PubMed Central

    Johnston, A.S.A.; Hodson, M.E.; Thorbek, P.; Alvarez, T.; Sibly, R.M.

    2014-01-01

    Earthworms are important organisms in soil communities and so are used as model organisms in environmental risk assessments of chemicals. However current risk assessments of soil invertebrates are based on short-term laboratory studies, of limited ecological relevance, supplemented if necessary by site-specific field trials, which sometimes are challenging to apply across the whole agricultural landscape. Here, we investigate whether population responses to environmental stressors and pesticide exposure can be accurately predicted by combining energy budget and agent-based models (ABMs), based on knowledge of how individuals respond to their local circumstances. A simple energy budget model was implemented within each earthworm Eisenia fetida in the ABM, based on a priori parameter estimates. From broadly accepted physiological principles, simple algorithms specify how energy acquisition and expenditure drive life cycle processes. Each individual allocates energy between maintenance, growth and/or reproduction under varying conditions of food density, soil temperature and soil moisture. When simulating published experiments, good model fits were obtained to experimental data on individual growth, reproduction and starvation. Using the energy budget model as a platform we developed methods to identify which of the physiological parameters in the energy budget model (rates of ingestion, maintenance, growth or reproduction) are primarily affected by pesticide applications, producing four hypotheses about how toxicity acts. We tested these hypotheses by comparing model outputs with published toxicity data on the effects of copper oxychloride and chlorpyrifos on E. fetida. Both growth and reproduction were directly affected in experiments in which sufficient food was provided, whilst maintenance was targeted under food limitation. 
Although we only incorporate toxic effects at the individual level we show how ABMs can readily extrapolate to larger scales by providing good model fits to field population data. The ability of the presented model to fit the available field and laboratory data for E. fetida demonstrates the promise of the agent-based approach in ecology, by showing how biological knowledge can be used to make ecological inferences. Further work is required to extend the approach to populations of more ecologically relevant species studied at the field scale. Such a model could help extrapolate from laboratory to field conditions and from one set of field conditions to another or from species to species. PMID:25844009
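
A toy energy-budget agent in the spirit described above: each worm acquires energy, pays maintenance first, and allocates the remainder to growth (juveniles) or reproduction (adults). All parameter values are invented for illustration, not the paper's calibrated values for E. fetida, and the model is deterministic for simplicity:

```python
# Illustrative parameters (assumptions, not calibrated values)
INGEST = 5.0        # energy intake per day at unlimited food
MAINT = 0.2         # maintenance cost coefficient (scales with mass^0.75)
GROWTH_COST = 2.0   # energy needed per unit of mass gained
ADULT_MASS = 10.0   # mass at which growth stops and reproduction starts
EGG_COST = 3.0      # energy needed per offspring produced

class Worm:
    def __init__(self, mass=1.0):
        self.mass = mass
        self.offspring = 0

    def step(self, food_level):
        # Acquire energy (scaled by food availability), pay maintenance first
        energy = INGEST * food_level - MAINT * self.mass ** 0.75
        if energy <= 0.0:
            self.mass += energy / GROWTH_COST       # starvation: shrink
        elif self.mass < ADULT_MASS:
            self.mass += energy / GROWTH_COST       # juveniles grow
        else:
            self.offspring += int(energy // EGG_COST)  # adults reproduce

fed = [Worm() for _ in range(20)]
limited = [Worm() for _ in range(20)]
for day in range(200):
    for w in fed:
        w.step(food_level=1.0)
    for w in limited:
        w.step(food_level=0.1)

total_offspring = sum(w.offspring for w in fed)
```

Under full food the cohort matures and reproduces; under food limitation growth stalls below the adult mass and reproduction never starts, the same qualitative pattern the model reproduced for the experimental data.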

  10. Modeling the kinetics of survival of Staphylococcus aureus in regional yogurt from goat's milk.

    PubMed

    Bednarko-Młynarczyk, E; Szteyn, J; Białobrzewski, I; Wiszniewska-Łaszczych, A; Liedtke, K

    2015-01-01

The aim of this study was to determine the kinetics of the survival of the test strain of Staphylococcus aureus in the product investigated. Yogurt samples were contaminated with S. aureus to an initial level of 10(3)-10(4) cfu/g. The samples were then stored at four temperatures: 4, 6, 20, 22°C. During storage, the number of S. aureus colony-forming units per gram of yogurt was determined every two hours. Based on the results of the culture analysis, survival curves were plotted. Three primary models were selected to describe the kinetics of changes in the count of bacteria: Cole's model, the modified Gompertz model and the model of Baranyi and Roberts. Analysis of model fit, carried out based on the average values of Pearson's correlation coefficient between the modeled and measured values, showed that Cole's model had the worst fit. The modified Gompertz model yielded negative values for the count of S. aureus. These drawbacks were not observed in the model of Baranyi and Roberts. For this reason, this model best reflects the kinetics of changes in the number of staphylococci in yogurt.
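
Fitting a primary model and scoring it by Pearson's correlation between modeled and measured values, as the study did, can be sketched as follows. The modified Gompertz form below is the common Zwietering parameterization, and the log-count data are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    # Modified Gompertz (Zwietering parameterization) for log10 N(t) - log10 N0;
    # A = asymptotic change, mu = maximum rate, lam = lag time
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

rng = np.random.default_rng(8)
t = np.arange(0.0, 24.0, 2.0)   # hours of storage, counts every two hours
y = gompertz(t, 4.0, 0.6, 5.0) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(gompertz, t, y, p0=(3.0, 0.4, 3.0))
r = float(np.corrcoef(gompertz(t, *popt), y)[0, 1])
```

Cole's model and the Baranyi-Roberts model would be compared the same way, by refitting and ranking the resulting correlation coefficients per storage temperature.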

  11. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy obtains the local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  12. Erroneous Arrhenius: Modified Arrhenius model best explains the temperature dependence of ectotherm fitness

    PubMed Central

    Knies, Jennifer L.; Kingsolver, Joel G.

    2013-01-01

    The initial rise of fitness that occurs with increasing temperature is attributed to Arrhenius kinetics, in which rates of reaction increase exponentially with increasing temperature. Models based on Arrhenius typically assume single rate-limiting reaction(s) over some physiological temperature range for which all the rate-limiting enzymes are in 100% active conformation. We test this assumption using datasets for microbes that have measurements of fitness (intrinsic rate of population growth) at many temperatures and over a broad temperature range, and for diverse ectotherms that have measurements at fewer temperatures. When measurements are available at many temperatures, strictly Arrhenius kinetics is rejected over the physiological temperature range. However, over a narrower temperature range, we cannot reject strictly Arrhenius kinetics. The temperature range also affects estimates of the temperature dependence of fitness. These results indicate that Arrhenius kinetics only apply over a narrow range of temperatures for ectotherms, complicating attempts to identify general patterns of temperature dependence. PMID:20528477

  13. Erroneous Arrhenius: modified arrhenius model best explains the temperature dependence of ectotherm fitness.

    PubMed

    Knies, Jennifer L; Kingsolver, Joel G

    2010-08-01

    The initial rise of fitness that occurs with increasing temperature is attributed to Arrhenius kinetics, in which rates of reaction increase exponentially with increasing temperature. Models based on Arrhenius typically assume single rate-limiting reactions over some physiological temperature range for which all the rate-limiting enzymes are in 100% active conformation. We test this assumption using data sets for microbes that have measurements of fitness (intrinsic rate of population growth) at many temperatures and over a broad temperature range and for diverse ectotherms that have measurements at fewer temperatures. When measurements are available at many temperatures, strictly Arrhenius kinetics are rejected over the physiological temperature range. However, over a narrower temperature range, we cannot reject strictly Arrhenius kinetics. The temperature range also affects estimates of the temperature dependence of fitness. These results indicate that Arrhenius kinetics only apply over a narrow range of temperatures for ectotherms, complicating attempts to identify general patterns of temperature dependence.
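
The central claim, that strictly Arrhenius kinetics hold only over a narrow temperature range, can be illustrated by fitting ln(rate) against 1/T for a rate law with a high-temperature deactivation term. The functional form and constants below are invented for illustration, not the paper's fitted model:

```python
import numpy as np

R = 8.314               # J/(mol K)
Ea, A = 60000.0, 1e10   # illustrative activation energy and prefactor
Td, s = 310.0, 2.0      # illustrative high-temperature deactivation (K)

def rate(T):
    arrhenius = A * np.exp(-Ea / (R * T))
    active_fraction = 1.0 / (1.0 + np.exp((T - Td) / s))  # enzyme deactivation
    return arrhenius * active_fraction

def r2_of_arrhenius_fit(T):
    # Strictly Arrhenius data are linear in the (1/T, ln rate) plane
    x, y = 1.0 / T, np.log(rate(T))
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

T_narrow = np.linspace(280.0, 300.0, 20)   # well below the optimum
T_wide = np.linspace(280.0, 320.0, 40)     # spans the deactivation region

r2_narrow = r2_of_arrhenius_fit(T_narrow)
r2_wide = r2_of_arrhenius_fit(T_wide)
```

Over the narrow range the Arrhenius plot is essentially a straight line; once the range includes the deactivation region the linear fit degrades sharply, matching the abstract's conclusion.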

  14. Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with Escherichia coli

    PubMed Central

    Lenski, Richard E.; Wiser, Michael J.; Ribeck, Noah; Blount, Zachary D.; Nahum, Joshua R.; Morris, J. Jeffrey; Zaman, Luis; Turner, Caroline B.; Wade, Brian D.; Maddamsetti, Rohan; Burmeister, Alita R.; Baird, Elizabeth J.; Bundy, Jay; Grant, Nkrumah A.; Card, Kyle J.; Rowles, Maia; Weatherspoon, Kiyana; Papoulis, Spiridon E.; Sullivan, Rachel; Clark, Colleen; Mulka, Joseph S.; Hajela, Neerja

    2015-01-01

    Many populations live in environments subject to frequent biotic and abiotic changes. Nonetheless, it is interesting to ask whether an evolving population's mean fitness can increase indefinitely, and potentially without any limit, even in a constant environment. A recent study showed that fitness trajectories of Escherichia coli populations over 50 000 generations were better described by a power-law model than by a hyperbolic model. According to the power-law model, the rate of fitness gain declines over time but fitness has no upper limit, whereas the hyperbolic model implies a hard limit. Here, we examine whether the previously estimated power-law model predicts the fitness trajectory for an additional 10 000 generations. To that end, we conducted more than 1100 new competitive fitness assays. Consistent with the previous study, the power-law model fits the new data better than the hyperbolic model. We also analysed the variability in fitness among populations, finding subtle, but significant, heterogeneity in mean fitness. Some, but not all, of this variation reflects differences in mutation rate that evolved over time. Taken together, our results imply that both adaptation and divergence can continue indefinitely—or at least for a long time—even in a constant environment. PMID:26674951
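
The two competing trajectory models can be compared by nonlinear least squares. The sketch below uses the functional forms usually quoted for this experiment, w(t) = (bt + 1)^a for the power law and w(t) = 1 + at/(t + b) for the hyperbola, on synthetic data generated from the power law; parameter values and noise are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    # w(t) = (b t + 1)^a : fitness keeps rising without an upper bound
    return (b * t + 1.0) ** a

def hyperbolic(t, a, b):
    # w(t) = 1 + a t / (t + b) : fitness approaches the asymptote 1 + a
    return 1.0 + a * t / (t + b)

rng = np.random.default_rng(9)
t = np.linspace(0.0, 60000.0, 121)   # generations
w_obs = power_law(t, 0.1, 0.005) + rng.normal(0.0, 0.01, t.size)

p_pow, _ = curve_fit(power_law, t, w_obs, p0=(0.08, 0.004),
                     bounds=([0.0, 0.0], [1.0, 1.0]))
p_hyp, _ = curve_fit(hyperbolic, t, w_obs, p0=(1.0, 5000.0),
                     bounds=([0.0, 1.0], [10.0, 1e6]))

sse_pow = float(np.sum((w_obs - power_law(t, *p_pow)) ** 2))
sse_hyp = float(np.sum((w_obs - hyperbolic(t, *p_hyp)) ** 2))
```

Both models have two parameters, so the sum of squared errors can be compared directly; the real analysis used formal model-selection criteria on the measured fitness assays.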

  15. Improving Kepler Pipeline Sensitivity with Pixel Response Function Photometry.

    NASA Astrophysics Data System (ADS)

    Morris, Robert L.; Bryson, Steve; Jenkins, Jon Michael; Smith, Jeffrey C

    2014-06-01

    We present the results of our investigation into the feasibility and expected benefits of implementing PRF-fitting photometry in the Kepler Science Processing Pipeline. The Kepler Pixel Response Function (PRF) describes the expected system response to a point source at infinity and includes the effects of the optical point spread function, the CCD detector responsivity function, and spacecraft pointing jitter. Planet detection in the Kepler pipeline is currently based on simple aperture photometry (SAP), which is most effective when applied to uncrowded bright stars. Its effectiveness diminishes rapidly as target brightness decreases relative to the effects of noise sources such as detector electronics, background stars, and image motion. In contrast, PRF photometry is based on fitting an explicit model of image formation to the data and naturally accounts for image motion and contributions of background stars. The key to obtaining high-quality photometry from PRF fitting is a high-quality model of the system's PRF, while the key to efficiently processing the large number of Kepler targets is an accurate catalog and accurate mapping of celestial coordinates onto the focal plane. If the CCD coordinates of stellar centroids are known a priori then the problem of PRF fitting becomes linear. A model of the Kepler PRF was constructed at the time of spacecraft commissioning by fitting piecewise polynomial surfaces to data from dithered full frame images. While this model accurately captured the initial state of the system, the PRF has evolved dynamically since then and has been seen to deviate significantly from the initial (static) model. We construct a dynamic PRF model which is then used to recover photometry for all targets of interest. Both simulation tests and results from Kepler flight data demonstrate the effectiveness of our approach. Kepler was selected as the 10th mission of the Discovery Program. 
Funding for this mission is provided by NASA’s Science Mission Directorate.

  16. Conceptual Models and Theory-Embedded Principles on Effective Schooling.

    ERIC Educational Resources Information Center

    Scheerens, Jaap

    1997-01-01

    Reviews models and theories on effective schooling. Discusses four rationality-based organization theories and a fifth perspective, chaos theory, as applied to organizational functioning. Discusses theory-embedded principles flowing from these theories: proactive structuring, fit, market mechanisms, cybernetics, and self-organization. The…

  17. Electroweak precision observables and Higgs-boson signal strengths in the Standard Model and beyond: present and future

    DOE PAGES

    de Blas, J.; Ciuchini, M.; Franco, E.; ...

    2016-12-27

    We present results from a state-of-the-art fit of electroweak precision observables and Higgs-boson signal-strength measurements performed using 7 and 8 TeV data from the Large Hadron Collider. Based on the HEPfit package, our study updates the traditional fit of electroweak precision observables and extends it to include Higgs-boson measurements. As a result we obtain constraints on new physics corrections to both electroweak observables and Higgs-boson couplings. We present the projected accuracy of the fit taking into account the expected sensitivities at future colliders.

  18. Electroweak precision observables and Higgs-boson signal strengths in the Standard Model and beyond: present and future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Blas, J.; Ciuchini, M.; Franco, E.

    We present results from a state-of-the-art fit of electroweak precision observables and Higgs-boson signal-strength measurements performed using 7 and 8 TeV data from the Large Hadron Collider. Based on the HEPfit package, our study updates the traditional fit of electroweak precision observables and extends it to include Higgs-boson measurements. As a result we obtain constraints on new physics corrections to both electroweak observables and Higgs-boson couplings. We present the projected accuracy of the fit taking into account the expected sensitivities at future colliders.

  19. Comparison of random regression models with Legendre polynomials and linear splines for production traits and somatic cell score of Canadian Holstein cows.

    PubMed

    Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G

    2008-09-01

    A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
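
    A regression on Legendre polynomials of order 4, as in the model above, amounts to building a design matrix of the basis polynomials on days in milk rescaled to [-1, 1]. A minimal numpy sketch (omitting the normalization constants some test-day models apply to the basis):

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_design(dim, order=4, dim_min=5, dim_max=365):
        """Design matrix of Legendre polynomials P_0..P_order on standardized DIM.

        Days in milk (DIM) are mapped from [dim_min, dim_max] to [-1, 1],
        the natural domain of the Legendre basis.
        """
        x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
        # legvander returns one column per polynomial degree 0..order
        return legendre.legvander(x, order)

    X = legendre_design([5, 95, 185, 275, 365])
    print(X.shape)   # (5, 5): five test days, columns P_0..P_4
    print(X[0])      # at DIM=5, x=-1: P_k(-1) = (-1)^k -> [1, -1, 1, -1, 1]
    ```

    The oscillation of the basis at the domain ends (x = ±1) is one reason polynomial models can inflate variance estimates at the extremes of lactation, which is where the spline models above behaved better.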

  20. Ionospheric Slant Total Electron Content Analysis Using Global Positioning System Based Estimation

    NASA Technical Reports Server (NTRS)

    Komjathy, Attila (Inventor); Mannucci, Anthony J. (Inventor); Sparks, Lawrence C. (Inventor)

    2017-01-01

    A method, system, apparatus, and computer program product provide the ability to analyze ionospheric slant total electron content (TEC) using global navigation satellite systems (GNSS)-based estimation. Slant TEC is estimated for a given set of raypath geometries by fitting historical GNSS data to a specified delay model. The accuracy of the specified delay model is estimated by computing delay estimate residuals and plotting a behavior of the delay estimate residuals. An ionospheric threat model is computed based on the specified delay model. Ionospheric grid delays (IGDs) and grid ionospheric vertical errors (GIVEs) are computed based on the ionospheric threat model.

  1. 3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.

    PubMed

    Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef

    2016-11-01

    In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
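
    The paper's 3D-SCFP pipeline is more elaborate, but the core step of fitting a sphere to all valid 3D data points can be sketched with a generic algebraic least-squares fit, from which the contact angle of the cap follows; the geometry below (substrate at z = 0, noiseless points) is an illustrative assumption:

    ```python
    import numpy as np

    def fit_sphere(pts):
        """Algebraic least-squares sphere fit.

        Each point satisfies x^2+y^2+z^2 = 2*x0*x + 2*y0*y + 2*z0*z + c,
        with c = r^2 - (x0^2+y0^2+z0^2), which is linear in (x0, y0, z0, c).
        """
        pts = np.asarray(pts, float)
        A = np.column_stack([2.0 * pts, np.ones(len(pts))])
        b = (pts ** 2).sum(axis=1)
        (x0, y0, z0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + x0**2 + y0**2 + z0**2)
        return np.array([x0, y0, z0]), r

    def contact_angle(center, r):
        """Contact angle of the cap cut by the substrate plane z = 0, in degrees."""
        return np.degrees(np.arccos(-center[2] / r))

    # Illustrative cap: sphere of radius 10 centred 5 below the substrate (60 deg angle)
    rng = np.random.default_rng(0)
    phi = rng.uniform(0, 2 * np.pi, 500)
    theta = rng.uniform(0, np.arccos(0.5), 500)       # only the part above z = 0
    pts = np.column_stack([10 * np.sin(theta) * np.cos(phi),
                           10 * np.sin(theta) * np.sin(phi),
                           10 * np.cos(theta) - 5.0])
    center, r = fit_sphere(pts)
    ang = contact_angle(center, r)
    print(r, ang)  # ~10 and ~60
    ```

    Because every surface point enters the fit, the result does not depend on choosing any particular cross-section, which is the arbitrariness the 2D profile method suffers from.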

  2. Bivariate copula in fitting rainfall data

    NASA Astrophysics Data System (ADS)

    Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui

    2014-07-01

Copulas are widely used in various areas to determine the joint distribution between two variables. The joint distribution of rainfall characteristics obtained using a copula model is preferable to standard bivariate modelling, as copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations: Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are selected from rain gauge stations located in the southern part of Peninsular Malaysia, during the period from 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).
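
    For one of the six candidates, the Clayton copula, the maximum-likelihood fit and the AIC used for model comparison can be sketched with scipy alone; the data here are synthetic pseudo-observations generated by conditional inversion, not the Malaysian rainfall records:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def clayton_loglik(theta, u, v):
        """Log-likelihood of the Clayton copula density for pseudo-observations u, v."""
        s = u ** -theta + v ** -theta - 1.0
        return np.sum(np.log(1.0 + theta)
                      - (theta + 1.0) * (np.log(u) + np.log(v))
                      - (2.0 * theta + 1.0) / theta * np.log(s))

    def fit_clayton(u, v):
        """MLE of theta and the AIC = 2k - 2*logL (k = 1 parameter)."""
        res = minimize_scalar(lambda th: -clayton_loglik(th, u, v),
                              bounds=(1e-4, 20.0), method="bounded")
        return res.x, 2.0 * 1 - 2.0 * (-res.fun)

    # Illustrative data sampled from a Clayton copula via conditional inversion
    rng = np.random.default_rng(2)
    theta_true = 2.0
    u = rng.uniform(size=2000)
    w = rng.uniform(size=2000)
    v = (u ** -theta_true * (w ** (-theta_true / (1 + theta_true)) - 1) + 1) ** (-1 / theta_true)

    theta_hat, aic = fit_clayton(u, v)
    print(theta_hat, aic)   # theta_hat close to 2; the copula with the lowest AIC wins
    ```

    Repeating the same MLE-plus-AIC computation for each of the six copula families and ranking by AIC is the selection procedure the abstract describes.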

  3. Cultural Artifact Detection in Long Wave Infrared Imagery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dylan Zachary; Craven, Julia M.; Ramon, Eric

    2017-01-01

Detection of cultural artifacts from airborne remotely sensed data is an important task in the context of on-site inspections. Airborne artifact detection can reduce the size of the search area the ground based inspection team must visit, thereby improving the efficiency of the inspection process. This report details two algorithms for detection of cultural artifacts in aerial long wave infrared imagery. The first algorithm creates an explicit model for cultural artifacts, and finds data that fits the model. The second algorithm creates a model of the background and finds data that does not fit the model. Both algorithms are applied to orthomosaic imagery generated as part of the MSFE13 data collection campaign under the spectral technology evaluation project.

  4. Hypermutation signature reveals a slippage and realignment model of translesion synthesis by Rev3 polymerase in cisplatin-treated yeast.

    PubMed

    Segovia, Romulo; Shen, Yaoqing; Lujan, Scott A; Jones, Steven J M; Stirling, Peter C

    2017-03-07

Gene-gene or gene-drug interactions are typically quantified using fitness as a readout because the data are continuous and easily measured in high throughput. However, to what extent fitness captures the range of other phenotypes that show synergistic effects is usually unknown. Using Saccharomyces cerevisiae and focusing on a matrix of DNA repair mutants and genotoxic drugs, we quantify 76 gene-drug interactions based on both mutation rate and fitness and find that these parameters are not connected. Independent of fitness defects, we identified six cases of synthetic hypermutation, where the combined effect of the drug and mutant on mutation rate was greater than predicted. One example occurred when yeast lacking RAD1 were exposed to cisplatin, and we characterized this interaction using whole-genome sequencing. Our sequencing results indicate mutagenesis by cisplatin in rad1Δ cells appeared to depend almost entirely on interstrand cross-links at GpCpN motifs. Interestingly, our data suggest that the following base on the template strand dictates the addition of the mutated base. This result differs from cisplatin mutation signatures in XPF-deficient Caenorhabditis elegans and supports a model in which translesion synthesis polymerases perform a slippage and realignment extension across from the damaged base. Accordingly, DNA polymerase ζ activity was essential for mutagenesis in cisplatin-treated rad1Δ cells. Together these data reveal the potential to gain new mechanistic insights from nonfitness measures of gene-drug interactions and extend the use of mutation accumulation and whole-genome sequencing analysis to define DNA repair mechanisms.

  5. A New Model for the World of Instructional Design: A New Model

    ERIC Educational Resources Information Center

    Isman, Aytekin; Caglar, Mehmet; Dabaj, Fahme; Ersozlu, Hatice

    2005-01-01

Like all models, the new model is based on a theoretical foundation: constructivism, in which emphasis is placed on the learner or the student rather than the teacher or the instructor. Students learn by fitting new information together with what they already know. People learn best when they actively construct their own understanding. The new…

  6. Well-being, health and fitness of children who use wheelchairs: feasibility study protocol to develop child-centred 'keep-fit' exercise interventions.

    PubMed

    O'Brien, Thomas D; Noyes, Jane; Spencer, Llinos Haf; Kubis, Hans-Peter; Edwards, Rhiannon T; Bray, Nathan; Whitaker, Rhiannon

    2015-02-01

    To undertake the pre-clinical and modelling phases of the Medical Research Council complex intervention framework to underpin development of child-centred 'keep-fit', exercise and physical activity interventions for children and young people who use wheelchairs. Children who use wheelchairs face many barriers to participation in physical activity, which compromises fitness, obesity, well-being and health. 'Keep-fit' programmes that are child-centred and engaging are urgently required to enhance participation of disabled children and their families as part of a healthy lifestyle. Nurses will likely be important in promoting and monitoring 'keep-fit' intervention(s) when implemented in the community. Mixed-method (including economic analysis) feasibility study to capture child and family preferences and keep-fit needs and to determine outcome measures for a 'keep-fit' intervention. The study comprises three stages. Stage 1 includes a mixed-method systematic review of effectiveness, cost effectiveness and key stakeholder views and experiences of keep-fit interventions, followed by qualitative interviews with children, young people and their parents to explore preferences and motivations for physical activity. Stage 2 will identify standardized outcome measures and test their application with children who use wheelchairs to obtain baseline fitness data. Options for an exercise-based keep-fit intervention will then be designed based on Stage 1 and 2 findings. In stage 3, we will present intervention options for feedback and further refinement to children and parents/carers in focus groups. (Project funded October 2012). At completion, this study will lead to the design of the intervention and a protocol to test its efficacy. © 2014 John Wiley & Sons Ltd.

  7. A corkscrew model for dynamin constriction

    PubMed Central

    Mears, Jason A.; Ray, Pampa; Hinshaw, Jenny E.

    2007-01-01

Numerous vesiculation processes throughout the eukaryotic cell are dependent on the protein dynamin, a large GTPase that constricts lipid bilayers. We have combined x-ray crystallography and cryo-electron microscopy (cryo-EM) data to generate a coherent model of dynamin-mediated membrane constriction. X-ray structures of mammalian GTPase and pleckstrin homology (PH) domains of dynamin were fit to cryo-EM structures of human ΔPRD dynamin helices bound to lipid in non-constricted and constricted states. Proteolysis and immunogold labeling experiments confirm the topology of dynamin domains predicted from the helical arrays. Based on the fitting, an observed twisting motion of the GTPase, middle and GTPase-effector domains coincides with conformational changes determined by cryo-EM. We propose a corkscrew model for dynamin constriction based on these motions and predict regions of sequence important for dynamin function as potential targets for future mutagenic and structural studies. PMID:17937909

  8. Fitting observed and theoretical choices - women's choices about prenatal diagnosis of Down syndrome.

    PubMed

    Seror, Valerie

    2008-05-01

Choices regarding prenatal diagnosis of Down syndrome - the most frequent chromosomal defect - are particularly relevant to decision analysis, since women's decisions are based on the assessment of their risk of carrying a child with Down syndrome, and involve tradeoffs (giving birth to an affected child vs procedure-related miscarriage). The aim of this study, based on face-to-face interviews with 78 women aged 25-35 with prior experience of pregnancy, was to compare the women's expressed choices regarding prenatal diagnosis with those derived from theoretical models of choice (expected utility theory, rank-dependent theory, and cumulative prospect theory). The main finding obtained in this study was that the cumulative prospect model fitted the observed choices best: both subjective transformation of probabilities and loss aversion, which are basic features of the cumulative prospect model, have to be taken into account to make the observed choices consistent with the theoretical ones.

  9. FUNGIBILITY AND CONSUMER CHOICE: EVIDENCE FROM COMMODITY PRICE SHOCKS.

    PubMed

    Hastings, Justine S; Shapiro, Jesse M

    2013-11-01

    We formulate a test of the fungibility of money based on parallel shifts in the prices of different quality grades of a commodity. We embed the test in a discrete-choice model of product quality choice and estimate the model using panel microdata on gasoline purchases. We find that when gasoline prices rise consumers substitute to lower octane gasoline, to an extent that cannot be explained by income effects. Across a wide range of specifications, we consistently reject the null hypothesis that households treat "gas money" as fungible with other income. We compare the empirical fit of three psychological models of decision-making. A simple model of category budgeting fits the data well, with models of loss aversion and salience both capturing important features of the time series.

  10. FUNGIBILITY AND CONSUMER CHOICE: EVIDENCE FROM COMMODITY PRICE SHOCKS*

    PubMed Central

    Hastings, Justine S.; Shapiro, Jesse M.

    2015-01-01

    We formulate a test of the fungibility of money based on parallel shifts in the prices of different quality grades of a commodity. We embed the test in a discrete-choice model of product quality choice and estimate the model using panel microdata on gasoline purchases. We find that when gasoline prices rise consumers substitute to lower octane gasoline, to an extent that cannot be explained by income effects. Across a wide range of specifications, we consistently reject the null hypothesis that households treat “gas money” as fungible with other income. We compare the empirical fit of three psychological models of decision-making. A simple model of category budgeting fits the data well, with models of loss aversion and salience both capturing important features of the time series. PMID:26937053

  11. Practical Consequences of Item Response Theory Model Misfit in the Context of Test Equating with Mixed-Format Test Data

    PubMed Central

    Zhao, Yue; Hambleton, Ronald K.

    2017-01-01

    In item response theory (IRT) models, assessing model-data fit is an essential step in IRT calibration. While no general agreement has ever been reached on the best methods or approaches to use for detecting misfit, perhaps the more important comment based upon the research findings is that rarely does the research evaluate IRT misfit by focusing on the practical consequences of misfit. The study investigated the practical consequences of IRT model misfit in examining the equating performance and the classification of examinees into performance categories in a simulation study that mimics a typical large-scale statewide assessment program with mixed-format test data. The simulation study was implemented by varying three factors, including choice of IRT model, amount of growth/change of examinees’ abilities between two adjacent administration years, and choice of IRT scaling methods. Findings indicated that the extent of significant consequences of model misfit varied over the choice of model and IRT scaling methods. In comparison with mean/sigma (MS) and Stocking and Lord characteristic curve (SL) methods, separate calibration with linking and fixed common item parameter (FCIP) procedure was more sensitive to model misfit and more robust against various amounts of ability shifts between two adjacent administrations regardless of model fit. SL was generally the least sensitive to model misfit in recovering equating conversion and MS was the least robust against ability shifts in recovering the equating conversion when a substantial degree of misfit was present. The key messages from the study are that practical ways are available to study model fit, and, model fit or misfit can have consequences that should be considered when choosing an IRT model. 
Not only does the study address the consequences of IRT model misfit, but it is also our hope to help researchers and practitioners find practical ways to study model fit and to investigate the validity of particular IRT models for achieving a specified purpose, to ensure that the successful use of IRT models is realized, and to improve the applications of IRT models with educational and psychological test data. PMID:28421011
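
    One practical consequence of the kind studied here is how an ignored model feature shifts classification cuts. As an illustrative sketch (hypothetical item parameters, not the simulation's design): scoring items that truly follow a 3PL model with a 2PL model (i.e., ignoring guessing) moves the ability value that corresponds to a given raw-score performance cut:

    ```python
    import numpy as np

    def irf_3pl(theta, a, b, c):
        """3PL item response function: P(correct | theta); c=0 reduces it to a 2PL."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # Hypothetical 5-item test with guessing parameters c
    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
    b = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])
    c = np.array([0.20, 0.25, 0.15, 0.20, 0.25])

    theta = np.linspace(-3, 3, 121)
    true_score_3pl = irf_3pl(theta[:, None], a, b, c).sum(axis=1)
    true_score_2pl = irf_3pl(theta[:, None], a, b, 0.0).sum(axis=1)

    # A raw-score cut of 3/5 maps to different theta cuts under the two models:
    cut_3pl = theta[np.argmin(np.abs(true_score_3pl - 3.0))]
    cut_2pl = theta[np.argmin(np.abs(true_score_2pl - 3.0))]
    print(cut_3pl, cut_2pl)   # ignoring guessing shifts the classification cut upward
    ```

    Examinees whose ability falls between the two cuts would be classified differently under the two models, which is the "practical consequence of misfit" the study quantifies at scale.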

  12. Accuracy of single-tooth restorations based on intraoral digital and conventional impressions in patients.

    PubMed

    Boeddinghaus, Moritz; Breloer, Eva Sabina; Rehmann, Peter; Wöstmann, Bernd

    2015-11-01

The purpose of this clinical study was to compare the marginal fit of dental crowns based on three different intraoral digital impression methods and one conventional method. Forty-nine teeth of altogether 24 patients were prepared to be treated with full-coverage restorations. Digital impressions were made using three intraoral scanners: Sirona CEREC AC Omnicam (OCam), Heraeus Cara TRIOS and 3M Lava True Definition (TDef). Furthermore, a gypsum model based on a conventional impression (EXA'lence, GC, Tokyo, Japan) was scanned with a standard laboratory scanner (3Shape D700). Based on the dataset obtained, four zirconia copings per tooth were produced. The marginal fit of the copings in the patient's mouth was assessed employing a replica technique. Overall, seven measurement copings did not fit and, therefore, could not be assessed. The marginal gap was 88 μm (68-136 μm) [median/interquartile range] for the TDef, 112 μm (94-149 μm) for the Cara TRIOS, 113 μm (81-157 μm) for the laboratory scanner and 149 μm (114-218 μm) for the OCam. There was a statistically significant difference between the OCam and the other groups (p < 0.05). Within the limitations of this study, it can be concluded that zirconia copings based on intraoral scans and laboratory scans of a conventional model are comparable to one another with regard to their marginal fit. Regarding the results of this study, the digital intraoral impression can be considered as an alternative to a conventional impression with a consecutive digital workflow when the finish line is clearly visible and it is possible to keep it dry.

  13. Fitness cost: a bacteriological explanation for the demise of the first international methicillin-resistant Staphylococcus aureus epidemic.

    PubMed

    Nielsen, Karen L; Pedersen, Thomas M; Udekwu, Klas I; Petersen, Andreas; Skov, Robert L; Hansen, Lars H; Hughes, Diarmaid; Frimodt-Møller, Niels

    2012-06-01

    Denmark and several other countries experienced the first epidemic of methicillin-resistant Staphylococcus aureus (MRSA) during the period 1965-75, which was caused by multiresistant isolates of phage complex 83A. In Denmark these MRSA isolates disappeared almost completely, being replaced by other phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 2%-15% were determined for the MRSA isolates studied. There was a significant negative correlation between number of antibiotic resistances and relative fitness. Multiple regression analysis found significantly independent negative correlations between fitness and the presence of mecA or streptomycin resistance. Mathematical modelling confirmed that fitness costs of the magnitude carried by these isolates could result in the disappearance of MRSA prevalence during a time span similar to that seen in Denmark. We propose a significant fitness cost of resistance as the main bacteriological explanation for the disappearance of the multiresistant complex 83A MRSA in Denmark following a reduction in antibiotic usage.
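
    The mathematical modelling referred to above can be illustrated with a minimal haploid-selection sketch (a generic textbook model with illustrative parameters, not the authors' exact model): a clone paying a fitness cost declines in frequency generation by generation once the compensating antibiotic pressure is removed:

    ```python
    def resistant_trajectory(p0, cost, generations):
        """Frequency of a resistant clone with relative fitness w = 1 - cost,
        under standard haploid selection: p' = p*w / (p*w + (1 - p))."""
        w = 1.0 - cost
        p, traj = p0, [p0]
        for _ in range(generations):
            p = p * w / (p * w + (1.0 - p))
            traj.append(p)
        return traj

    # Illustrative: a 10% fitness cost, within the 2%-15% range measured in the study
    traj = resistant_trajectory(p0=0.5, cost=0.10, generations=500)
    print(traj[0], traj[-1])   # 0.5 -> near 0: the costly clone is competed out
    ```

    Because the odds p/(1-p) shrink by the factor w every generation, even a few-percent cost compounds into near-extinction over the roughly decade-long window observed in Denmark.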

  14. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

We validate seismic models developed for western North America with a focus on the effect of the imaging method on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach, with the same data selection, processing and reference models.

  15. FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems

    NASA Astrophysics Data System (ADS)

    Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.

    2016-12-01

Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g. skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models used to model anomalous transport. Currently, four fractional models are supported: 1) space fractional advection-dispersion equation (sFADE), 2) time-fractional dispersion equation with drift (TFDE), 3) fractional mobile-immobile equation (FMIE), and 4) tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke river. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
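
    The role of weighting in WNLS can be sketched for the late-time power-law tail C(t) ~ K·t^-(1+α) characteristic of time-fractional models; the weighting scheme (sigma proportional to the signal) and the data below are illustrative assumptions, not FracFit's exact optimal weights:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tail(t, K, alpha):
        """Late-time power-law tail C(t) = K * t**-(1 + alpha)."""
        return K * t ** -(1.0 + alpha)

    # Illustrative breakthrough-curve tail with multiplicative noise
    rng = np.random.default_rng(3)
    t = np.logspace(0, 3, 60)
    C = tail(t, 5.0, 0.6) * rng.lognormal(0.0, 0.1, t.size)

    # Ordinary least squares is dominated by the earliest (largest) points;
    # relative weighting (sigma ~ C) lets the low-concentration tail drive the fit.
    p_ols, _ = curve_fit(tail, t, C, p0=(1.0, 0.5), bounds=([0, 0], [np.inf, 3.0]))
    p_wls, _ = curve_fit(tail, t, C, p0=(1.0, 0.5), sigma=C,
                         absolute_sigma=False, bounds=([0, 0], [np.inf, 3.0]))
    print(p_ols[1], p_wls[1])   # both near 0.6; the weighted fit tracks the tail
    ```

    Since the tail exponent carries the fractional order, a weighting that down-weights the peak and preserves the tail is exactly what makes power-law tails estimable from noisy BTCs.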

  16. On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun

    2017-08-01

    It is useful to develop an effective biodynamic model of seated human occupants to help understand the human vibration exposure to transportation vehicle vibrations and to help design and improve the anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for its parameter identification. First, an equivalent simplification expression for the models was made. Second, all of the possible 23 structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of non-dominated sorting genetic algorithm (NSGA-II) based on Pareto optimization principle was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study was used to assess the models and to identify the best one, which was based on both the goodness of curve fits and comprehensive goodness of the fits. The identified top configurations were better than those reported in the literature. This methodology may also be extended and used to develop the models with other DOFs.

  17. Compact continuum brain model for human electroencephalogram

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Shin, H.-B.; Robinson, P. A.

    2007-12-01

    A low-dimensional, compact brain model has recently been developed based on physiologically based mean-field continuum formulation of electric activity of the brain. The essential feature of the new compact model is a second order time-delayed differential equation that has physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that a hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings show good matches with previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.

  18. Using Electrically-evoked Compound Action Potentials to Estimate Perceptive Levels in Experienced Adult Cochlear Implant Users.

    PubMed

    Joly, Charles-Alexandre; Péan, Vincent; Hermann, Ruben; Seldran, Fabien; Thai-Van, Hung; Truy, Eric

    2017-10-01

The accuracy of cochlear implant (CI) fitting-level predictions based on the electrically-evoked compound action potential (ECAP) should be enhanced by adding demographic data to the models. No accurate automated fitting of CIs based on ECAP has yet been proposed. We recorded ECAPs in 45 adults who had been using MED-EL CIs for more than 11 months and collected the most comfortable loudness level (MCL) used for CI fitting (prog-MCL), perception thresholds (meas-THR), and MCLs (meas-MCL) measured with the stimulation used for ECAP recording. Linear mixed models taking into account cochlear site factors were computed to explain prog-MCL, meas-MCL, and meas-THR. Cochlear region and ECAP threshold were predictors of all three levels. In addition, significant predictors were the ECAP amplitude for the prog-MCL and the duration of deafness for the prog-MCL and the meas-THR. Estimations were most accurate for the meas-THR, then the meas-MCL, and finally the prog-MCL. These results show that 1) ECAP thresholds are more closely related to perception thresholds than to comfort levels, 2) predictions are more accurate when inter-subject and cochlear-region variations are considered, and 3) differences between the stimulations used for ECAP recording and for CI fitting make it difficult to accurately predict the prog-MCL from the ECAP recording. The predicted prog-MCL could be used as a basis for fitting but should be applied with care to avoid any uncomfortable or painful stimulation.

  19. The Dutch-Flemish PROMIS Physical Function item bank exhibited strong psychometric properties in patients with chronic pain.

    PubMed

    Crins, Martine H P; Terwee, Caroline B; Klausch, Thomas; Smits, Niels; de Vet, Henrica C W; Westhovens, Rene; Cella, David; Cook, Karon F; Revicki, Dennis A; van Leeuwen, Jaap; Boers, Maarten; Dekker, Joost; Roorda, Leo D

    2017-07-01

    The objective of this study was to assess the psychometric properties of the Dutch-Flemish Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function item bank in Dutch patients with chronic pain. A bank of 121 items was administered to 1,247 Dutch patients with chronic pain. Unidimensionality was assessed by fitting a one-factor confirmatory factor analysis and evaluating the resulting fit statistics. Items were calibrated with the graded response model and its fit was evaluated. Cross-cultural validity was assessed by testing items for differential item functioning (DIF) based on language (Dutch vs. English). Construct validity was evaluated by calculating correlations between scores on the Dutch-Flemish PROMIS Physical Function measure and scores on generic and disease-specific measures. Results supported the Dutch-Flemish PROMIS Physical Function item bank's unidimensionality (Comparative Fit Index = 0.976, Tucker Lewis Index = 0.976) and model fit. Item thresholds targeted a wide range of the physical function construct (threshold-parameter range: -4.2 to 5.6). Cross-cultural validity was good: only four items showed DIF for language, and their impact on item scores was minimal. Physical Function scores were strongly associated with scores on all other measures (all correlations ≤ -0.60, as expected). The Dutch-Flemish PROMIS Physical Function item bank exhibited good psychometric properties. Development of a computer adaptive test based on the large bank is warranted. Copyright © 2017 Elsevier Inc. All rights reserved.
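    The graded response model used for the item calibration above assigns each ordered response category a probability from a discrimination parameter and a set of increasing thresholds (the "threshold parameters" whose range the abstract reports). A minimal sketch of those category probabilities, with illustrative parameter values rather than any calibrated from the item bank:

    ```python
    import numpy as np

    def grm_probs(theta, a, b):
        """Category probabilities under Samejima's graded response model.
        theta: latent trait level; a: item discrimination;
        b: increasing threshold parameters (one per category boundary).
        P(X >= k) = logistic(a * (theta - b_k)); category probabilities
        are differences of adjacent cumulative probabilities."""
        b = np.asarray(b, dtype=float)
        p_ge = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= k), k = 1..K-1
        cum = np.concatenate(([1.0], p_ge, [0.0]))
        return cum[:-1] - cum[1:]                       # P(X = k), k = 0..K-1

    # illustrative 4-category item with symmetric thresholds
    p = grm_probs(theta=0.0, a=2.0, b=[-1.5, 0.0, 1.5])
    ```

    At theta = 0 the two middle categories are most likely and the probabilities sum to 1; shifting the thresholds toward the extremes (as the reported -4.2 to 5.6 range does) lets the bank measure very low and very high levels of physical function.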

  20. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
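    The bias described in the abstract's final sentence can be demonstrated with a toy one-parameter fit: estimating a constant level from low-count Poisson pixels. This is a simplified illustration of the statistical effect, not IMFIT's actual fitting code:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu_true = 2.0                                   # low-count regime
    data = rng.poisson(mu_true, 10_000).astype(float)

    # chi^2 with per-pixel "data-based" Gaussian errors sigma_i^2 = max(d_i, 1):
    # minimizing sum((d_i - mu)^2 / sigma_i^2) over mu gives a weighted mean
    # that down-weights high-count pixels, biasing the estimate low.
    w = 1.0 / np.maximum(data, 1.0)
    mu_chi2 = np.sum(w * data) / np.sum(w)

    # Poisson maximum likelihood: for a constant model, maximizing the
    # likelihood (equivalently, minimizing the Cash statistic) gives the
    # sample mean, which is unbiased.
    mu_pois = data.mean()
    ```

    With a true level of 2 counts per pixel, the data-weighted χ² estimate lands well below 2 while the Poisson maximum-likelihood estimate recovers the true value; the bias shrinks only as counts per pixel grow large and the Gaussian approximation becomes valid.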
