Science.gov

Sample records for adequate model fit

  1. Adequate mathematical modelling of environmental processes

    NASA Astrophysics Data System (ADS)

    Chashechkin, Yu. D.

    2012-04-01

    In environmental observations and laboratory visualization, both large-scale flow components (currents, jets, vortices, waves) and a fine structure are registered (different examples are given). Conventional mathematical modelling, both analytical and numerical, is directed mostly at the description of energetically important flow components; the role of the fine structure remains obscure. The variety of existing models makes it difficult to choose the most adequate one and to assess their mutual degree of correspondence. The goal of the talk is to give a scrutinizing analysis of the kinematics and dynamics of flows. A distinction is drawn between the concept of "motion", as a distance-preserving transformation of a vector space into itself, and the concept of "flow", as displacement and rotation of deformable "fluid particles". Basic physical quantities of the flow, namely density, momentum, energy (entropy) and admixture concentration, are selected as physical parameters defined by the fundamental set, which includes the differential D'Alembert, Navier-Stokes, Fourier and/or Fick equations and a closing equation of state. All of them are observable and independent. Calculations of continuous Lie groups show that only the fundamental set is characterized by the ten-parameter Galilean group reflecting the basic principles of mechanics. The presented analysis demonstrates that conventionally used approximations dramatically change the symmetries of the governing equation sets, which leads to their incompatibility or even degeneration. The fundamental set is analyzed taking the compatibility condition into account. The high order of the set indicates a complex structure of the complete solutions, corresponding to the physical structure of real flows. Analytical solutions of a number of problems, including flows induced by diffusion on topography and generation of periodic internal waves by compact sources in weakly dissipative media, as well as numerical solutions of the same

  2. Are Physical Education Majors Models for Fitness?

    ERIC Educational Resources Information Center

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  3. Arabidopsis: an adequate model for dicot root systems?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In the search for answers to pressing root developmental genetic issues, plant science has turned to a small-genome dicot (Arabidopsis) as a model with which to study these questions and to develop hypotheses for testing in other species. Throughout the published research, only three classes of root are des...

  4. Choosing an adequate FEM grid for global mantle convection modelling

    NASA Astrophysics Data System (ADS)

    Thieulot, Cedric

    2016-04-01

    Global numerical models of mantle convection are typically run on a grid which represents a hollow sphere. In the context of using the Finite Element method, there are many ways to discretise a hollow sphere by means of cuboids in a regular fashion (adaptive mesh refinement is here not considered). I will here focus on the following two: the cubed sphere [1], which is a quasi-uniform mapping of a cube to a sphere (considering both equidistant and equiangular projections), and the 12-block grid used for instance in CITCOM [2]. By means of simple experiments, I will show that at comparable resolutions (and all other things being equal), the 12-block grid is surprisingly vastly superior to the cubed-sphere grid, when used in combination with trilinear velocity - constant pressure elements, while being more difficult to build/implement. [1] C. Ronchi, R. Iacono, and P. S. Paolucci, The "Cubed Sphere": A New Method for the Solution of Partial Differential Equations in Spherical Geometry, Journal of Computational Physics, 124, p93-114 (1996). [2] S. Zhong and M.T. Zuber and L.N. Moresi and M. Gurnis, Role of temperature-dependent viscosity and surface plates in spherical shell models of mantle convection, Journal of Geophysical Research, 105 (B5), p 11,063-11,082 (2000).

  5. Fitting and Interpreting Occupancy Models

    PubMed Central

    Welsh, Alan H.; Lindenmayer, David B.; Donnelly, Christine F.

    2013-01-01

    We show that occupancy models are more difficult to fit than is generally appreciated because the estimating equations often have multiple solutions, including boundary estimates which produce fitted probabilities of zero or one. The estimates are unstable when the data are sparse, making them difficult to interpret, and, even in ideal situations, highly variable. As a consequence, making accurate inference is difficult. When abundance varies over sites (which is the general rule in ecology because we expect spatial variance in abundance) and detection depends on abundance, the standard analysis suffers bias (attenuation in detection, biased estimates of occupancy and potentially finding misleading relationships between occupancy and other covariates), asymmetric sampling distributions, and slow convergence of the sampling distributions to normality. The key result of this paper is that the biases are of similar magnitude to those obtained when we ignore non-detection entirely. The fact that abundance is subject to detection error, and hence is not directly observable, means that we cannot tell when bias is present (or, equivalently, how large it is) and we cannot adjust for it. This implies that we cannot tell which fit is better: the fit from the occupancy model or the fit ignoring the possibility of detection error. Therefore, trying to adjust occupancy models for non-detection can be as misleading as ignoring non-detection completely. Ignoring non-detection can actually be better than trying to adjust for it. PMID:23326323
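    For readers who want to see where the boundary estimates come from, the sketch below (toy code of my own, not the authors') simulates single-season detection histories and maximises the standard occupancy-detection likelihood, with psi the occupancy probability, p the per-visit detection probability, and K repeat visits; all parameter values are illustrative.

```python
# A minimal sketch of the single-season occupancy likelihood discussed above.
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb, expit

def neg_log_lik(theta, y, K):
    psi, p = expit(theta)                                 # map unconstrained params to (0, 1)
    lik = np.where(
        y > 0,
        psi * comb(K, y) * p**y * (1 - p)**(K - y),       # detected at least once
        psi * (1 - p)**K + (1 - psi),                     # never detected: missed or truly absent
    )
    return -np.sum(np.log(lik))

rng = np.random.default_rng(1)
n_sites, K, psi_true, p_true = 100, 4, 0.6, 0.3
z = rng.random(n_sites) < psi_true                        # latent occupancy state
y = rng.binomial(K, p_true, size=n_sites) * z             # detections only where occupied

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y, K))
print("estimated (psi, p):", expit(fit.x))                # sparse data can drive these to 0 or 1
```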

  6. Measured, modeled, and causal conceptions of fitness

    PubMed Central

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  7. Total force fitness: the military family fitness model.

    PubMed

    Bowles, Stephen V; Pollock, Liz Davenport; Moore, Monique; Wadsworth, Shelley MacDermid; Cato, Colanda; Dekle, Judith Ward; Meyer, Sonia Wei; Shriver, Amber; Mueller, Bill; Stephens, Mark; Seidler, Dustin A; Sheldon, Joseph; Picano, James; Finch, Wanda; Morales, Ricardo; Blochberger, Sean; Kleiman, Matthew E; Thompson, Daniel; Bates, Mark J

    2015-03-01

    The military lifestyle can create formidable challenges for military families. This article describes the Military Family Fitness Model (MFFM), a comprehensive model aimed at enhancing family fitness and resilience across the life span. This model is intended for use by Service members, their families, leaders, and health care providers but also has broader applications for all families. The MFFM has three core components: (1) family demands, (2) resources (including individual resources, family resources, and external resources), and (3) family outcomes (including related metrics). The MFFM proposes that resources from the individual, family, and external areas promote fitness, bolster resilience, and foster well-being for the family. The MFFM highlights each resource level for the purpose of improving family fitness and resilience over time. The MFFM both builds on existing family strengths and encourages the development of new family strengths through resource-acquiring behaviors. The purpose of this article is to (1) expand the military's Total Force Fitness (TFF) intent as it relates to families and (2) offer a family fitness model. This article will summarize relevant evidence, provide supportive theory, describe the model, and proffer metrics that support the dimensions of this model.

  8. Total force fitness: the military family fitness model.

    PubMed

    Bowles, Stephen V; Pollock, Liz Davenport; Moore, Monique; Wadsworth, Shelley MacDermid; Cato, Colanda; Dekle, Judith Ward; Meyer, Sonia Wei; Shriver, Amber; Mueller, Bill; Stephens, Mark; Seidler, Dustin A; Sheldon, Joseph; Picano, James; Finch, Wanda; Morales, Ricardo; Blochberger, Sean; Kleiman, Matthew E; Thompson, Daniel; Bates, Mark J

    2015-03-01

    The military lifestyle can create formidable challenges for military families. This article describes the Military Family Fitness Model (MFFM), a comprehensive model aimed at enhancing family fitness and resilience across the life span. This model is intended for use by Service members, their families, leaders, and health care providers but also has broader applications for all families. The MFFM has three core components: (1) family demands, (2) resources (including individual resources, family resources, and external resources), and (3) family outcomes (including related metrics). The MFFM proposes that resources from the individual, family, and external areas promote fitness, bolster resilience, and foster well-being for the family. The MFFM highlights each resource level for the purpose of improving family fitness and resilience over time. The MFFM both builds on existing family strengths and encourages the development of new family strengths through resource-acquiring behaviors. The purpose of this article is to (1) expand the military's Total Force Fitness (TFF) intent as it relates to families and (2) offer a family fitness model. This article will summarize relevant evidence, provide supportive theory, describe the model, and proffer metrics that support the dimensions of this model. PMID:25735013

  9. Scaled models, scaled frequencies, and model fitting

    NASA Astrophysics Data System (ADS)

    Roxburgh, Ian W.

    2015-12-01

    I show that given a model star of mass M, radius R, and density profile ρ(x) [x = r/R], there exists a two-parameter family of models with masses M_k, radii R_k, density profile ρ_k(x) = λρ(x), and frequencies ν_knℓ = λ^(1/2) ν_nℓ, where λ and R_k/R are scaling factors. These models have different internal structures, but all have the same values of the separation ratios calculated at given radial orders n, and all exactly satisfy a frequency-matching algorithm with an offset function determined as part of the fitting procedure. But they satisfy neither ratio matching at given frequencies nor phase-shift matching. This illustrates that erroneous results may be obtained when model fitting with ratios at given n values or with frequency matching. I give examples from scaled models and from non-scaled evolutionary models.
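    A quick numerical check of the invariance claim, using toy asymptotic-like frequencies of my own devising rather than Roxburgh's models: multiplying every frequency by λ^(1/2) changes the frequencies and large separations but leaves the r02 separation ratios taken at the same radial order n unchanged.

```python
# Toy check: separation ratios at fixed radial order n are invariant under
# an overall frequency rescaling nu -> lambda**0.5 * nu.
import numpy as np

def toy_freqs(dnu=135.0, eps=1.45, n=np.arange(15, 25)):
    # crude asymptotic-like frequencies for l = 0, 1, 2 (illustration only)
    nu0 = dnu * (n + 0.0 / 2 + eps)
    nu1 = dnu * (n + 1.0 / 2 + eps) - 3.0
    nu2 = dnu * (n + 2.0 / 2 + eps) - 10.0
    return n, nu0, nu1, nu2

def ratios_r02(nu0, nu1, nu2):
    # r02(n) = [nu_{n,0} - nu_{n-1,2}] / [nu_{n,1} - nu_{n-1,1}]
    return (nu0[1:] - nu2[:-1]) / (nu1[1:] - nu1[:-1])

n, nu0, nu1, nu2 = toy_freqs()
lam = 1.3
scaled = [lam**0.5 * x for x in (nu0, nu1, nu2)]
print(np.allclose(ratios_r02(nu0, nu1, nu2), ratios_r02(*scaled)))  # True
```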

  10. Coaches as Fitness Role Models

    ERIC Educational Resources Information Center

    Nichols, Randall; Zillifro, Traci D.; Nichols, Ronald; Hull, Ethan E.

    2012-01-01

    The lack of physical activity, low fitness levels, and elevated obesity rates as high as 32% of today's youth are well documented. Many strategies and grants have been developed at the national, regional, and local levels to help counteract these current trends. Strategies have been developed and implemented for schools, households (parents), and…

  11. Are population pharmacokinetic and/or pharmacodynamic models adequately evaluated? A survey of the literature from 2002 to 2004

    PubMed Central

    Brendel, Karl; Dartois, Céline; Comets, Emmanuelle; Lemenuel-Diot, Annabelle; Laveille, Christian; Tranchand, Brigitte; Girard, Pascal; Laffont, Céline M.; Mentré, France

    2007-01-01

    Purpose Model evaluation is an important issue in population analyses. We aimed to perform a systematic review of all population PK and/or PD analyses published between 2002 and 2004 to survey the current methods used to evaluate a model and to assess whether those models were adequately evaluated. Methods We selected 324 papers in MEDLINE using defined keywords and built a data abstraction form (DAF) composed of a checklist of items to extract the relevant information from these articles with respect to model evaluation. In the DAF, evaluation methods were divided into 3 subsections: basic internal methods (goodness-of-fit plots [GOF], uncertainty in parameter estimates and model sensitivity), advanced internal methods (data splitting, resampling techniques and Monte Carlo simulations) and external model evaluation. Results Basic internal evaluation was the most frequently described method in the reports: 65% of the models involved GOF evaluation. Standard errors or confidence intervals were reported for 50% of fixed effects but only 22% of random effects. Advanced internal methods were used in approximately 25% of models: data splitting was more often used than bootstrap and cross-validation; simulations were used in 6% of models to evaluate models by visual predictive check or by posterior predictive check. External evaluation was performed in only 7% of models. Conclusions Using the subjective synthesis of model evaluation for each paper, we judged models to be adequately evaluated in 28% of PK models and 26% of PD models. Basic internal evaluation was preferred to more advanced methods, probably because the former are performed easily with most software. We also noticed that when the aim of modelling was predictive, advanced internal methods or more stringent methods were more often used. PMID:17328581

  12. Sensitivity of Fit Indices to Model Misspecification and Model Types

    ERIC Educational Resources Information Center

    Fan, Xitao; Sivo, Stephen A.

    2007-01-01

    The search for cut-off criteria of fit indices for model fit evaluation (e.g., Hu & Bentler, 1999) assumes that these fit indices are sensitive to model misspecification, but not to different types of models. If fit indices were sensitive to different types of models that are misspecified to the same degree, it would be very difficult to establish…

  13. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  14. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
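    The fitting in the paper is done with the Brian-based toolbox; purely as a stand-alone illustration of the idea, the sketch below simulates a leaky integrate-and-fire neuron driven by an injected current and grid-searches two of its parameters to best reproduce a set of "recorded" spike times. The neuron model, parameter values, and coincidence metric are simplified assumptions of mine, not the toolbox's implementation.

```python
# Generic illustration of fitting a spiking model to spike times (not Brian).
import numpy as np

def lif_spikes(I, dt=1e-4, tau=0.02, R=5e7, v_rest=-0.07, v_th=-0.05, v_reset=-0.07):
    v, spikes = v_rest, []
    for k, i_k in enumerate(I):
        v += dt / tau * (v_rest - v + R * i_k)     # leaky integrate-and-fire update
        if v >= v_th:
            v = v_reset
            spikes.append(k * dt)
    return np.array(spikes)

def coincidence(model, target, window=4e-3):
    # fraction of target spikes reproduced within +/- window (crude metric)
    if len(target) == 0:
        return 0.0
    return np.mean([np.any(np.abs(model - t) < window) if len(model) else False
                    for t in target])

rng = np.random.default_rng(0)
I = 4e-10 + 1e-10 * rng.standard_normal(20000)     # noisy injected current (A)
target = lif_spikes(I, tau=0.025, R=6e7)           # stands in for a recorded cell

best = max((coincidence(lif_spikes(I, tau=tau, R=R), target), tau, R)
           for tau in (0.015, 0.02, 0.025, 0.03)
           for R in (4e7, 5e7, 6e7, 7e7))
print("best score, tau, R:", best)
```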

  15. Contrast Gain Control Model Fits Masking Data

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
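    As a rough sketch of the divisive gain-control form described above (the exact implementation is not given in the abstract, so the expression below is an assumed textbook form): excitatory exponent 2.4, inhibitory exponent 2, broad divisive pooling over channels, and Minkowski pooling with exponent 4 to produce a single detection variable. The filter outputs are invented numbers.

```python
# Assumed divisive gain-control response followed by Minkowski pooling.
import numpy as np

def gain_control_response(filter_outputs, p=2.4, q=2.0, sigma=0.01, weights=None):
    c = np.abs(np.asarray(filter_outputs))
    w = np.ones_like(c) if weights is None else weights
    excitation = c ** p
    inhibition = sigma ** q + np.sum(w * c ** q)    # broad pooling over channels
    return excitation / inhibition

def detection_variable(responses, beta=4.0):
    # Minkowski pooling across channels
    return np.sum(np.abs(responses) ** beta) ** (1.0 / beta)

outputs = [0.02, 0.05, 0.01, 0.0]                   # toy Gabor-filter outputs
print(detection_variable(gain_control_response(outputs)))
```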

  16. Fitting neuron models to spike trains.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925

  17. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  18. Fitness

    MedlinePlus

    Want to look and feel your best? Physical ... are? Check out this info: What is physical fitness? Physical fitness means you can do everyday ... (source: http://www.girlshealth.gov/)

  19. A predictive fitness model for influenza

    NASA Astrophysics Data System (ADS)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
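    The frequency-propagation step can be sketched in a few lines (the notation and coefficient values are my assumptions, not the authors' code): each strain's predicted frequency for the next season is its current frequency reweighted by exp(fitness), with fitness the difference between an adaptive epitope term and a deleterious non-epitope term.

```python
# Sketch of frequency propagation under a strain fitness model.
import numpy as np

def predict_frequencies(freq, epitope_changes, non_epitope_changes,
                        s_epi=0.5, s_del=0.3):
    f = s_epi * np.asarray(epitope_changes) - s_del * np.asarray(non_epitope_changes)
    w = np.asarray(freq) * np.exp(f)
    return w / w.sum()                              # renormalise to frequencies

freq_now = [0.50, 0.30, 0.20]                       # circulating strain frequencies (toy)
print(predict_frequencies(freq_now, epitope_changes=[1, 3, 0],
                          non_epitope_changes=[2, 1, 1]))
```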

  20. Raindrop size distribution: Fitting performance of common theoretical models

    NASA Astrophysics Data System (ADS)

    Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.

    2016-10-01

    Modelling raindrop size distribution (DSD) is a fundamental issue to connect remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask indeed are how often and how well such models fit empirical data, given that the advances in both data availability and technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the comparison of statistical fitting of three common DSD models against 2D-Video Disdrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. By the maximum likelihood method, we fit models based on lognormal, gamma and Weibull distributions to more than 42,000 1-minute drop-by-drop data taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. In order to check the adequacy between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov test. Then, we apply a specific model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking for each minute the statistical models that pass the KS test, it can be argued that the probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, seem to be adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that frequency distributions of empirical DSDs could be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
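    A minimal sketch of the fitting-and-testing step (with synthetic drop sizes, not the 2DVD campaign data): maximum-likelihood fits of the gamma, lognormal and Weibull distributions to one minute of drop diameters, each followed by a Kolmogorov-Smirnov test.

```python
# ML fits of three candidate DSD models plus KS goodness-of-fit checks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
diameters = rng.gamma(shape=3.0, scale=0.5, size=800)    # stand-in for 1-min drop sizes (mm)

candidates = {
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(diameters, floc=0)                 # ML fit with location fixed at 0
    ks_stat, p_value = stats.kstest(diameters, dist.cdf, args=params)
    print(f"{name:9s}  KS={ks_stat:.3f}  p={p_value:.3f}")
# Note: reusing the same data for estimation and the KS test makes the test
# conservative; the paper's exact procedure should be consulted for details.
```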

  1. Modeling and Fitting Exoplanet Transit Light Curves

    NASA Astrophysics Data System (ADS)

    Millholland, Sarah; Ruch, G. T.

    2013-01-01

    We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model is unique in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the
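    One small piece of the geometry can be written compactly; the sketch below (assuming a circular orbit, a simplification the poster's model explicitly avoids) gives the sky-projected star-planet separation in stellar radii, from which the in-transit phases follow. All parameter values are invented.

```python
# Projected separation for a circular orbit; in transit when z < 1 + Rp/Rstar.
import numpy as np

def projected_separation(t, t0, period, a_over_rstar, inclination_deg):
    phase = 2.0 * np.pi * (t - t0) / period
    inc = np.radians(inclination_deg)
    # sky-projected distance between the centres of star and planet
    return a_over_rstar * np.sqrt(np.sin(phase) ** 2 +
                                  (np.cos(inc) * np.cos(phase)) ** 2)

t = np.linspace(-0.1, 0.1, 5)                            # days around mid-transit
z = projected_separation(t, t0=0.0, period=3.5, a_over_rstar=8.0, inclination_deg=89.0)
print(z < 1.0 + 0.1)                                     # assuming Rp/Rstar = 0.1
```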

  2. Degeneracy and discreteness in cosmological model fitting

    NASA Astrophysics Data System (ADS)

    Teng, Huan-Yu; Huang, Yuan; Zhang, Tong-Jie

    2016-03-01

    We explore the problems of degeneracy and discreteness in the standard cosmological model (ΛCDM), using the Observational Hubble Data (OHD) and type Ia supernovae (SNe Ia) data to study this issue. To describe discreteness in the fitting of the data, we define a factor G that tests the influence of each single data point, and we analyze how well G performs. Our results indicate that a higher absolute value of G corresponds to a better capability of distinguishing models, meaning that the parameters are restricted to smaller confidence intervals and a larger figure of merit. Consequently, we claim that the factor G is an effective means of model differentiation when different models are fitted to the observational data.

  3. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  4. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).
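    To make the idea concrete, the toy simulation below (not the authors' Kittiwake analysis; all numbers are invented) draws correlated individual random effects on survival and fertility and summarises the resulting population distribution of a simple individual-fitness proxy.

```python
# Toy hierarchical simulation of individual heterogeneity in vital rates.
import numpy as np

rng = np.random.default_rng(3)
n = 500
# correlated individual random effects on logit(survival) and log(fertility)
cov = [[0.30, 0.10],
       [0.10, 0.20]]
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
survival = 1.0 / (1.0 + np.exp(-(1.2 + eps[:, 0])))   # latent annual survival probability
fertility = np.exp(-0.3 + eps[:, 1])                  # latent mean offspring per year

fitness_proxy = fertility / (1.0 - survival)          # expected lifetime output (toy summary)
print("mean, 5th and 95th percentiles of the individual fitness proxy:")
print(np.mean(fitness_proxy), np.percentile(fitness_proxy, [5, 95]))
```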

  5. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  6. Seeing Perfectly Fitting Factor Models That Are Causally Misspecified: Understanding That Close-Fitting Models Can Be Worse

    ERIC Educational Resources Information Center

    Hayduk, Leslie

    2014-01-01

    Researchers using factor analysis tend to dismiss the significant ill fit of factor models by presuming that if their factor model is close-to-fitting, it is probably close to being properly causally specified. Close fit may indeed result from a model being close to properly causally specified, but close-fitting factor models can also be seriously…

  7. Determining Adequate Yearly Progress in a State Performance or Proficiency Index Model

    ERIC Educational Resources Information Center

    Erpenbach, William J.

    2009-01-01

    The purpose of this paper is to present an overview regarding how several states use a performance or proficiency index in their determination of adequate yearly progress (AYP) under the No Child Left Behind Act of 2001 (NCLB). Typically, indexes are based on one of two weighting schemes: (1) either they weight academic performance levels--also…

  8. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  9. Two algorithms for fitting constrained marginal models

    PubMed Central

    Evans, R.J.; Forcina, A.

    2013-01-01

    The two main algorithms that have been considered for fitting constrained marginal models to discrete data, one based on Lagrange multipliers and the other on a regression model, are studied in detail. It is shown that the updates produced by the two methods are identical, but that the Lagrangian method is more efficient in the case of identically distributed observations. A generalization is given of the regression algorithm for modelling the effect of exogenous individual-level covariates, a context in which the use of the Lagrangian algorithm would be infeasible for even moderate sample sizes. An extension of the method to likelihood-based estimation under L1-penalties is also considered. PMID:23794772

  10. Fitting and Modeling of AXAF Data with the ASC Fitting Application

    NASA Astrophysics Data System (ADS)

    Doe, S.; Ljungberg, M.; Siemiginowska, A.; Joye, W.

    The AXAF mission will provide X-ray data with unprecedented spatial and spectral resolution. Because of the high quality of these data, the AXAF Science Center will provide a new data analysis system--including a new fitting application. Our intent is to enable users to do fitting that is too awkward with, or beyond, the scope of existing astronomical fitting software. Our main goals are: 1) to take advantage of the full capabilities of the AXAF, we intend to provide a more sophisticated modeling capability (i.e., models that are $f(x,y,E,t)$, models to simulate the response of AXAF instruments, and models that enable ``joint-mode'' fitting, i.e., combined spatial-spectral or spectral-temporal fitting); and 2) to provide users with a wide variety of models, optimization methods, and fit statistics. In this paper, we discuss the use of an object-oriented approach in our implementation, the current features of the fitting application, and the features scheduled to be added in the coming year of development. Current features include: an interactive, command-line interface; a modeling language, which allows users to build models from arithmetic combinations of base functions; a suite of optimization and fit statistics; the ability to perform fits to multiple data sets simultaneously; and, an interface with SM and SAOtng to plot or image data, models, and/or residuals from a fit. We currently provide a modeling capability in one or two dimensions, and have recently made an effort to perform spectral fitting in a manner similar to XSPEC. We also allow users to dynamically link the fitting application to their own algorithms. Our goals for the coming year include incorporating the XSPEC model library as a subset of models available in the application, enabling ``joint-mode'' analysis and adding support for new algorithms.

  11. The best-fit universe. [cosmological models

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1991-01-01

    Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and most well motivated model, the current data point to a best-fit model with the following parameters: Omega(sub B) approximately equal to 0.03, Omega(sub CDM) approximately equal to 0.17, Omega(sub Lambda) approximately equal to 0.8, and H(sub 0) approximately equal to 70 km/(sec x Mpc), which significantly improves the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule out such a value.

  12. Adopting adequate leaching requirement for practical response models of basil to salinity

    NASA Astrophysics Data System (ADS)

    Babazadeh, Hossein; Tabrizi, Mahdi Sarai; Darvishi, Hossein Hassanpour

    2016-07-01

    Several mathematical models are being used for assessing plant response to salinity of the root zone. Objectives of this study included quantifying the yield salinity threshold value of basil plants with respect to irrigation water salinity and investigating the possibility of using irrigation water salinity instead of saturated extract salinity in the available mathematical models for estimating yield. To achieve these objectives, an extensive greenhouse experiment was conducted with 13 irrigation water salinity levels, namely 1.175 dS m-1 (control treatment) and 1.8 to 10 dS m-1. The results indicated that, among these models, the modified discount model (one of the best-known statistically based root water uptake models) produced more accurate results in simulating the basil yield reduction function using irrigation water salinities. Overall, the statistical model of Steppuhn et al. (the modified discount model) and the math-empirical model of van Genuchten and Hoffman provided the best results. In general, all of the statistical models produced very similar results, and their results were better than those of the math-empirical models. It was also concluded that if enough leaching was present, there was no significant difference between the soil salinity saturated extract models and the models using irrigation water salinity.
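    For orientation, the sketch below evaluates two salinity-response forms with made-up parameter values (they are not the paper's fitted estimates): a generic threshold-slope form of the Maas-Hoffman type, which is not named in the abstract but expresses the "yield salinity threshold" idea, and the S-shaped van Genuchten-Hoffman form, both written directly against irrigation-water salinity as the paper proposes.

```python
# Illustrative salinity-response functions (invented parameter values).
import numpy as np

def threshold_slope(ec, ec_threshold=1.8, slope=0.08):
    # relative yield: 1 up to a threshold, then a linear decline
    return np.clip(1.0 - slope * np.maximum(ec - ec_threshold, 0.0), 0.0, 1.0)

def van_genuchten_hoffman(ec, ec50=5.0, p=3.0):
    # S-shaped decline; ec50 is the salinity giving 50% relative yield
    return 1.0 / (1.0 + (ec / ec50) ** p)

ec_w = np.array([1.175, 1.8, 4.0, 6.0, 10.0])       # dS/m, as in the treatments
print(threshold_slope(ec_w))
print(van_genuchten_hoffman(ec_w))
```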

  13. Goodness-of-Fit Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  14. Adequate model complexity for scenario analysis of VOC stripping in a trickling filter.

    PubMed

    Vanhooren, H; Verbrugge, T; Boeije, G; Demey, D; Vanrolleghem, P A

    2001-01-01

    Two models describing the stripping of volatile organic contaminants (VOCs) in an industrial trickling filter system are developed. The aim of the models is to investigate the effect of different operating conditions (VOC loads and air flow rates) on the efficiency of VOC stripping and the resulting concentrations in the gas and liquid phases. The first model uses the same principles as the steady-state non-equilibrium activated sludge model Simple Treat, in combination with an existing biofilm model. The second model is a simple mass balance based model only incorporating air and liquid and thus neglecting biofilm effects. In a first approach, the first model was incorporated in a five-layer hydrodynamic model of the trickling filter, using the carrier material design specifications for porosity, water hold-up and specific surface area. A tracer test with lithium was used to validate this approach, and the gas mixing in the filters was studied using continuous CO2 and O2 measurements. With the tracer test results, the biodegradation model was adapted, and it became clear that biodegradation and adsorption to solids can be neglected. On this basis, a simple dynamic mass balance model was built. Simulations with this model reveal that changing the air flow rate in the trickling filter system has little effect on the VOC stripping efficiency at steady state. However, immediately after an air flow rate change, quite high flux and concentration peaks of VOCs can be expected. These phenomena are of major importance for the design of an off-gas treatment facility. PMID:11385860
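    The second, mass-balance-only model lends itself to a compact sketch; the version below is a generic well-mixed air/liquid balance with Henry's-law-driven transfer and invented parameter values, not the calibrated trickling-filter model from the paper.

```python
# Generic two-phase VOC stripping mass balance (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

def voc_balance(t, y, Ql=10.0, Qg=50.0, Vl=5.0, Vg=2.0,
                kLa=8.0, H=0.4, C_in=2.0):
    Cl, Cg = y                                      # liquid and gas VOC concentrations (g/m3)
    transfer = kLa * Vl * (Cl - Cg / H)             # g/h transferred from liquid to gas
    dCl = (Ql * (C_in - Cl) - transfer) / Vl
    dCg = (-Qg * Cg + transfer) / Vg
    return [dCl, dCg]

sol = solve_ivp(voc_balance, (0.0, 5.0), [0.0, 0.0])
Cl_ss, Cg_ss = sol.y[:, -1]
print(f"near-steady state: liquid {Cl_ss:.2f}, gas {Cg_ss:.2f} g/m3")
# Changing Qg mainly dilutes the gas phase; as the abstract notes, the effect on
# steady-state stripping efficiency can be small while transients change a lot.
```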

  15. Effectiveness of the Sport Education Fitness Model on Fitness Levels, Knowledge, and Physical Activity

    ERIC Educational Resources Information Center

    Pritchard, Tony; Hansen, Andrew; Scarboro, Shot; Melnic, Irina

    2015-01-01

    The purpose of this study was to investigate changes in fitness levels, content knowledge, physical activity levels, and participants' perceptions following the implementation of the sport education fitness model (SEFM) at a high school. Thirty-two high school students participated in 20 lessons using the SEFM. Aerobic capacity, muscular…

  16. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher's Geometric Model?

    PubMed

    Blanquart, François; Bataillon, Thomas

    2016-06-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher's model was able to explain several statistical properties of the landscapes (including the mean and SD of selection and epistasis coefficients), it was often unable to explain the full structure of fitness landscapes.
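    The ABC machinery itself is easy to caricature; the toy rejection sampler below (with an invented simulator, prior, summary statistics and tolerance, not the authors' framework) shows the accept/reject logic used to fit landscape-model parameters to observed summaries.

```python
# Schematic ABC-rejection loop for fitting one landscape-model parameter.
import numpy as np

rng = np.random.default_rng(4)

def simulate_summaries(n_mut, sd_effect):
    # stand-in simulator: mean and SD of selection coefficients of single mutants
    s = rng.normal(-0.01, sd_effect, size=n_mut)
    return np.array([s.mean(), s.std()])

observed = np.array([-0.012, 0.030])                # pretend empirical summaries
accepted = []
for _ in range(20000):
    sd_effect = rng.uniform(0.001, 0.1)             # prior on the one free parameter
    if np.linalg.norm(simulate_summaries(50, sd_effect) - observed) < 0.01:
        accepted.append(sd_effect)

print(f"{len(accepted)} draws accepted; posterior mean of sd_effect: "
      f"{np.mean(accepted) if accepted else float('nan'):.4f}")
```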

  17. Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Obreschkow, D.

    2015-09-01

    Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (http://github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (http://hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper, we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases, the hyper-fit solutions are in good agreement with published values, but uncover more information regarding the fitted model.
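    The likelihood being maximised can be sketched directly (my notation, not the hyper.fit R code): for a plane n·x = c with intrinsic Gaussian scatter sigma along its normal, each point contributes a Gaussian term whose variance is its own covariance matrix projected onto the normal plus sigma squared.

```python
# Log-likelihood of a plane with intrinsic scatter and per-point covariances.
import numpy as np

def log_likelihood(normal, offset, sigma_int, points, covariances):
    n = np.asarray(normal) / np.linalg.norm(normal)        # unit normal to the plane
    residuals = points @ n - offset                        # signed orthogonal distances
    variances = np.einsum("i,kij,j->k", n, covariances, n) + sigma_int**2
    return -0.5 * np.sum(residuals**2 / variances + np.log(2 * np.pi * variances))

rng = np.random.default_rng(5)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=200)        # toy 2D relation
pts = np.column_stack([x, y])
covs = np.repeat(np.diag([0.05**2, 0.10**2])[None], 200, axis=0)   # per-point errors

# the line y = 2x + 1 corresponds to the plane (-2, 1).(x, y) = 1
print(log_likelihood([-2.0, 1.0], offset=1.0 / np.sqrt(5), sigma_int=0.15,
                     points=pts, covariances=covs))
```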

  18. Anatomical features for the adequate choice of experimental animal models in biomedicine: I. Fishes.

    PubMed

    D'Angelo, Livia; Lossi, Laura; Merighi, Adalberto; de Girolamo, Paolo

    2016-05-01

    Fish constitute the oldest and most diverse class of vertebrates, and are widely used in basic research due to a number of advantages (e.g., rapid development ex-utero, large-scale genetic screening of human disease). They represent excellent experimental models for addressing studies on development, morphology, physiology and behavior function in other related species, as well as informative analysis of conservation and diversity. Although less complex, fish share many anatomical and physiological features with mammals, including humans, which make them an important complement to research in mammalian models. In this review we describe and compare the most relevant anatomical features of the most used teleostean species in research, to be taken into consideration when selecting an animal model: zebrafish (Danio rerio), medaka (Oryzias latipes), the turquoise killifish (Nothobranchius furzeri), and goldfish (Carassius auratus). Zebrafish and medaka are the mainstream models for genetic manipulability and studies on developmental biology; the turquoise killifish is an excellent model for aging research; goldfish has been largely employed for neuroendocrine studies. PMID:26925824

  19. Anatomical features for the adequate choice of experimental animal models in biomedicine: I. Fishes.

    PubMed

    D'Angelo, Livia; Lossi, Laura; Merighi, Adalberto; de Girolamo, Paolo

    2016-05-01

    Fish constitute the oldest and most diverse class of vertebrates, and are widely used in basic research due to a number of advantages (e.g., rapid development ex-utero, large-scale genetic screening of human disease). They represent excellent experimental models for addressing studies on development, morphology, physiology and behavior function in other related species, as well as informative analysis of conservation and diversity. Although less complex, fish share many anatomical and physiological features with mammals, including humans, which make them an important complement to research in mammalian models. In this review we describe and compare the most relevant anatomical features of the most used teleostean species in research, to be taken into consideration when selecting an animal model: zebrafish (Danio rerio), medaka (Oryzias latipes), the turquoise killifish (Nothobranchius furzeri), and goldfish (Carassius auratus). Zebrafish and medaka are the mainstream models for genetic manipulability and studies on developmental biology; the turquoise killifish is an excellent model for aging research; goldfish has been largely employed for neuroendocrine studies.

  20. Operational Realities: Obtaining adequate drivers and inputs for radiation belt models

    NASA Astrophysics Data System (ADS)

    Friedel, R. H. W.; Chen, Y.; Tu, W.; Cunningham, G.; Reeves, G. D.; Lichtenberger, J.

    2014-12-01

    Recent developments in 3D diffusion codes for the high energy electron radiation belt have shown that the model representation of microphysical processes in terms of diffusion coefficients, capturing radial, energy and pitch-angle diffusion (including mixed diffusion terms), is quite capable of capturing the dynamics and physics of the radiation belt system, while remaining computationally tractable; making these codes ideal candidates for operational application. However, we hold that the major obstacle to a realistic application of such codes for nowcasting or forecasting is our insufficient knowledge of drivers and inputs to these codes - rather than any additional improved physics in the codes. These include the specification of the initial conditions, knowledge of the background plasma distribution, the global distribution of waves, the low-energy boundary condition and the outer boundary condition. In this talk we will discuss realistic and affordable strategies of specifying these inputs through the use of proxies, ground based measurement techniques and data assimilative methods; present examples of where this is already possible (outer boundary and global chorus wave and plasma density specification), and outline where additional effort is needed. Finally we present an example of using such realistic model drivers in a state-of-the-art 3D diffusion code which demonstrates a remarkable ability of such codes to reproduce the observed dynamics - by simply using the existing physics in the code but providing the "correct" drivers and boundary conditions.

  1. F-specific RNA bacteriophages are adequate model organisms for enteric viruses in fresh water.

    PubMed Central

    Havelaar, A H; van Olphen, M; Drost, Y C

    1993-01-01

    Culturable enteroviruses were detected by applying concentration techniques and by inoculating the concentrates on the BGM cell line. Samples were obtained from a wide variety of environments, including raw sewage, secondary effluent, coagulated effluent, chlorinated and UV-irradiated effluents, river water, coagulated river water, and lake water. The virus concentrations varied widely between 0.001 and 570/liter. The same cell line also supported growth of reoviruses, which were abundant in winter (up to 95% of the viruses detected) and scarce in summer (less than 15%). The concentrations of three groups of model organisms in relation to virus concentrations were also studied. The concentrations of bacteria (thermotolerant coliforms and fecal streptococci) were significantly correlated with virus concentrations in river water and coagulated secondary effluent, but were relatively low in disinfected effluents and relatively high in surface water open to nonhuman fecal pollution. The concentrations of F-specific RNA bacteriophages (FRNA phages) were highly correlated with virus concentrations in all environments studied except raw and biologically treated sewage. Numerical relationships were consistent over the whole range of environments; the regression equations for FRNA phages on viruses in river water and lake water were statistically equivalent. These relationships support the possibility that enteric virus concentrations can be predicted from FRNA phage data. PMID:8215367
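    The practical use of the relationship is a regression of virus on phage concentrations; the sketch below uses invented numbers (not the paper's data) to show a log-log fit and a prediction of enteric-virus levels from a new FRNA phage count.

```python
# Log-log regression of virus concentration on FRNA phage concentration.
import numpy as np
from scipy import stats

phage = np.array([1e2, 5e2, 1e3, 5e3, 1e4, 5e4])      # pfu/L (made-up values)
virus = np.array([0.02, 0.08, 0.2, 0.9, 1.5, 8.0])    # infectious units/L (made-up values)

res = stats.linregress(np.log10(phage), np.log10(virus))
print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, r={res.rvalue:.2f}")

new_phage = 2e3
predicted_virus = 10 ** (res.intercept + res.slope * np.log10(new_phage))
print(f"predicted virus concentration at {new_phage:.0f} phage/L: {predicted_virus:.2f}/L")
```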

  2. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  3. Goodness of Model-Data Fit and Invariant Measurement

    ERIC Educational Resources Information Center

    Engelhard, George, Jr.; Perkins, Aminah

    2013-01-01

    In this commentary, Englehard and Perkins remark that Maydeu-Olivares has presented a framework for evaluating the goodness of model-data fit for item response theory (IRT) models and correctly points out that overall goodness-of-fit evaluations of IRT models and data are not generally explored within most applications in educational and…

  4. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  5. HDFITS: Porting the FITS data model to HDF5

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.
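    The mapping described can be sketched with standard libraries; the function below is a simplified illustration, not the fits2hdf tool, and the file names are hypothetical. It copies each FITS HDU's data into an HDF5 dataset and its header cards into HDF5 attributes, preserving a FITS-like layout.

```python
# Simplified FITS-to-HDF5 conversion in the spirit of the HDFITS layout.
from astropy.io import fits
import h5py

def fits_to_hdf5(fits_path, hdf5_path):
    with fits.open(fits_path) as hdulist, h5py.File(hdf5_path, "w") as h5:
        for i, hdu in enumerate(hdulist):
            name = hdu.name if hdu.name else f"HDU{i}"
            if hdu.data is not None:
                node = h5.create_dataset(name, data=hdu.data, compression="gzip")
            else:
                node = h5.create_group(name)
            for card in hdu.header.cards:              # keyword, value, comment triples
                if card.keyword not in ("", "COMMENT", "HISTORY"):
                    node.attrs[card.keyword] = card.value

# fits_to_hdf5("example.fits", "example.h5")           # usage (paths are hypothetical)
```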

  6. Consequences of Fitting Nonidentified Latent Class Models

    ERIC Educational Resources Information Center

    Abar, Beau; Loken, Eric

    2012-01-01

    Latent class models are becoming more popular in behavioral research. When models with a large number of latent classes relative to the number of manifest indicators are estimated, researchers must consider the possibility that the model is not identified. It is not enough to determine that the model has positive degrees of freedom. A well-known…

  7. Fitting Value-Added Models in R

    ERIC Educational Resources Information Center

    Doran, Harold C.; Lockwood, J. R.

    2006-01-01

    Value-added models of student achievement have received widespread attention in light of the current test-based accountability movement. These models use longitudinal growth modeling techniques to identify effective schools or teachers based upon the results of changes in student achievement test scores. Given their increasing popularity, this…

  8. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

    This research examines the utility of the s-x[superscript 2] statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  9. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, Wim J.; Prinsen, Geert F.; Hoogewoud, Jacco H.; Veldhuizen, Ab A.; Hunink, Joachim; Ruijgh, Erik F. W.; Kroon, Timo

    2014-05-01

    Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm on crops) and possible compensation (e.g. volume of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), which is a national, integrated, hydrological model that simulates distribution, flow and storage of all water in the surface water and groundwater systems. The instrument is developed to assess the impact on water use on the land surface (sprinkling crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as waterboards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes in the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as a part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict effects and compensating measures of climate change both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase in demand, e.g. for surface water level control to prevent dike bursts, for flushing salt from ditches, for sprinkling of crops, for preserving wet nature, and so on. Many of the effects are dealt with by regional and local parties. Therefore, these parties have a large interest in the outcome of the scenario analyses. They are participating in the assessment of the NHI previous to the start of the analyses

  10. Involving regional expertise in nationwide modeling for adequate prediction of climate change effects on different demands for fresh water

    NASA Astrophysics Data System (ADS)

    de Lange, Wim; Prinsen, Geert.; Hoogewoud, Jacco; Veldhuizen, Ab; Ruijgh, Erik; Kroon, Timo

    2013-04-01

    Nationwide modeling aims to produce a balanced distribution of climate change effects (e.g. harm to crops) and possible compensation (e.g. volume of fresh water) based on consistent calculation. The present work is based on the Netherlands Hydrological Instrument (NHI, www.nhi.nu), a national, integrated, hydrological model that simulates distribution, flow and storage of all water in the surface water and groundwater systems. The instrument was developed to assess the impact on water use on the land surface (sprinkling of crops, drinking water) and in surface water (navigation, cooling). The regional expertise involved in the development of the NHI comes from all parties involved in the use, production and management of water, such as waterboards, drinking water supply companies, provinces, NGOs, and so on. Adequate prediction implies that the model computes changes of the order of magnitude that is relevant to the effects. In scenarios related to drought, adequate prediction applies to the water demand and the hydrological effects during average, dry, very dry and extremely dry periods. The NHI acts as part of the so-called Deltamodel (www.deltamodel.nl), which aims to predict the effects of climate change, and compensating measures, both on safety against flooding and on water shortage during drought. To assess the effects, a limited number of well-defined scenarios is used within the Deltamodel. The effects on the demand for fresh water consist of an increase of the demand, e.g. for surface water level control to prevent dike burst, for flushing salt in ditches, for sprinkling of crops, for preserving wet nature and so on. Many of the effects are dealt with by regional and local parties. Therefore, these parties have a large interest in the outcome of the scenario analyses. They are participating in the assessment of the NHI prior to the start of the analyses. Regional expertise is welcomed in the calibration phase of the NHI. It aims to reduce uncertainties by improving the

  11. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    PubMed Central

    Velasco, Jose; Pizarro, Daniel; Macias-Guarasa, Javier

    2012-01-01

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies. PMID:23202021

  12. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
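
    As a rough illustration of the kind of regression described above (single-mutation effects plus pairwise epistatic terms fitted to a sparse sample of fitness measurements), the sketch below builds a design matrix over binary genotypes and fits it by ordinary least squares. The number of sites, the simulated landscape, and the noise level are invented for illustration; this is not the RNA landscape or the sampling regimes analyzed in the paper.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        L, n = 6, 200                                # mutable sites and sampled genotypes (hypothetical)
        G = rng.integers(0, 2, size=(n, L))          # binary genotypes: 0 = wild type, 1 = mutant

        # Simulate a "true" landscape with additive and pairwise effects plus measurement noise.
        beta_main = rng.normal(0.0, 1.0, L)
        pairs = list(combinations(range(L), 2))
        beta_pair = rng.normal(0.0, 0.5, len(pairs))
        y = G @ beta_main
        for b, (i, j) in zip(beta_pair, pairs):
            y = y + b * G[:, i] * G[:, j]
        y = y + rng.normal(0.0, 0.2, n)

        # Regression model with an intercept, single-mutation effects, and all pairwise terms.
        X = np.column_stack([np.ones(n), G] + [G[:, i] * G[:, j] for i, j in pairs])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        yhat = X @ coef
        r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"in-sample R^2 of the pairwise regression: {r2:.3f}")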

  13. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.

  14. Relative and Absolute Fit Evaluation in Cognitive Diagnosis Modeling

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy; Zhang, Zao

    2013-01-01

    As with any psychometric model, the validity of inferences from cognitive diagnosis models (CDMs) determines the extent to which these models can be useful. For inferences from CDMs to be valid, it is crucial that the fit of the model to the data is ascertained. Based on a simulation study, this study investigated the sensitivity of various fit…

  15. Fitting ARMA Time Series by Structural Equation Models.

    ERIC Educational Resources Information Center

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
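
    The paper's point is that ARMA(p,q) models can be estimated with structural equation modeling software; that SEM specification is not reproduced here. As a hedged, generic illustration of what a maximum likelihood ARMA fit looks like in Python, the sketch below simulates an ARMA(1,1) series and fits it with statsmodels; the series length, orders, and coefficients are arbitrary choices for the example.

        import numpy as np
        from statsmodels.tsa.arima_process import ArmaProcess
        from statsmodels.tsa.arima.model import ARIMA

        # Simulate a stationary ARMA(1,1) series: (1 - 0.6 B) y_t = (1 + 0.3 B) e_t
        ar = np.array([1.0, -0.6])     # AR lag polynomial, ArmaProcess sign convention
        ma = np.array([1.0, 0.3])      # MA lag polynomial
        y = ArmaProcess(ar, ma).generate_sample(nsample=500)

        # Maximum-likelihood fit; order = (p, d, q) with d = 0 for a stationary ARMA(p, q).
        result = ARIMA(y, order=(1, 0, 1)).fit()
        print(result.summary())        # estimated AR, MA and innovation-variance parameters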

  16. A New Tradition To Fit the Model.

    ERIC Educational Resources Information Center

    Darnell, D. Roe; Rosenthal, Donna McCrohan

    2001-01-01

    Discusses Cerro Coso Community College in Ridgecrest (California), where 80-85% of all local jobs are with one employer, the China Lake Naval Air Weapons Station (NAWS). States that massive layoffs at NAWS inspired creative ways of rethinking the community college model at Cerro Coso, such as creating the nation's first computer graphics imagery…

  17. Transit Model Fitting in the Kepler Science Operations Center Pipeline

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, C. J.; Jenkins, J. M.; Quintana, E. V.; Rowe, J. F.; Seader, S. E.; Tenenbaum, P.; Twicken, J. D.

    2012-05-01

    We describe the algorithm and performance of the transit model fitting of the Kepler Science Operations Center (SOC) Pipeline. Light curves of long cadence targets are subjected to the Transiting Planet Search (TPS) component of the Kepler SOC Pipeline. Those targets for which a Threshold Crossing Event (TCE) is generated in the transit search are subsequently processed in the Data Validation (DV) component. The light curves may span one or more Kepler observing quarters, and data may not be available for any given target in all quarters. Transit model parameters are fitted in DV to transit-like signatures in the light curves of target stars with TCEs. The fitted parameters are used to generate a predicted light curve based on the transit model. The residual flux time series of the target star, with the predicted light curve removed, is fed back to TPS to search for additional TCEs. The iterative process of transit model fitting and transiting planet search continues until no TCE is generated from the residual flux time series or a planet candidate limit is reached. The transit model includes five parameters to be fitted: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. The initial values of the fit parameters are determined from the TCE values provided by TPS. A limb darkening model is included in the transit model to generate the predicted light curve. The transit model fitting results are used in the diagnostic tests in DV, such as the centroid motion test, eclipsing binary discrimination tests, etc., which helps to validate planet candidates and identify false positive detections. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
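
    A minimal sketch of the idea of fitting transit parameters to a light curve and removing the fitted signal, under strong simplifying assumptions: a smooth Gaussian-shaped dip stands in for the pipeline's limb-darkened geometric transit model, and the five DV parameters are reduced to an epoch, period, depth, and width. The synthetic cadence, noise level, and initial guesses are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def transit_model(t, epoch, period, depth, width):
            """Toy smooth transit: a Gaussian dip of fractional depth repeated every period days."""
            phase = (t - epoch + 0.5 * period) % period - 0.5 * period
            return 1.0 - depth * np.exp(-0.5 * (phase / width) ** 2)

        rng = np.random.default_rng(2)
        t = np.arange(0.0, 90.0, 0.02043)                 # roughly 30-minute cadence over 90 days
        true_params = (3.1, 8.7, 0.004, 0.1)              # epoch, period, depth, width
        flux = transit_model(t, *true_params) + rng.normal(0.0, 5e-4, t.size)

        # Initial guesses play the role of the TCE values handed to the fitter.
        p0 = (3.05, 8.68, 0.003, 0.12)
        popt, pcov = curve_fit(transit_model, t, flux, p0=p0)
        print(dict(zip(("epoch", "period", "depth", "width"), np.round(popt, 4))))

        # Residual flux with the fitted transit removed, as fed back to a further transit search.
        residual = flux - transit_model(t, *popt)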

  18. Automatic fitting of spiking neuron models to electrophysiological recordings.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Platkiewicz, Jonathan; Brette, Romain

    2010-01-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
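
    The GPU-based fitting library for Brian described above is not reproduced here. As a minimal stand-in for the underlying task, the sketch below simulates a leaky integrate-and-fire neuron driven by an injected current and fits its two free parameters by brute-force search, scoring candidates with a crude spike-time coincidence measure. The parameter ranges, coincidence window, and scoring rule are illustrative assumptions rather than the criterion used in the competition.

        import numpy as np

        def lif_spikes(current, dt, tau, R, v_thresh=1.0, v_reset=0.0):
            """Simulate a leaky integrate-and-fire neuron; return spike times in seconds."""
            v, spikes = 0.0, []
            for k, I in enumerate(current):
                v += dt * (-v + R * I) / tau
                if v >= v_thresh:
                    spikes.append(k * dt)
                    v = v_reset
            return np.array(spikes)

        def coincidence(model, target, window=0.004):
            """Fraction of model spikes landing within `window` seconds of a target spike."""
            if len(model) == 0:
                return 0.0
            return float(np.mean([np.any(np.abs(target - t) <= window) for t in model]))

        rng = np.random.default_rng(3)
        dt, T = 1e-4, 2.0
        current = 1.2 + 0.5 * rng.standard_normal(int(T / dt))     # noisy injected current

        # "Recorded" spike train generated from hidden parameters (stand-in for real data).
        target = lif_spikes(current, dt, tau=0.02, R=1.1)

        # Brute-force search over (tau, R); the paper is about doing such searches fast on GPUs.
        taus = np.linspace(0.005, 0.05, 10)
        Rs = np.linspace(0.8, 1.4, 13)
        best = max((coincidence(lif_spikes(current, dt, tau, R), target), tau, R)
                   for tau in taus for R in Rs)
        print(f"best coincidence = {best[0]:.2f} at tau = {best[1]:.3f} s, R = {best[2]:.2f}")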

  19. A goodness-of-fit test for occupancy models with correlated within-season revisits

    USGS Publications Warehouse

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and

  20. AnnAGNPS model as a potential tool for seeking adequate agriculture land management in Navarre (Spain)

    NASA Astrophysics Data System (ADS)

    Chahor, Y.; Giménez, R.; Casalí, J.

    2012-04-01

    Nowadays agricultural activities face two important challenges. They must be efficient from an economic point of view but with low environmental impacts (soil erosion risk, nutrient/pesticide contamination, greenhouse gas emissions, etc.). In this context, hydrological and erosion models appear as remarkable tools when looking for the best management practices. AnnAGNPS (Annualized Agricultural Non-Point Source Pollution) is a continuous-simulation, watershed-scale model that estimates the yield and transit of surface water, sediment, nutrients, and pesticides through a watershed. This model has been successfully evaluated -in terms of annual runoff and sediment yield- in a small (around 200 ha) agricultural watershed named Latxaga, located in the central-eastern part of Navarre (Spain). The watershed is under a humid sub-Mediterranean climate. It is cultivated almost entirely with winter cereals (wheat and barley) following conventional soil and tillage management practices; the remaining 15% of the watershed is covered by urban and shrub areas. The aim of this work is to evaluate in the Latxaga watershed the effect of potential and realistic changes in land use and management on surface runoff and sediment yield by using AnnAGNPS. Six years (2003-2008) of daily climate data were considered in the simulation. This dataset is the same as that used in the previous model evaluation. Six different scenarios regarding soil use and management were considered: i) 60% cereals, 25% sunflower; ii) 60% cereals, 25% rapeseed; iii) 60% cereals, 25% legumes; iv) 60% cereals, 25% sunflower + rapeseed + legumes in equal parts; v) cereals, and alternatively different amounts of shrub cover (from 20% to 100%); vi) only cereal but under different combinations of conventional tillage and no-tillage management. Overall, no significant differences in runoff generation were observed, with the exception of scenario iii (in which legumes are the main alternative crop), with a slight increase in predicted

  1. Goodness of Fit Criteria in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.

    Several goodness of fit (GOF) criteria have been developed to assist the researcher in interpreting structural equation models. However, the determination of GOF for structural equation models is not as straightforward as that for other statistical approaches in multivariate procedures. The four GOF criteria used across the commonly used…

  2. Twitter classification model: the ABC of two million fitness tweets.

    PubMed

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

    The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment. PMID:24073182

  3. Advanced material modelling in numerical simulation of primary acetabular press-fit cup stability.

    PubMed

    Souffrant, R; Zietz, C; Fritsche, A; Kluess, D; Mittelmeier, W; Bader, R

    2012-01-01

    Primary stability of artificial acetabular cups, used for total hip arthroplasty, is required for the subsequent osteointegration and good long-term clinical results of the implant. Although closed-cell polymer foams represent an adequate bone substitute in experimental studies investigating primary stability, correct numerical modelling of this material depends on the parameter selection. Material parameters necessary for crushable-foam plasticity behaviour were derived from numerical simulations matched with experimental tests of the polymethacrylimide raw material. Experimental primary stability tests of acetabular press-fit cups, consisting of static shell assembly with consecutive pull-out and lever-out testing, were subsequently simulated using finite element analysis. The identified and optimised parameters allowed accurate numerical reproduction of the raw material tests. Correlation between the experimental tests and the numerical simulation of primary implant stability depended on the value of the interference fit. However, the validated material model provides the opportunity for subsequent parametric numerical studies.

  4. A Model-Fitting Approach to Characterizing Polymer Decomposition Kinetics

    SciTech Connect

    Burnham, A K; Weese, R K

    2004-07-20

    The use of isoconversional, sometimes called model-free, kinetic analysis methods has recently gained favor in the thermal analysis community. Although these methods are very useful and instructive, the conclusion that model fitting is a poor approach is largely due to improper use of the model-fitting approach, such as fitting each heating rate separately. The current paper shows the ability of model fitting to correlate reaction data over very wide time-temperature regimes, including simultaneous fitting of isothermal and constant-heating-rate data. Recently published data on cellulose pyrolysis by Capart et al. (TCA, 2004), fitted with a combination of an autocatalytic primary reaction and an nth-order char pyrolysis reaction, are given as one example. Fits for the thermal decomposition of Estane, Viton-A, and Kel-F over very wide ranges of heating rates are also presented. The Kel-F required two parallel reactions: one describing a small, early decomposition process, and a second autocatalytic reaction describing the bulk of pyrolysis. Viton-A and Estane also required two parallel reactions for primary pyrolysis, with the first Viton-A reaction also being a minor, early process. In addition, the yield of residue from these two polymers depends on the heating rate. This is an example of a competitive reaction between volatilization and char formation, which violates the basic tenet of the isoconversional approach and is an example of why it has limitations. Although more complicated models have been used in the literature for this type of process, we described our data well with a simple addition to the standard model in which the char yield is a function of the logarithm of the heating rate.
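
    A hedged sketch of the model-fitting approach in its simplest form: a single nth-order Arrhenius reaction fitted simultaneously to synthetic conversion curves at three heating rates. The paper's actual models (autocatalytic and parallel reactions, a heating-rate-dependent char yield) are richer than this, and the rate parameters, heating rates, and noise level below are invented.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        R_GAS = 8.314  # J / (mol K)

        def alpha_curve(T, beta, logA, E, n):
            """Conversion alpha(T) at heating rate beta (K/min) for nth-order Arrhenius kinetics."""
            def rhs(Ti, a):
                # dalpha/dT = (A / beta) * exp(-E / (R T)) * (1 - alpha)^n, beta converted to K/s
                rate = 10.0 ** logA / (beta / 60.0)
                return rate * np.exp(-E / (R_GAS * Ti)) * np.clip(1.0 - a, 0.0, None) ** n
            sol = solve_ivp(rhs, (T[0], T[-1]), [0.0], t_eval=T, rtol=1e-8, atol=1e-10)
            return sol.y[0]

        # Synthetic "measurements" at three heating rates, generated from hidden true parameters.
        rng = np.random.default_rng(4)
        true_params = (10.0, 150e3, 1.5)              # log10(A / s^-1), E (J/mol), reaction order n
        betas = (1.0, 10.0, 100.0)                    # heating rates, K/min
        T = np.linspace(450.0, 750.0, 80)
        data = [alpha_curve(T, b, *true_params) + rng.normal(0.0, 0.01, T.size) for b in betas]

        def residuals(p):
            # One residual vector over all heating rates: a simultaneous (global) fit.
            return np.concatenate([alpha_curve(T, b, *p) - d for b, d in zip(betas, data)])

        fit = least_squares(residuals, x0=(9.0, 130e3, 1.0), x_scale=(1.0, 1e4, 1.0))
        print(dict(zip(("log10A", "E", "n"), np.round(fit.x, 3))))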

  5. Time-domain fitting of battery electrochemical impedance models

    NASA Astrophysics Data System (ADS)

    Alavi, S. M. M.; Birkl, C. R.; Howey, D. A.

    2015-08-01

    Electrochemical impedance spectroscopy (EIS) is an effective technique for diagnosing the behaviour of electrochemical devices such as batteries and fuel cells, usually by fitting data to an equivalent circuit model (ECM). The common approach in the laboratory is to measure the impedance spectrum of a cell in the frequency domain using a single sine sweep signal, then fit the ECM parameters in the frequency domain. This paper focuses instead on estimation of the ECM parameters directly from time-domain data. This may be advantageous for parameter estimation in practical applications such as automotive systems, including battery-powered vehicles, where the data may be heavily corrupted by noise. The proposed methodology is based on the simplified refined instrumental variable for continuous-time fractional systems method ('srivcf'), provided by the Crone toolbox [1,2], combined with gradient-based optimisation to estimate the order of the fractional term in the ECM. The approach was tested first on synthetic data and then on real data measured from a 26650 lithium-ion iron phosphate cell with low-cost equipment. The resulting Nyquist plots from the time-domain fitted models match the impedance spectrum closely (much more accurately than when a Randles model is assumed), and the fitted parameters match those determined separately through a laboratory potentiostat with frequency-domain fitting to within 13%.
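
    The fractional-order 'srivcf' estimation from the Crone toolbox is MATLAB-based and is not reproduced here. As a much simpler illustration of time-domain ECM parameter estimation, the sketch below fits an integer-order R0 + (R1 parallel C1) model to the synthetic voltage response of a constant-current step; the circuit values, open-circuit voltage, current, and noise level are assumptions made for the example.

        import numpy as np
        from scipy.optimize import curve_fit

        def ecm_voltage(t, R0, R1, C1, ocv=3.3, I=2.0):
            """Terminal voltage of an OCV + R0 + (R1 || C1) circuit during a constant-current step."""
            return ocv - I * R0 - I * R1 * (1.0 - np.exp(-t / (R1 * C1)))

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 300.0, 600)                          # 5-minute step, 0.5 s sampling
        true_params = (0.015, 0.025, 1800.0)                      # R0 (ohm), R1 (ohm), C1 (F)
        v = ecm_voltage(t, *true_params) + rng.normal(0.0, 2e-4, t.size)

        # With p0 of length 3, only R0, R1 and C1 are fitted; ocv and I keep their defaults.
        popt, pcov = curve_fit(ecm_voltage, t, v, p0=(0.01, 0.01, 1000.0))
        for name, val, err in zip(("R0", "R1", "C1"), popt, np.sqrt(np.diag(pcov))):
            print(f"{name} = {val:.4g} +/- {err:.1g}")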

  6. Learning local objective functions for robust face model fitting.

    PubMed

    Wimmer, Matthias; Stulp, Freek; Pietzsch, Sylvia; Radig, Bernd

    2008-08-01

    Model-based techniques have proven to be successful in interpreting the large amount of information contained in images. Associated fitting algorithms search for the global optimum of an objective function, which should correspond to the best model fit in a given image. Although fitting algorithms have been the subject of intensive research and evaluation, the objective function is usually designed ad hoc, based on implicit and domain-dependent knowledge. In this article, we address the root of the problem by learning more robust objective functions. First, we formulate a set of desirable properties for objective functions and give a concrete example function that has these properties. Then, we propose a novel approach that learns an objective function from training data generated by manual image annotations and this ideal objective function. In this approach, critical decisions such as feature selection are automated, and the remaining manual steps hardly require domain-dependent knowledge. Furthermore, an extensive empirical evaluation demonstrates that the obtained objective functions yield more robustness. Learned objective functions enable fitting algorithms to determine the best model fit more accurately than with designed objective functions. PMID:18566491

  7. On the accuracy and fitting of transversely isotropic material models.

    PubMed

    Feng, Yuan; Okamoto, Ruth J; Genin, Guy M; Bayly, Philip V

    2016-08-01

    Fiber reinforced structures are central to the form and function of biological tissues. Hyperelastic, transversely isotropic material models are used widely in the modeling and simulation of such tissues. Many of the most widely used models involve strain energy functions that include one or both pseudo-invariants (I4 or I5) to incorporate energy stored in the fibers. In a previous study we showed that both of these invariants must be included in the strain energy function if the material model is to reduce correctly to the well-known framework of transversely isotropic linear elasticity in the limit of small deformations. Even with such a model, fitting of parameters is a challenge. Here, by evaluating the relative roles of I4 and I5 in the responses to simple loadings, we identify loading scenarios in which previous models accounting for only one of these invariants can be expected to provide accurate estimation of material response, and identify mechanical tests that have special utility for fitting of transversely isotropic constitutive models. Results provide guidance for fitting of transversely isotropic constitutive models and for interpretation of the predictions of these models.
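
    For reference, the standard definitions behind the invariants discussed here (a restatement, not text from the paper): with deformation gradient F, right Cauchy-Green tensor C = F^T F, and unit fiber direction a_0, the pseudo-invariants are

        I_1 = \operatorname{tr}\mathbf{C}, \qquad
        I_4 = \mathbf{a}_0 \cdot (\mathbf{C}\,\mathbf{a}_0), \qquad
        I_5 = \mathbf{a}_0 \cdot (\mathbf{C}^2\,\mathbf{a}_0),

    so that a transversely isotropic strain energy takes the form W = W(I_1, I_2, I_3, I_4, I_5). As the abstract notes, the authors' previous study showed that both I_4 and I_5 must appear in W for the model to reduce to transversely isotropic linear elasticity at small strains, which is why mechanical tests that separate their roles matter for parameter fitting.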

  8. Multidimensional Rasch Model Information-Based Fit Index Accuracy

    ERIC Educational Resources Information Center

    Harrell-Williams, Leigh M.; Wolfe, Edward W.

    2013-01-01

    Most research on confirmatory factor analysis using information-based fit indices (Akaike information criterion [AIC], Bayesian information criteria [BIC], bias-corrected AIC [AICc], and consistent AIC [CAIC]) has used a structural equation modeling framework. Minimal research has been done concerning application of these indices to item response…

  9. Fuzzy Partition Models for Fitting a Set of Partitions.

    ERIC Educational Resources Information Center

    Gordon, A. D.; Vichi, M.

    2001-01-01

    Describes methods for fitting a fuzzy consensus partition to a set of partitions of the same set of objects. Describes and illustrates three models defining median partitions and compares these methods to an alternative approach to obtaining a consensus fuzzy partition. Discusses interesting differences in the results. (SLD)

  10. The Gold Medal Fitness Program: A Model for Teacher Change

    ERIC Educational Resources Information Center

    Wright, Jan; Konza, Deslea; Hearne, Doug; Okely, Tony

    2008-01-01

    Background: Following the 2000 Sydney Olympics, the NSW Premier, Mr Bob Carr, launched a school-based initiative in NSW government primary schools called the "Gold Medal Fitness Program" to encourage children to be fitter and more active. The Program was introduced into schools through a model of professional development, "Quality Teaching and…

  11. Statistical assessment of model fit for synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.; O'Sullivan, Joseph A.

    2001-08-01

    Parametric approaches to problems of inference from observed data often rely on assumed probabilistic models for the data which may be based on knowledge of the physics of the data acquisition. Given a rich enough collection of sample data, the validity of those assumed models can be assessed in a statistical hypothesis testing framework using any of a number of goodness-of-fit tests developed over the last hundred years for this purpose. Such assessments can be used both to compare alternate models for observed data and to help determine the conditions under which a given model breaks down. We apply three such methods, the χ² test of Karl Pearson, Kolmogorov's goodness-of-fit test, and the D'Agostino-Pearson test for normality, to quantify how well the data fit various models for synthetic aperture radar (SAR) images. The results of these tests are used to compare a conditionally Gaussian model for complex-valued SAR pixel values, a conditionally log-normal model for SAR pixel magnitudes, and a conditionally normal model for SAR pixel quarter-power values. Sample data for these tests are drawn from the publicly released MSTAR dataset.
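
    A small sketch of the three named tests as they exist in scipy.stats, applied to synthetic data rather than MSTAR imagery: complex pixel values are drawn with Gaussian real and imaginary parts, so their log-magnitudes only approximately follow a normal law and the tests will typically flag the mismatch. The sample size, bin count, and use of equiprobable bins are choices made for the example.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Synthetic stand-in for one pixel class: complex values with Gaussian real/imaginary
        # parts, so the magnitudes are Rayleigh and the log-magnitudes only roughly normal.
        z = rng.normal(0.0, 1.0, 5000) + 1j * rng.normal(0.0, 1.0, 5000)
        log_mag = np.log(np.abs(z))
        mu, sigma = log_mag.mean(), log_mag.std(ddof=1)

        # 1) Pearson chi-square test against the fitted normal, using equiprobable bins.
        k = 20
        edges = stats.norm.ppf(np.linspace(0.0, 1.0, k + 1), mu, sigma)
        edges[0], edges[-1] = log_mag.min() - 1.0, log_mag.max() + 1.0   # keep the edges finite
        observed, _ = np.histogram(log_mag, bins=edges)
        expected = np.full(k, log_mag.size / k)
        chi2, p_chi2 = stats.chisquare(observed, expected, ddof=2)       # 2 estimated parameters

        # 2) Kolmogorov goodness-of-fit test against the same fitted normal.
        ks_stat, p_ks = stats.kstest(log_mag, "norm", args=(mu, sigma))

        # 3) D'Agostino-Pearson omnibus normality test (skewness and kurtosis).
        k2, p_dp = stats.normaltest(log_mag)

        print(f"chi-square p={p_chi2:.3g}, KS p={p_ks:.3g}, D'Agostino-Pearson p={p_dp:.3g}")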

  12. Thermodynamic study and modelling of iron-based melts for adequate prediction of modern ladle metallurgy processes

    NASA Astrophysics Data System (ADS)

    Zaitsev, A. I.; Rodionova, I. G.; Shaposhnikov, N. G.; Zemlyanko, O. A.; Karamisheva, N. A.

    2008-02-01

    The representation of iron-based melts as associated liquids has been developed based on detailed experimental investigation and analysis of the available data on their thermodynamic properties and phase equilibria. For the first time, this has made it possible to adequately interpret the reactivity of the earth metals in iron-based melts and to predict with high precision the reactions of metal refinement and the modification of non-metallic inclusions in modern ladle metallurgy.

  13. Broadband distortion modeling in Lyman-α forest BAO fitting

    NASA Astrophysics Data System (ADS)

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; Arinyo-i-Prats, Andreu; Busca, Nicolás G.; Miralda-Escudé, Jordi; Slosar, Anže; Font-Ribera, Andreu; Margala, Daniel; Schneider, Donald P.; Vazquez, Jose A.

    2015-11-01

    In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter bF and the redshift-space distortion parameter βF for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on βF and the combination bF(1+βF) by more than a factor of seven. The measured values at redshift z = 2.3 are βF = 1.39 (+0.11/-0.10, +0.24/-0.19, +0.38/-0.28) and bF(1+βF) = -0.374 (+0.007/-0.007, +0.013/-0.014, +0.020/-0.022), where the bracketed ranges are the 1σ, 2σ and 3σ statistical errors. Our fitting software and the input files needed to reproduce our main results are publicly available.

  14. Assessing the fit of site-occupancy models

    USGS Publications Warehouse

    MacKenzie, D.I.; Bailey, L.L.

    2004-01-01

    Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied, when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method where a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, hence estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers to making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
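
    A hedged sketch of the general recipe described above (fit a basic single-season occupancy model, compute a Pearson chi-square over the 2^K possible detection histories, and calibrate it with a parametric bootstrap), run on simulated data rather than the salamander surveys. The site count, survey count, parameter values, and number of bootstrap replicates are arbitrary, and the model omits the covariates a "global" model would carry.

        import numpy as np
        from itertools import product
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(7)
        N, K = 120, 4                                        # sites and replicate surveys

        def simulate(psi, p):
            z = rng.random(N) < psi                          # latent occupancy state per site
            return (rng.random((N, K)) < p) & z[:, None]     # detection histories (N x K)

        def history_prob(h, psi, p):
            """Probability of a single detection history under the basic occupancy model."""
            return psi * np.prod(np.where(h, p, 1.0 - p)) + (1.0 - psi) * (not h.any())

        def neg_loglik(theta, Y):
            psi, p = expit(theta)
            det = np.where(Y, p, 1.0 - p).prod(axis=1)
            lik = psi * det + (1.0 - psi) * (~Y.any(axis=1))
            return -np.sum(np.log(np.maximum(lik, 1e-300)))  # floor guards against log(0)

        def fit(Y):
            res = minimize(neg_loglik, x0=[0.0, 0.0], args=(Y,), method="Nelder-Mead")
            return expit(res.x)                              # back-transform to (psi, p)

        def pearson_chi2(Y, psi, p):
            """Chi-square discrepancy between observed and expected history counts."""
            hists = [np.array(h, dtype=bool) for h in product([0, 1], repeat=K)]
            obs = np.array([sum((y == h).all() for y in Y) for h in hists])
            exp = np.array([len(Y) * history_prob(h, psi, p) for h in hists])
            return np.sum((obs - exp) ** 2 / exp)

        Y = simulate(psi=0.6, p=0.4)                         # the "observed" data
        psi_hat, p_hat = fit(Y)
        t_obs = pearson_chi2(Y, psi_hat, p_hat)

        # Parametric bootstrap: simulate from the fitted model, refit, recompute the statistic.
        t_boot = [pearson_chi2(Yb, *fit(Yb))
                  for Yb in (simulate(psi_hat, p_hat) for _ in range(200))]
        p_value = np.mean(np.array(t_boot) >= t_obs)
        print(f"psi={psi_hat:.2f}, p={p_hat:.2f}, chi2={t_obs:.1f}, bootstrap p={p_value:.2f}")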

  15. Receptor-level interrelationships of amino acids and the adequate amino acid type hormones in Tetrahymena: a receptor evolution model.

    PubMed

    Csaba, G; Darvas, Z

    1986-01-01

    Histidine stimulates the phagocytosis of Tetrahymena to the same extent as histamine, and also stimulates its division, which histamine does not. Tyrosine and diiodotyrosine equally stimulate the growth of the Tetrahymena. Both amino acids inhibit the characteristic influence of the adequate amino acid hormone when added to Tetrahymena culture 72 h in advance of it. Primary interaction with diiodotyrosine and tyrosine notably increases the cellular growth rate. Histamine has a similar, although less notable effect than histidine. In the light of these experimental observations there is reason to postulate that the receptors of the amino acid hormones have developed from amino acid receptors.

  16. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    NASA Astrophysics Data System (ADS)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → q χ₂⁰ (→ l̃± l∓ q) → χ₁⁰ l⁺ l⁻ q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  17. Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit

    NASA Technical Reports Server (NTRS)

    Kopasakis, George (Inventor)

    2015-01-01

    An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. Fractional-order atmospheric turbulence may be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.

  18. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    ERIC Educational Resources Information Center

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  19. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher’s Geometric Model?

    PubMed Central

    Blanquart, François; Bataillon, Thomas

    2016-01-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher’s model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher’s model was able to explain several statistical properties of the landscapes—including the mean and SD of selection and epistasis coefficients—it was often unable to explain the full structure of fitness landscapes. PMID:27052568
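
    The paper's Approximate Bayesian Computation machinery and its phenotypic fitness models are not reproduced here; the sketch below only illustrates the basic rejection-ABC loop on a toy problem in which selection coefficients are summarized by their mean and standard deviation. The toy simulator, priors, summary statistics, and tolerance are all invented for the example.

        import numpy as np

        rng = np.random.default_rng(8)

        # "Observed" selection coefficients: a stand-in for summaries of an empirical landscape.
        s_obs = rng.normal(-0.02, 0.05, 150)
        summary_obs = np.array([s_obs.mean(), s_obs.std()])

        def simulate(theta, size):
            """Toy simulator: draw selection coefficients given theta = (mu, sigma)."""
            mu, sigma = theta
            return rng.normal(mu, sigma, size)

        # Rejection ABC: sample from the prior, simulate, and keep parameter draws whose
        # simulated summaries land within `eps` of the observed summaries.
        n_draws, eps, accepted = 50_000, 0.01, []
        for _ in range(n_draws):
            theta = (rng.uniform(-0.2, 0.2), rng.uniform(0.0, 0.2))   # uniform priors
            s_sim = simulate(theta, s_obs.size)
            summary_sim = np.array([s_sim.mean(), s_sim.std()])
            if np.linalg.norm(summary_sim - summary_obs) < eps:
                accepted.append(theta)

        post = np.array(accepted)
        print(f"accepted {len(post)} of {n_draws} draws; posterior means:", post.mean(axis=0).round(3))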

  20. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  1. Equilibrium Distribution of Mutators in the Single Fitness Peak Model

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel; Deeds, Eric J.; Shakhnovich, Eugene I.

    2003-09-01

    This Letter develops an analytically tractable model for determining the equilibrium distribution of mismatch repair deficient strains in unicellular populations. The approach is based on the single fitness peak model, which has been used in Eigen’s quasispecies equations in order to understand various aspects of evolutionary dynamics. As with the quasispecies model, our model for mutator-nonmutator equilibrium undergoes a phase transition in the limit of infinite sequence length. This “repair catastrophe” occurs at a critical repair error probability of εr = Lvia/L, where Lvia denotes the length of the genome controlling viability, while L denotes the overall length of the genome. The repair catastrophe therefore occurs when the repair error probability exceeds the fraction of deleterious mutations. Our model also gives a quantitative estimate for the equilibrium fraction of mutators in Escherichia coli.

  2. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  3. Testing goodness of fit of parametric models for censored data.

    PubMed

    Nysen, Ruth; Aerts, Marc; Faes, Christel

    2012-09-20

    We propose and study a goodness-of-fit test for left-censored, right-censored, and interval-censored data assuming random censorship. Main motivation comes from dietary exposure assessment in chemical risk assessment, where the determination of an appropriate distribution for concentration data is of major importance. We base the new goodness-of-fit test procedure proposed in this paper on the order selection test. As part of the testing procedure, we extend the null model to a series of nested alternative models for censored data. Then, we use a modified AIC model selection to select the best model to describe the data. If a model with one or more extra parameters is selected, then we reject the null hypothesis. As an alternative to the use of the asymptotic null distribution of the test statistic, we define a bootstrap-based procedure. We illustrate the applicability of the test procedure on data of cadmium concentrations and on data from the Signal Tandmobiel study and demonstrate its performance characteristics through simulation studies. PMID:22714389

  4. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    PubMed

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. PMID:25680684

  5. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    PubMed

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns.

  6. A goodness-of-fit test for occupancy models with correlated within-season revisits.

    PubMed

    Wright, Wilson J; Irvine, Kathryn M; Rodhouse, Thomas J

    2016-08-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie-Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie-Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and

  7. Rapid world modeling: Fitting range data to geometric primitives

    SciTech Connect

    Feddema, J.; Little, C.

    1996-12-31

    For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is reduced by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
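
    A minimal sketch of the planar case of the least-squares fitting described above: a total least-squares plane fit to a synthetic range scan, with the normal taken from the smallest singular vector of the centered point cloud. Quadric primitives such as cylinders and ellipsoids require a nonlinear fit and are not shown; the plane coefficients and noise level are invented.

        import numpy as np

        rng = np.random.default_rng(9)

        # Synthetic range scan of a plane z = 0.5 x - 0.2 y + 3 with additive sensor noise.
        x, y = rng.uniform(-1.0, 1.0, (2, 2000))
        pts = np.column_stack([x, y, 0.5 * x - 0.2 * y + 3.0 + rng.normal(0.0, 0.01, x.size)])

        # Total least-squares plane fit: the plane passes through the centroid and its normal
        # is the right singular vector with the smallest singular value.
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
        normal = vt[-1]
        d = -normal @ centroid                       # plane equation: normal . p + d = 0

        rms = np.sqrt(np.mean(((pts - centroid) @ normal) ** 2))
        print("normal:", normal.round(3), " d:", round(d, 3), " RMS residual:", round(rms, 4))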

  8. Toward an adequate mathematical model of mental space: conscious/unconscious dynamics on m-adic trees.

    PubMed

    Khrennikov, Andrei Yu

    2007-01-01

    We try to perform geometrization of cognitive science and psychology by representing information states of cognitive systems by points of mental space given by a hierarchic m-adic tree. Associations are represented by balls and ideas by collections of balls. We consider dynamics of ideas based on lifting of dynamics of mental points. We apply our dynamical model for modeling of flows of unconscious and conscious information in the human brain. In a series of models, Models 1-3, we consider cognitive systems with increasing complexity of psychological behavior determined by structure of flows of associations and ideas.

  9. Effect of the Number of Variables on Measures of Fit in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Kenny, David A.; McCoach, D. Betsy

    2003-01-01

    Used three approaches to understand the effect of the number of variables in the model on model fit in structural equation modeling through computer simulation. Developed a simple formula for the theoretical value of the comparative fit index. (SLD)

  10. A Commentary on the Relationship between Model Fit and Saturated Path Models in Structural Equation Modeling Applications

    ERIC Educational Resources Information Center

    Raykov, Tenko; Lee, Chun-Lung; Marcoulides, George A.; Chang, Chi

    2013-01-01

    The relationship between saturated path-analysis models and their fit to data is revisited. It is demonstrated that a saturated model need not fit perfectly or even well a given data set when fit to the raw data is examined, a criterion currently frequently overlooked by researchers utilizing path analysis modeling techniques. The potential of…

  11. Assessing Model Data Fit of Unidimensional Item Response Theory Models in Simulated Data

    ERIC Educational Resources Information Center

    Kose, Ibrahim Alper

    2014-01-01

    The purpose of this paper is to give an example of how to assess the model-data fit of unidimensional IRT models in simulated data. Also, the present research aims to explain the importance of fit and the consequences of misfit by using simulated data sets. Responses of 1000 examinees to a dichotomously scored 20-item test were simulated with 25…

  12. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    ERIC Educational Resources Information Center

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  13. HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting

    NASA Astrophysics Data System (ADS)

    Zwart, Jonathan T. L.; Price, Daniel; Bernardi, Gianni

    2016-06-01

    HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
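
    HIBAYES itself evaluates the Bayesian evidence with PyMultiNest; that machinery is not reproduced here. The sketch below only illustrates the signal-plus-foreground parameterization mentioned above by jointly fitting a Gaussian trough and a smooth foreground to a synthetic sky spectrum with ordinary least squares. Treating the foreground as a quadratic in log-frequency, and every numerical value used, are assumptions for the example rather than the code's actual defaults.

        import numpy as np
        from scipy.optimize import curve_fit

        def sky_model(nu, amp, nu0, width, a0, a1, a2):
            """Gaussian 21-cm trough (amp in K, negative for absorption) on top of a smooth
            foreground modelled here as a quadratic in log-frequency (an assumption)."""
            x = np.log(nu / 75.0)
            foreground = np.exp(a0 + a1 * x + a2 * x ** 2)
            signal = amp * np.exp(-0.5 * ((nu - nu0) / width) ** 2)
            return foreground + signal

        rng = np.random.default_rng(10)
        nu = np.linspace(50.0, 100.0, 256)                    # MHz
        true_params = (-0.15, 78.0, 5.0, np.log(1500.0), -2.5, -0.1)
        T_sky = sky_model(nu, *true_params) + rng.normal(0.0, 0.02, nu.size)

        p0 = (-0.1, 75.0, 8.0, np.log(2000.0), -2.5, 0.0)     # rough starting guesses
        popt, pcov = curve_fit(sky_model, nu, T_sky, p0=p0)
        print("fitted amp, nu0, width:", np.round(popt[:3], 3))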

  14. Empirical fitness models for hepatitis C virus immunogen design

    NASA Astrophysics Data System (ADS)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%–3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for a HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.
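
    A schematic of the general form such maximum-entropy ("spin glass") landscape models take once their parameters are in hand, on a toy binary alphabet: an energy built from single-site fields and pairwise couplings, mapped to a relative replicative capacity. The fields, couplings, and sequence length are random placeholders; inferring them from a clinical multiple sequence alignment is the substance of the paper and is not shown.

        import numpy as np

        rng = np.random.default_rng(11)
        L = 30                                       # toy sequence length (0 = consensus, 1 = mutant)

        # Placeholder maximum-entropy parameters; in the paper these are inferred from an MSA.
        h = rng.normal(0.0, 0.5, L)                  # single-site fields
        J = np.triu(rng.normal(0.0, 0.1, (L, L)), 1) # pairwise couplings J_ij, i < j

        def energy(s):
            """Ising-like energy of a binary sequence; lower energy means higher inferred fitness."""
            return -(h @ s + s @ J @ s)

        def relative_fitness(s, s_ref):
            """Replicative capacity relative to a reference sequence via a Boltzmann-like mapping."""
            return np.exp(energy(s_ref) - energy(s))

        consensus = np.zeros(L)
        mutant = consensus.copy()
        mutant[[3, 17]] = 1.0                        # a double mutant of the consensus
        print("relative fitness of the double mutant:", round(relative_fitness(mutant, consensus), 3))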

  15. Empirical fitness models for hepatitis C virus immunogen design

    NASA Astrophysics Data System (ADS)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for a HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.

  16. Anatomical features for an adequate choice of experimental animal model in biomedicine: II. Small laboratory rodents, rabbit, and pig.

    PubMed

    Lossi, Laura; D'Angelo, Livia; De Girolamo, Paolo; Merighi, Adalberto

    2016-03-01

    The anatomical features distinctive to each of the very large array of species used in today's biomedical research must be borne in mind when considering the correct choice of animal model(s), particularly when translational research is concerned. In this paper we consider and discuss the most important anatomical and histological features of the commonest species of laboratory rodents (rat, mouse, guinea pig, hamster, and gerbil), rabbit, and pig, related to their importance for applied research.

  17. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    USGS Publications Warehouse

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
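
    The comparison above concerns R, AD Model Builder, and BUGS; as a language-neutral illustration of the basic task (a minimal sketch in Python, not drawn from the paper or its companion site), the fragment below fits a logistic growth curve, a typical nonlinear ecological model, by least squares. The data and starting values are synthetic.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, K, r, n0):
          """Logistic growth: N(t) = K / (1 + ((K - n0)/n0) * exp(-r t))."""
          return K / (1.0 + ((K - n0) / n0) * np.exp(-r * t))

      # Synthetic observations standing in for field data.
      rng = np.random.default_rng(1)
      t = np.linspace(0, 10, 30)
      obs = logistic(t, K=100.0, r=0.8, n0=5.0) * rng.lognormal(sigma=0.05, size=t.size)

      # Good starting values matter for nonlinear models, as the paper stresses.
      popt, pcov = curve_fit(logistic, t, obs, p0=[80.0, 0.5, 2.0])
      print(dict(zip(["K", "r", "n0"], popt)))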

  18. The FIT Model - Fuel-cycle Integration and Tradeoffs

    SciTech Connect

    Steven J. Piet; Nick R. Soelberg; Samuel E. Bays; Candido Pereira; Layne F. Pincock; Eric L. Shaber; Melissa C. Teague; Gregory M. Teske; Kurt G. Vedros

    2010-09-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010] are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the “system losses study” was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for “minimum fuel treatment” approaches such as melt refining and AIROX, it can help to make an estimate of how performances would have to change to achieve feasibility.

  19. Simple Ecohydrological Models: Is Average Root Zone Soil Moisture an Adequate Driver in the Functions for Evaporation and Assimilation?

    NASA Astrophysics Data System (ADS)

    Kurc, S. A.; Small, E. E.

    2007-12-01

    Dryland ecosystems are typically characterized by low annual precipitation, much of which is delivered in the form of small rainfall events that may only wet the top portion of the root zone. In these areas, evapotranspiration (ET) is limited by the availability of soil moisture rather than by atmospheric demand, i.e. ET << potential ET. Likewise, when optimal temperatures and soil nutrients are not limiting, the uptake of carbon by vegetation via photosynthesis, i.e. assimilation, is also limited by the availability of soil moisture. Though soil moisture is largely depth dependent, only average root zone soil moisture is used in typical simple models of ecohydrological processes. Here, we show that in semiarid grassland and shrubland, the surface soil layer is the primary source of water for ET, at least throughout the monsoon season. Conversely, we show that only large precipitation events (or series of small events) generate enough soil moisture below the influence of atmospheric demand to trigger carbon assimilation in these dryland ecosystems. From this we hypothesize that a realistic representation of ecohydrological processes in semiarid areas cannot be made solely using average root zone soil moisture. In this study we utilize records of ET, assimilation, and soil moisture at several depths collected during 3 summer monsoons at the Sevilleta National Wildlife Refuge in central New Mexico using eddy covariance methods. Additionally we employ a simple bucket model of ecohydrological processes (e.g. Rodriguez-Iturbe et al. 1999, Daly et al. 2004) driven by average root zone soil moisture. We compare bucket model predictions of ET and assimilation to the actual data records. We show that (1) bucket model predictions of ET lack the dynamic temporal variability of actual observations, (2) declines in ET following peaks are significantly steeper in observed than in predicted time series of ET, and (3) peaks in bucket model predictions of assimilation occur
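
    For concreteness, the fragment below is a minimal single-bucket sketch of the class of model being tested above: ET is driven solely by average root-zone soil moisture, with a linear reduction between the wilting point and a threshold. All parameter values are illustrative placeholders, not those of the cited Rodriguez-Iturbe or Daly formulations.

      import numpy as np

      def bucket_model(rain, pet, s0=0.3, s_wilt=0.1, s_star=0.35, z_root=400.0):
          """Daily single-bucket soil moisture model (illustrative parameters).
          rain, pet : daily precipitation and potential ET [mm]
          s         : average root-zone relative soil moisture (0-1)
          z_root    : root-zone storage capacity [mm]"""
          s, et_out = s0, []
          for p, e in zip(rain, pet):
              beta = np.clip((s - s_wilt) / (s_star - s_wilt), 0.0, 1.0)
              et = beta * e                      # ET limited only by average soil moisture
              s = s + (p - et) / z_root
              s = min(max(s, 0.0), 1.0)          # any excess is lost to runoff/leakage
              et_out.append(et)
          return np.array(et_out)

      rain = np.array([0, 12, 0, 0, 3, 0, 25, 0, 0, 0], float)
      pet = np.full(rain.size, 6.0)
      print(bucket_model(rain, pet))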

  20. Fitting of Parametric Building Models to Oblique Aerial Images

    NASA Astrophysics Data System (ADS)

    Panday, U. S.; Gerke, M.

    2011-09-01

    In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression but, more seriously, also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail and show different views of an object taken from different directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses least squares to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, a field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and a standard deviation of 0.96° with respect to the ortho-images. Overhang parameters agreed with the field survey to approximately 10 cm. The ground and roof heights were accurate to means of -9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, respectively, with respect to the ALS data. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for completeness of

  1. Evaluating Potential P-gp Substrates: Main Aspects to Choose the Adequate Permeability Model for Assessing Gastrointestinal Drug Absorption.

    PubMed

    da Silva Junior, João Batista; Dezani, Thaisa Marinho; Dezani, André Bersani; dos Reis Serra, Cristina Helena

    2015-01-01

    The success of oral drug administration depends on many factors that interfere with its bioavailability, therapeutic efficacy and clinical safety. In human cells, ATP-dependent efflux transporter proteins, such as P-glycoprotein (P-gp), BCRP and MRP2, reduce the absorption of drugs. The tiered approach chosen to evaluate drugs as substrates or inhibitors of efflux pumps, particularly P-gp, should be carefully selected, since each study method has advantages and intrinsic limitations. Depending on the adopted study conditions, the results may not correspond to the real characteristics of the drug regarding its modulation by specific efflux proteins. This mini-review aims at summarizing the role of P-gp in the oral absorption of drugs and correlating some of the most widely used permeability methods for determining whether a drug is a P-gp substrate. Studies of P-gp have shown that it is a dynamic protein, facilitating secretion of endogenous compounds, such as aldosterone, and protecting cells against xenobiotics. Different efflux assays are employed to evaluate drugs as P-gp substrates. In initial planning, MDCK-MDR1 tends to be the method of choice for efflux studies owing to its ability to express P-gp, followed by studies conducted in Caco-2 models. However, it is necessary to evaluate the advantages and disadvantages of each method in order to generate sound results and to establish the in vitro x in situ x in vivo correlation. PMID:25963568
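
    For reference, the quantities usually reported in the permeability assays discussed above are the apparent permeability and the efflux ratio (standard definitions, not specific to this review):

      P_{\mathrm{app}} = \frac{dQ/dt}{A \, C_0},
      \qquad
      ER = \frac{P_{\mathrm{app}}^{\,B \to A}}{P_{\mathrm{app}}^{\,A \to B}},

    where dQ/dt is the rate of drug appearance in the receiver compartment, A the monolayer area and C_0 the initial donor concentration; an efflux ratio substantially above unity in a P-gp-expressing monolayer (a cut-off of about 2 is commonly quoted), and its reduction by a P-gp inhibitor, is generally taken to suggest that the compound is a P-gp substrate.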

  2. Fitting optimum order of Markov chain models for daily rainfall occurrences in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Deni, Sayang Mohd; Jemain, Abdul Aziz; Ibrahim, Kamarulzaman

    2009-06-01

    The analysis of the daily rainfall occurrence behavior is becoming more important, particularly in water-related sectors. Many studies have identified a more comprehensive pattern of the daily rainfall behavior based on the Markov chain models. One of the aims in fitting the Markov chain models of various orders to the daily rainfall occurrence is to determine the optimum order. In this study, the optimum order of the Markov chain models for a 5-day sequence is examined at each of the 18 rainfall stations in Peninsular Malaysia, which have been selected based on the availability of the data, using the Akaike (AIC) and Bayesian (BIC) information criteria. The identification of the most appropriate order in describing the distribution of the wet (dry) spells for each of the rainfall stations is obtained using the Kolmogorov-Smirnov goodness-of-fit test. It is found that the optimum order varies according to the threshold level used (e.g., either 0.1 or 10.0 mm), the location of the region and the type of monsoon season. At most stations, Markov chain models of a higher order are found to be optimum for rainfall occurrence during the northeast monsoon season for both threshold levels. However, it is generally found that regardless of the monsoon season, the first-order model is optimum for the northwestern and eastern regions of the peninsula when the threshold level of 10.0 mm is considered. The analysis indicates that the first-order Markov chain model is most appropriate for describing the distribution of wet spells, whereas the higher-order models are found to be adequate for the dry spells at most of the rainfall stations for both threshold levels and monsoon seasons.
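
    A minimal sketch of the order-selection step described above (illustrative Python, not the authors' code): transition counts for a binary wet/dry series are turned into a log-likelihood for each candidate order, and AIC and BIC are compared. The toy series stands in for a thresholded daily rainfall record.

      import numpy as np
      from collections import Counter

      def fit_markov(seq, order):
          """Log-likelihood and parameter count of an order-`order` binary Markov chain."""
          if order == 0:
              counts = Counter(seq)
              n = len(seq)
              ll = sum(c * np.log(c / n) for c in counts.values())
              return ll, 1                       # one free probability
          trans = Counter((tuple(seq[i - order:i]), seq[i]) for i in range(order, len(seq)))
          ctx_tot = Counter()
          for (ctx, nxt), c in trans.items():
              ctx_tot[ctx] += c
          ll = sum(c * np.log(c / ctx_tot[ctx]) for (ctx, nxt), c in trans.items())
          k = (2 ** order) * (2 - 1)             # free parameters for a binary chain
          return ll, k

      # Toy wet(1)/dry(0) record; in practice this is the thresholded daily rainfall series.
      rng = np.random.default_rng(2)
      seq = list(rng.integers(0, 2, size=2000))

      for order in range(0, 4):
          ll, k = fit_markov(seq, order)
          n_eff = len(seq) - order
          aic = -2 * ll + 2 * k
          bic = -2 * ll + k * np.log(n_eff)
          print(order, round(aic, 1), round(bic, 1))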

  3. Simulations of Statistical Model Fits to RHIC Data

    NASA Astrophysics Data System (ADS)

    Llope, W. J.

    2013-04-01

    The application of statistical model fits to experimentally measured particle multiplicity ratios allows inferences of the average values of temperatures, T, baryochemical potentials, μ_B, and other quantities at chemical freeze-out. The location of the boundary between the hadronic and partonic regions in the (μ_B, T) phase diagram, and the possible existence of a critical point, remains largely speculative. The search for a critical point using the moments of the particle multiplicity distributions in tightly centrality-constrained event samples makes the tacit assumption that the variances in the (μ_B, T) values in these samples are sufficiently small to tightly localize the events in the phase diagram. This and other aspects were explored in simulations by coupling the UrQMD transport model to the statistical model code Thermus. The phase-diagram trajectories of individual events versus time in fm/c were calculated versus centrality and beam energy. The variances of the (μ_B, T) values at freeze-out, even in narrow centrality bins, are seen to be relatively large. This suggests that a new way to constrain the events on the phase diagram may lead to more sensitive searches for the possible critical point.

  4. Statistical modelling of network panel data: goodness of fit.

    PubMed

    Schweinberger, Michael

    2012-05-01

    Networks of relationships between individuals influence individual and collective outcomes and are therefore of interest in social psychology, sociology, the health sciences, and other fields. We consider network panel data, a common form of longitudinal network data. In the framework of estimating functions, which includes the method of moments as well as the method of maximum likelihood, we propose score-type tests. The score-type tests share with other score-type tests, including the classic goodness-of-fit test of Pearson, the property that the score-type tests are based on comparing the observed value of a function of the data to values predicted by a model. The score-type tests are most useful in forward model selection and as tests of homogeneity assumptions, and possess substantial computational advantages. We derive one-step estimators which are useful as starting values of parameters in forward model selection and therefore complement the usefulness of the score-type tests. The finite-sample behaviour of the score-type tests is studied by Monte Carlo simulation and compared to t-type tests.

  5. Caloric curves fitted by polytropic distributions in the HMF model

    NASA Astrophysics Data System (ADS)

    Campa, Alessandro; Chavanis, Pierre-Henri

    2013-04-01

    We perform direct numerical simulations of the Hamiltonian mean field (HMF) model starting from non-magnetized initial conditions with a velocity distribution that is (i) Gaussian, (ii) semi-elliptical, and (iii) waterbag. Below a critical energy E_c, depending on the initial condition, this distribution is Vlasov dynamically unstable. The system undergoes a process of violent relaxation and quickly reaches a quasi-stationary state (QSS). We find that the distribution function of this QSS can be conveniently fitted by a polytrope with index (i) n = 2; (ii) n = 1; and (iii) n = 1/2. Using the values of these indices, we are able to determine the physical caloric curve T_kin(E) and explain the negative kinetic specific heat region C_kin = dE/dT_kin < 0 observed in the numerical simulations. At low energies, we find that the system has a "core-halo" structure. The core corresponds to the pure polytrope discussed above but it is now surrounded by a halo of particles. In case (iii), we recover the "uniform" core-halo structure previously found by Pakter and Levin [Phys. Rev. Lett. 106, 200603 (2011)]. We also consider unsteady initial conditions with magnetization M_0 = 1 and isotropic waterbag velocity distribution and report the complex dynamics of the system creating phase space holes and dense filaments. We show that the kinetic caloric curve is approximately constant, corresponding to a polytrope with index n_0 ≃ 3.56 (we also mention the presence of an unexpected hump). Finally, we consider the collisional evolution of an initially Vlasov stable distribution, and show that the time-evolving distribution function f(θ, v, t) can be fitted by a sequence of polytropic distributions with a time-dependent index n(t) both in the non-magnetized and magnetized regimes. These numerical results show that polytropic distributions (also called Tsallis distributions) provide in many cases a good fit of the QSSs. They may even be the rule rather than the exception

  6. Modelling age and secular differences in fitness between basketball players.

    PubMed

    Drinkwater, Eric J; Hopkins, Will G; McKenna, Michael J; Hunt, Patrick H; Pyne, David B

    2007-06-01

    Concerns about the value of physical testing and apparently declining test performance in junior basketball players prompted this retrospective study of trends in anthropometric and fitness test scores related to recruitment age and recruitment year. The participants were 1011 females and 1087 males entering Basketball Australia's State and National programmes (1862 and 236 players, respectively). Players were tested on 2.6 +/- 2.0 (mean +/- s) occasions over 0.8 +/- 1.0 year. Test scores were adjusted to recruitment age (14-19 years) and recruitment year (1996-2003) using mixed modelling. Effects were estimated by log transformation and expressed as standardized (Cohen) differences in means. National players scored more favourably than State players on all tests, with the differences being generally small (standardized differences, 0.2-0.6) or moderate (0.6-1.2). On all tests, males scored more favourably than females, with large standardized differences (>1.2). Athletes entering at age 16 performed at least moderately better than athletes entering at age 14 on most tests (standardized differences, 0.7-2.1), but test scores often plateaued or began to deteriorate at around 17 years. Some fitness scores deteriorated over the 8-year period, most notably a moderate increase in sprint time and moderate (National male) to large (National female) declines in shuttle run performance. Variation in test scores between National players was generally less than that between State players (ratio of standard deviations, 0.83-1.18). More favourable means and lower variability in athletes of a higher standard highlight the potential utility of these tests in junior basketball programmes, although secular declines should be a major concern of Australian basketball coaches.
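
    For reference, the standardized (Cohen) difference used above to express effects is, in its generic form (not the exact mixed-model adjustment of the paper),

      d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
      \qquad
      s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},

    with the thresholds quoted in the abstract (roughly 0.2-0.6 small, 0.6-1.2 moderate, >1.2 large) used to label the magnitude of the differences.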

  7. RNA Virus Evolution via a Fitness-Space Model

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev S.; Levine, Herbert; Kessler, David A.

    1996-06-01

    We present a mean-field theory for the evolution of RNA virus populations. The theory operates with a distribution of the population in a one-dimensional fitness space, and is valid for sufficiently smooth fitness landscapes. Our approach explains naturally the recent experimental observation [I. S. Novella et al., Proc. Natl. Acad. Sci. U.S.A. 92, 5841-5844 (1995)] of two distinct stages in the growth of virus fitness.

  8. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require.

  10. Modelling microbial dechlorination of trichloroethene: investigating the trade-off between quality of fit and parameter reliability.

    PubMed

    Kandris, K; Antoniou, K; Pantazidou, M; Mamais, D

    2015-03-01

    This work puts forth a heuristic approach for investigating compromises between quality of fit and parameter reliability for the Monod-type kinetics employed to model microbial reductive dechlorination of trichloroethene. The methodology is demonstrated with three models of increasing fidelity and complexity. Model parameters were estimated with a stochastic global optimization algorithm, using scarce and inherently noisy experimental data from a mixed anaerobic microbial culture, which dechlorinated trichloroethene to ethene completely. Parameter reliability of each model was assessed using a Monte Carlo technique. Finally, an alternate quantity of applied interest was evaluated in order to assist with model discrimination. Results from the application of our approach suggest that the modeler should examine the implementation of conceptually simple models, even if they are a crude abstraction of reality, as they can be computationally less demanding and adequately accurate when model performance is assessed with criteria of applied interest, such as chloroethene elimination time.
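
    For orientation, the simplest single-substrate form of the Monod-type kinetics referred to above is (a generic textbook formulation; the three models of increasing fidelity compared in the paper are not reproduced here):

      \frac{dS}{dt} = -\,\frac{k_{\max}\, X\, S}{K_S + S},
      \qquad
      \frac{dX}{dt} = Y\,\frac{k_{\max}\, X\, S}{K_S + S} - b\,X,

    where S is the chloroethene concentration, X the dechlorinating biomass, k_max the maximum specific utilization rate, K_S the half-saturation constant, Y the yield and b the decay coefficient.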

  11. A Comparison of Four Estimators of a Population Measure of Model Fit in Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Zhang, Wei

    2008-01-01

    A major issue in the utilization of covariance structure analysis is model fit evaluation. Recent years have witnessed increasing interest in various test statistics and so-called fit indexes, most of which are actually based on or closely related to F[subscript 0], a measure of model fit in the population. This study aims to provide a systematic…

  12. Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2011-01-01

    The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having various conditions…

  13. Corrections to the paper "Fitting the Armitage-Doll model to radiation-exposed cohorts and implications for population cancer risks"

    SciTech Connect

    Little, M.P.; Hawkins, M.M.; Charles, M.W.; Hildreth, N.G.

    1994-01-01

    A recent paper analyzed patterns of cancer in the Japanese atomic bomb survivors and three other groups exposed to radiation by fitting the so-called multistage model of Armitage and Doll. The paper concluded that the incidence of solid cancer could be described adequately by a model in which up to two stages affected by radiation were assumed but that the data for leukemia within the bomb survivors might not be so well fitted. This was in part because of a failure to account for the observed linear-quadratic dose response that has been observed in the Japanese cohort. It has recently come to our attention that there was a mistake in the fits of the model with two adjacent radiation-affected stages, whereby the quadratic coefficient in dose was being set to zero in all the fits. This paper provides corrections in the calculations for the model and discusses the results.
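
    For context, the unmodified Armitage-Doll multistage model gives an age-specific hazard of the approximate form below (a standard formulation, not quoted from the corrected paper), and radiation exposure is usually introduced by letting the transition rates of the affected stages depend on dose, for example linear-quadratically:

      h(t) \;\approx\; \frac{\mu_1 \mu_2 \cdots \mu_k}{(k-1)!}\; t^{\,k-1},
      \qquad
      \mu_i \;\longrightarrow\; \mu_i \left(1 + \alpha_i d + \beta_i d^2\right),

    where k is the number of stages, μ_i are the stage transition rates and d is the dose; the correction reported above concerns fits in which the quadratic dose coefficient for the two adjacent radiation-affected stages had inadvertently been set to zero.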

  14. Lithologic prediction from the stratal architecture of Plio-Pleistocene Gulf of Mexico: Are the eustatic depositional systems tract models adequate

    SciTech Connect

    Butler, M.L.; Self, G.A.

    1991-03-01

    Climatic/eustatic cycles of the Plio-Pleistocene have been defined in the northern Gulf of Mexico and precisely tied to their associated sequences and lithologies by means of graphic correlation. This framework has provided the data necessary for a detailed empirical evaluation of the eustatic depositional systems tract models. The key to this evaluation is a eustatic sea-level curve derived from fossil and isotope data. A curve of this type has been defined for several sequences. Using this eustatic curve, the actual lithofacies and position of the various systems tracts were directly compared to those predicted by the models. The evaluation of the data with respect to eustatic sea level yielded conclusions that are significantly different from those predicted by the models. The most significant of these differences are: (1) significant amounts of sand were deposited in deep water during transgressive and highstand intervals; (2) the observed vertical succession of eustatic depositional systems tracts within a given sequence is transgressive, highstand, and lowstand; and (3) factors other than eustacy have been the dominant influence on facies distribution within the Plio-Pleistocene sequences studied. These results demonstrate that depositional systems tracts and internal facies distribution could not be adequately described by a single model. Therefore, sequence stratigraphic analysis should be empirically based and conducted within the context of the basin, instead of being model driven.

  15. Convergence, Admissibility, and Fit of Alternative Confirmatory Factor Analysis Models for MTMM Data

    ERIC Educational Resources Information Center

    Lance, Charles E.; Fan, Yi

    2016-01-01

    We compared six different analytic models for multitrait-multimethod (MTMM) data in terms of convergence, admissibility, and model fit to 258 samples of previously reported data. Two well-known models, the correlated trait-correlated method (CTCM) and the correlated trait-correlated uniqueness (CTCU) models, were fit for reference purposes in…

  16. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  17. An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models

    ERIC Educational Resources Information Center

    Liu, Yanlou; Tian, Wei; Xin, Tao

    2016-01-01

    The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…

  18. A Technique for Estimating Distinctive Asperity Source Models by Waveform Fitting

    NASA Astrophysics Data System (ADS)

    Matsushima, S.; Kawase, H.; Sato, T.; Graves, R. W.

    2001-12-01

    For predicting near-fault strong motion, it is important to adequately evaluate the heterogeneity of the slip distribution of the source rupture process as well as the effects of the complex subsurface geology. Since the characteristics of pulse waves derived from forward rupture directivity effects are significantly affected by the size and the slip velocity function of the asperities, it is necessary to evaluate these parameters accurately (Matsushima and Kawase, 1999). In this study, we developed a technique for estimating the rupture process, assuming distinctive asperities, by waveform fitting. In order to take into account the 3-D subsurface geology in the Green's functions, we used 3-D reciprocal Green's functions (RGFs) calculated using the methodology of Graves and Wald (2001). We assumed that the fault geometry and the hypocenter were given, and that the asperity to be estimated was rectangular and on the fault plane. We also assumed that the slip is concentrated only on the asperity. The idea of this technique is as follows. First, we calculated strong motions at observation sites using the RGFs for a given range of parameters. Then we searched for the best-fitting case by a grid search technique (Sato et al., 1998). There were eight parameters: the location of the asperity on the fault plane (X_0, Y_0), the size of the asperity (L, W), the amplitude (V_d), duration (t_d), and decay shape parameter (α) of the slip velocity function, and the rake angle (λ). We assumed that the rise time of the slip velocity function was 0.06 seconds and that it decays proportionally to exp(-αt). The initiation point of the asperity was the point closest to the hypocenter. Numerical experiments showed that we can resolve the asperity model fairly well with good stability. We are planning to extend this technique to multiple asperities and to estimate asperity models for actual earthquakes.
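
    The grid-search step lends itself to a compact sketch (illustrative Python with a toy forward model; the actual synthetics come from the 3-D reciprocal Green's functions, and only six of the eight parameters are varied here):

      import itertools
      import numpy as np

      def toy_forward(p, t):
          """Toy stand-in for waveform synthesis from reciprocal Green's functions:
          a single smooth pulse whose amplitude, arrival and width depend on the
          asperity parameters (purely illustrative)."""
          amp = p["Vd"] * p["L"] * p["W"]
          t0 = 0.1 * np.hypot(p["X0"], p["Y0"])
          return amp * np.exp(-((t - t0) / p["td"]) ** 2)

      def misfit(obs, syn):
          # Simple L2 waveform misfit, normalized by the observed energy.
          return np.sum((obs - syn) ** 2) / np.sum(obs ** 2)

      def grid_search(obs, t, grids):
          names = list(grids)
          best_m, best_p = np.inf, None
          for values in itertools.product(*(grids[n] for n in names)):
              p = dict(zip(names, values))
              m = misfit(obs, toy_forward(p, t))
              if m < best_m:
                  best_m, best_p = m, p
          return best_m, best_p

      t = np.linspace(0.0, 20.0, 400)
      truth = {"X0": 10.0, "Y0": 5.0, "L": 6.0, "W": 4.0, "Vd": 1.5, "td": 1.0}
      obs = toy_forward(truth, t)
      grids = {"X0": [5, 10, 15], "Y0": [0, 5, 10], "L": [4, 6, 8],
               "W": [2, 4, 6], "Vd": [1.0, 1.5, 2.0], "td": [0.5, 1.0, 1.5]}
      print(grid_search(obs, t, grids))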

  19. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Finch, Maria E. Hernandez

    2016-01-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…

  20. Cardiorespiratory Fitness. Role Modeling by P.E. Instructors.

    ERIC Educational Resources Information Center

    Whitley, Jim D.; And Others

    1988-01-01

    A survey determining the extent to which high school physical education teachers offered cardiorespiratory instruction found that more teachers than not regularly provided such instruction, with female teachers more likely to offer instruction than males. Physical fitness levels of the teachers did not appear to affect the amount of instruction…

  1. A Comparison of Model-Data Fit for Parametric and Nonparametric Item Response Theory Models Using Ordinal-Level Ratings

    ERIC Educational Resources Information Center

    Dyehouse, Melissa A.

    2009-01-01

    This study compared the model-data fit of a parametric item response theory (PIRT) model to a nonparametric item response theory (NIRT) model to determine the best-fitting model for use with ordinal-level alternate assessment ratings. The PIRT Generalized Graded Unfolding Model (GGUM) was compared to the NIRT Mokken model. Chi-square statistics…

  2. The Search for "Optimal" Cutoff Properties: Fit Index Criteria in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Sivo, Stephen A.; Xitao, Fan; Witta, E. Lea; Willse, John T.

    2006-01-01

    This study is a partial replication of L. Hu and P. M. Bentler's (1999) fit criteria work. The purpose of this study was twofold: (a) to determine whether cut-off values vary according to which model is the true population model for a dataset and (b) to identify which of 13 fit indexes behave optimally by retaining all of the correct models while…

  3. Cost-Sensitive Boosting: Fitting an Additive Asymmetric Logistic Regression Model

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Jie; Mao, Yao-Bin; Wang, Zhi-Quan; Xiang, Wen-Bo

    Conventional machine learning algorithms like boosting tend to treat all misclassification errors equally, which is not adequate for certain cost-sensitive classification problems such as object detection. Although many cost-sensitive extensions of boosting, obtained by directly modifying the weighting strategies of the corresponding original algorithms, have been proposed and reported, they are heuristic in nature, have only been shown effective by empirical results, and lack sound theoretical analysis. This paper develops a framework from a statistical insight that can embody almost all existing cost-sensitive boosting algorithms: fitting an additive asymmetric logistic regression model by stage-wise optimization of certain criteria. Four cost-sensitive versions of boosting algorithms are derived, namely CSDA, CSRA, CSGA and CSLB, which respectively correspond to Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost and LogitBoost. Experimental results on the application of face detection have shown the effectiveness of the proposed learning framework in the reduction of the cumulative misclassification cost.
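
    As a concrete but deliberately generic illustration of cost-sensitive boosting, the sketch below weights samples by per-example misclassification costs inside an otherwise standard discrete AdaBoost loop. This is the kind of heuristic reweighting the abstract alludes to, not the CSDA/CSRA/CSGA/CSLB algorithms derived in the paper.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def cost_sensitive_adaboost(X, y, costs, n_rounds=50):
          """Generic cost-weighted discrete AdaBoost (illustrative only).
          y     : labels in {-1, +1}
          costs : per-sample misclassification costs (e.g. higher for missed positives)."""
          w = costs / costs.sum()               # cost-aware initial weights
          learners, alphas = [], []
          for _ in range(n_rounds):
              stump = DecisionTreeClassifier(max_depth=1)
              stump.fit(X, y, sample_weight=w)
              pred = stump.predict(X)
              err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
              alpha = 0.5 * np.log((1 - err) / err)
              w = w * np.exp(-alpha * y * pred)  # mistakes (y * pred = -1) gain weight
              w /= w.sum()
              learners.append(stump)
              alphas.append(alpha)
          def predict(Xnew):
              score = sum(a * l.predict(Xnew) for a, l in zip(alphas, learners))
              return np.sign(score)
          return predict

      # Toy usage: false negatives cost five times more than false positives.
      rng = np.random.default_rng(3)
      X = rng.normal(size=(400, 2))
      y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
      costs = np.where(y == 1, 5.0, 1.0)
      clf = cost_sensitive_adaboost(X, y, costs)
      print((clf(X) == y).mean())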

  4. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    ERIC Educational Resources Information Center

    Farmer, Jim

    2010-01-01

    In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days. It is…

  5. Aiming towards improved flood forecasting: Identification of an adequate model structure for a semi-arid and data-scarce region

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2015-04-01

    A lot of effort has already been put into the development of forecasting systems to warn people of approaching flood events. Such systems, however, are influenced by various sources of uncertainty which constrain the skill of forecasts. The main goal of this study is the identification, quantification and reduction of uncertainties to provide improved early warnings with adequate lead times in a data-scarce region with strong seasonality of the hydrological regime. This includes the setup of hydrological models and post-processing of simulation results by mathematical means such as data assimilation. The focus area is the Jaguaribe watershed in northeastern Brazil. The region is characterized by a seasonal climate with strong inter-annual variation and recurrent droughts. To ensure a secure water supply also during the dry season several thousand small and some large reservoirs have been constructed. On the other hand, floods caused by heavy rain events are an issue as well. This topic, however, so far has hardly been considered by the scientific community and until today no flood forecasting system exists for that region. To identify the most appropriate model structure for the catchment the process-based hydrological model for semi-arid environments WASA was implemented into the eco-hydrological simulation environment ECHSE. The environment consists of a generic part providing data types and simulation methods, and a problem-specific part where the user can implement different model formulations. This provides the possibility to test various process realisations under consistent input and output data structures. The most appropriate model structure can then be determined by statistical means such as Bayesian model averaging. Subsequently, forecast results may be updated by post-processing and/or data assimilation. Furthermore, methods of data fusion can be used to combine measurements of different quality and resolution, such as in-situ and remotely sensed data

  6. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  7. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  8. Performance of the Generalized S-X[Superscript 2] Item Fit Index for Polytomous IRT Models

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2008-01-01

    Orlando and Thissen's S-X[superscript 2] item fit index has performed better than traditional item fit statistics such as Yen' s Q[subscript 1] and McKinley and Mill' s G[superscript 2] for dichotomous item response theory (IRT) models. This study extends the utility of S-X[superscript 2] to polytomous IRT models, including the generalized partial…

  9. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  10. TRANSIT MODEL FITTING IN THE KEPLER SCIENCE OPERATIONS CENTER PIPELINE: NEW FEATURES AND PERFORMANCE

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, C. J.; Jenkins, J. M.; Quintana, E. V.; Rowe, J. F.; Seader, S. E.; Tenenbaum, P.; Twicken, J. D.

    2013-10-01

    We describe new transit model fitting features and performance of the latest release (9.1, July 2013) of the Kepler Science Operations Center (SOC) Pipeline. The targets for which a Threshold Crossing Event (TCE) is generated in the Transiting Planet Search (TPS) component of the pipeline are subsequently processed in the Data Validation (DV) component. Transit model parameters are fitted in DV to transit-like signatures in the light curves of the targets with TCEs. The transit model fitting results are used in diagnostic tests in DV, which help to validate planet candidates and identify false positive detections. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. Light curves for many targets do not contain enough information to uniquely determine the impact parameter, which results in poor convergence performance of the fitter. In the latest release of the Kepler SOC pipeline, a reduced parameter fit is included in DV: the impact parameter is set to a fixed value and the four remaining parameters are fitted. The standard transit model fit is implemented after a series of reduced parameter fits in which the impact parameter is varied between 0 and 1. Initial values for the standard transit model fit parameters are determined by the reduced parameter fit with the minimum chi-square metric. With reduced parameter fits, the robustness of the transit model fit is improved significantly. Diagnostic plots of the chi-square metrics and reduced parameter fit results illustrate how the fitted parameters vary as a function of impact parameter. Essentially, a family of transiting planet characteristics is determined in DV for each Pipeline TCE. Transit model fitting performance of release 9.1 of the Kepler SOC pipeline is demonstrated with the results of the processing of 16 quarters of flight data
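
    The reduced-parameter strategy can be sketched compactly (illustrative Python with a crude stand-in light-curve model, not the SOC pipeline code): the impact parameter is stepped over a grid, the remaining four parameters are fitted at each step, and the lowest chi-square reduced fit seeds the full fit.

      import numpy as np
      from scipy.optimize import least_squares

      def toy_transit(t, epoch, period, rp_rs, a_rs, b):
          """Smooth dip as a crude stand-in for a physical transit model:
          depth = (Rp/Rs)^2, width set by a/Rs and the impact parameter b."""
          depth = rp_rs ** 2
          dur = (period / np.pi) * np.sqrt(max(1.0 - b ** 2, 1e-6)) / a_rs
          phase = (t - epoch + 0.5 * period) % period - 0.5 * period
          return 1.0 - depth * np.exp(-0.5 * (phase / (0.25 * dur)) ** 2)

      def reduced_fit(t, flux, b_fixed, p0):
          # Fit epoch, period, Rp/Rs and a/Rs with the impact parameter held fixed.
          resid = lambda p: toy_transit(t, p[0], p[1], p[2], p[3], b_fixed) - flux
          sol = least_squares(resid, p0)
          return sol.x, float(np.sum(sol.fun ** 2))

      t = np.arange(0.0, 90.0, 0.02)
      rng = np.random.default_rng(4)
      flux = toy_transit(t, 1.3, 10.0, 0.03, 20.0, 0.3) + rng.normal(0.0, 2e-4, t.size)

      p0 = [1.28, 10.0, 0.025, 18.0]
      fits = [(b,) + reduced_fit(t, flux, b, p0) for b in np.linspace(0.0, 0.9, 10)]
      b_best, p_best, chi2_best = min(fits, key=lambda f: f[2])
      print(b_best, p_best)   # initial values for the subsequent all-parameter fit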

  11. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc⁻¹ and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
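
    For reference, the underlying decomposition that the optimized variant modifies is the standard halo-model sum of two-halo and one-halo terms (generic expressions; the fitted parameters of the code itself are not reproduced here):

      P(k) = P_{2\mathrm{h}}(k) + P_{1\mathrm{h}}(k),
      \qquad
      P_{1\mathrm{h}}(k) = \int \mathrm{d}M\, n(M) \left(\frac{M}{\bar{\rho}}\right)^{2} u^{2}(k, M),
      \qquad
      P_{2\mathrm{h}}(k) \simeq P_{\mathrm{lin}}(k) \left[\int \mathrm{d}M\, n(M)\, b(M)\, \frac{M}{\bar{\rho}}\, u(k, M)\right]^{2},

    where n(M) is the halo mass function, b(M) the linear halo bias, and u(k, M) the normalized Fourier transform of the halo density profile; baryonic feedback enters through the parameters that control halo internal structure, which is how the fits to the OWLS spectra are made.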

  12. Examining the factor structure and convergent and discriminant validity of the Levenson self-report psychopathy scale: is the two-factor model the best fitting model?

    PubMed

    Salekin, Randall T; Chen, Debra R; Sellbom, Martin; Lester, Whitney S; MacDougall, Emily

    2014-07-01

    The Levenson, Kiehl, and Fitzpatrick (1995) Self-Report Psychopathy Scale (LSRP) was introduced in the mid-1990s as a brief measure of psychopathy and has since gained considerable popularity. Despite its attractiveness as a brief psychopathy tool, the LSRP has received limited research regarding its factor structure and convergent and discriminant validity. The present study examined the construct validity of the LSRP, testing both its factor structure and the convergent and discriminant validity. Using a community sample of 1,257 undergraduates (869 females; 378 males), we tested whether a 1-, 2-, or 3-factor model best fit the data and examined the links between the resultant factor structures and external correlates. Confirmatory factor analysis (CFA) findings revealed a 3-factor model best matched the data, followed by an adequate-fitting original 2-factor model. Next, comparisons were made regarding the convergent and discriminant validity of the competing 2- and 3-factor models. Findings showed the LSRP traditional primary and secondary factors had meaningful relations with extratest variables such as neuroticism, stress tolerance, and lack of empathy. The 3-factor model showed particular problems with the Callousness scale. These findings underscore the importance of examining not only CFA fit statistics but also convergent and discriminant validity when testing factor structure models. The current findings suggest that the 2-factor model might still be the best way to interpret the LSRP. PMID:24773039

  13. Assessing Fit of Cognitive Diagnostic Models: A Case Study

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Almond, Russell G.

    2007-01-01

    A cognitive diagnostic model uses information from educational experts to describe the relationships between item performances and posited proficiencies. When the cognitive relationships can be described using a fully Bayesian model, Bayesian model checking procedures become available. Checking models tied to cognitive theory of the domains…

  14. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background: Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results: The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard

  15. Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2

    SciTech Connect

    MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.

    1999-11-01

    This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model--exponential or Weibull--is fit.
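
    A minimal sketch of the shifted tail fit described above (illustrative Python, not the FITS routine itself): exceedances over a user-supplied threshold are fitted with a shifted Weibull, and an annual-maximum distribution is formed from the number of samples per year, under a purely illustrative assumption of independent peaks.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      data = rng.gumbel(loc=10.0, scale=2.0, size=2000)   # stand-in response peaks
      T_total_hours = 2000.0                              # duration represented by the data
      x_low = np.quantile(data, 0.7)                      # user-supplied lower-bound threshold

      # Shifted Weibull fit to the exceedances above the threshold.
      exceed = data[data > x_low]
      c, loc, scale = stats.weibull_min.fit(exceed, floc=x_low)

      def annual_max_cdf(x, hours_per_year=8766.0):
          """CDF of the annual maximum, assuming independent samples."""
          p_exceed = exceed.size / data.size
          f_single = 1.0 - p_exceed * stats.weibull_min.sf(x, c, loc=loc, scale=scale)
          n_per_year = (data.size / T_total_hours) * hours_per_year
          return f_single ** n_per_year

      print(annual_max_cdf(np.array([15.0, 18.0, 21.0])))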

  16. The gradient function as an exploratory goodness-of-fit assessment of the random-effects distribution in mixed models.

    PubMed

    Verbeke, Geert; Molenberghs, Geert

    2013-07-01

    Inference in mixed models is often based on the marginal distribution obtained from integrating out random effects over a pre-specified, often parametric, distribution. In this paper, we present the so-called gradient function as a simple graphical exploratory diagnostic tool to assess whether the assumed random-effects distribution produces an adequate fit to the data, in terms of marginal likelihood. The method does not require any calculations in addition to the computations needed to fit the model, and can be applied to a wide range of mixed models (linear, generalized linear, non-linear), with univariate as well as multivariate random effects, as long as the distribution for the outcomes conditional on the random effects is correctly specified. In case of model misspecification, the gradient function gives an important, albeit informal, indication on how the model can be improved in terms of random-effects distribution. The diagnostic value of the gradient function is extensively illustrated using some simulated examples, as well as in the analysis of a real longitudinal study with binary outcome values.

  17. A simple model of group selection that cannot be analyzed with inclusive fitness.

    PubMed

    van Veelen, Matthijs; Luo, Shishi; Simon, Burton

    2014-11-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models, we show two distinct limitations that prevent recasting in terms of inclusive fitness. The first is a limitation across models. We show that if inclusive fitness is to always give the correct prediction, the definition of relatedness needs to change, continuously, along with changes in the parameters of the model. This results in infinitely many different definitions of relatedness - one for every parameter value - which strips relatedness of its meaning. The second limitation is across time. We show that one can find the trajectory for the group selection model by solving a partial differential equation, and that it is mathematically impossible to do this using inclusive fitness.

  19. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…

  20. The Relation among Fit Indexes, Power, and Sample Size in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Kim, Kevin H.

    2005-01-01

    The relation among fit indexes, power, and sample size in structural equation modeling is examined. The noncentrality parameter is required to compute power. The 2 existing methods of computing power have estimated the noncentrality parameter by specifying an alternative hypothesis or alternative fit. These methods cannot be implemented easily and…

  1. Note: curve fit models for atomic force microscopy cantilever calibration in water.

    PubMed

    Kennedy, Scott J; Cole, Daniel G; Clark, Robert L

    2011-11-01

    Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can vary by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%.
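
    The underlying fit in the thermal noise method can be sketched as follows (illustrative Python; the note's specific alternative noise models for water are not reproduced): the thermal power spectral density is fitted with a damped simple-harmonic-oscillator line shape plus a white-noise floor.

      import numpy as np
      from scipy.optimize import curve_fit

      def sho_psd(f, a, f0, q, white):
          """Damped SHO power spectral density plus a white-noise floor."""
          return a / ((1.0 - (f / f0) ** 2) ** 2 + (f / (f0 * q)) ** 2) + white

      # Synthetic spectrum standing in for measured cantilever data in water.
      rng = np.random.default_rng(6)
      f = np.linspace(1e3, 60e3, 2000)
      true = dict(a=1e-4, f0=25e3, q=2.5, white=5e-6)
      psd = sho_psd(f, **true) * rng.chisquare(2, f.size) / 2.0   # periodogram-like scatter

      p0 = [5e-5, 20e3, 3.0, 1e-6]
      popt, _ = curve_fit(sho_psd, f, psd, p0=p0, maxfev=20000)
      print(dict(zip(["a", "f0", "q", "white"], popt)))

      # The thermal-method spring constant then follows from the area under the
      # fitted resonance via equipartition, k ~ k_B * T / <x^2> (not computed here).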

  2. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

    The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…

  3. Modeling chromatic instrumental effects for a better model fitting of optical interferometric data

    NASA Astrophysics Data System (ADS)

    Tallon, M.; Tallon-Bosc, I.; Chesneau, O.; Dessart, L.

    2014-07-01

    Current interferometers often collect data simultaneously in many spectral channels by using dispersed fringes. Such polychromatic data provide powerful insights into various physical properties, where the observed objects show particular spectral features. Furthermore, one can measure spectral differential visibilities that do not directly depend on any calibration by a reference star. But such observations may be sensitive to instrumental artifacts that must be taken into account in order to fully exploit the polychromatic information of interferometric data. As a specimen, we consider here an observation of P Cygni with the VEGA visible combiner on the CHARA interferometer. Indeed, although P Cygni is particularly well modeled by the radiative transfer code CMFGEN, we observe questionable discrepancies between expected and actual interferometric data. The problem is to determine their origin and disentangle possible instrumental effects from the astrophysical information. By using an expanded model fitting, which includes several instrumental features, we show that the differential visibilities are well explained by instrumental effects that could be otherwise attributed to the object. Although this approach leads to more reliable results, it assumes a fit specific to a particular instrument, and makes it more difficult to develop a generic model fitting independent of any instrument.

  4. Fringe Fitting

    NASA Astrophysics Data System (ADS)

    Cotton, W. D.

    Fringe Fitting Theory; Correlator Model Delay Errors; Fringe Fitting Techniques; Baseline; Baseline with Closure Constraints; Global; Solution Interval; Calibration Sources; Source Structure; Phase Referencing; Multi-band Data; Phase-Cals; Multi- vs. Single-band Delay; Sidebands; Filtering; Establishing a Common Reference Antenna; Smoothing and Interpolating Solutions; Bandwidth Synthesis; Weights; Polarization; Fringe Fitting Practice; Phase Slopes in Time and Frequency; Phase-Cals; Sidebands; Delay and Rate Fits; Signal-to-Noise Ratios; Delay and Rate Windows; Details of Global Fringe Fitting; Multi- and Single-band Delays; Phase-Cal Errors; Calibrator Sources; Solution Interval; Weights; Source Model; Suggested Procedure; Bandwidth Synthesis

  5. Diploid biological evolution models with general smooth fitness landscapes and recombination.

    PubMed

    Saakian, David B; Kirakosyan, Zara; Hu, Chin-Kun

    2008-06-01

    Using a Hamilton-Jacobi equation approach, we obtain analytic equations for steady-state population distributions and mean fitness functions for Crow-Kimura and Eigen-type diploid biological evolution models with general smooth hypergeometric fitness landscapes. Our numerical solutions of diploid biological evolution models confirm the analytic equations obtained. We also study the parallel diploid model for the simple case of recombination and calculate the variance of distribution, which is consistent with numerical results. PMID:18643300

  6. An Experimentally Determined Evolutionary Model Dramatically Improves Phylogenetic Fit

    PubMed Central

    Bloom, Jesse D.

    2014-01-01

    All modern approaches to molecular phylogenetics require a quantitative model for how genes evolve. Unfortunately, existing evolutionary models do not realistically represent the site-heterogeneous selection that governs actual sequence change. Attempts to remedy this problem have involved augmenting these models with a burgeoning number of free parameters. Here, I demonstrate an alternative: Experimental determination of a parameter-free evolutionary model via mutagenesis, functional selection, and deep sequencing. Using this strategy, I create an evolutionary model for influenza nucleoprotein that describes the gene phylogeny far better than existing models with dozens or even hundreds of free parameters. Emerging high-throughput experimental strategies such as the one employed here provide fundamentally new information that has the potential to transform the sensitivity of phylogenetic and genetic analyses. PMID:24859245

  7. Fitting Partially Nonlinear Random Coefficient Models as SEMs

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Cudeck, Robert; du Toit, Stephen H. C.

    2006-01-01

    The nonlinear random coefficient model has become increasingly popular as a method for describing individual differences in longitudinal research. Although promising, the nonlinear model is not utilized as often as it might be because software options are still somewhat limited. In this article we show that a specialized version of the model…

  8. Fitting and Testing Conditional Multinormal Partial Credit Models

    ERIC Educational Resources Information Center

    Hessen, David J.

    2012-01-01

    A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item scores,…

  9. Fitting degradation of shoreline scarps by a nonlinear diffusion model

    USGS Publications Warehouse

    Andrews, D.J.; Buckna, R.C.

    1987-01-01

    The diffusion model of degradation of topographic features is a promising means by which vertical offsets on Holocene faults might be dated. In order to calibrate the method, we have examined present-day profiles of wave-cut shoreline scarps of late Pleistocene lakes Bonneville and Lahontan. A table is included that allows easy application of the model to scarps with simple initial shape. -from Authors

  10. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2015-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite-energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
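
    As a loose sketch of the idea of approximating a fractional-order frequency response with products of first-order factors (not the formulation derived in the paper), the snippet below builds an Oustaloup-style pole/zero approximation of s^alpha and checks it against an ideal -5/6 magnitude slope, whose square gives the familiar -5/3 von Karman/Kolmogorov roll-off. All parameter values are illustrative.

        import numpy as np

        def oustaloup_pz(alpha, w_lo, w_hi, N=4):
            """Zeros, poles, gain approximating s**alpha (0 < alpha < 1) over [w_lo, w_hi]."""
            k = np.arange(-N, N + 1)
            zeros = -w_lo * (w_hi / w_lo) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
            poles = -w_lo * (w_hi / w_lo) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
            gain = w_hi ** alpha
            return zeros, poles, gain

        def freq_response(zeros, poles, gain, w):
            s = 1j * w
            num = np.prod([s - z for z in zeros], axis=0)
            den = np.prod([s - p for p in poles], axis=0)
            return gain * num / den

        # Approximate a -5/6 magnitude slope using the reciprocal of an s**(5/6) filter
        alpha, w_lo, w_hi = 5.0 / 6.0, 1e-2, 1e3
        z, p, k = oustaloup_pz(alpha, w_lo, w_hi, N=5)
        w = np.logspace(-1, 2, 400)
        approx = 1.0 / freq_response(z, p, k, w)
        ideal = (1j * w) ** (-alpha)
        err_db = 20 * np.max(np.abs(np.log10(np.abs(approx / ideal))))
        print(f"max in-band magnitude error: {err_db:.2f} dB")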

  11. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2010-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite-energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  12. Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.

    2014-06-01

    We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, which were collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search of Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and
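
    A minimal sketch of a four-parameter trapezoidal transit fit of the kind described, using scipy's Levenberg-Marquardt least squares on synthetic data; the model form, cadence, and noise level are assumptions for the example, not the pipeline's implementation.

        import numpy as np
        from scipy.optimize import least_squares

        def trapezoid(t, epoch, depth, duration, ingress):
            """Unit baseline minus a trapezoidal dip (total duration, linear ingress/egress)."""
            ing = max(abs(ingress), 1e-4)                 # guard against non-physical trial steps
            x = np.abs(t - epoch)
            dip = np.clip((0.5 * duration - x) / ing, 0.0, 1.0)
            return 1.0 - depth * dip

        def residuals(theta, t, flux):
            return trapezoid(t, *theta) - flux

        rng = np.random.default_rng(1)
        t = np.linspace(-0.5, 0.5, 2000)                                   # days, hypothetical cadence
        flux = trapezoid(t, 0.0, 500e-6, 0.25, 0.03) + 50e-6 * rng.standard_normal(t.size)

        theta0 = [0.01, 300e-6, 0.2, 0.02]                                 # crude initial guess
        fit = least_squares(residuals, theta0, args=(t, flux), method="lm")
        print(dict(zip(["epoch", "depth", "duration", "ingress"], np.round(fit.x, 6))))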

  13. A no-scale inflationary model to fit them all

    SciTech Connect

    Ellis, John; García, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V.

    2014-08-01

    The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic m^2 φ^2/2 potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio r that is highly consistent with the Starobinsky R + R^2 model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction n_s ≅ 0.96.

  14. Fitness model for the Italian interbank money market

    NASA Astrophysics Data System (ADS)

    de Masi, G.; Iori, G.; Caldarelli, G.

    2006-12-01

    We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth we can determine the formation of different groups of banks characterized by different business strategies. The model based on Pareto’s law makes no use of growth or preferential attachment and it correctly reproduces the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies in the market of liquidity.
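
    A rough sketch of a static fitness network in the spirit of this record: each bank draws a fitness from a Pareto distribution, and links form with a probability that depends only on the two fitnesses, with no growth and no preferential attachment. The linking function and parameter values below are illustrative assumptions, not those calibrated in the paper.

        import numpy as np

        rng = np.random.default_rng(42)
        n, pareto_exp, z = 300, 2.5, 0.02          # banks, Pareto tail exponent, overall density knob

        fitness = rng.pareto(pareto_exp, n) + 1.0  # Pareto-distributed bank "fitness" (e.g. size)

        def link_prob(xi, xj, z):
            """Probability that two banks trade, increasing in both fitnesses, capped below 1."""
            return z * xi * xj / (1.0 + z * xi * xj)

        # Build the undirected interbank network in one shot (no growth, no preferential attachment)
        xi, xj = np.meshgrid(fitness, fitness, indexing="ij")
        A = (rng.random((n, n)) < link_prob(xi, xj, z)).astype(int)
        A = np.triu(A, 1)
        A = A + A.T                                 # symmetrize, no self-loops

        degree = A.sum(axis=1)
        print("mean degree:", degree.mean(), " max degree:", degree.max())
        print("fitness-degree correlation:", np.corrcoef(fitness, degree)[0, 1])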

  15. Fitness model for the Italian interbank money market.

    PubMed

    De Masi, G; Iori, G; Caldarelli, G

    2006-12-01

    We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth we can determine the formation of different groups of banks characterized by different business strategies. The model based on Pareto's law makes no use of growth or preferential attachment and it correctly reproduces the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies in the market of liquidity.

  16. Using proper regression methods for fitting the Langmuir model to sorption data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Langmuir model, originally developed for the study of gas sorption to surfaces, is one of the most commonly used models for fitting phosphorus sorption data. There are good theoretical reasons, however, against applying this model to describe P sorption to soils. Nevertheless, the Langmuir model...
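
    A minimal sketch of fitting the Langmuir isotherm to sorption data by nonlinear regression rather than a linearized transform, assuming scipy is available; the data values are made up for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(C, Smax, K):
            """Langmuir isotherm: sorbed P as a function of equilibrium concentration C."""
            return Smax * K * C / (1.0 + K * C)

        # Hypothetical sorption data: equilibrium P concentration (mg/L), sorbed P (mg/kg)
        C = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
        S = np.array([22.0, 85.0, 140.0, 210.0, 300.0, 345.0, 380.0])

        popt, pcov = curve_fit(langmuir, C, S, p0=[400.0, 0.5])
        perr = np.sqrt(np.diag(pcov))                     # 1-sigma parameter uncertainties
        print(f"Smax = {popt[0]:.1f} +/- {perr[0]:.1f},  K = {popt[1]:.3f} +/- {perr[1]:.3f}")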

  17. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  18. Design of spatial experiments: Model fitting and prediction

    SciTech Connect

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  19. On assessing model fit for distribution-free longitudinal models under missing data.

    PubMed

    Wu, P; Tu, X M; Kowalski, J

    2014-01-15

    The generalized estimating equation (GEE), a distribution-free, or semi-parametric, approach for modeling longitudinal data, is used in a wide range of behavioral, psychotherapy, pharmaceutical drug safety, and healthcare-related research studies. Most popular methods for assessing model fit are based on the likelihood function for parametric models, rendering them inappropriate for distribution-free GEE. One rare exception is a score statistic initially proposed by Tsiatis (1980) for logistic regression and later extended to GEE by Barnhart and Williamson (1998). Because GEE only provides valid inference under the missing completely at random assumption and missing values arising in most longitudinal studies do not follow such a restricted mechanism, this GEE-based score test has very limited applications in practice. We propose extensions of this goodness-of-fit test to address missing data under the missing at random assumption, a more realistic model that applies to most studies in practice. We examine the performance of the proposed tests using simulated data and demonstrate the utilities of such tests with data from a real study on geriatric depression and associated medical comorbidities. PMID:23897653

  20. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

    A difficult issue in the synthesis of piano tones by physical models is to choose the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesis sounds to be unrealistic. An original approach that estimates the parameters of a piano model, from the measurement of the string vibration, by minimizing a perceptual criterion is proposed. The minimization process that was used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process has been run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.

  1. Goodness-of-fit test for proportional subdistribution hazards model.

    PubMed

    Zhou, Bingqing; Fine, Jason; Laird, Glen

    2013-09-30

    This paper concerns using modified weighted Schoenfeld residuals to test the proportionality of subdistribution hazards for the Fine-Gray model, similar to the tests proposed by Grambsch and Therneau for independently censored data. We develop a score test for the time-varying coefficients based on the modified Schoenfeld residuals derived assuming a certain form of non-proportionality. The methods perform well in simulations and a real data analysis of breast cancer data, where the treatment effect exhibits non-proportional hazards.

  2. CPOPT : optimization for fitting CANDECOMP/PARAFAC models.

    SciTech Connect

    Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim

    2008-10-01

    Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
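
    For orientation, the snippet below sketches the alternating least squares (ALS) baseline that CPOPT is compared against: a plain numpy rank-R CP decomposition of a third-order tensor. It is a didactic sketch, not the CPOPT algorithm itself, and the tensor size and rank are arbitrary.

        import numpy as np

        def khatri_rao(B, C):
            """Column-wise Khatri-Rao product of B (J x R) and C (K x R) -> (J*K x R)."""
            return np.einsum("jr,kr->jkr", B, C).reshape(-1, B.shape[1])

        def cp_als(X, rank, n_iter=200, seed=0):
            """Rank-`rank` CP decomposition of a 3-way tensor by alternating least squares."""
            rng = np.random.default_rng(seed)
            I, J, K = X.shape
            A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
            X1 = X.reshape(I, J * K)                       # mode-1 unfolding
            X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)    # mode-2 unfolding
            X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)    # mode-3 unfolding
            for _ in range(n_iter):
                A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
                B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
                C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
            return A, B, C

        def cp_reconstruct(A, B, C):
            return np.einsum("ir,jr,kr->ijk", A, B, C)

        # Hypothetical low-rank tensor plus a little noise
        rng = np.random.default_rng(3)
        A0, B0, C0 = rng.standard_normal((8, 2)), rng.standard_normal((9, 2)), rng.standard_normal((10, 2))
        X = cp_reconstruct(A0, B0, C0) + 0.01 * rng.standard_normal((8, 9, 10))
        A, B, C = cp_als(X, rank=2)
        err = np.linalg.norm(X - cp_reconstruct(A, B, C)) / np.linalg.norm(X)
        print(f"relative reconstruction error: {err:.4f}")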

  3. Adaptation in tunably rugged fitness landscapes: the rough Mount Fuji model.

    PubMed

    Neidhart, Johannes; Szendro, Ivan G; Krug, Joachim

    2014-10-01

    Much of the current theory of adaptation is based on Gillespie's mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage.
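
    A small sketch of one common parameterization of the rough Mount Fuji model: an additive slope toward a reference genotype plus an i.i.d. random field (Gumbel-distributed here, which is an assumption of this example). It enumerates a short binary landscape and counts its local fitness maxima.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(7)
        L, c = 10, 0.3                                      # sequence length, slope toward the peak

        genotypes = np.array(list(product([0, 1], repeat=L)))     # all 2^L binary sequences
        dist = genotypes.sum(axis=1)                               # Hamming distance to the all-zero reference
        fitness = -c * dist + rng.gumbel(size=genotypes.shape[0])  # RMF: additive slope + random field

        weights = 1 << np.arange(L - 1, -1, -1)                    # maps a genotype to its row index

        def is_local_max(idx):
            """True if genotype idx is fitter than all L single-mutant neighbours."""
            g = genotypes[idx]
            for site in range(L):
                neighbour = g.copy()
                neighbour[site] ^= 1
                j = int(neighbour.dot(weights))
                if fitness[j] >= fitness[idx]:
                    return False
            return True

        n_maxima = sum(is_local_max(i) for i in range(genotypes.shape[0]))
        print(f"local maxima: {n_maxima} of {2**L} genotypes (c = {c})")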

  4. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507

  5. Are pollination "syndromes" predictive? Asian dalechampia fit neotropical models.

    PubMed

    Armbruster, W Scott; Gong, Yan-Bing; Huang, Shuang-Quan

    2011-07-01

    Using pollination syndrome parameters and pollinator correlations with floral phenotype from the Neotropics, we predicted that Dalechampia bidentata Blume (Euphorbiaceae) in southern China would be pollinated by female resin-collecting bees between 12 and 20 mm in length. Observations in southwestern Yunnan Province, China, revealed pollination primarily by resin-collecting female Megachile (Callomegachile) faceta Bingham (Hymenoptera: Megachilidae). These bees, at 14 mm in length, were in the predicted size range, confirming the utility of syndromes and models developed in distant regions. Phenotypic selection analyses and estimation of adaptive surfaces and adaptive accuracies together suggest that the blossoms of D. bidentata are well adapted to pollination by their most common floral visitors. PMID:21670584

  6. Fitting measurement models to vocational interest data: are dominance models ideal?

    PubMed

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
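
    The contrast between the two response processes can be sketched numerically: a dominance (2PL) item response function keeps rising with the latent trait, whereas a single-peaked ideal point function falls off on both sides of the item location. The unfolding form below is a simplified stand-in for the GGUM, not its actual parameterization.

        import numpy as np

        def irf_2pl(theta, a, b):
            """Dominance model: monotone-increasing probability of endorsement."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def irf_ideal_point(theta, a, delta):
            """Simplified single-peaked (unfolding) IRF: endorsement peaks where theta == delta."""
            return np.exp(-0.5 * a * (theta - delta) ** 2)

        theta = np.linspace(-3, 3, 7)
        print("theta                  :", theta)
        print("2PL (a=1.5, b=0)       :", np.round(irf_2pl(theta, 1.5, 0.0), 2))
        print("ideal point (a=1, d=0) :", np.round(irf_ideal_point(theta, 1.0, 0.0), 2))
        # A respondent far above the item location and one located right at it both endorse
        # with probability near 1 under the 2PL, but they are clearly separated under the
        # unfolding IRF, which is the distinction the study exploits.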

  7. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  8. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X[superscript 2]. This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X[superscript 2] are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  9. Revisiting a Statistical Shortcoming When Fitting the Langmuir Model to Sorption Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Langmuir model is commonly used for describing sorption behavior of reactive solutes to surfaces. Fitting the Langmuir model to sorption data requires either the use of nonlinear regression or, alternatively, linear regression using one of the linearized versions of the model. Statistical limit...

  10. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes

    NASA Astrophysics Data System (ADS)

    Shekhar, Karthik; Ruberman, Claire F.; Ferguson, Andrew L.; Barton, John P.; Kardar, Mehran; Chakraborty, Arup K.

    2013-12-01

    Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses.

  11. Transit Model Fitting in Processing Four Years of Kepler Science Data: New Features and Performance

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher; Jenkins, Jon Michael; Quintana, Elisa; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph

    2015-08-01

    We present new transit model fitting features and performance of the latest release (9.3, March 2015) of the Kepler Science Operations Center (SOC) Pipeline, which will be used for the final processing of four years of Kepler science data later this year. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate the planet detections. The standard limb-darkened transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is also fitted by a trapezoidal transit model with four parameters: transit epoch time, depth, duration and ratio of ingress time to duration. The fitted trapezoidal transit model is used in the diagnostic tests when the fit with the standard transit model fails or when the fit is not performed, e.g. for suspected eclipsing binaries. Additional parameters, such as the equilibrium temperature and effective stellar flux (i.e. insolation) of the planet candidate, are derived from the transit model fit parameters to characterize pipeline candidates for the search of Earth-size planets in the habitable zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. The results of the transit model fitting for the TCEs identified by the Kepler SOC Pipeline are included in the DV reports and one-page report summaries, which are accessible by the science community at NASA Exoplanet Archive

  12. Evaluating the use of 'goodness-of-fit' measures in hydrologic and hydroclimatic model validation

    USGS Publications Warehouse

    Legates, D.R.; McCabe, G.J.

    1999-01-01

    Correlation and correlation-based measures (e.g., the coefficient of determination) have been widely used to evaluate the 'goodness-of-fit' of hydrologic and hydroclimatic models. These measures are oversensitive to extreme values (outliers) and are insensitive to additive and proportional differences between model predictions and observations. Because of these limitations, correlation-based measures can indicate that a model is a good predictor, even when it is not. In this paper, useful alternative goodness-of-fit or relative error measures (including the coefficient of efficiency and the index of agreement) that overcome many of the limitations of correlation-based measures are discussed. Modifications to these statistics to aid in interpretation are presented. It is concluded that correlation and correlation-based measures should not be used to assess the goodness-of-fit of a hydrologic or hydroclimatic model and that additional evaluation measures (such as summary statistics and absolute error measures) should supplement model evaluation tools.
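
    A small sketch of the alternative measures discussed, assuming the standard definitions of the Nash-Sutcliffe coefficient of efficiency and Willmott's index of agreement; the flow values are invented, and the biased-but-perfectly-correlated simulation illustrates why R^2 alone can be misleading.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """Coefficient of efficiency: 1 - SSE / variance of observations about their mean."""
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def index_of_agreement(obs, sim):
            """Willmott's index of agreement (bounded between 0 and 1)."""
            denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
            return 1.0 - np.sum((obs - sim) ** 2) / denom

        obs = np.array([2.1, 3.4, 8.9, 15.2, 6.7, 4.1, 2.8, 2.2])   # hypothetical daily flows
        sim = 1.4 * obs + 1.0                                        # biased but perfectly correlated

        r = np.corrcoef(obs, sim)[0, 1]
        print(f"R^2 = {r**2:.3f}  (calls the model perfect despite the bias)")
        print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
        print(f"d   = {index_of_agreement(obs, sim):.3f}")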

  13. Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling

    PubMed Central

    Lovell, Nigel H.; Dokos, Socrates

    2013-01-01

    A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue. PMID:23935704

  14. Maintaining adequate nutrition, not probiotic administration, prevents growth stunting and maintains skeletal muscle protein synthesis rates in a piglet model of colitis.

    PubMed

    Harding, Scott V; Adegoke, Olasunkanmi A J; Fraser, Keely G; Marliss, Errol B; Chevalier, Stéphanie; Kimball, Scot R; Jefferson, Leonard S; Wykes, Linda J

    2010-03-01

    Malnutrition and cytokine-induced catabolism are pervasive in children with inflammatory bowel diseases (IBD); however, the benefits of aggressive nutrition support or of probiotics on nutrient and functional deficiencies and growth remain unclear. Piglets with dextran sulfate (DS)-induced colitis consuming a 50% macronutrient restricted diet (C-MR) were compared with those receiving probiotics (C-MRP) or adequate nutrition (C-WN) and with healthy well-nourished controls (REF). C-WN versus REF had reduced growth (-34% chest circumference and -22% snout-to-rump length gain) and a tendency toward lesser weight gain, but no differences in skeletal muscle protein fractional synthesis rates (FSR) or initiation of translation via the mTOR pathway were observed. Compared with C-WN, the C-MR and C-MRP piglets had lower weight gain, growth, and skeletal muscle FSR, and lower phosphorylated p70S6K1 with higher eIF4E*4E-BP1, indicative of reduced initiation of protein translation. Finally, plasma leucine concentrations were positively correlated with weight and phosphorylated p70S6K1, and negatively correlated with eIF4E*4E-BP1. In conclusion, reductions in weight gain, growth, protein turnover, skeletal muscle FSR, and initiation of protein translation with moderate macronutrient restriction in colitis are not ameliorated by probiotic supplementation. However, maintaining adequate nutrient intake during colitis preserves whole body protein metabolism, but growth remains compromised.

  15. Is Model Fitting Necessary for Model-Based fMRI?

    PubMed

    Wilson, Robert C; Niv, Yael

    2015-06-01

    Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models. PMID:26086934
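
    The core sensitivity question can be sketched in a few lines: Rescorla-Wagner prediction-error regressors generated with a 'true' learning rate and with a grossly mis-set one are typically highly correlated, so a GLM built on either gives similar neural results. The task, reward probability, and learning rates below are assumptions for the example, not the datasets analyzed in the paper.

        import numpy as np

        def prediction_errors(rewards, alpha):
            """Rescorla-Wagner prediction errors for a single option with learning rate alpha."""
            v, pes = 0.0, []
            for r in rewards:
                pe = r - v
                pes.append(pe)
                v += alpha * pe
            return np.array(pes)

        rng = np.random.default_rng(0)
        rewards = rng.binomial(1, 0.7, size=200).astype(float)    # hypothetical 200-trial bandit task

        pe_true = prediction_errors(rewards, alpha=0.25)           # 'true' learning rate
        pe_wrong = prediction_errors(rewards, alpha=0.60)          # grossly mis-set learning rate
        print(f"correlation between regressors: {np.corrcoef(pe_true, pe_wrong)[0, 1]:.3f}")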

  16. Is Model Fitting Necessary for Model-Based fMRI?

    PubMed

    Wilson, Robert C; Niv, Yael

    2015-06-01

    Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.

  17. Soft X-ray spectral fits of Geminga with model neutron star atmospheres

    NASA Technical Reports Server (NTRS)

    Meyer, R. D.; Pavlov, G. G.; Meszaros, P.

    1994-01-01

    The spectrum of the soft X-ray pulsar Geminga consists of two components, a softer one which can be interpreted as thermal-like radiation from the surface of the neutron star, and a harder one interpreted as radiation from a polar cap heated by relativistic particles. We have fitted the soft spectrum using a detailed magnetized hydrogen atmosphere model. The fitting parameters are the hydrogen column density, the effective temperature T_eff, the gravitational redshift z, and the distance to radius ratio, for different values of the magnetic field B. The best fits for this model are obtained when B is less than or approximately equal to 1 x 10^12 G and z lies on the upper boundary of the explored range (z = 0.45). The values of T_eff of approximately (2-3) x 10^5 K are a factor of 2-3 lower than the value of T_eff obtained for blackbody fits with the same z. The lower T_eff increases the compatibility with some proposed schemes for fast neutrino cooling of neutron stars (NSs) by the direct Urca process or by exotic matter, but conventional cooling cannot be excluded. The hydrogen atmosphere fits also imply a smaller distance to Geminga than that inferred from a blackbody fit. An accurate evaluation of the distance would require a better knowledge of the ROSAT Position Sensitive Proportional Counter (PSPC) response to the low-energy region of the incident spectrum. Our modeling of the soft component with a cooler magnetized atmosphere also implies that the hard-component fit requires a characteristic temperature which is higher (by a factor of approximately 2-3) and a surface area which is smaller (by a factor of 10^3), compared to previous blackbody fits.

  18. Multiple likelihood estimation for calibration: tradeoffs in goodness-of-fit metrics for watershed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Price, K.; Purucker, T.; Kraemer, S.; Babendreier, J. E.

    2011-12-01

    Four nested sub-watersheds (21 to 10100 km^2) of the Neuse River in North Carolina are used to investigate calibration tradeoffs in goodness-of-fit metrics using multiple likelihood methods. Calibration of watershed hydrologic models is commonly achieved by optimizing a single goodness-of-fit metric to characterize simulated versus observed flows (e.g., R^2 and Nash-Sutcliffe Efficiency Coefficient, or NSE). However, each of these objective functions heavily weights a particular aspect of streamflow. For example, NSE and R^2 both emphasize high flows in evaluating simulation fit, while the Modified Nash-Sutcliffe Efficiency Coefficient (MNSE) emphasizes low flows. Other metrics, such as the ratio of the simulated versus observed flow standard deviations (SDR), prioritize overall flow variability. In this comparison, we use informal likelihood methods to investigate the tradeoffs of calibrating streamflow on three standard goodness-of-fit metrics (NSE, MNSE, and SDR), as well as an index metric that equally weights these three objective functions to address a range of flow characteristics. We present a flexible method that allows calibration targets to be determined by modeling goals. In this process, we begin by using Latin Hypercube Sampling (LHS) to reduce the simulations required to explore the full parameter space. The correlation structure of a large suite of goodness-of-fit metrics is explored to select metrics for use in an index function that incorporates a range of flow characteristics while avoiding redundancy. An iterative informal likelihood procedure is used to narrow parameter ranges after each simulation set to areas of the range with the most support from the observed data. A stopping rule is implemented to characterize the overall goodness-of-fit associated with the parameter set for each pass, with the best-fit pass distributions used as the calibrated set for the next simulation set. This process allows a great deal of flexibility. The process is
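
    A minimal sketch of the Latin Hypercube Sampling step described above, using scipy.stats.qmc (SciPy 1.7+); the parameter names and ranges are hypothetical stand-ins for a watershed model's calibration parameters.

        import numpy as np
        from scipy.stats import qmc

        # Hypothetical calibration parameters and their prior ranges
        names = ["infiltration_cap", "baseflow_recession", "mannings_n"]
        lower = np.array([1.0, 0.001, 0.01])
        upper = np.array([150.0, 0.1, 0.8])

        sampler = qmc.LatinHypercube(d=len(names), seed=42)
        unit = sampler.random(n=500)                       # 500 samples in the unit hypercube
        params = qmc.scale(unit, lower, upper)             # rescale to the parameter ranges

        # Each row of `params` would drive one watershed-model run; the simulated flows
        # are then scored with NSE, MNSE, SDR, or an index combining all three.
        print(params[:3].round(4))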

  19. FITS: A Framework for ITS--A Computational Model of Tutoring.

    ERIC Educational Resources Information Center

    Ikeda, Mitsuru; Mizoguchi, Riichiro

    1994-01-01

    Summarizes research activities concerning FITS, a Framework for Intelligent Tutoring Systems, and discusses the major results obtained thus far. Topics include system architecture; domain independent framework; student model module; expertise module; tutoring strategies; and a model of tutor's decision making, including knowledge sources and…

  20. Genetic Model Fitting in IQ, Assortative Mating & Components of IQ Variance.

    ERIC Educational Resources Information Center

    Capron, Christiane; Vetta, Adrian R.; Vetta, Atam

    1998-01-01

    The biometrical school of scientists who fit models to IQ data trace their intellectual ancestry to R. Fisher (1918), but their genetic models have no predictive value. Fisher himself was critical of the concept of heritability, because assortative mating, such as for IQ, introduces complexities into the study of a genetic trait. (SLD)

  1. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  2. Posterior predictive checks to quantify lack-of-fit in admixture models of latent population structure

    PubMed Central

    Mimno, David; Blei, David M.; Engelhardt, Barbara E.

    2015-01-01

    Admixture models are a ubiquitous approach to capture latent population structure in genetic samples. Despite the widespread application of admixture models, little thought has been devoted to the quality of the model fit or the accuracy of the estimates of parameters of interest for a particular study. Here we develop methods for validating admixture models based on posterior predictive checks (PPCs), a Bayesian method for assessing the quality of fit of a statistical model to a specific dataset. We develop PPCs for five population-level statistics of interest: within-population genetic variation, background linkage disequilibrium, number of ancestral populations, between-population genetic variation, and the downstream use of admixture parameters to correct for population structure in association studies. Using PPCs, we evaluate the quality of the admixture model fit to four qualitatively different population genetic datasets: the population reference sample (POPRES) European individuals, the HapMap phase 3 individuals, continental Indians, and African American individuals. We found that the same model fitted to different genomic studies resulted in highly study-specific results when evaluated using PPCs, illustrating the utility of PPCs for model-based analyses in large genomic studies. PMID:26071445

  3. Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses

    ERIC Educational Resources Information Center

    Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu

    2011-01-01

    Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…

  4. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  5. Comparing Indirect Effects in SEM: A Sequential Model Fitting Method Using Covariance-Equivalent Specifications

    ERIC Educational Resources Information Center

    Chan, Wai

    2007-01-01

    In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…

  6. A Short Commentary on "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    Gentry, Marcia

    2010-01-01

    This article presents the author's brief comment on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib (2010) takes the reader through an interesting history of human innovation and processes and situates his theory within a productivist model. The deliberate attention to…

  7. The Expected Fitness Cost of a Mutation Fixation under the One-Dimensional Fisher Model

    NASA Astrophysics Data System (ADS)

    Zhang, Liqing; Watson, Layne T.

    This paper employs Fisher's model of adaptation to understand the expected fitness effect of fixing a mutation in a natural population. Fisher's model in one dimension admits a closed form solution for this expected fitness effect. Combinations of different parameters, including the distribution of mutation lengths, population sizes, and the initial state that the population is in, are examined to see how they affect the expected fitness effect of state transitions. The results show that the expected fitness change due to the fixation of a mutation is always positive, regardless of the distributional shapes of mutation lengths, effective population sizes, and the initial state that the population is in. The further away the initial state of a population is from the optimal state, the slower the population returns to the optimal state. Effective population size (except when very small) has little effect on the expected fitness change due to mutation fixation. The always positive expected fitness change suggests that small populations may not necessarily be doomed due to the runaway process of fixation of deleterious mutations.

  8. Modeling and quantifying frequency-dependent fitness in microbial populations with cross-feeding interactions.

    PubMed

    Ribeck, Noah; Lenski, Richard E

    2015-05-01

    Coexistence of two or more populations by frequency-dependent selection is common in nature, and it often arises even in well-mixed experiments with microbes. If ecology is to be incorporated into models of population genetics, then it is important to represent accurately the functional form of frequency-dependent interactions. However, measuring this functional form is problematic for traditional fitness assays, which assume a constant fitness difference between competitors over the course of an assay. Here, we present a theoretical framework for measuring the functional form of frequency-dependent fitness by accounting for changes in abundance and relative fitness during a competition assay. Using two examples of ecological coexistence that arose in a long-term evolution experiment with Escherichia coli, we illustrate accurate quantification of the functional form of frequency-dependent relative fitness. Using a Monod-type model of growth dynamics, we show that two ecotypes in a typical cross-feeding interaction, such as when one bacterial population uses a byproduct generated by another, yield relative fitness that is linear in relative frequency.
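
    A rough sketch of the kind of dynamics described: ecotype A grows on the primary resource and excretes a byproduct that ecotype B consumes, so B's measured relative fitness declines as its starting frequency rises. The Monod parameters, assay length, and fitness definition (log-ratio of net growth) are assumptions for the example, not the values estimated in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def batch(t, y, mu_a=1.0, mu_b=1.2, Ks=0.5, Kb=0.2, yield_by=0.4):
            """Batch culture: A eats resource S and excretes byproduct P; B eats P."""
            A, B, S, P = y
            ga = mu_a * S / (Ks + S)            # Monod growth of A on the primary resource
            gb = mu_b * P / (Kb + P)            # Monod growth of B on A's byproduct
            return [ga * A, gb * B, -ga * A, yield_by * ga * A - gb * B]

        def relative_fitness_of_B(f_b0):
            """One 'competition assay' starting with B at frequency f_b0; log-ratio fitness."""
            y0 = [1e-3 * (1 - f_b0), 1e-3 * f_b0, 1.0, 0.0]
            sol = solve_ivp(batch, (0.0, 24.0), y0, rtol=1e-8)
            A1, B1 = sol.y[0, -1], sol.y[1, -1]
            return np.log(B1 / y0[1]) / np.log(A1 / y0[0])

        for f in (0.05, 0.25, 0.50, 0.75, 0.95):
            print(f"initial freq of B = {f:.2f}  ->  relative fitness of B = {relative_fitness_of_B(f):.3f}")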

  9. Development and design of a late-model fitness test instrument based on LabView

    NASA Astrophysics Data System (ADS)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating the nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable exercise approaches and make practical training plans according to their own situation. Following these trends, a late-model fitness test instrument based on LabView has been designed to remedy defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card, and a computer, while the LabView-based system software includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featuring modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  10. Modelling metabolic evolution on phenotypic fitness landscapes: a case study on C4 photosynthesis.

    PubMed

    Heckmann, David

    2015-12-01

    How did the complex metabolic systems we observe today evolve through adaptive evolution? The fitness landscape is the theoretical framework to answer this question. Since experimental data on natural fitness landscapes is scarce, computational models are a valuable tool to predict landscape topologies and evolutionary trajectories. Careful assumptions about the genetic and phenotypic features of the system under study can simplify the design of such models significantly. The analysis of C4 photosynthesis evolution provides an example for accurate predictions based on the phenotypic fitness landscape of a complex metabolic trait. The C4 pathway evolved multiple times from the ancestral C3 pathway and models predict a smooth 'Mount Fuji' landscape accordingly. The modelled phenotypic landscape implies evolutionary trajectories that agree with data on modern intermediate species, indicating that evolution can be predicted based on the phenotypic fitness landscape. Future directions will have to include structural changes of metabolic fitness landscape structure with changing environments. This will not only answer important evolutionary questions about reversibility of metabolic traits, but also suggest strategies to increase crop yields by engineering the C4 pathway into C3 plants. PMID:26614656

  11. Modelling metabolic evolution on phenotypic fitness landscapes: a case study on C4 photosynthesis.

    PubMed

    Heckmann, David

    2015-12-01

    How did the complex metabolic systems we observe today evolve through adaptive evolution? The fitness landscape is the theoretical framework to answer this question. Since experimental data on natural fitness landscapes is scarce, computational models are a valuable tool to predict landscape topologies and evolutionary trajectories. Careful assumptions about the genetic and phenotypic features of the system under study can simplify the design of such models significantly. The analysis of C4 photosynthesis evolution provides an example for accurate predictions based on the phenotypic fitness landscape of a complex metabolic trait. The C4 pathway evolved multiple times from the ancestral C3 pathway and models predict a smooth 'Mount Fuji' landscape accordingly. The modelled phenotypic landscape implies evolutionary trajectories that agree with data on modern intermediate species, indicating that evolution can be predicted based on the phenotypic fitness landscape. Future directions will have to include structural changes of metabolic fitness landscape structure with changing environments. This will not only answer important evolutionary questions about reversibility of metabolic traits, but also suggest strategies to increase crop yields by engineering the C4 pathway into C3 plants.

  12. Curve fitting toxicity test data: Which comes first, the dose response or the model?

    SciTech Connect

    Gully, J.; Baird, R.; Bottomley, J.

    1995-12-31

    The probit model frequently does not fit the concentration-response curve of NPDES toxicity test data, and non-parametric models must be used instead. The non-parametric models, trimmed Spearman-Karber, IC_p, and linear interpolation, all require a monotonic concentration-response. Any deviation from a monotonic response is smoothed to obtain the desired concentration-response characteristics. Inaccurate point estimates may result from such procedures and can contribute to imprecision in replicate tests. The following study analyzed reference toxicant and effluent data from giant kelp (Macrocystis pyrifera), purple sea urchin (Strongylocentrotus purpuratus), red abalone (Haliotis rufescens), and fathead minnow (Pimephales promelas) bioassays using commercially available curve fitting software. The purpose was to search for alternative parametric models which would reduce the use of non-parametric models for point estimate analysis of toxicity data. Two non-linear models, power and logistic dose-response, were selected as possible alternatives to the probit model based upon their toxicological plausibility and ability to model most data sets examined. Unlike non-parametric procedures, these and all parametric models can be statistically evaluated for fit and significance. The use of the power or logistic dose-response models increased the percentage of parametric model fits for each protocol and toxicant combination examined. The precision of the selected non-linear models was also compared with the EPA recommended point estimation models at several effect levels. In general, precision of the alternative models was equal to or better than that of the traditional methods. Finally, use of the alternative models usually produced more plausible point estimates in data sets where the effects of smoothing and non-parametric modeling made the point estimate results suspect.
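
    A minimal sketch of fitting a logistic dose-response model and deriving point estimates from the fitted curve, assuming scipy is available; the three-parameter form, the bioassay values, and the effect levels are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic_dr(conc, top, ec50, slope):
            """Three-parameter logistic dose-response: response falls from `top` toward 0."""
            return top / (1.0 + (conc / ec50) ** slope)

        # Hypothetical effluent bioassay: concentration (%) vs. mean normalized response
        conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])
        resp = np.array([0.98, 0.95, 0.71, 0.28, 0.05])

        popt, pcov = curve_fit(logistic_dr, conc, resp, p0=[1.0, 30.0, 2.0], maxfev=10000)
        top, ec50, slope = popt
        ec25 = ec50 * (1.0 / 3.0) ** (1.0 / slope)   # concentration giving a 25% effect
        print(f"EC50 = {ec50:.1f}%,  EC25 = {ec25:.1f}%,  slope = {slope:.2f}")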

  13. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    NASA Astrophysics Data System (ADS)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of the second multiplication curve fitting, the number and scale of Chinese E-commerce sites are analyzed. A preventing increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing increase model is ideal.

  14. Soluble Model of Evolution and Extinction Dynamics in a Rugged Fitness Landscape

    NASA Astrophysics Data System (ADS)

    Sibani, Paolo

    1997-08-01

    We consider a continuum version of a previously introduced and numerically studied model of macroevolution [P. Sibani, M. R. Schmidt, and P. Alstrøm, Phys. Rev. Lett. 75, 2055 (1995)] in which agents evolve by an optimization process in a rugged fitness landscape and die due to their competitive interactions. We first formulate dynamical equations for the fitness distribution and the survival probability. Secondly, we analytically derive the t^{-2} law which characterizes the lifetime distribution of biological genera. Thirdly, we discuss other dynamical properties of the model, such as the rate of extinction, and conclude with a brief discussion.

  15. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    PubMed

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large.

  16. Testing the Fitness Consequences of the Thermoregulatory and Parental Care Models for the Origin of Endothermy

    PubMed Central

    Clavijo-Baque, Sabrina; Bozinovic, Francisco

    2012-01-01

    The origin of endothermy is a puzzling phenomenon in the evolution of vertebrates. To address this issue, several explanatory models have been proposed. The main models proposed for the origin of endothermy are the aerobic capacity, the thermoregulatory and the parental care models. Our main proposal is that to compare the alternative models, a critical aspect is to determine how strongly natural selection was influenced by body temperature, and basal and maximum metabolic rates during the evolution of endothermy. We evaluate these relationships in the context of three main hypotheses aimed at explaining the evolution of endothermy, namely the parental care hypothesis and two hypotheses related to the thermoregulatory model (thermogenic capacity and higher body temperature models). We used data on basal and maximum metabolic rates and body temperature from 17 rodent populations, and used intrinsic population growth rate (Rmax) as a global proxy of fitness. We found greater support for the thermogenic capacity version of the thermoregulatory model. In other words, greater thermogenic capacity is associated with increased fitness in rodent populations. To our knowledge, this is the first test of the fitness consequences of the thermoregulatory and parental care models for the origin of endothermy. PMID:22606328

  17. A Comparison of Isoconversional and Model-Fitting Approaches to Kinetic Parameter Estimation and Application Predictions

    SciTech Connect

    Burnham, A K

    2006-05-17

    Chemical kinetic modeling has been used for many years in process optimization, estimating real-time material performance, and lifetime prediction. Chemists have tended towards developing detailed mechanistic models, while engineers have tended towards global or lumped models. Many, if not most, applications use global models by necessity, since it is impractical or impossible to develop a rigorous mechanistic model. Model fitting acquired a bad name in the thermal analysis community after that community realized a decade after other disciplines that deriving kinetic parameters for an assumed model from a single heating rate produced unreliable and sometimes nonsensical results. In its place, advanced isoconversional methods (1), which have their roots in the Friedman (2) and Ozawa-Flynn-Wall (3) methods of the 1960s, have become increasingly popular. In fact, as pointed out by the ICTAC kinetics project in 2000 (4), valid kinetic parameters can be derived by both isoconversional and model fitting methods as long as a diverse set of thermal histories are used to derive the kinetic parameters. The current paper extends the understanding from that project to give a better appreciation of the strengths and weaknesses of isoconversional and model-fitting approaches. Examples are given from a variety of sources, including the former and current ICTAC round-robin exercises, data sets for materials of interest, and simulated data sets.
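
    To make the isoconversional idea concrete, the sketch below runs a Friedman-type analysis on synthetic first-order data generated at three heating rates: at a fixed conversion level, ln(dα/dt) is regressed on 1/T across the heating rates and the slope returns -E/R. The rate parameters and heating rates are arbitrary illustration values, not those of the ICTAC exercises.

```python
# Friedman isoconversional analysis on synthetic first-order data,
# dα/dt = A·exp(-E/RT)·(1 - α), generated at three linear heating rates.
# A, E and the heating rates are arbitrary illustration values.
import numpy as np

R = 8.314                      # gas constant, J/mol/K
A, E = 1e13, 180e3             # pre-exponential factor (1/s), activation energy (J/mol)
heating_rates = [5/60, 10/60, 20/60]   # K/s (i.e. 5, 10, 20 K/min)

def simulate(beta, T0=450.0, T1=800.0, n=20000):
    """Euler integration of dα/dT = (A/beta)·exp(-E/RT)·(1 - α)."""
    T = np.linspace(T0, T1, n)
    dT = T[1] - T[0]
    alpha = np.zeros(n)
    for i in range(1, n):
        dadT = (A / beta) * np.exp(-E / (R * T[i - 1])) * (1 - alpha[i - 1])
        alpha[i] = min(alpha[i - 1] + dadT * dT, 0.999999)
    rate = A * np.exp(-E / (R * T)) * (1 - alpha)   # dα/dt
    return T, alpha, rate

# At a fixed conversion, regress ln(dα/dt) on 1/T across heating rates: slope = -E/R.
alpha_star = 0.5
inv_T, ln_rate = [], []
for beta in heating_rates:
    T, alpha, rate = simulate(beta)
    i = np.searchsorted(alpha, alpha_star)
    inv_T.append(1.0 / T[i])
    ln_rate.append(np.log(rate[i]))
slope, _ = np.polyfit(inv_T, ln_rate, 1)
print(f"recovered E ≈ {-slope * R / 1e3:.1f} kJ/mol (true value {E / 1e3:.0f})")
```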

  18. The FIT 2.0 Model - Fuel-cycle Integration and Tradeoffs

    SciTech Connect

    Steven J. Piet; Nick R. Soelberg; Layne F. Pincock; Eric L. Shaber; Gregory M Teske

    2011-06-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010b] are steps by the Fuel Cycle Technology program toward an analysis that accounts for the requirements and capabilities of each fuel cycle component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. This report describes FIT 2, an update of the original FIT model [Piet2010c]. FIT is a method to analyze different fuel cycles; in particular, to determine how changes in one part of a fuel cycle (say, fuel burnup, cooling, or separation efficiencies) chemically affect other parts of the fuel cycle. FIT provides the following: a rough estimate of the physics and mass-balance feasibility of combinations of technologies; if feasibility is an issue, an estimate of how performance would have to change to achieve feasibility; and an estimate of impurities in fuel and in waste as a function of separation performance, fuel fabrication, reactor, uranium source, etc.

  19. Aeroelastic modeling for the FIT team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.

  20. Conducting Tetrad Tests of Model Fit and Contrasts of Tetrad-Nested Models: A New SAS Macro

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.; Bollen, Kenneth A.

    2005-01-01

    This article describes a SAS macro to assess model fit of structural equation models by employing a test of the model-implied vanishing tetrads. Use of this test has been limited in the past, in part due to the lack of software that fully automates the test in a user-friendly way. The current SAS macro provides a straightforward method for…

  1. IRT Model Fit Evaluation from Theory to Practice: Progress and Some Unanswered Questions

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2013-01-01

    In this commentary, the authors congratulate Professor Alberto Maydeu-Olivares on his article [EJ1023617: "Goodness-of-Fit Assessment of Item Response Theory Models, Measurement: Interdisciplinary Research and Perspectives," this issue] as it provides a much needed overview on the mathematical underpinnings of the theory behind the…

  2. Longitudinal Changes in Physical Fitness Performance in Youth: A Multilevel Latent Growth Curve Modeling Approach

    ERIC Educational Resources Information Center

    Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong

    2013-01-01

    Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e. four years) in young adolescents aged from 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…

  3. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  4. Universal Screening for Emotional and Behavioral Problems: Fitting a Population-Based Model

    ERIC Educational Resources Information Center

    Schanding, G. Thomas, Jr.; Nowell, Kerri P.

    2013-01-01

    Schools have begun to adopt a population-based method to conceptualizing assessment and intervention of students; however, little empirical evidence has been gathered to support this shift in service delivery. The present study examined the fit of a population-based model in identifying students' behavioral and emotional functioning using a…

  5. Critique of "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    Harris, Carole Ruth

    2010-01-01

    This article presents the author's comments on Hisham Ghassib's article entitled "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" In his article, Ghassib (2010) provides an overview of the philosophical foundations that led to exact science, its role in what was later to become a driving force in the modern…

  6. On Fitting Nonlinear Latent Curve Models to Multiple Variables Measured Longitudinally

    ERIC Educational Resources Information Center

    Blozis, Shelley A.

    2007-01-01

    This article shows how nonlinear latent curve models may be fitted for simultaneous analysis of multiple variables measured longitudinally using Mx statistical software. Longitudinal studies often involve observation of several variables across time with interest in the associations between change characteristics of different variables measured…

  7. Super Kids--Superfit. A Comprehensive Fitness Intervention Model for Elementary Schools.

    ERIC Educational Resources Information Center

    Virgilio, Stephen J.; Berenson, Gerald S.

    1988-01-01

    Objectives and activities of the cardiovascular (CV) fitness program Super Kids--Superfit are related in this article. This exercise program is one component of the Heart Smart Program, a CV health intervention model for elementary school students. Program evaluation, parent education, and school and community intervention strategies are…

  8. Small-Sample Robust Estimators of Noncentrality-Based and Incremental Model Fit

    ERIC Educational Resources Information Center

    Herzog, Walter; Boomsma, Anne

    2009-01-01

    Traditional estimators of fit measures based on the noncentral chi-square distribution (root mean square error of approximation [RMSEA], Steiger's [gamma], etc.) tend to overreject acceptable models when the sample size is small. To handle this problem, it is proposed to employ Bartlett's (1950), Yuan's (2005), or Swain's (1975) correction of the…

  9. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    PubMed

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
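
    The sketch below illustrates the general logic of residual-based item-fit checks for a 2PL item: observed proportions correct within ability groups are compared with the model-implied item characteristic curve and standardized by the binomial standard error. It uses simulated responses and treats abilities as known, so it shows the generic idea rather than the specific ratio estimator proposed in the paper.

```python
# Generic residual-based item-fit check for a 2PL item: compare observed
# proportions correct in ability bins with the model ICC, standardized by the
# binomial standard error. Simulated data, abilities treated as known; this is
# the general idea, not the specific ratio estimator proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.2, 0.3                        # discrimination and difficulty of the item
theta = rng.normal(size=5000)          # examinee abilities

def icc(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

responses = rng.binomial(1, icc(theta, a, b))

edges = np.quantile(theta, np.linspace(0, 1, 11))   # ten ability groups
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (theta >= lo) & (theta < hi)
    n = m.sum()
    obs = responses[m].mean()
    exp = icc(theta[m], a, b).mean()
    z = (obs - exp) / np.sqrt(exp * (1 - exp) / n)   # ~ standard normal if the model fits
    print(f"[{lo:+.2f},{hi:+.2f})  n={n:4d}  obs={obs:.3f}  exp={exp:.3f}  z={z:+.2f}")
```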

  10. Comments on Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    McCluskey, Ken W.

    2010-01-01

    This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…

  11. Review of Hisham Ghassib: Where Does Creativity Fit into the Productivist Industrial Model of Knowledge Production?

    ERIC Educational Resources Information Center

    Neber, Heinz

    2010-01-01

    In this article, the author presents his comments on Hisham Ghassib's article entitled "Where Does Creativity Fit into the Productivist Industrial Model of Knowledge Production?" Ghassib (2010) describes historical transformations of science from a marginal and non-autonomous activity which had been constrained by traditions to a self-autonomous,…

  12. Fitting multilevel models with ordinal outcomes: performance of alternative specifications and methods of estimation.

    PubMed

    Bauer, Daniel J; Sterba, Sonya K

    2011-12-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ when instead fitting multilevel cumulative logit models to ordinal data: maximum likelihood (ML) or penalized quasi-likelihood (PQL). ML and PQL are compared across variations in sample size, magnitude of variance components, number of outcome categories, and distribution shape. Fitting a multilevel linear model to ordinal outcomes is shown to be inferior in virtually all circumstances. PQL performance improves markedly with the number of ordinal categories, regardless of distribution shape. In contrast to binary data, PQL often performs as well as ML when used with ordinal data. Further, the performance of PQL is typically superior to ML when the data include a small to moderate number of clusters (i.e., ≤ 50 clusters).

  13. Impact of Missing Data on Person-Model Fit and Person Trait Estimation

    ERIC Educational Resources Information Center

    Zhang, Bo; Walker, Cindy M.

    2008-01-01

    The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…

  14. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  15. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    PubMed

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their mutual combinations by couples against Microtox® and XenoScreen YES/YAS® bioassays. As the quantitative assessment of ecotoxicity of drug mixtures is a complex and sophisticated topic, in the present study we have used two major approaches to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each one of the experiments carried out. The second approach used was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function with calculation of weight (regression coefficient or other model parameter) for each of the participants in the mixture or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure for the impact of either drug A on drug B or, vice versa, of B on A. The results of the studies justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action and only a few of the mixtures show synergistic or dependent action.
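
    A worked example of the first approach (model deviation ratio under concentration addition) is sketched below with made-up EC50 values; the cut-off used to label a mixture as synergistic or antagonistic varies between studies.

```python
# Worked example (made-up numbers) of a model deviation ratio (MDR) under the
# concentration-addition (CA) reference model for a binary drug mixture.
ec50_a, ec50_b = 4.0, 12.0       # single-substance EC50s, illustration values
p_a, p_b = 0.5, 0.5              # proportions of drugs A and B in the mixture

# Concentration addition: 1/EC50_mix = p_A/EC50_A + p_B/EC50_B
ec50_mix_predicted = 1.0 / (p_a / ec50_a + p_b / ec50_b)

ec50_mix_observed = 3.5          # hypothetical bioassay result for the mixture
mdr = ec50_mix_predicted / ec50_mix_observed
print(f"CA-predicted EC50 = {ec50_mix_predicted:.2f}, MDR = {mdr:.2f}")
# MDR near 1 suggests additivity; values well above 1 point towards synergism
# and well below 1 towards antagonism (cut-offs differ between studies).
```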

  17. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM. PMID:26737125
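
    The sketch below illustrates the local-model idea on a synthetic window: an exponentially decaying cosine is fitted with SciPy and the beat frequency is read off, constrained to the physiological range. It is only a toy stand-in for the paper's algorithm and the CapnoBase data.

```python
# Toy version of the local-model idea: fit an exponentially decaying cosine to a
# short synthetic photoplethysmogram window and read off the beat frequency,
# constrained to the physiological range. Not the paper's algorithm or data.
import numpy as np
from scipy.optimize import curve_fit

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)            # 2-second analysis window
true_hr = 72.0                               # beats per minute
x = np.exp(-0.5 * t) * np.cos(2 * np.pi * (true_hr / 60.0) * t)
x += 0.05 * np.random.default_rng(1).normal(size=t.size)

def decaying_cosine(t, amp, decay, freq, phase):
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

p0 = [1.0, 0.5, 1.1, 0.0]                                    # start near the physiological range
bounds = ([0.0, 0.0, 0.5, -np.pi], [10.0, 5.0, 3.0, np.pi])  # 0.5-3 Hz, i.e. 30-180 BPM
popt, _ = curve_fit(decaying_cosine, t, x, p0=p0, bounds=bounds)
print(f"estimated heart rate ≈ {popt[2] * 60:.1f} BPM (true {true_hr:.0f})")
```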

  18. Phylogenetic Tree Reconstruction Accuracy and Model Fit when Proportions of Variable Sites Change across the Tree

    PubMed Central

    Grievink, Liat Shavit; Penny, David; Hendy, Michael D.; Holland, Barbara R.

    2010-01-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction. PMID:20525636

  19. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to anatomical properties of the bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounding and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen of the cervical vertebrae, the circle model is extended along the z-axis to a cylinder model to incorporate additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. In the experiments, the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebrae.

  20. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  1. Active Contours Using Additive Local and Global Intensity Fitting Models for Intensity Inhomogeneous Image Segmentation

    PubMed Central

    Soomro, Shafiullah; Kim, Jeong Heon; Soomro, Toufique Ahmed

    2016-01-01

    This paper introduces an improved region based active contour method with a level set formulation. The proposed energy functional integrates both local and global intensity fitting terms in an additive formulation. Local intensity fitting term influences local force to pull the contour and confine it to object boundaries. In turn, the global intensity fitting term drives the movement of contour at a distance from the object boundaries. The global intensity term is based on the global division algorithm, which can better capture intensity information of an image than Chan-Vese (CV) model. Both local and global terms are mutually assimilated to construct an energy function based on a level set formulation to segment images with intensity inhomogeneity. Experimental results show that the proposed method performs better both qualitatively and quantitatively compared to other state-of-the-art-methods. PMID:27800011

  2. Modelling of the toe trajectory during normal gait using circle-fit approximation.

    PubMed

    Fang, Juan; Hunt, Kenneth J; Xie, Le; Yang, Guo-Yuan

    2016-10-01

    This work aimed to validate the approach of using a circle to fit the toe trajectory relative to the hip and to investigate linear regression models for describing such toe trajectories from normal gait. Twenty-four subjects walked at seven speeds. Best-fit circle algorithms were developed to approximate the relative toe trajectory using a circle. It was detected that the mean approximation error between the toe trajectory and its best-fit circle was less than 4 %. Regarding the best-fit circles for the toe trajectories from all subjects, the normalised radius was constant, while the normalised centre offset reduced when the walking cadence increased; the curve range generally had a positive linear relationship with the walking cadence. The regression functions of the circle radius, the centre offset and the curve range with leg length and walking cadence were definitively defined. This study demonstrated that circle-fit approximation of the relative toe trajectories is generally applicable in normal gait. The functions provided a quantitative description of the relative toe trajectories. These results have potential application for design of gait rehabilitation technologies.
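
    A minimal algebraic (Kasa-type) least-squares circle fit of the kind used for such trajectory approximation is sketched below on synthetic arc points; it is not the authors' implementation or their gait data.

```python
# Algebraic least-squares circle fit (Kasa method) of the kind used to
# approximate a toe trajectory with a best-fit circle. Synthetic arc points,
# not gait data.
import numpy as np

def fit_circle(x, y):
    """Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Noisy points on an arc of a circle with centre (0.1, -0.8) m and radius 0.9 m.
rng = np.random.default_rng(2)
theta = np.linspace(-0.75 * np.pi, -0.25 * np.pi, 60)
x = 0.1 + 0.9 * np.cos(theta) + rng.normal(0, 0.005, theta.size)
y = -0.8 + 0.9 * np.sin(theta) + rng.normal(0, 0.005, theta.size)

cx, cy, r = fit_circle(x, y)
err = np.abs(np.hypot(x - cx, y - cy) - r)      # radial approximation error
print(f"centre=({cx:.3f},{cy:.3f}) m, radius={r:.3f} m, mean error={err.mean()*1000:.1f} mm")
```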

  3. Fitting the distribution of dry and wet spells with alternative probability models

    NASA Astrophysics Data System (ADS)

    Deni, Sayang Mohd; Jemain, Abdul Aziz

    2009-06-01

    The development of rainfall occurrence models is of great importance not only for data-generation purposes, but also in providing informative resources for future advancements in water-related sectors, such as water resource management and the hydrological and agricultural sectors. Various kinds of probability models have been introduced by previous researchers in the field to describe sequences of dry (wet) days. Based on the probability models developed previously, the present study aims to propose three types of mixture distributions, namely, the mixture of two log series distributions (LSD), the mixture of the log series and Poisson distributions (MLPD), and the mixture of the log series and geometric distributions (MLGD), as alternative probability models to describe the distribution of dry (wet) spells in daily rainfall events. In order to test the performance of the proposed new models against the other nine existing probability models, 54 data sets which had been published by several authors were reanalyzed in this study. Also, new data sets of daily observations from six selected rainfall stations in Peninsular Malaysia for the period 1975-2004 were used. In determining the best-fitting distribution to describe the observed distribution of dry (wet) spells, a Chi-square goodness-of-fit test was considered. The results revealed that the newly proposed MLGD and MLPD showed a better fit, as more than half of the data sets were successfully fitted for the distribution of dry and wet spells. However, the existing models, such as the truncated negative binomial and the modified LSD, were also among the successful probability models for representing the sequence of dry (wet) days in daily rainfall occurrence.
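
    As a simplified illustration of the workflow, the sketch below fits one candidate spell-length model (a geometric distribution, an ingredient of the proposed mixtures) by maximum likelihood and checks it with a chi-square goodness-of-fit statistic on simulated spell lengths.

```python
# Simplified workflow: fit one candidate spell-length model (a geometric
# distribution, an ingredient of the proposed mixtures) by maximum likelihood
# and check it with a chi-square goodness-of-fit statistic. Simulated spells.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
spells = rng.geometric(p=0.35, size=500)        # dry-spell lengths in days (k = 1, 2, ...)

p_hat = 1.0 / spells.mean()                     # ML estimate for the geometric distribution

# Observed and expected counts for k = 1..8, with the upper tail lumped together.
k = np.arange(1, 9)
observed = np.array([(spells == kk).sum() for kk in k] + [(spells > 8).sum()])
pmf = stats.geom.pmf(k, p_hat)
expected = np.append(pmf, 1.0 - pmf.sum()) * spells.size

chi2 = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 1 - 1                     # categories - 1 - estimated parameters
print(f"p_hat={p_hat:.3f}, chi2={chi2:.2f}, dof={dof}, p={stats.chi2.sf(chi2, dof):.3f}")
```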

  4. The Fitness Landscape of HIV-1 Gag: Advanced Modeling Approaches and Validation of Model Predictions by In Vitro Testing

    PubMed Central

    Omarjee, Saleha; Walker, Bruce D.; Chakraborty, Arup; Ndung'u, Thumbi

    2014-01-01

    Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model) that is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis and replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = −0.74, p = 3.6×10^-6) are strongly correlated, and this was further strengthened in the regularized Ising model (r = −0.83, p = 3.7×10^-12). Performance of the Potts model (r = −0.73, p = 9.7×10^-9) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
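
    The sketch below shows, with random placeholder parameters rather than the inferred Gag landscape, how an Ising-type model scores a mutant: binary indicators mark mutated sites, and per-site fields plus pairwise couplings give an energy whose increase is read as a fitness penalty.

```python
# Ising-type scoring of a mutant sequence with random placeholder parameters
# (not the inferred Gag landscape): binary indicators mark mutated sites, and
# fields plus pairwise couplings give an energy read as a fitness penalty.
import numpy as np

rng = np.random.default_rng(4)
L = 50                                    # number of protein sites
h = rng.normal(0.5, 0.3, size=L)          # fields: cost of mutating each site alone
J = np.triu(rng.normal(0.0, 0.1, size=(L, L)), 1)   # couplings, each pair counted once

def energy(z, h, J):
    """E(z) = sum_i h_i z_i + sum_{i<j} J_ij z_i z_j for z in {0,1}^L."""
    return h @ z + z @ J @ z

wild_type = np.zeros(L)
double_mutant = np.zeros(L)
double_mutant[[10, 23]] = 1.0

dE = energy(double_mutant, h, J) - energy(wild_type, h, J)
print(f"energy penalty of the double mutant: {dE:.3f} (larger => predicted less fit)")
```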

  5. Fitting complex population models by combining particle filters with Markov chain Monte Carlo.

    PubMed

    Knape, Jonas; de Valpine, Perry

    2012-02-01

    We show how a recent framework combining Markov chain Monte Carlo (MCMC) with particle filters (PFMCMC) may be used to estimate population state-space models. With the purpose of utilizing the strengths of each method, PFMCMC explores hidden states by particle filters, while process and observation parameters are estimated using an MCMC algorithm. PFMCMC is exemplified by analyzing time series data on a red kangaroo (Macropus rufus) population in New South Wales, Australia, using MCMC over model parameters based on an adaptive Metropolis-Hastings algorithm. We fit three population models to these data; a density-dependent logistic diffusion model with environmental variance, an unregulated stochastic exponential growth model, and a random-walk model. Bayes factors and posterior model probabilities show that there is little support for density dependence and that the random-walk model is the most parsimonious model. The particle filter Metropolis-Hastings algorithm is a brute-force method that may be used to fit a range of complex population models. Implementation is straightforward and less involved than standard MCMC for many models, and marginal densities for model selection can be obtained with little additional effort. The cost is mainly computational, resulting in long running times that may be improved by parallelizing the algorithm.
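
    A compact sketch of the PFMCMC idea is given below for the stochastic exponential growth state-space model: a bootstrap particle filter supplies a likelihood estimate that is plugged into a random-walk Metropolis-Hastings update of the growth rate. The data are simulated, only the growth rate is sampled, and the noise parameters are fixed, so this is a toy version rather than the kangaroo analysis.

```python
# Toy PFMCMC: a bootstrap particle filter estimates the log-likelihood of a
# stochastic exponential growth state-space model, and that estimate drives a
# random-walk Metropolis-Hastings update of the growth rate r. Simulated data;
# the process and observation noise levels are fixed rather than estimated.
import numpy as np

rng = np.random.default_rng(5)
T, sig_proc, sig_obs, r_true = 40, 0.10, 0.20, 0.05
x = 3.0 + np.cumsum(rng.normal(r_true, sig_proc, T))   # latent log-abundance
y = x + rng.normal(0.0, sig_obs, T)                    # noisy observations

def pf_loglik(r, y, n_part=400):
    """Bootstrap particle filter estimate of log p(y | r)."""
    particles = np.full(n_part, y[0])                  # crude initialisation at the first datum
    loglik = 0.0
    for t in range(1, len(y)):
        particles = particles + rng.normal(r, sig_proc, n_part)          # propagate
        logw = -0.5 * ((y[t] - particles) / sig_obs) ** 2 - np.log(sig_obs * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                 # likelihood increment
        particles = particles[rng.choice(n_part, n_part, p=w / w.sum())]  # resample
    return loglik

# Random-walk Metropolis-Hastings over r (flat prior), reusing the stored estimate.
r_cur, ll_cur, chain = 0.0, pf_loglik(0.0, y), []
for _ in range(1500):
    r_prop = r_cur + rng.normal(0, 0.02)
    ll_prop = pf_loglik(r_prop, y)
    if np.log(rng.uniform()) < ll_prop - ll_cur:
        r_cur, ll_cur = r_prop, ll_prop
    chain.append(r_cur)
print(f"posterior mean of r ≈ {np.mean(chain[300:]):.3f} (true value {r_true})")
```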

  7. Fitting parametric models of diffusion MRI in regions of partial volume

    NASA Astrophysics Data System (ADS)

    Eaton-Rosen, Zach; Cardoso, M. J.; Melbourne, Andrew; Orasanu, Eliza; Bainbridge, Alan; Kendall, Giles S.; Robertson, Nicola J.; Marlow, Neil; Ourselin, Sebastien

    2016-03-01

    Regional analysis is normally done by fitting models per voxel and then averaging over a region, accounting for partial volume (PV) only to some degree. In thin, folded regions such as the cerebral cortex, such methods do not work well, as the partial volume confounds parameter estimation. Instead, we propose to fit the models per region directly with explicit PV modeling. In this work we robustly estimate region-wise parameters whilst explicitly accounting for partial volume effects. We use a high-resolution segmentation from a T1 scan to assign each voxel in the diffusion image a probabilistic membership to each of k tissue classes. We rotate the DW signal at each voxel so that it aligns with the z-axis, then model the signal at each voxel as a linear superposition of a representative signal from each of the k tissue types. Fitting involves optimising these representative signals to best match the data, given the known probabilities of belonging to each tissue type that we obtained from the segmentation. We demonstrate this method improves parameter estimation in digital phantoms for the diffusion tensor (DT) and `Neurite Orientation Dispersion and Density Imaging' (NODDI) models. The method provides accurate parameter estimates even in regions where the normal approach fails completely, for example where partial volume is present in every voxel. Finally, we apply this model to brain data from preterm infants, where the thin, convoluted, maturing cortex necessitates such an approach.
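
    The core idea can be sketched as a linear problem: each voxel's signal is a tissue-probability-weighted mixture of one representative signal per tissue class, and those representative signals are recovered region-wise by least squares. The example below uses synthetic signals and a random segmentation; the actual method goes on to fit DT/NODDI parameters to the recovered signals.

```python
# Region-wise fit with explicit partial volume, reduced to a linear problem:
# observed voxel signals Y ≈ P @ S, where P holds per-voxel tissue probabilities
# and S holds one representative signal per tissue class. Synthetic data; the
# actual method goes on to fit DT/NODDI parameters to the recovered signals.
import numpy as np

rng = np.random.default_rng(6)
n_vox, n_grad, n_tissue = 400, 30, 3

# "True" representative signals for three tissue classes (arbitrary shapes).
S_true = np.vstack([np.exp(-0.8 * rng.random(n_grad)),
                    np.exp(-1.5 * rng.random(n_grad)),
                    np.exp(-3.0 * rng.random(n_grad))])

# Per-voxel tissue probabilities, standing in for a high-resolution segmentation.
P = rng.dirichlet(alpha=[2.0, 2.0, 1.0], size=n_vox)          # shape (n_vox, n_tissue)

# Observed signals: probability-weighted mixture plus measurement noise.
Y = P @ S_true + rng.normal(0.0, 0.01, size=(n_vox, n_grad))

# Recover the representative tissue signals for the whole region at once.
S_fit, *_ = np.linalg.lstsq(P, Y, rcond=None)
print("max abs error of recovered tissue signals:", np.abs(S_fit - S_true).max())
```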

  8. Improved cosmological model fitting of Planck data with a dark energy spike

    NASA Astrophysics Data System (ADS)

    Park, Chan-Gyung

    2015-06-01

    The Λ cold dark matter (ΛCDM) model is currently known as the simplest cosmology model that best describes observations with a minimal number of parameters. Here we introduce a cosmology model that is preferred over the conventional ΛCDM one by constructing dark energy as the sum of the cosmological constant Λ and an additional fluid that is designed to have an extremely short transient spike in energy density during the radiation-matter equality era and an early scaling behavior with radiation and matter densities. The density parameter of the additional fluid is defined as a Gaussian function plus a constant in logarithmic scale-factor space. Searching for the best-fit cosmological parameters in the presence of such a dark energy spike gives a chi-square value smaller by about 5 times the number of additional parameters introduced, and narrower constraints on the matter density and Hubble constant, compared with the best-fit ΛCDM model. The significant improvement in reducing the chi-square mainly comes from the better fitting of the Planck temperature power spectrum around the third (ℓ≈800) and sixth (ℓ≈1800) acoustic peaks. The likelihood ratio test and the Akaike information criterion suggest that the model of a dark energy spike is strongly favored by the current cosmological observations over the conventional ΛCDM model. However, based on the Bayesian information criterion, which penalizes models with more parameters, the strong evidence supporting the presence of a dark energy spike disappears. Our result emphasizes that an alternative cosmological parameter estimation with even better fitting of the same observational data is allowed in Einstein's gravity.

  9. A flexible, interactive software tool for fitting the parameters of neuronal models

    PubMed Central

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I.; Freund, Tamás F.; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool. PMID

  11. How Should We Assess the Fit of Rasch-Type Models? Approximating the Power of Goodness-of-Fit Statistics in Categorical Data Analysis

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Montano, Rosa

    2013-01-01

    We investigate the performance of three statistics, R_1, R_2 (Glas in "Psychometrika" 53:525-546, 1988), and M_2 (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005, "Psychometrika" 71:713-732, 2006), to assess the overall fit of a one-parameter logistic model (1PL) estimated by (marginal) maximum…

  12. The Blazar 3C 66A in 2003-2004: hadronic versus leptonic model fits

    SciTech Connect

    Reimer, A.

    2008-12-24

    The low-frequency peaked BL Lac object 3C 66A was the subject of an extensive multi-wavelength campaign from July 2003 till April 2004, which included quasi-simultaneous observations at optical, X-rays and very high energy gamma-rays. Here we apply the hadronic Synchrotron-Proton Blazar (SPB) model to the observed spectral energy distribution time-averaged over a flaring state, and compare the resulting model fits to those obtained from the application of the leptonic Synchrotron-Self-Compton (SSC) model. The results are used to identify diagnostic key predictions of the two blazar models for future multi-wavelength observations.

  13. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  14. Rainfall interception modelling: Is the wet bulb approach adequate to estimate mean evaporation rate from wet/saturated canopies in all forest types?

    NASA Astrophysics Data System (ADS)

    Pereira, F. L.; Valente, F.; David, J. S.; Jackson, N.; Minunno, F.; Gash, J. H.

    2016-03-01

    The Penman-Monteith equation has been widely used to estimate the maximum evaporation rate (E) from wet/saturated forest canopies, regardless of canopy cover fraction. Forests are then represented as a big leaf and interception loss considered essentially as a one-dimensional process. With increasing forest sparseness the assumptions behind this big leaf approach become questionable. In sparse forests it might be better to model E and interception loss at the tree level assuming that the individual tree crowns behave as wet bulbs ("wet bulb approach"). In this study, and for five different forest types and climate conditions, interception loss measurements were compared to modelled values (Gash's interception model) based on estimates of E by the Penman-Monteith and the wet bulb approaches. Results show that the wet bulb approach is a good, and less data-demanding, alternative for estimating E when the forest canopy is fully ventilated (very sparse forests with a narrow canopy depth). When the canopy is not fully ventilated, the wet bulb approach requires a reduction of leaf area index to the upper, more ventilated parts of the canopy, needing data on the vertical leaf area distribution, which is seldom available. In such cases, the Penman-Monteith approach seems preferable. Our data also show that canopy cover does not per se allow us to identify if a forest canopy is fully ventilated or not. New methodologies of sensitivity analyses applied to Gash's model showed that a correct estimate of E is critical for the proper modelling of interception loss.
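
    For reference, a hedged sketch of the Penman-Monteith estimate of the maximum evaporation rate from a fully wet canopy (surface resistance set to zero) is given below; the constants and input values are illustrative, not those used in the study.

```python
# Penman-Monteith estimate of the maximum evaporation rate from a fully wet
# canopy (surface resistance taken as zero). Constants and inputs are
# illustrative values, not those used in the study.
import math

def wet_canopy_evaporation(Rn, D, ga, T):
    """Evaporation rate in mm/h for net radiation Rn (W/m2), vapour pressure
    deficit D (kPa), aerodynamic conductance ga (m/s) and air temperature T (C)."""
    cp = 1010.0                                 # specific heat of air, J/kg/K
    rho = 1.2                                   # air density, kg/m3
    gamma = 0.066                               # psychrometric constant, kPa/K
    lam = (2.501 - 0.002361 * T) * 1e6          # latent heat of vaporisation, J/kg
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure, kPa
    delta = 4098.0 * es / (T + 237.3) ** 2            # slope of the es curve, kPa/K
    lambdaE = (delta * Rn + rho * cp * D * ga) / (delta + gamma)  # W/m2 (kPa-based units)
    return lambdaE / lam * 3600.0               # kg/m2/s -> mm/h

print(f"E ≈ {wet_canopy_evaporation(Rn=300.0, D=0.8, ga=0.08, T=15.0):.2f} mm/h")
```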

  15. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes.

    PubMed

    Shekhar, Karthik; Ruberman, Claire F; Ferguson, Andrew L; Barton, John P; Kardar, Mehran; Chakraborty, Arup K

    2013-12-01

    Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses. PMID:24483484

  16. unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance

    USGS Publications Warehouse

    Fiske, Ian J.; Chandler, Richard B.

    2011-01-01

    Ecological research uses data collection techniques that are prone to substantial and unique types of measurement error to address scientific questions about species abundance and distribution. These data collection schemes include a number of survey methods in which unmarked individuals are counted, or determined to be present, at spatially-referenced sites. Examples include site occupancy sampling, repeated counts, distance sampling, removal sampling, and double observer sampling. To appropriately analyze these data, hierarchical models have been developed to separately model explanatory variables of both a latent abundance or occurrence process and a conditional detection process. Because these models have a straightforward interpretation paralleling mechanisms under which the data arose, they have recently gained immense popularity. The common hierarchical structure of these models is well-suited for a unified modeling interface. The R package unmarked provides such a unified modeling framework, including tools for data exploration, model fitting, model criticism, post-hoc analysis, and model comparison.
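
    A minimal sketch of the simplest model in this family, the single-season site-occupancy model (occupancy probability psi, detection probability p), is given below with the likelihood written out directly in Python and optimised with SciPy on simulated detection histories; unmarked itself is an R package and fits such models with covariates and many extensions.

```python
# Single-season site-occupancy likelihood (occupancy psi, detection p) written
# out directly and optimised with SciPy on simulated detection histories; a toy
# counterpart of the occupancy models that unmarked fits.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
n_sites, n_visits, psi_true, p_true = 200, 4, 0.6, 0.4
z = rng.binomial(1, psi_true, n_sites)                           # latent occupancy states
Y = rng.binomial(1, p_true * z[:, None], (n_sites, n_visits))    # detection histories

def negloglik(params, Y):
    psi, p = expit(params)                        # keep probabilities in (0, 1)
    det = Y.sum(axis=1)
    J = Y.shape[1]
    # Either occupied with these detections, or unoccupied and never detected.
    lik = psi * p**det * (1 - p)**(J - det) + (1 - psi) * (det == 0)
    return -np.log(lik).sum()

res = minimize(negloglik, x0=[0.0, 0.0], args=(Y,), method="Nelder-Mead")
psi_hat, p_hat = expit(res.x)
print(f"psi_hat={psi_hat:.2f} (true {psi_true}), p_hat={p_hat:.2f} (true {p_true})")
```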

  17. Optimal circumference reduction of finger models for good prosthetic fit of a thimble-type prosthesis for distal finger amputations.

    PubMed

    Leow, M E; Prosthetist, C; Pho, R W

    2001-01-01

    The prosthetic fit of a thimble-type esthetic silicone prosthesis was retrospectively reviewed in 29 patients who were fitted following distal finger amputations. The aim was to correlate prosthetic fit with the magnitudes of circumference reduction in the finger models used to produce the prostheses and to identify the optimum reduction for the best outcome. A good fit is achieved primarily by making the prosthesis circumferentially smaller than the segment of the residual finger (residuum) over which it "cups". The percentage reduction in circumference of the finger model against the residuum model was calculated by dividing the difference in circumference between the residuum model and the finger model by the residuum model circumference and multiplying the result by 100. The computed percentage circumference reduction in the finger models ranged from small (1-3%) and moderate (5-7%) to large (8-9%). Twelve of 15 patients whose finger models had a 1-3% circumference reduction had a loose prosthetic fit. Only two of 14 patients who had a larger model circumference reduction of 5-9% had loose-fitting prostheses. Two of five patients who had an 8-9% model circumference reduction had an uncomfortably tight prosthetic fit. A 5-7% circumference reduction in the finger model was shown in this study to best translate into good fit of a thimble-type prosthesis for distal finger amputations.
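
    The percentage reduction described above amounts to the following one-line calculation (circumferences are illustrative values):

```python
# The percentage circumference reduction described above, as a one-line
# calculation; circumferences (in mm) are illustrative values.
residuum_circumference = 62.0        # circumference of the residuum model
finger_model_circumference = 58.5    # circumference of the reduced finger model
reduction_pct = (residuum_circumference - finger_model_circumference) / residuum_circumference * 100
print(f"circumference reduction = {reduction_pct:.1f}%")   # the 5-7% range gave the best fit
```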

  18. Agricultural case studies of classification accuracy, spectral resolution, and model over-fitting.

    PubMed

    Nansen, Christian; Geremias, Leandro Delalibera; Xue, Yingen; Huang, Fangneng; Parra, Jose Roberto

    2013-11-01

    This paper describes the relationship between spectral resolution and classification accuracy in analyses of hyperspectral imaging data acquired from crop leaves. The main scope is to discuss and reduce the risk of model over-fitting. Over-fitting of a classification model occurs when too many and/or irrelevant model terms are included (i.e., a large number of spectral bands), and it may lead to low robustness/repeatability when the classification model is applied to independent validation data. We outline a simple way to quantify the level of model over-fitting by comparing the observed classification accuracies with those obtained from explanatory random data. Hyperspectral imaging data were acquired from two crop-insect pest systems: (1) potato psyllid (Bactericera cockerelli) infestations of individual bell pepper plants (Capsicum annuum) with the acquisition of hyperspectral imaging data under controlled-light conditions (data set 1), and (2) sugarcane borer (Diatraea saccharalis) infestations of individual maize plants (Zea mays) with the acquisition of hyperspectral imaging data from the same plants under two markedly different image-acquisition conditions (data sets 2a and b). For each data set, reflectance data were analyzed based on seven spectral resolutions by dividing 160 spectral bands from 405 to 907 nm into 4, 16, 32, 40, 53, 80, or 160 bands. In the two data sets, similar classification results were obtained with spectral resolutions ranging from 3.1 to 12.6 nm. Thus, the size of the initial input data could be reduced fourfold with only a negligible loss of classification accuracy. In the analysis of data set 1, several validation approaches all demonstrated consistently that insect-induced stress could be accurately detected and that therefore there was little indication of model over-fitting. In the analyses of data set 2, inconsistent validation results were obtained and the observed classification accuracy (81.06%) was only a few percentage
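
    The over-fitting check described above can be sketched as follows: cross-validated accuracy on the real labels is compared with accuracies obtained after randomly permuting the labels, using a simple nearest-centroid classifier on synthetic "spectra" as a stand-in for the actual hyperspectral models; similar accuracies in the two cases would indicate over-fitting.

```python
# Over-fitting check: compare cross-validated accuracy on the real labels with
# accuracies obtained after randomly permuting the labels. A simple
# nearest-centroid classifier on synthetic "spectra" stands in for the actual
# hyperspectral classification models.
import numpy as np

rng = np.random.default_rng(8)
n_per_class, n_bands = 60, 40
X = np.vstack([rng.normal(0.00, 1.0, (n_per_class, n_bands)),
               rng.normal(0.25, 1.0, (n_per_class, n_bands))])   # weak class difference
y = np.repeat([0, 1], n_per_class)

def cv_accuracy(X, y, n_folds=5):
    idx = rng.permutation(len(y))
    correct = 0
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        centroids = np.vstack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
        dists = np.linalg.norm(X[fold][:, None, :] - centroids[None, :, :], axis=2)
        correct += (dists.argmin(axis=1) == y[fold]).sum()
    return correct / len(y)

real_acc = cv_accuracy(X, y)
null_accs = [cv_accuracy(X, rng.permutation(y)) for _ in range(20)]
print(f"accuracy with real labels:     {real_acc:.2f}")
print(f"accuracy with permuted labels: {np.mean(null_accs):.2f} ± {np.std(null_accs):.2f}")
```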

  19. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment

    PubMed Central

    Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F.

    2009-01-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The “simultaneous” algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The “project-out” algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the “simultaneous” AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the “exhaustive local search” (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database. PMID:20046797

  20. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    PubMed

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database. PMID:20046797
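
    The weighted least-squares combination of the N simple displacements described above can be sketched as follows; the Jacobians, weights, and displacements are synthetic placeholders rather than output of an actual patch-expert search.

        # Hedged sketch of the weighted least-squares step: N simple per-patch
        # displacements are combined into one global warp update. Jacobians, weights,
        # and displacements below are synthetic placeholders, not patch-expert output.
        import numpy as np

        rng = np.random.default_rng(1)
        N, n_params = 68, 4                      # hypothetical: 68 landmarks, 4-parameter warp
        J = rng.normal(size=(N, 2, n_params))    # per-landmark Jacobians (d[x,y] / d params)
        d = rng.normal(size=(N, 2))              # per-landmark displacements from local search
        w = rng.uniform(0.5, 1.0, size=N)        # patch-expert confidences

        # Weighted normal equations: (sum_i w_i J_i^T J_i) p = sum_i w_i J_i^T d_i
        A = np.zeros((n_params, n_params))
        b = np.zeros(n_params)
        for Ji, di, wi in zip(J, d, w):
            A += wi * Ji.T @ Ji
            b += wi * Ji.T @ di
        p = np.linalg.solve(A, b)                # single global warp update
        print(p)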

  1. Do Physical Proximity and Availability of Adequate Infrastructure at Public Health Facility Increase Institutional Delivery? A Three Level Hierarchical Model Approach.

    PubMed

    Patel, Rachana; Ladusingh, Laishram

    2015-01-01

    This study aims to examine the inter-district and inter-village variation in the utilization of health services for institutional births in EAG states, in the presence of the rural health program and available infrastructure. District Level Household Survey-III (2007-08) data on delivery care and facility information were used for this purpose. Bivariate results examined the utilization pattern by state in the presence of women-related correlates, while a three-level hierarchical multilevel model illustrates the effects of accessibility, availability of health facilities, and community health program variables on the utilization of health services for institutional births. The study found a satisfactory improvement in Rajasthan, Madhya Pradesh, and Orissa and, importantly, in Bihar and Uttaranchal. The study showed that increasing distance from a health facility discouraged institutional births, with a rapid decline of more than 50% in institutional delivery once the distance to the public health facility exceeded 10 km. Additionally, a skilled female health worker (ANM) and an observed improved public health facility significantly increased the probability of utilization compared with a non-skilled ANM and non-improved health centers. Adequacy of the essential equipment and laboratory services required for maternal care significantly encouraged deliveries at public health facilities. Among district- and village-level variables, neighborhood poverty was negatively related to institutional delivery, while higher education levels in the village and residence in more urbanized districts increased utilization. "Inter-district" variation was 14 percent, whereas "between-villages" variation in utilization was 11 percent once all three levels of variables were controlled for in the model. This study suggests that the mere availability of health facilities is a necessary but not sufficient condition to promote utilization when the quality of service is inadequate and the facility inaccessible considering

  2. Do Physical Proximity and Availability of Adequate Infrastructure at Public Health Facility Increase Institutional Delivery? A Three Level Hierarchical Model Approach

    PubMed Central

    Patel, Rachana; Ladusingh, Laishram

    2015-01-01

    This study aims to examine the inter-district and inter-village variation in the utilization of health services for institutional births in EAG states, in the presence of the rural health program and available infrastructure. District Level Household Survey-III (2007–08) data on delivery care and facility information were used for this purpose. Bivariate results examined the utilization pattern by state in the presence of women-related correlates, while a three-level hierarchical multilevel model illustrates the effects of accessibility, availability of health facilities, and community health program variables on the utilization of health services for institutional births. The study found a satisfactory improvement in Rajasthan, Madhya Pradesh, and Orissa and, importantly, in Bihar and Uttaranchal. The study showed that increasing distance from a health facility discouraged institutional births, with a rapid decline of more than 50% in institutional delivery once the distance to the public health facility exceeded 10 km. Additionally, a skilled female health worker (ANM) and an observed improved public health facility significantly increased the probability of utilization compared with a non-skilled ANM and non-improved health centers. Adequacy of the essential equipment and laboratory services required for maternal care significantly encouraged deliveries at public health facilities. Among district- and village-level variables, neighborhood poverty was negatively related to institutional delivery, while higher education levels in the village and residence in more urbanized districts increased utilization. “Inter-district” variation was 14 percent, whereas “between-villages” variation in utilization was 11 percent once all three levels of variables were controlled for in the model. This study suggests that the mere availability of health facilities is a necessary but not sufficient condition to promote utilization when the quality of service is inadequate and the facility inaccessible

  3. Aeroelastic modeling for the FIT (Functional Integration Technology) team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    As part of Langley Research Center's commitment to developing multidisciplinary integration methods to improve aerospace systems, the Functional Integration Technology (FIT) team was established to perform dynamics integration research using an existing aircraft configuration, the F/A-18. An essential part of this effort has been the development of a comprehensive simulation modeling capability that includes structural, control, and propulsion dynamics as well as steady and unsteady aerodynamics. The structural and unsteady aerodynamics contributions come from an aeroelastic model. Some details of the aeroelastic modeling done for the Functional Integration Technology (FIT) team research are presented. Particular attention is given to work done in the area of correction factors to unsteady aerodynamics data.

  4. Validation of a Best-Fit Pharmacokinetic Model for Scopolamine Disposition after Intranasal Administration

    NASA Technical Reports Server (NTRS)

    Wu, L.; Chow, D. S-L.; Tam, V.; Putcha, L.

    2015-01-01

    An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of motion sickness. Bioavailability and pharmacokinetics (PK) were determined per Investigational New Drug (IND) evaluation guidance by the Food and Drug Administration. Earlier, we reported the development of a PK model that can predict the relationship between plasma, saliva and urinary scopolamine (SCOP) concentrations using data collected from an IND clinical trial with INSCOP. This data analysis project is designed to validate the reported best-fit PK model for SCOP by comparing observed and model-predicted SCOP concentration-time profiles after administration of INSCOP.

  5. Effects of new mutations on fitness: insights from models and data.

    PubMed

    Bataillon, Thomas; Bailey, Susan F

    2014-07-01

    The rates and properties of new mutations affecting fitness have implications for a number of outstanding questions in evolutionary biology. Obtaining estimates of mutation rates and effects has historically been challenging, and little theory has been available for predicting the distribution of fitness effects (DFE); however, there have been recent advances on both fronts. Extreme-value theory predicts the DFE of beneficial mutations in well-adapted populations, while phenotypic fitness landscape models make predictions for the DFE of all mutations as a function of the initial level of adaptation and the strength of stabilizing selection on traits underlying fitness. Direct experimental evidence confirms predictions on the DFE of beneficial mutations and favors distributions that are roughly exponential but bounded on the right. A growing number of studies infer the DFE using genomic patterns of polymorphism and divergence, recovering a wide range of DFE. Future work should be aimed at identifying factors driving the observed variation in the DFE. We emphasize the need for further theory explicitly incorporating the effects of partial pleiotropy and heterogeneity in the environment on the expected DFE.

  6. Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.

    PubMed

    Ding, Cherng G; Jane, Ten-Der

    2012-09-01

    In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capabilities of modeling the change over time for latent constructs, measured by multiple indicators; embedding LGM into a larger latent variable model; incorporating measurement models for latent predictors; and better assessing model fit and the flexibility in specifying error covariance structures. The strength of PROC CALIS is always accompanied by technical coding work, which needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.

  7. Model Fit to Experimental Data for Foam-Assisted Deep Vadose Zone Remediation

    SciTech Connect

    Roostapour, A.; Lee, G.; Zhong, Lirong; Kam, Seung I.

    2014-01-15

    Foam has been regarded as a promising means of remedial amendment delivery to overcome subsurface heterogeneity in subsurface remediation processes. This study investigates how a foam model, developed by the Method of Characteristics and fractional flow analysis in the companion paper of Roostapour and Kam (2012), can be applied to fit a set of existing laboratory flow experiments (Zhong et al., 2009) in an application relevant to deep vadose zone remediation. This study reveals a few important insights regarding foam-assisted deep vadose zone remediation: (i) the mathematical framework established for foam modeling can fit typical flow experiments, matching wave velocities, saturation history, and pressure responses; (ii) the set of input parameters may not be unique for the fit, and therefore conducting experiments to measure basic model parameters related to relative permeability, initial and residual saturations, surfactant adsorption and so on should not be overlooked; and (iii) gas compressibility plays an important role in data analysis and thus should be handled carefully in laboratory flow experiments. Foam kinetics, causing foam texture to reach its steady-state value slowly, may impose additional complications.

  8. Design and verifications of an eye model fitted with contact lenses for wavefront measurement systems

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Chen, Jia-Hong; Chang, Rong-Jie; Wang, Chung-Yen; Hsu, Wei-Yao; Wang, Pei-Jen

    2015-09-01

    Contact lenses are typically measured by the wet-box method because of the high optical power resulting from the anterior central curvature of the cornea, even though the back vertex power of the lenses is small. In this study, an optical measurement system based on the Shack-Hartmann wavefront principle was established to investigate the aberrations of soft contact lenses. Fitting conditions were mimicked to study the optical design of an eye model with various topographical shapes in the anterior cornea. Initially, the contact lenses were measured by the wet-box method, and then by fitting the various topographical shapes of the cornea to the eye model. In addition, an optics simulation program was employed to determine the sources of errors and assess the accuracy of the system. Finally, samples of soft contact lenses with various diopters were measured, and both simulations and experimental results were compared to resolve the controversies of fitting contact lenses to an eye model for optical measurements. More importantly, the results show that the proposed system can be employed for the study of primary aberrations in contact lenses.

  9. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  10. Uncertainty Estimation in Fitting Parameterized Models to Solar Flare Hard X-ray Spectra

    NASA Astrophysics Data System (ADS)

    Ireland, Jack; Tolbert, A. K.; Holman, G. D.; Dennis, B. R.; Schwartz, R. A.

    2012-05-01

    We compare four different methods of estimating the uncertainty in fit parameters when fitting models to Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) spectral data. Two flare spectra are studied: one from the GOES (Geostationary Operational Environmental Satellite) X1.3 class flare of 19-January-2005, and the other from the X4.8 flare of 23-July-2002. Three of our methods rely on assumptions about the shape of the hyper-surface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, evaluated around the minimum value of the hyper-surface, to generate uncertainty estimates. The fourth method is based on Bayesian data analysis techniques. The four methods give approximately equal uncertainty estimates for the 19-January-2005 model parameters, but give very different uncertainty estimates for the 23-July-2002 model parameters. This is because the assumptions required for the first three methods hold approximately for the 19-January-2005 analysis, but do not hold for the 23-July-2002 analysis. The Bayesian-based method does not require these assumptions, and so can give reliable uncertainty estimates regardless of the shape of the hyper-surface formed by the model fit to the data. We show that for the 23-July-2002 spectrum, there is a 95% probability that the low energy cutoff to the model distribution of emitting flare electrons lies below approximately 40 keV, and a 68% probability that it lies in the estimated range 7-36 keV. The most probable flare electron energy flux is approximately 10^28.1 erg s^-1, with a 68% credible interval estimated at 10^28.1-10^29.1 erg s^-1 and a 95% credible interval estimated at 10^28.0-10^30.3 erg s^-1. For the 19-January-2005 spectrum, these quantities are more tightly constrained to 105±4 keV and 10^27.66±0.01 erg s^-1 (68% uncertainties). The reasons for these disparate results are discussed. This work is funded by the NASA Solar and Heliospheric
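
    A minimal sketch of the curvature-based (covariance-matrix) approach that the first three methods rely on is given below, using a synthetic power-law spectrum rather than RHESSI data; when the chi-square surface is far from parabolic, these estimates become unreliable and posterior sampling is preferable, as the abstract notes.

        # Sketch of the curvature-based approach (synthetic power-law "spectrum", not
        # RHESSI data): 1-sigma parameter uncertainties read from the covariance matrix
        # of a weighted least-squares fit, assuming a near-parabolic chi-square surface.
        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(E, norm, index):
            return norm * E ** (-index)

        rng = np.random.default_rng(2)
        E = np.linspace(20.0, 100.0, 40)                          # photon energy (keV)
        flux = power_law(E, 5e4, 3.0) * (1 + 0.05 * rng.normal(size=E.size))
        sigma = 0.05 * flux

        popt, pcov = curve_fit(power_law, E, flux, p0=[1e4, 2.5], sigma=sigma, absolute_sigma=True)
        perr = np.sqrt(np.diag(pcov))
        print(f"index = {popt[1]:.2f} +/- {perr[1]:.2f}")
        # When the surface is far from parabolic, these estimates mislead; sampling the
        # posterior (the Bayesian route mentioned above) is then the safer choice.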

  11. Role Modeling Attitudes, Physical Activity and Fitness Promoting Behaviors of Prospective Physical Education Specialists and Non-Specialists.

    ERIC Educational Resources Information Center

    Cardinal, Bradley J.; Cardinal, Marita K.

    2002-01-01

    Compared the role modeling attitudes and physical activity and fitness promoting behaviors of undergraduate students majoring in physical education and in elementary education. Student teacher surveys indicated that physical education majors had more positive attitudes toward role modeling physical activity and fitness promoting behaviors and…

  12. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    PubMed

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G2 statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P approximately 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report. PMID:19851702
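
    The log-likelihood ratio (G) statistic referred to above can be computed from observed and model-expected counts as in the following sketch; the counts are invented, not the placental mammal data.

        # Illustration with invented counts of the log-likelihood ratio statistic
        # G = 2 * sum O_i * ln(O_i / E_i) against model-expected counts.
        import numpy as np
        from scipy.stats import power_divergence

        observed = np.array([420, 130, 95, 55])            # hypothetical site-pattern counts
        expected = np.array([400.0, 150.0, 100.0, 50.0])   # counts predicted by the model

        G, p_value = power_divergence(observed, expected, lambda_="log-likelihood")
        print(f"G = {G:.2f}, p = {p_value:.3f}")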

  13. Fitting a Two-Component Scattering Model to Polarimetric SAR Data from Forests

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony

    2007-01-01

    Two simple scattering mechanisms are fitted to polarimetric synthetic aperture radar (SAR) observations of forests. The mechanisms are canopy scatter from a reciprocal medium with azimuthal symmetry and a ground scatter term that can represent double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants or Bragg scatter from a moderately rough surface, which is seen through a layer of vertically oriented scatterers. The model is shown to represent the behavior of polarimetric backscatter from a tropical forest and two temperate forest sites by applying it to data from the National Aeronautics and Space Administration/Jet Propulsion Laboratory's Airborne SAR (AIRSAR) system. Scattering contributions from the two basic scattering mechanisms are estimated for clusters of pixels in polarimetric SAR images. The solution involves the estimation of four parameters from four separate equations. This model fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem. The model is used to develop an understanding of the ground-trunk double-bounce scattering that is present in the data, which is seen to vary considerably as a function of incidence angle. Two parameters in the model fit appear to exhibit sensitivity to vegetation canopy structure, which is worth further exploration. Results from the model fit for the ground scattering term are compared with estimates from a forward model and shown to be in good agreement. The behavior of the scattering from the ground-trunk interaction is consistent with the presence of a pseudo-Brewster angle effect for the air-trunk scattering interface. If the Brewster angle is known, it is possible to directly estimate the real part of the dielectric constant of the trunks, a key variable in forward modeling of backscatter from forests. It is also shown how, with a priori knowledge of the forest height, an estimate for the

  14. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients world-wide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall further development is needed to improve and validate animal models for the diverse areas in epilepsy research where suitable fit for purpose models are urgently needed in the search for more effective treatments.

  15. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients world-wide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall further development is needed to improve and validate animal models for the diverse areas in epilepsy research where suitable fit for purpose models are urgently needed in the search for more effective treatments. PMID:27505294

  16. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the Maximum-Likelihood Estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data-modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method to preset a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. Java is used for the software's core computing part, and an optional interface to the statistical package R is provided.
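
    A stripped-down illustration of an ETAS-style conditional intensity (temporal part only, with an invented catalog and parameter values) is sketched below; the published software estimates such parameters via the EM algorithm rather than assuming them.

        # Simplified temporal ETAS-style conditional intensity
        # lambda(t) = mu + sum_{t_i < t} K * exp(a*(m_i - m0)) * (t - t_i + c)**(-p),
        # with hypothetical parameters and event times.
        import numpy as np

        mu, K, a, c, p, m0 = 0.02, 0.05, 1.0, 0.01, 1.2, 3.0   # invented parameters
        times = np.array([1.0, 1.5, 4.2, 4.25, 9.0])           # event times (days)
        mags = np.array([4.1, 3.3, 5.0, 3.6, 3.2])             # magnitudes

        def intensity(t):
            past = times < t
            trig = K * np.exp(a * (mags[past] - m0)) * (t - times[past] + c) ** (-p)
            return mu + trig.sum()

        print([round(intensity(t), 4) for t in (2.0, 4.3, 10.0)])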

  17. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    NASA Astrophysics Data System (ADS)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  18. T Dwarfs Model Fits for Spectral Standards at Low Spectral Resolution

    NASA Astrophysics Data System (ADS)

    Giorla, Paige; Rice, Emily L.; Douglas, Stephanie T.; Mace, Gregory N.; McLean, Ian S.; Martin, Emily C.; Logsdon, Sarah E.

    2015-01-01

    We present model fits to the T dwarf spectral standards which cover spectral types from T0 to T8. For a complete spectral range analysis, we have included a T9 object which is not considered a spectral standard. We have low-resolution (R~120) SpeX Prism spectra and a variety of higher resolution (R~1,000-25,000) spectra for all nine of these objects. The synthetic spectra are from the BT-SETTL 2013 models. We compare the best fit parameters from low resolution spectra to results from the higher resolution fits of prominent spectral type dependent features, where possible. Using the T dwarf standards to calibrate the effective temperature and gravity parameters for each spectral type, we will expand our analysis to a larger, more varied sample, which includes over one hundred field T dwarfs, for which we have a variety of low, medium, and high resolution spectra from the SpeX Prism Library and the NIRSPEC Brown Dwarf Spectroscopic Survey. This sample includes a handful of peculiar and red T dwarfs, for which we explore the causes of their non-normalcy.

  19. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
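
    One component test of the kind described above, a contingency-table chi-square on hypothetical release/recapture counts, might look like the following sketch.

        # Hedged sketch of one component test: a contingency-table chi-square on
        # invented counts of animals released at one occasion, cross-classified by
        # marking status and whether they were ever seen again.
        import numpy as np
        from scipy.stats import chi2_contingency

        table = np.array([[34, 16],     # previously marked: seen again / never seen again
                          [58, 42]])    # newly marked:      seen again / never seen again
        stat, p, dof, expected = chi2_contingency(table)
        print(f"chi-square = {stat:.2f}, dof = {dof}, p = {p:.3f}")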

  20. Calculating the parameters of full lightning impulses using model-based curve fitting

    SciTech Connect

    McComb, T.R.; Lagnese, J.E. )

    1991-10-01

    In this paper a brief review is presented of the techniques used for the evaluation of the parameters of high voltage impulses and the problems encountered. The determination of the best smooth curve through oscillations on a high voltage impulse is the major problem limiting the automatic processing of digital records of impulses. Non-linear regression, based on simple models, is applied to the analysis of simulated and experimental data of full lightning impulses. Results of model fitting to four different groups of impulses are presented and compared with some other methods. Plans for the extension of this work are outlined.
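
    The model-based fitting idea can be illustrated with the standard double-exponential impulse form; the sketch below uses a synthetic record and is not the authors' implementation.

        # Illustrative sketch (synthetic record, not the authors' code): non-linear
        # regression of the double-exponential impulse model
        # v(t) = V0 * (exp(-t/tau2) - exp(-t/tau1)) to recover the smooth curve.
        import numpy as np
        from scipy.optimize import curve_fit

        def impulse(t, V0, tau1, tau2):
            return V0 * (np.exp(-t / tau2) - np.exp(-t / tau1))

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 100.0, 500)             # microseconds
        clean = impulse(t, 1.03, 0.4, 68.0)          # roughly a standard 1.2/50 us impulse
        record = clean + 0.01 * rng.normal(size=t.size)

        popt, _ = curve_fit(impulse, t, record, p0=[1.0, 0.5, 60.0])
        smooth = impulse(t, *popt)                   # best smooth curve through the record
        print(f"fitted peak value: {smooth.max():.3f} per unit")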

  1. Extended-Drude model to fit infrared conductivity cuprate laser-ablated films

    SciTech Connect

    Pessaud, S.; Sousa, D. de (Centre de Recherche sur la Physique des Hautes Temperatures); Lobo, R.; Gervais, F. (Lab. d'Electrodynamique des Materiaux Avances)

    1998-12-20

    An extended-Drude model, implying a simple form for the self-energy function of the mobile charge-carrier response, has been applied to fitting the infrared and visible reflectivity spectra of simple cuprates. Excellent fits are obtained in a wide spectral range, from 4 meV to 4 eV, with a very restricted number of adjustable parameters. The optical conductivity obtained with this procedure is highly different from the Kramers-Kronig transformation of reflectivity spectra. The same procedure has been applied to characterize the infrared conductivity of multi-target laser-ablated films built via intergrowth of YBa2Cu3O7 and MCuO2 (M = Ca, Sr).

  2. The effects of floral mimics and models on each others' fitness

    PubMed Central

    Anderson, Bruce; Johnson, Steven D

    2006-01-01

    Plants that lack floral rewards may nevertheless attract pollinators by mimicking the flowers of rewarding plants. It has been suggested that both mimics and models should suffer reduced fitness when mimics are abundant relative to their models. By manipulating the relative densities of an orchid mimic Disa nivea and its rewarding model Zaluzianskya microsiphon in small experimental patches within a larger population we demonstrated that the mimic does indeed suffer reduced pollination success when locally common relative to its model. Behavioural experiments suggest that this phenomenon results from the tendency of the long-proboscid fly pollinator to avoid visits to neighbouring plants when encountering the mimic. No negative effect of the mimic on the pollination success of the model was detected. We propose that changes in pollinator flight behaviour, rather than pollinator conditioning, are likely to account for negative frequency-dependent reproductive success in deceptive orchids. PMID:16627282

  3. 34 CFR 85.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Adequate evidence. 85.900 Section 85.900 Education Office of the Secretary, Department of Education GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 85.900 Adequate evidence. Adequate evidence means information sufficient to support...

  4. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  5. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  6. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  7. 21 CFR 1404.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Adequate evidence. 1404.900 Section 1404.900 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 1404.900 Adequate evidence. Adequate evidence means information sufficient...

  8. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    SciTech Connect

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-15

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  9. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    NASA Astrophysics Data System (ADS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  10. Model fitting of kink waves in the solar atmosphere: Gaussian damping and time-dependence

    NASA Astrophysics Data System (ADS)

    Morton, R. J.; Mooroogen, K.

    2016-09-01

    Aims: Observations of the solar atmosphere have shown that magnetohydrodynamic waves are ubiquitous throughout. Improvements in instrumentation and the techniques used for measurement of the waves now enable subtleties of competing theoretical models to be compared with the observed wave behaviour. Some studies have already begun to undertake this process. However, the techniques employed for model comparison have generally been unsuitable and can lead to erroneous conclusions about the best model. The aim here is to introduce some robust statistical techniques for model comparison to the solar waves community, drawing on experience from other areas of astrophysics. In the process, we also aim to investigate the physics of coronal loop oscillations. Methods: The methodology exploits least-squares fitting to compare models to observational data. We demonstrate that the residuals between the model and observations contain significant information about the ability of the model to describe the observations, and show how they can be assessed using various statistical tests. In particular we discuss the Kolmogorov-Smirnov one- and two-sample tests, as well as the runs test. We also highlight the importance of including any observational trend line in the model-fitting process. Results: To demonstrate the methodology, an observation of an oscillating coronal loop undergoing standing kink motion is used. The model comparison techniques provide evidence that a Gaussian damping profile describes the observed wave attenuation better than the often-used exponential profile. This supports previous analysis from Pascoe et al. (2016, A&A, 585, L6). Further, we use the model comparison to provide evidence of time-dependent wave properties of a kink oscillation, attributing the behaviour to the thermodynamic evolution of the local plasma.
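
    A schematic version of the fit-and-test-residuals workflow, comparing exponential and Gaussian damping envelopes on a synthetic kink-oscillation signal, is sketched below; it is illustrative only and not the authors' analysis.

        # Schematic fit-and-test-residuals workflow on a synthetic kink oscillation:
        # fit exponential and Gaussian damping envelopes, then KS-test the residuals.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import kstest

        def exp_damped(t, A, tau, P, phi):
            return A * np.exp(-t / tau) * np.cos(2 * np.pi * t / P + phi)

        def gauss_damped(t, A, tau, P, phi):
            return A * np.exp(-(t / tau) ** 2) * np.cos(2 * np.pi * t / P + phi)

        rng = np.random.default_rng(4)
        t = np.linspace(0, 30, 300)                                  # minutes
        data = gauss_damped(t, 3.0, 12.0, 5.0, 0.3) + 0.2 * rng.normal(size=t.size)

        for name, model in [("exponential", exp_damped), ("Gaussian", gauss_damped)]:
            popt, _ = curve_fit(model, t, data, p0=[3.0, 10.0, 5.0, 0.0], maxfev=10000)
            resid = data - model(t, *popt)
            _, p = kstest((resid - resid.mean()) / resid.std(), "norm")
            print(f"{name}: residual sum of squares = {np.sum(resid**2):.2f}, KS p = {p:.3f}")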

  11. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J2) statistics can be applied directly. In a simulation study, TG, HL, and J2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J2. PMID:26584470
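
    A rough sketch of a Hosmer-Lemeshow-type grouped chi-square for a fitted logistic GLM (synthetic data, ten groups of predicted probability) follows; it is not the TG or J2 implementation from the paper.

        # Hosmer-Lemeshow-type grouped chi-square for an ordinary logit fit on
        # synthetic data, grouped into deciles of predicted probability.
        import numpy as np
        from scipy.stats import chi2
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        x = rng.normal(size=500)
        y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

        p_hat = LogisticRegression().fit(x.reshape(-1, 1), y).predict_proba(x.reshape(-1, 1))[:, 1]
        groups = np.digitize(p_hat, np.quantile(p_hat, np.linspace(0.1, 0.9, 9)))

        hl = 0.0
        for g in range(10):
            idx = groups == g
            o1, e1 = y[idx].sum(), p_hat[idx].sum()          # observed / expected events
            o0, e0 = idx.sum() - o1, idx.sum() - e1          # observed / expected non-events
            hl += (o1 - e1) ** 2 / e1 + (o0 - e0) ** 2 / e0
        print(f"HL = {hl:.2f}, p = {chi2.sf(hl, df=10 - 2):.3f}")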

  12. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    NASA Technical Reports Server (NTRS)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Shoulder injury is one of the most severe risks that have the potential to impair crewmembers' performance and health in long duration space flight. Overall, 64% of crewmembers experience shoulder pain after extra-vehicular training in a space suit, and 14% of symptomatic crewmembers require surgical repair (Williams & Johnson, 2003). Suboptimal suit fit, in particular at the shoulder region, has been identified as one of the predominant risk factors. However, traditional suit fit assessments and laser scans represent only a single person's data, and thus may not be generalized across wide variations of body shapes and poses. The aim of this work is to develop a software tool based on a statistical analysis of a large dataset of crewmember body shapes. This tool can accurately predict the skin deformation and shape variations for any body size and shoulder pose for a target population, from which the geometry can be exported and evaluated against suit models in commercial CAD software. A preliminary software tool was developed by statistically analyzing 150 body shapes matched with body dimension ranges specified in the Human-Systems Integration Requirements of NASA ("baseline model"). Further, the baseline model was incorporated with shoulder joint articulation ("articulation model"), using additional subjects scanned in a variety of shoulder poses across a pre-specified range of motion. Scan data was cleaned and aligned using body landmarks. The skin deformation patterns were dimensionally reduced and the co-variation with shoulder angles was analyzed. A software tool is currently in development and will be presented in the final proceeding. This tool would allow suit engineers to parametrically generate body shapes in strategically targeted anthropometry dimensions and shoulder poses. This would also enable virtual fit assessments, with which the contact volume and clearance between the suit and body surface can be predictively quantified at reduced time and
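
    The statistical-shape idea described above can be sketched as a principal-component model whose scores are regressed on shoulder angle; everything below (vertex counts, angles, the linear regression) is a synthetic placeholder, not the NASA tool.

        # Hedged sketch: reduce aligned, flattened body-scan vertices with PCA, then
        # regress the leading PC scores on shoulder angle so new shapes can be
        # generated parametrically. All data here are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(7)
        n_subjects, n_vertices = 150, 500
        angles = rng.uniform(0, 120, n_subjects)                 # shoulder elevation (deg)
        shapes = (rng.normal(size=(n_subjects, n_vertices * 3))
                  + 0.01 * angles[:, None])                      # flattened x,y,z per vertex

        mean_shape = shapes.mean(axis=0)
        U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
        scores = U * S                                           # PC scores per subject
        k = 10                                                   # keep leading modes

        # Linear model: score_j ~ a_j + b_j * angle, fitted by least squares
        A = np.column_stack([np.ones(n_subjects), angles])
        coef, *_ = np.linalg.lstsq(A, scores[:, :k], rcond=None)

        def predict_shape(angle):
            pred_scores = np.array([1.0, angle]) @ coef
            return mean_shape + pred_scores @ Vt[:k]

        print(predict_shape(90.0).shape)                         # flattened vertex vector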

  13. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J2) statistics can be applied directly. In a simulation study, TG, HL, and J2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J2.

  14. Empirical evaluation reveals best fit of a logistic mutation model for human Y-chromosomal microsatellites.

    PubMed

    Jochens, Arne; Caliebe, Amke; Rösler, Uwe; Krawczak, Michael

    2011-12-01

    The rate of microsatellite mutation is dependent upon both the allele length and the repeat motif, but the exact nature of this relationship is still unknown. We analyzed data on the inheritance of human Y-chromosomal microsatellites in father-son duos, taken from 24 published reports and comprising 15,285 directly observable meioses. At the six microsatellites analyzed (DYS19, DYS389I, DYS390, DYS391, DYS392, and DYS393), a total of 162 mutations were observed. For each locus, we employed a maximum-likelihood approach to evaluate one of several single-step mutation models on the basis of the data. For five of the six loci considered, a novel logistic mutation model was found to provide the best fit according to Akaike's information criterion. This implies that the mutation probability at the loci increases (nonlinearly) with allele length at a rate that differs between upward and downward mutations. For DYS392, the best fit was provided by a linear model in which upward and downward mutation probabilities increase equally with allele length. This is the first study to empirically compare different microsatellite mutation models in a locus-specific fashion. PMID:21968190
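
    The model comparison by maximum likelihood and AIC can be illustrated as follows with invented mutation counts; the linear and logistic forms are generic stand-ins for the models evaluated in the study.

        # Compare a linear and a logistic model of mutation probability versus allele
        # length by maximum binomial likelihood and AIC, on invented counts.
        import numpy as np
        from scipy.optimize import minimize

        lengths = np.arange(10, 18)                         # repeat numbers
        meioses = np.array([500, 900, 1800, 2600, 2400, 1500, 700, 300])
        mutations = np.array([0, 1, 3, 7, 8, 6, 4, 2])      # invented mutation counts

        def neg_log_lik(rates, k=mutations, n=meioses):
            rates = np.clip(rates, 1e-12, 1 - 1e-12)
            return -np.sum(k * np.log(rates) + (n - k) * np.log(1 - rates))

        def fit(model, x0):
            res = minimize(lambda p: neg_log_lik(model(lengths, *p)), x0, method="Nelder-Mead")
            return 2 * len(x0) + 2 * res.fun                # AIC = 2k - 2*log-likelihood

        linear = lambda L, a, b: a + b * (L - lengths.mean())
        logistic = lambda L, a, b: 1 / (1 + np.exp(-(a + b * (L - lengths.mean()))))

        print("AIC linear:  ", fit(linear, [0.003, 0.0005]))
        print("AIC logistic:", fit(logistic, [-6.0, 0.3]))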

  15. Empirical Evaluation Reveals Best Fit of a Logistic Mutation Model for Human Y-Chromosomal Microsatellites

    PubMed Central

    Jochens, Arne; Caliebe, Amke; Rösler, Uwe; Krawczak, Michael

    2011-01-01

    The rate of microsatellite mutation is dependent upon both the allele length and the repeat motif, but the exact nature of this relationship is still unknown. We analyzed data on the inheritance of human Y-chromosomal microsatellites in father–son duos, taken from 24 published reports and comprising 15,285 directly observable meioses. At the six microsatellites analyzed (DYS19, DYS389I, DYS390, DYS391, DYS392, and DYS393), a total of 162 mutations were observed. For each locus, we employed a maximum-likelihood approach to evaluate one of several single-step mutation models on the basis of the data. For five of the six loci considered, a novel logistic mutation model was found to provide the best fit according to Akaike’s information criterion. This implies that the mutation probability at the loci increases (nonlinearly) with allele length at a rate that differs between upward and downward mutations. For DYS392, the best fit was provided by a linear model in which upward and downward mutation probabilities increase equally with allele length. This is the first study to empirically compare different microsatellite mutation models in a locus-specific fashion. PMID:21968190

  16. MAGNETICALLY AND BARYONICALLY DOMINATED PHOTOSPHERIC GAMMA-RAY BURST MODEL FITS TO FERMI-LAT OBSERVATIONS

    SciTech Connect

    Veres, Peter; Meszaros, Peter; Zhang, Bin-Bin

    2013-02-10

    We consider gamma-ray burst models where the radiation is dominated by a photospheric region providing the MeV Band spectrum, and an external shock region responsible for the GeV radiation via inverse Compton scattering. We parameterize the initial dynamics through an acceleration law Γ ∝ r^μ, with μ between 1/3 and 1 to represent the range between an extreme magnetically dominated and a baryonically dominated regime, depending also on the magnetic field configuration. We compare these models to several bright Fermi-LAT bursts, and show that both the time-integrated and the time-resolved spectra, where available, can be well described by these models. We discuss the parameters which result from these fits, and discuss the relative merits and shortcomings of the two models.

  17. Mind the Gap! Implications of a Person-Environment Fit Model of Intellectual Disability for Students, Educators, and Schools

    ERIC Educational Resources Information Center

    Thompson, James R.; Wehmeyer, Michael L.; Hughes, Carolyn

    2010-01-01

    A person-environment fit conceptualization of intellectual disability (ID) requires educators to focus on the gap between a student's competencies and the demands of activities and settings in schools. In this article the implications of the person-environment fit conceptual model are considered in regard to instructional benefits, special…

  18. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
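
    As a rough illustration of the AIC-based comparison, the sketch below fits a generic gamma variate and a stretched-exponential extension to synthetic breath-test-like data; the functional forms are stand-ins and not necessarily the exact parameterizations used in the paper.

        # Fit a generic gamma variate y = a*t**b*exp(-c*t) and a stretched-exponential
        # extension y = a*t**b*exp(-(c*t)**d) to synthetic data; compare by AIC.
        import numpy as np
        from scipy.optimize import curve_fit

        def gamma_variate(t, a, b, c):
            return a * t**b * np.exp(-c * t)

        def stretched(t, a, b, c, d):
            return a * t**b * np.exp(-(c * t)**d)

        rng = np.random.default_rng(8)
        t = np.linspace(1, 240, 80)                              # minutes after meal
        truth = stretched(t, 0.05, 1.8, 0.015, 1.3)
        y = truth + 0.02 * truth.max() * rng.normal(size=t.size)

        def aic(model, p0):
            popt, _ = curve_fit(model, t, y, p0=p0, maxfev=20000)
            rss = np.sum((y - model(t, *popt)) ** 2)
            n, k = t.size, len(p0)
            return n * np.log(rss / n) + 2 * k                   # Gaussian-error AIC, up to a constant

        print("AIC gamma variate:", round(aic(gamma_variate, [0.05, 1.5, 0.02]), 1))
        print("AIC stretched:    ", round(aic(stretched, [0.05, 1.5, 0.02, 1.0]), 1))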

  19. SSC Model Fits to Simultaneous Fermi and CAO observations of Bl Lac's

    NASA Astrophysics Data System (ADS)

    Gordon, Tyler; Macomb, Daryl J.; Hand, Jared; Norris, Jay P.; Long, Min

    2016-01-01

    The Challis Astronomical Observatory (CAO) has been surveying a sample of blazar-type AGN since 2010. The CAO blazar sample comprises 30 FSRQs, 15 BL Lacs, one radio galaxy, and four unclassified sources, covering a redshift range 0.02 < z < 2. Observations are carried out in BVRI filters. Here we describe photometric results on a small sample emphasizing BL Lacs. We combine the CAO data with Fermi/LAT data and explore the suitability of fits to the data using the uniform conical jet model of Potter and Cotter (MNRAS, 2012, 423, 756-765).

  20. Limited-information Goodness-of-fit Testing of Hierarchical Item Factor Models

    PubMed Central

    Cai, Li; Hansen, Mark

    2013-01-01

    In applications of item response theory, assessment of model fit is a critical issue. Recently, limited-information goodness-of-fit testing has received increased attention in the psychometrics literature. In contrast to full-information test statistics such as Pearson’s X2 or the likelihood ratio G2, these limited-information tests utilise lower order marginal tables rather than the full contingency table. A notable example is Maydeu-Olivares and colleagues’ M2 family of statistics based on univariate and bivariate margins. When the contingency table is sparse, tests based on M2 retain better Type I error rate control than the full-information tests and can be more powerful. While in principle the M2 statistic can be extended to test hierarchical multidimensional item factor models (e.g., bifactor and testlet models), the computation is non-trivial. To obtain M2, a researcher often has to obtain (many thousands of) marginal probabilities, derivatives, and weights. Each of these must be approximated with high-dimensional numerical integration. We propose a dimension reduction method that can take advantage of the hierarchical factor structure so that the integrals can be approximated far more efficiently. We also propose a new test statistic that can be substantially better calibrated and more powerful than the original M2 statistic when the test is long and the items are polytomous. We use simulations to demonstrate the performance of our new methods and illustrate their effectiveness with applications to real data. PMID:22642552

  1. A Pearson-type goodness-of-fit test for stationary and time-continuous Markov regression models.

    PubMed

    Aguirre-Hernández, R; Farewell, V T

    2002-07-15

    Markov regression models describe the way in which a categorical response variable changes over time for subjects with different explanatory variables. Frequently it is difficult to measure the response variable on equally spaced discrete time intervals. Here we propose a Pearson-type goodness-of-fit test for stationary Markov regression models fitted to panel data. A parametric bootstrap algorithm is used to study the distribution of the test statistic. The proposed technique is applied to examine the fit of a Markov regression model used to identify markers for disease progression in psoriatic arthritis.
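
    The parametric bootstrap calibration can be sketched on a toy two-state Markov chain (not the psoriatic-arthritis model): fit a constrained model, compute a Pearson-type statistic on its transition counts, and compare it with the statistic's distribution over data simulated from the fitted model.

        # Parametric bootstrap of a Pearson-type statistic for a toy two-state chain.
        # The constrained model forces equal flip probabilities p01 = p10 = p.
        import numpy as np

        rng = np.random.default_rng(6)

        def simulate_chain(p01, p10, n, rng):
            s = np.empty(n, dtype=int)
            s[0] = 0
            for t in range(1, n):
                p_move = p01 if s[t - 1] == 0 else p10
                s[t] = s[t - 1] ^ (rng.random() < p_move)    # flip state with prob p_move
            return s

        def transition_counts(s):
            pairs = np.stack([s[:-1], s[1:]], axis=1)
            return np.array([[np.sum((pairs[:, 0] == i) & (pairs[:, 1] == j)) for j in (0, 1)]
                             for i in (0, 1)])

        def pearson_stat(s, p):
            obs = transition_counts(s)
            probs = np.array([[1 - p, p], [p, 1 - p]])       # constrained model
            exp = obs.sum(axis=1, keepdims=True) * probs
            return np.sum((obs - exp) ** 2 / exp)

        data = simulate_chain(0.15, 0.45, 400, rng)          # true chain violates the constraint
        tc = transition_counts(data)
        p_hat = (tc[0, 1] + tc[1, 0]) / tc.sum()             # MLE under the constrained model
        obs_stat = pearson_stat(data, p_hat)

        boot = []
        for _ in range(500):
            sim = simulate_chain(p_hat, p_hat, data.size, rng)
            st = transition_counts(sim)
            boot.append(pearson_stat(sim, (st[0, 1] + st[1, 0]) / st.sum()))
        print("bootstrap p-value:", np.mean(np.array(boot) >= obs_stat))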

  2. A fungal growth model fitted to carbon-limited dynamics of Rhizoctonia solani.

    PubMed

    Jeger, M J; Lamour, A; Gilligan, C A; Otten, W

    2008-01-01

    Here, a quasi-steady-state approximation was used to simplify a mathematical model for fungal growth in carbon-limiting systems, and this was fitted to growth dynamics of the soil-borne plant pathogen and saprotroph Rhizoctonia solani. The model identified a criterion for invasion into carbon-limited environments with two characteristics driving fungal growth, namely the carbon decomposition rate and a measure of carbon use efficiency. The dynamics of fungal spread through a population of sites with either low (0.0074 mg) or high (0.016 mg) carbon content were well described by the simplified model with faster colonization for the carbon-rich environment. Rhizoctonia solani responded to a lower carbon availability by increasing the carbon use efficiency and the carbon decomposition rate following colonization. The results are discussed in relation to fungal invasion thresholds in terms of carbon nutrition. PMID:18312538

  3. A goodness-of-fit test for capture-recapture model M(t) under closure

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.

  4. GRace: a MATLAB-based application for fitting the discrimination-association model.

    PubMed

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  5. A fungal growth model fitted to carbon-limited dynamics of Rhizoctonia solani.

    PubMed

    Jeger, M J; Lamour, A; Gilligan, C A; Otten, W

    2008-01-01

    Here, a quasi-steady-state approximation was used to simplify a mathematical model for fungal growth in carbon-limiting systems, and this was fitted to growth dynamics of the soil-borne plant pathogen and saprotroph Rhizoctonia solani. The model identified a criterion for invasion into carbon-limited environments with two characteristics driving fungal growth, namely the carbon decomposition rate and a measure of carbon use efficiency. The dynamics of fungal spread through a population of sites with either low (0.0074 mg) or high (0.016 mg) carbon content were well described by the simplified model with faster colonization for the carbon-rich environment. Rhizoctonia solani responded to a lower carbon availability by increasing the carbon use efficiency and the carbon decomposition rate following colonization. The results are discussed in relation to fungal invasion thresholds in terms of carbon nutrition.

  6. GRace: a MATLAB-based application for fitting the discrimination-association model.

    PubMed

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-01-01

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed. PMID:26054728

  7. Fitting response models of benthic community structure to abiotic variables in a polluted estuarine system

    NASA Astrophysics Data System (ADS)

    González-Oreja, José Antonio; Saiz-Salinas, José Ignacio

    1999-07-01

    Models of the macrozoobenthic community responses to abiotic variables measured in the polluted Bilbao estuary were obtained by multiple linear regression analyses. Total, Oligochaeta and Nematoda abundance and biomass were considered as dependent variables. Intertidal level, dissolved oxygen at the bottom of the water column (DOXB) and organic content of the sediment were selected by the analyses as the three principal explanatory variables. Goodness-of-fit of the models was high (mean = 71.3%). Total abundance and biomass increased as a linear function of DOXB. The vast sewage scheme currently in progress in the study area is an important contributor to increasing DOXB levels. The models presented in this paper will serve as a tool to evaluate the expected changes in the near future.

  8. Fitting mathematical models to describe the rheological behaviour of chocolate pastes

    NASA Astrophysics Data System (ADS)

    Barbosa, Carla; Diogo, Filipa; Alves, M. Rui

    2016-06-01

    The flow behaviour is of utmost importance for the chocolate industry. The objective of this work was to study two mathematical models, the Casson and Windhab models, that can be used to fit chocolate rheological data, and to evaluate which better describes or predicts the rheological behaviour of different chocolate pastes. Rheological properties (viscosity, shear stress and shear rates) were obtained with a rotational viscometer equipped with a concentric cylinder. The chocolate samples were white chocolate and chocolate with varying percentages of cacao (55%, 70% and 83%). The results showed that the Windhab model was the best to describe the flow behaviour of all the studied samples, with higher determination coefficients (r² > 0.9).
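
    As a minimal sketch of this kind of fit, the code below fits the Casson model (the Windhab model adds further parameters along the same lines) to a synthetic flow curve and reports the determination coefficient; the parameter values are placeholders, not measurements from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def casson(shear_rate, tau0, eta_ca):
        """Casson model: sqrt(tau) = sqrt(tau0) + sqrt(eta_ca * shear_rate)."""
        return (np.sqrt(tau0) + np.sqrt(eta_ca * shear_rate))**2

    # Synthetic flow-curve data (shear rate in 1/s, shear stress in Pa) standing in
    # for concentric-cylinder viscometer measurements on a chocolate paste.
    rng = np.random.default_rng(2)
    shear_rate = np.linspace(2, 50, 25)
    stress = casson(shear_rate, 12.0, 2.5) * (1 + rng.normal(0, 0.02, shear_rate.size))

    params, _ = curve_fit(casson, shear_rate, stress, p0=[5.0, 1.0], bounds=(0, np.inf))
    pred = casson(shear_rate, *params)
    r2 = 1 - np.sum((stress - pred)**2) / np.sum((stress - stress.mean())**2)
    print("tau0 = %.2f Pa, eta_ca = %.2f Pa.s, r2 = %.4f" % (params[0], params[1], r2))
    ```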

  9. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions.

    PubMed

    Siebert, Xavier; Navaza, Jorge

    2009-07-01

    Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30-10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/.

  10. Goodness-of-Fit Tests and Model Diagnostics for Negative Binomial Regression of RNA Sequencing Data

    PubMed Central

    Mi, Gu; Di, Yanming; Schafer, Daniel W.

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models. PMID:25787144

  11. A diffusion process to model generalized von Bertalanffy growth patterns: fitting to real data.

    PubMed

    Román-Román, Patricia; Romero, Desirée; Torres-Ruiz, Francisco

    2010-03-01

    The von Bertalanffy growth curve has been commonly used for modeling animal growth (particularly fish). Both deterministic and stochastic models exist in association with this curve, the latter allowing for the inclusion of fluctuations or disturbances that might exist in the system under consideration which are not always quantifiable or may even be unknown. This curve is mainly used for modeling the length variable, whereas a generalized version, including a new parameter b ≥ 1, allows for modeling both length and weight for some animal species in both isometric (b = 3) and allometric (b ≠ 3) situations. In this paper a stochastic model related to the generalized von Bertalanffy growth curve is proposed. This model allows one to investigate the time evolution of growth variables associated both with individual behaviors and mean population behavior. Also, with the purpose of fitting the above-mentioned model to real data and so as to be able to forecast and analyze particular characteristics, we study the maximum likelihood estimation of the parameters of the model. In addition, and regarding the numerical problems posed by solving the likelihood equations, a strategy is developed for obtaining initial solutions for the usual numerical procedures. Such a strategy is validated by means of simulated examples. Finally, an application to real data on the mean weight of swordfish is presented.
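
    A minimal sketch of fitting the deterministic generalized von Bertalanffy curve to mean-weight data is given below (the paper's stochastic diffusion model and its likelihood are considerably richer); the data and starting values are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gen_von_bertalanffy(t, w_inf, k, t0, b):
        """Generalized von Bertalanffy curve: W(t) = W_inf * (1 - exp(-k*(t - t0)))**b."""
        core = np.clip(1.0 - np.exp(-k * (t - t0)), 1e-12, None)   # keep the base positive
        return w_inf * core**b

    # Synthetic mean-weight-at-age data (e.g., yearly mean weight of a fish stock, kg).
    rng = np.random.default_rng(3)
    age = np.arange(1, 16, dtype=float)
    weight = gen_von_bertalanffy(age, 80.0, 0.25, 0.0, 3.0) * (1 + rng.normal(0, 0.03, age.size))

    params, _ = curve_fit(gen_von_bertalanffy, age, weight,
                          p0=[100.0, 0.2, 0.0, 3.0],
                          bounds=([1.0, 0.01, -5.0, 0.5], [1e4, 2.0, 5.0, 10.0]))
    print("W_inf = %.1f, k = %.3f, t0 = %.2f, b = %.2f" % tuple(params))
    ```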

  12. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    SciTech Connect

    Howarth, Richard J.

    2001-12-15

    The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had

  13. Fitting multilevel models in complex survey data with design weights: Recommendations

    PubMed Central

    2009-01-01

    Background Multilevel models (MLM) offer complex survey data analysts a unique approach to understanding individual and contextual determinants of public health. However, little summarized guidance exists with regard to fitting MLM in complex survey data with design weights. Simulation work suggests that analysts should scale design weights using two methods and fit the MLM using unweighted and scaled-weighted data. This article examines the performance of scaled-weighted and unweighted analyses across a variety of MLM and software programs. Methods Using data from the 2005–2006 National Survey of Children with Special Health Care Needs (NS-CSHCN: n = 40,723) that collected data from children clustered within states, I examine the performance of scaling methods across outcome type (categorical vs. continuous), model type (level-1, level-2, or combined), and software (Mplus, MLwiN, and GLLAMM). Results Scaled weighted estimates and standard errors differed slightly from unweighted analyses, agreeing more with each other than with unweighted analyses. However, observed differences were minimal and did not lead to different inferential conclusions. Likewise, results demonstrated minimal differences across software programs, increasing confidence in results and inferential conclusions independent of software choice. Conclusion If including design weights in MLM, analysts should scale the weights and use software that properly includes the scaled weights in the estimation. PMID:19602263
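
    The two weight-scaling rules most often described in this literature can be written in a few lines. The formulas below follow the common textbook description ("cluster" and "effective sample size" scaling) and are included as an assumption rather than a restatement of this article's specific recommendations.

    ```python
    import numpy as np

    def scale_weights(w):
        """Two commonly described rescalings of level-1 design weights within a cluster.

        Method A ("cluster" scaling): scaled weights sum to the cluster sample size.
        Method B ("effective sample size" scaling): scaled weights sum to
        (sum w)^2 / sum(w^2)."""
        w = np.asarray(w, dtype=float)
        method_a = w * len(w) / w.sum()
        method_b = w * w.sum() / np.sum(w**2)
        return method_a, method_b

    # Hypothetical design weights for children sampled within one state.
    a, b = scale_weights([1.2, 0.8, 2.5, 1.0, 3.1])
    print("Method A:", np.round(a, 3), "sum =", round(a.sum(), 3))
    print("Method B:", np.round(b, 3), "sum =", round(b.sum(), 3))
    ```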

  14. Modeling the Time Evolution of QSH Equilibria in MST Plasmas Using V3FIT

    NASA Astrophysics Data System (ADS)

    Boguski, J.; Nornberg, M.; Munaretto, S.; Chapman, B. E.; Cianciosa, M.; Terry, P. W.; Hanson, J.

    2015-11-01

    High current and low density RFP plasmas tend towards a 3D configuration, called Quasi-Single Helicity (QSH), characterized by a dominant core helical mode. V3FIT utilizes multiple internal and edge diagnostics to reconstruct the non-axisymmetric magnetic equilibrium of the QSH state. Multiple reconstructions at different stages in the QSH cycle are used to learn about the time dynamics of the QSH state. Recent work on modeling a shear-suppression mechanism for QSH formation has produced a predator-prey model of the time dynamics that reproduces the observed behavior, in particular the increased persistence of the QSH state with increased plasma current. Either magnetic or flow shear can facilitate QSH formation. The magnetic shear dependence of QSH is analyzed using V3FIT reconstructions of magnetic equilibrium constrained by internal measurements of density and temperature as well as soft x-ray emission. Fluctuations in the flux surface structure are compared against the measured temperature and density fluctuations and the reconstructed temperature and density profiles are examined to look for evidence of barriers to particle and heat transport. This material is based upon work supported by the U.S. DOE.

  15. Travelling wave expansion: a model fitting approach to the inverse problem of elasticity reconstruction.

    PubMed

    Baghani, Ali; Salcudean, Septimiu; Honarvar, Mohammad; Sahebjavaher, Ramin S; Rohling, Robert; Sinkus, Ralph

    2011-08-01

    In this paper, a novel approach to the problem of elasticity reconstruction is introduced. In this approach, the solution of the wave equation is expanded as a sum of waves travelling in different directions sharing a common wave number. In particular, the solutions for the scalar and vector potentials which are related to the dilatational and shear components of the displacement respectively are expanded as sums of travelling waves. This solution is then used as a model and fitted to the measured displacements. The value of the shear wave number which yields the best fit is then used to find the elasticity at each spatial point. The main advantage of this method over direct inversion methods is that, instead of taking the derivatives of noisy measurement data, the derivatives are taken on the analytical model. This improves the results of the inversion. The dilatational and shear components of the displacement can also be computed as a byproduct of the method, without taking any derivatives. Experimental results show the effectiveness of this technique in magnetic resonance elastography. Comparisons are made with other state-of-the-art techniques. PMID:21813354
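
    The following one-dimensional toy version conveys the core idea: for each candidate wave number, the travelling-wave amplitudes are obtained by linear least squares, and the wave number giving the smallest residual yields the elasticity. The real method works in 3D with scalar and vector potentials; all names and values here are illustrative.

    ```python
    import numpy as np

    rho = 1000.0                         # tissue density, kg/m^3 (assumed)
    omega = 2 * np.pi * 100.0            # 100 Hz mechanical excitation (assumed)

    # Synthetic 1D displacement phasor: two counter-propagating waves plus noise.
    x = np.linspace(0.0, 0.1, 200)       # measurement positions, m
    k_true = 2 * np.pi / 0.025           # 25 mm wavelength
    rng = np.random.default_rng(4)
    u_meas = (1.0 * np.exp(1j * k_true * x) + 0.3 * np.exp(-1j * k_true * x)
              + 0.05 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size)))

    def residual(k):
        """Fit travelling-wave amplitudes for a given wave number by linear least squares."""
        basis = np.column_stack([np.exp(1j * k * x), np.exp(-1j * k * x)])
        coeffs, *_ = np.linalg.lstsq(basis, u_meas, rcond=None)
        return np.linalg.norm(u_meas - basis @ coeffs)

    k_grid = np.linspace(50.0, 600.0, 1000)
    k_best = k_grid[np.argmin([residual(k) for k in k_grid])]
    mu = rho * omega**2 / k_best**2      # shear modulus from the recovered wave number
    print(f"estimated k = {k_best:.1f} rad/m, shear modulus = {mu / 1000:.1f} kPa")
    ```

    Because the derivatives live in the analytical wave model rather than in the noisy data, the residual scan stays well behaved even at realistic noise levels, which is the advantage the abstract highlights over direct inversion.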

  16. Total Force Fitness in units part 1: military demand-resource model.

    PubMed

    Bates, Mark J; Fallesen, Jon J; Huey, Wesley S; Packard, Gary A; Ryan, Diane M; Burke, C Shawn; Smith, David G; Watola, Daniel J; Pinder, Evette D; Yosick, Todd M; Estrada, Armando X; Crepeau, Loring; Bowles, Stephen V

    2013-11-01

    The military unit is a critical center of gravity in the military's efforts to enhance resilience and the health of the force. The purpose of this article is to augment the military's Total Force Fitness (TFF) guidance with a framework of TFF in units. The framework is based on a Military Demand-Resource model that highlights the dynamic interactions across demands, resources, and outcomes. A joint team of subject-matter experts identified key variables representing unit fitness demands, resources, and outcomes. The resulting framework informs and supports leaders, support agencies, and enterprise efforts to strengthen TFF in units by (1) identifying TFF unit variables aligned with current evidence and operational practices, (2) standardizing communication about TFF in units across the Department of Defense enterprise in a variety of military organizational contexts, (3) improving current resources including evidence-based actions for leaders, (4) identifying and addressing gaps, and (5) directing future research for enhancing TFF in units. These goals are intended to inform and enhance Service efforts to develop Service-specific TFF models, as well as provide the conceptual foundation for a follow-on article about TFF metrics for units.

  17. Travelling wave expansion: a model fitting approach to the inverse problem of elasticity reconstruction.

    PubMed

    Baghani, Ali; Salcudean, Septimiu; Honarvar, Mohammad; Sahebjavaher, Ramin S; Rohling, Robert; Sinkus, Ralph

    2011-08-01

    In this paper, a novel approach to the problem of elasticity reconstruction is introduced. In this approach, the solution of the wave equation is expanded as a sum of waves travelling in different directions sharing a common wave number. In particular, the solutions for the scalar and vector potentials which are related to the dilatational and shear components of the displacement respectively are expanded as sums of travelling waves. This solution is then used as a model and fitted to the measured displacements. The value of the shear wave number which yields the best fit is then used to find the elasticity at each spatial point. The main advantage of this method over direct inversion methods is that, instead of taking the derivatives of noisy measurement data, the derivatives are taken on the analytical model. This improves the results of the inversion. The dilatational and shear components of the displacement can also be computed as a byproduct of the method, without taking any derivatives. Experimental results show the effectiveness of this technique in magnetic resonance elastography. Comparisons are made with other state-of-the-art techniques.

  18. Lévy Flights and Self-Similar Exploratory Behaviour of Termite Workers: Beyond Model Fitting

    PubMed Central

    Miramontes, Octavio; DeSouza, Og; Paiva, Leticia Ribeiro; Marins, Alessandra; Orozco, Sirio

    2014-01-01

    Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties –including Lévy flights– in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale. PMID:25353958

  19. Estimation of high-resolution dust column density maps. Empirical model fits

    NASA Astrophysics Data System (ADS)

    Juvela, M.; Montillaud, J.

    2013-09-01

    Context. Sub-millimetre dust emission is an important tracer of column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including a method (in the following, method B) that constructs the N estimate in stages, where the smallest spatial scales are derived using only the shortest wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method is based on model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare its accuracy with that of method B and with the results that would be obtained using high-resolution observations without noise. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method E is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.
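
    A grossly simplified, single-pixel version of this kind of MCMC fit is sketched below: Metropolis sampling of column density and colour temperature given three-band intensities from a modified-blackbody model. The real method fits whole maps at their native resolutions; the emissivity index, bands, and noise level are assumptions.

    ```python
    import numpy as np

    H, KB, C = 6.626e-34, 1.381e-23, 2.998e8                # SI constants

    def model_intensity(logN, T, freqs, beta=1.8):
        """Modified blackbody: intensity proportional to N * nu^beta * B_nu(T) (arbitrary units)."""
        b_nu = 2 * H * freqs**3 / C**2 / np.expm1(H * freqs / (KB * T))
        return 10**logN * (freqs / 1.0e12)**beta * b_nu

    freqs = C / np.array([250e-6, 350e-6, 500e-6])           # Herschel-like bands
    rng = np.random.default_rng(8)
    obs = model_intensity(0.0, 18.0, freqs) * (1 + rng.normal(0, 0.05, 3))
    sigma = 0.05 * obs

    def log_post(theta):
        logN, T = theta
        if not 5.0 < T < 60.0:                               # flat prior on temperature
            return -np.inf
        resid = (obs - model_intensity(logN, T, freqs)) / sigma
        return -0.5 * np.sum(resid**2)

    # Metropolis sampling of (log N, T) for a single pixel.
    theta = np.array([0.5, 15.0])
    lp = log_post(theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [0.05, 0.5])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    samples = np.array(samples[5000:])                       # discard burn-in
    print("T = %.1f +/- %.1f K" % (samples[:, 1].mean(), samples[:, 1].std()))
    ```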

  20. Lifting a veil on diversity: a Bayesian approach to fitting relative-abundance models.

    PubMed

    Golicher, Duncan J; O'Hara, Robert B; Ruíz-Montoya, Lorena; Cayuela, Luis

    2006-02-01

    Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation. PMID:16705973

  1. A healthy fear of the unknown: perspectives on the interpretation of parameter fits from computational models in neuroscience.

    PubMed

    Nassar, Matthew R; Gold, Joshua I

    2013-04-01

    Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis to support claims that a particular model can provide a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
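
    The general workflow discussed above can be illustrated compactly: simulate a simple delta-rule/softmax learner on a two-armed bandit, then recover its parameters by maximum likelihood. The task and model below are illustrative stand-ins, not the specific models analysed in the article.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    n_trials, reward_p = 500, np.array([0.7, 0.3])           # two-armed bandit (assumed task)

    def simulate(alpha, beta):
        """Simulate choices of a delta-rule learner with softmax action selection."""
        q = np.zeros(2)
        choices, rewards = np.empty(n_trials, int), np.empty(n_trials)
        for i in range(n_trials):
            p = np.exp(beta * q)
            p /= p.sum()
            c = rng.choice(2, p=p)
            r = float(rng.random() < reward_p[c])
            q[c] += alpha * (r - q[c])
            choices[i], rewards[i] = c, r
        return choices, rewards

    def neg_log_lik(params, choices, rewards):
        alpha, beta = params
        q, nll = np.zeros(2), 0.0
        for c, r in zip(choices, rewards):
            p = np.exp(beta * q)
            p /= p.sum()
            nll -= np.log(p[c] + 1e-12)
            q[c] += alpha * (r - q[c])
        return nll

    choices, rewards = simulate(alpha=0.2, beta=3.0)
    fit = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, rewards),
                   bounds=[(1e-3, 1.0), (1e-3, 20.0)])
    print("recovered alpha, beta:", np.round(fit.x, 3))
    # Fitting this same model to data generated by a richer process is one way to
    # expose the parameter biases discussed in the abstract.
    ```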

  2. The Challenges of Fitting an Item Response Theory Model to the Social Anhedonia Scale

    PubMed Central

    Reise, Steven P.; Horan, William P.; Blanchard, Jack J.

    2011-01-01

    This study explored the application of latent variable measurement models to the Social Anhedonia Scale (SAS; Eckblad, Chapman, Chapman, & Mishlove, 1982), a widely used and influential measure in schizophrenia-related research. Specifically, we applied unidimensional and bifactor item response theory (IRT) models to data from a community sample of young adults (n = 2,227). Ordinal factor analyses revealed that identifying a coherent latent structure in the 40-item SAS data was challenging due to: a) the presence of multiple small content clusters (e.g., doublets), b) modest relations between those clusters which, in turn, implies a general factor of only modest strength, c) items that shared little variance with the majority of items, and d) cross-loadings in bifactor solutions. Consequently, we conclude that SAS responses cannot be modeled accurately by either unidimensional or bifactor IRT models. Although the application of a bifactor model to a reduced 17-item set met with better success, significant psychometric and substantive problems remained. Results highlight the challenges of applying latent variable models to scales that were not originally designed to fit these models. PMID:21516580

  3. Observations from using models to fit the gas production of varying volume test cells and landfills.

    PubMed

    Lamborn, Julia

    2012-12-01

    Landfill operators are looking for more accurate models to predict waste degradation and landfill gas production. The simple microbial growth and decay models, whilst being easy to use, have been shown to be inaccurate. Many of the newer and more complex (component) models are highly parameter hungry and many of the required parameters have not been collected or measured at full-scale landfills. This paper compares the results of using different models (LANDGEM, HBM, and two Monod models developed by the author) to fit the gas production of laboratory scale, field test cell and full-scale landfills and discusses some observations that can be made regarding the scalability of gas generation rates. The comparison of these results shows that the fast degradation rate that occurs at laboratory scale is not replicated at field-test cell and full-scale landfills. At small scale, all the models predict a slower rate of gas generation than actually occurs. At field test cell and full-scale a number of models predict a faster gas generation than actually occurs. Areas for future work have been identified, which include investigations into the capture efficiency of gas extraction systems and into the parameter sensitivity and identification of the critical parameters for field-test cell and full-scale landfill prediction.
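
    For reference, the first-order-decay form used by simple models such as LANDGEM can be written in a few lines; the rate constant and methane generation potential below are generic illustrative values, not calibrated parameters from this study.

    ```python
    import numpy as np

    def first_order_gas(t_eval, waste_tonnes_per_year, k=0.05, L0=100.0):
        """First-order-decay gas generation: each year's waste M_i contributes
        k * L0 * M_i * exp(-k * (t - t_i)) m^3/yr of methane for t >= t_i.
        k (1/yr) and L0 (m^3 CH4 per tonne) are illustrative defaults."""
        t_dep = np.arange(len(waste_tonnes_per_year), dtype=float)   # deposit years
        t = np.asarray(t_eval, dtype=float)[:, None]
        age = t - t_dep[None, :]
        rate = np.where(age >= 0,
                        k * L0 * np.asarray(waste_tonnes_per_year) * np.exp(-k * age),
                        0.0)
        return rate.sum(axis=1)

    # 10 years of filling at 50,000 t/yr, evaluated over 40 years.
    years = np.arange(0, 40)
    q = first_order_gas(years, [50_000] * 10)
    print("peak generation %.2e m^3/yr in year %d" % (q.max(), years[np.argmax(q)]))
    ```

    Comparing curves generated this way against measured gas production at different scales is the kind of exercise the paper performs when assessing scalability.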

  4. The challenges of fitting an item response theory model to the Social Anhedonia Scale.

    PubMed

    Reise, Steven P; Horan, William P; Blanchard, Jack J

    2011-05-01

    This study explored the application of latent variable measurement models to the Social Anhedonia Scale (SAS; Eckblad, Chapman, Chapman, & Mishlove, 1982), a widely used and influential measure in schizophrenia-related research. Specifically, we applied unidimensional and bifactor item response theory (IRT) models to data from a community sample of young adults (n = 2,227). Ordinal factor analyses revealed that identifying a coherent latent structure in the 40-item SAS data was challenging due to (a) the presence of multiple small content clusters (e.g., doublets); (b) modest relations between those clusters, which, in turn, implies a general factor of only modest strength; (c) items that shared little variance with the majority of items; and (d) cross-loadings in bifactor solutions. Consequently, we conclude that SAS responses cannot be modeled accurately by either unidimensional or bifactor IRT models. Although the application of a bifactor model to a reduced 17-item set met with better success, significant psychometric and substantive problems remained. Results highlight the challenges of applying latent variable models to scales that were not originally designed to fit these models.

  5. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    PubMed

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers.

  6. Optimal Experiment Design for Monoexponential Model Fitting: Application to Apparent Diffusion Coefficient Imaging

    PubMed Central

    Alipoor, Mohammad; Maier, Stephan E.; Gu, Irene Yu-Hua; Mehnert, Andrew; Kahl, Fredrik

    2015-01-01

    The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its steady performance for the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters. PMID:26839880
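
    A compact sketch of the D-optimal idea for the monoexponential model is given below: build the Jacobian of S = S0*exp(-b*ADC), form the Fisher information, and pick the candidate b-value set with the largest determinant. The candidate values and nominal ADC are assumptions for illustration.

    ```python
    import numpy as np
    from itertools import combinations

    def d_optimality(b_values, S0=1.0, adc=1.0e-3, n_avg=1):
        """Determinant of the Fisher information for S = S0*exp(-b*ADC) under
        i.i.d. Gaussian noise (the noise variance cancels when comparing designs)."""
        b = np.asarray(b_values, dtype=float)
        s = S0 * np.exp(-b * adc)
        J = np.column_stack([s / S0, -b * s])        # derivatives w.r.t. S0 and ADC
        F = n_avg * J.T @ J
        return np.linalg.det(F)

    # Pick the best 2-point design from a candidate set of b-values (s/mm^2),
    # assuming a nominal ADC of 1.0e-3 mm^2/s (values are illustrative).
    candidates = [0, 100, 250, 500, 750, 1000, 1500, 2000]
    best = max(combinations(candidates, 2), key=d_optimality)
    print("D-optimal pair of b-values:", best)
    ```

    Maximizing det(F) is equivalent to minimizing the determinant of the parameter covariance, which is the criterion the abstract describes.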

  7. Optimal Experiment Design for Monoexponential Model Fitting: Application to Apparent Diffusion Coefficient Imaging.

    PubMed

    Alipoor, Mohammad; Maier, Stephan E; Gu, Irene Yu-Hua; Mehnert, Andrew; Kahl, Fredrik

    2015-01-01

    The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its steady performance for the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters.

  8. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed, to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and how knowledge of parameter values is constrained by observations.

  9. Mutation-selection models of coding sequence evolution with site-heterogeneous amino acid fitness profiles.

    PubMed

    Rodrigue, Nicolas; Philippe, Hervé; Lartillot, Nicolas

    2010-03-01

    Modeling the interplay between mutation and selection at the molecular level is key to evolutionary studies. To this end, codon-based evolutionary models have been proposed as pertinent means of studying long-range evolutionary patterns and are widely used. However, these approaches have not yet consolidated results from amino acid level phylogenetic studies showing that selection acting on proteins displays strong site-specific effects, which translate into heterogeneous amino acid propensities across the columns of alignments; related codon-level studies have instead focused on either modeling a single selective context for all codon columns, or a separate selective context for each codon column, with the former strategy deemed too simplistic and the latter deemed overparameterized. Here, we integrate recent developments in nonparametric statistical approaches to propose a probabilistic model that accounts for the heterogeneity of amino acid fitness profiles across the coding positions of a gene. We apply the model to a dozen real protein-coding gene alignments and find it to produce biologically plausible inferences, for instance, as pertaining to site-specific amino acid constraints, as well as distributions of scaled selection coefficients. In their account of mutational features as well as the heterogeneous regimes of selection at the amino acid level, the modeling approaches studied here can form a backdrop for several extensions, accounting for other selective features, for variable population size, or for subtleties of mutational features, all with parameterizations couched within population-genetic theory. PMID:20176949

  10. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  11. Spectral observations of Ellerman bombs and fitting with a two-cloud model

    SciTech Connect

    Hong, Jie; Ding, M. D.; Li, Ying; Fang, Cheng; Cao, Wenda

    2014-09-01

    We study the Hα and Ca II 8542 Å line spectra of four typical Ellerman bombs (EBs) in the active region NOAA 11765 on 2013 June 6, observed with the Fast Imaging Solar Spectrograph installed at the 1.6 m New Solar Telescope at Big Bear Solar Observatory. Considering that EBs may occur in a restricted region in the lower atmosphere, and that their spectral lines show particular features, we propose a two-cloud model to fit the observed line profiles. The lower cloud can account for the wing emission, and the upper cloud is mainly responsible for the absorption at line center. After choosing carefully the free parameters, we get satisfactory fitting results. As expected, the lower cloud shows an increase of the source function, corresponding to a temperature increase of 400-1000 K in EBs relative to the quiet Sun. This is consistent with previous results deduced from semi-empirical models and confirms that local heating occurs in the lower atmosphere during the appearance of EBs. We also find that the optical depths can increase to some extent in both the lower and upper clouds, which may result from either direct heating in the lower cloud, or illumination by an enhanced radiation on the upper cloud. The velocities derived from this method, however, are different from those obtained using the traditional bisector method, implying that one should be cautious when interpreting this parameter. The two-cloud model can thus be used as an efficient method to deduce the basic physical parameters of EBs.
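
    A schematic forward version of the two-cloud idea is sketched below (constant source function in each slab, Gaussian opacity profiles, lower cloud crossed first); the parameterization and numbers are illustrative and may differ from the authors' implementation. Fitting would then adjust these parameters to match the observed Hα and Ca II profiles.

    ```python
    import numpy as np

    def gaussian_tau(wl, tau0, wl_shift, doppler_width, wl0=6562.8):
        """Gaussian opacity profile centred at the (Doppler-shifted) line centre."""
        return tau0 * np.exp(-((wl - wl0 - wl_shift) / doppler_width)**2)

    def two_cloud(wl, I0, lower, upper):
        """Radiative transfer through two slabs with constant source functions:
        the background I0 passes through the lower cloud first, then the upper cloud."""
        tau_lo = gaussian_tau(wl, *lower["tau_args"])
        tau_up = gaussian_tau(wl, *upper["tau_args"])
        after_lower = I0 * np.exp(-tau_lo) + lower["S"] * (1 - np.exp(-tau_lo))
        return after_lower * np.exp(-tau_up) + upper["S"] * (1 - np.exp(-tau_up))

    # Illustrative parameters: a bright lower cloud (wing emission) under a cooler
    # upper cloud (central absorption), computed across the Halpha line.
    wl = np.linspace(6560.8, 6564.8, 201)               # wavelength, Angstrom
    I0 = 1.0 - 0.8 * np.exp(-((wl - 6562.8) / 0.8)**2)  # stand-in quiet-Sun profile
    lower = {"S": 1.5, "tau_args": (1.5, 0.02, 1.2)}    # enhanced source function
    upper = {"S": 0.15, "tau_args": (3.0, 0.0, 0.4)}    # absorbing core
    profile = two_cloud(wl, I0, lower, upper)
    print("I/I0 at line centre: %.2f, in the blue wing: %.2f"
          % (profile[100] / I0[100], profile[30] / I0[30]))
    ```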

  12. Quantitative modeling of virus evolutionary dynamics and adaptation in serial passages using empirically inferred fitness landscapes.

    PubMed

    Woo, Hyung Jun; Reifman, Jaques

    2014-01-01

    We describe a stochastic virus evolution model representing genomic diversification and within-host selection during experimental serial passages under cell culture or live-host conditions. The model incorporates realistic descriptions of the virus genotypes in nucleotide and amino acid sequence spaces, as well as their diversification from error-prone replications. It quantitatively considers factors such as target cell number, bottleneck size, passage period, infection and cell death rates, and the replication rate of different genotypes, allowing for systematic examinations of how their changes affect the evolutionary dynamics of viruses during passages. The relative probability for a viral population to achieve adaptation under a new host environment, quantified by the rate with which a target sequence frequency rises above 50%, was found to be most sensitive to factors related to sequence structure (distance from the wild type to the target) and selection strength (host cell number and bottleneck size). For parameter values representative of RNA viruses, the likelihood of observing adaptations during passages became negligible as the required number of mutations rose above two amino acid sites. We modeled the specific adaptation process of influenza A H5N1 viruses in mammalian hosts by simulating the evolutionary dynamics of H5 strains under the fitness landscape inferred from multiple sequence alignments of H3 proteins. In light of comparisons with experimental findings, we observed that the evolutionary dynamics of adaptation is strongly affected not only by the tendency toward increasing fitness values but also by the accessibility of pathways between genotypes constrained by the genetic code.

  13. ANALYTICAL LIGHT CURVE MODELS OF SUPERLUMINOUS SUPERNOVAE: χ2-MINIMIZATION OF PARAMETER FITS

    SciTech Connect

    Chatzopoulos, E.; Wheeler, J. Craig; Vinko, J.; Horvath, Z. L.; Nagy, A.

    2013-08-10

    We present fits of generalized semi-analytic supernova (SN) light curve (LC) models for a variety of power inputs including 56Ni and 56Co radioactive decay, magnetar spin-down, and forward and reverse shock heating due to supernova ejecta-circumstellar matter (CSM) interaction. We apply our models to the observed LCs of the H-rich superluminous supernovae (SLSN-II) SN 2006gy, SN 2006tf, SN 2008am, SN 2008es, CSS100217, the H-poor SLSN-I SN 2005ap, SCP06F6, SN 2007bi, SN 2010gx, and SN 2010kd, as well as to the interacting SN 2008iy and PTF 09uj. Our goal is to determine the dominant mechanism that powers the LCs of these extraordinary events and the physical conditions involved in each case. We also present a comparison of our semi-analytical results with recent results from numerical radiation hydrodynamics calculations in the particular case of SN 2006gy in order to explore the strengths and weaknesses of our models. We find that CS shock heating produced by ejecta-CSM interaction provides a better fit to the LCs of most of the events we examine. We discuss the possibility that collision of supernova ejecta with hydrogen-deficient CSM accounts for some of the hydrogen-deficient SLSNe (SLSN-I) and may be a plausible explanation for the explosion mechanism of SN 2007bi, the pair-instability supernova candidate. We characterize and discuss issues of parameter degeneracy.

  14. Statistics of dark matter substructure - I. Model and universal fitting functions

    NASA Astrophysics Data System (ADS)

    Jiang, Fangzhou; van den Bosch, Frank C.

    2016-05-01

    We present a new, semi-analytical model describing the evolution of dark matter subhaloes. The model uses merger trees constructed using the method of Parkinson et al. to describe the masses and redshifts of subhaloes at accretion, which are subsequently evolved using a simple model for the orbit-averaged mass-loss rates. The model is extremely fast, treats subhaloes of all orders, accounts for scatter in orbital properties and halo concentrations, uses a simple recipe to convert subhalo mass to maximum circular velocity, and considers subhalo disruption. The model is calibrated to accurately reproduce the average subhalo mass and velocity functions in numerical simulations. We demonstrate that, on average, the mass fraction in subhaloes is tightly correlated with the 'dynamical age' of the host halo, defined as the number of halo dynamical times that have elapsed since its formation. Using this relation, we present universal fitting functions for the evolved and unevolved subhalo mass and velocity functions that are valid for a broad range in host halo mass, redshift and Λ cold dark matter cosmology.

  15. Optimized aerodynamic design process for subsonic transport wing fitted with winglets. [wind tunnel model

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.

    1979-01-01

    The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.

  16. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis that assumes a linear effect of the covariate on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect was approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in the Cox model: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and the significance level was set at 0.05. Results: Based on AIC, the penalized spline method had consistently lower mean squared error than the other methods for selection of the smoothing parameter. The same result was obtained with real data. Conclusion: The penalized spline smoothing method, with AIC used for smoothing parameter selection, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods. PMID:27041809

  17. Ploidy frequencies in plants with ploidy heterogeneity: fitting a general gametic model to empirical population data

    PubMed Central

    Suda, Jan; Herben, Tomáš

    2013-01-01

    Genome duplication (polyploidy) is a recurrent evolutionary process in plants, often conferring instant reproductive isolation and thus potentially leading to speciation. Outcome of the process is often seen in the field as different cytotypes co-occur in many plant populations. Failure of meiotic reduction during gametogenesis is widely acknowledged to be the main mode of polyploid formation. To get insight into its role in the dynamics of polyploidy generation under natural conditions, and coexistence of several ploidy levels, we developed a general gametic model for diploid–polyploid systems. This model predicts equilibrium ploidy frequencies as functions of several parameters, namely the unreduced gamete proportions and fertilities of higher ploidy plants. We used data on field ploidy frequencies for 39 presumably autopolyploid plant species/populations to infer numerical values of the model parameters (either analytically or using an optimization procedure). With the exception of a few species, the model fit was very high. The estimated proportions of unreduced gametes (median of 0.0089) matched published estimates well. Our results imply that conditions for cytotype coexistence in natural populations are likely to be less restrictive than previously assumed. In addition, rather simple models show sufficiently rich behaviour to explain the prevalence of polyploids among flowering plants. PMID:23193129

  18. Electrically detected magnetic resonance modeling and fitting: An equivalent circuit approach

    SciTech Connect

    Leite, D. M. G.; Batagin-Neto, A.; Nunes-Neto, O.; Gómez, J. A.; Graeff, C. F. O.

    2014-01-21

    The physics of electrically detected magnetic resonance (EDMR) quadrature spectra is investigated. An equivalent circuit model is proposed in order to retrieve crucial information in a variety of different situations. This model allows the discrimination and determination of spectroscopic parameters associated to distinct resonant spin lines responsible for the total signal. The model considers not just the electrical response of the sample but also features of the measuring circuit and their influence on the resulting spectral lines. As a consequence, from our model, it is possible to separate different regimes, which depend basically on the modulation frequency and the RC constant of the circuit. In what is called the high frequency regime, it is shown that the sign of the signal can be determined. Recent EDMR spectra from Alq3-based organic light emitting diodes, as well as from a-Si:H reported in the literature, were successfully fitted by the model. Accurate values of g-factor and linewidth of the resonant lines were obtained.

  19. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
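
    The estimation step described above is a standard linear Kalman filter cycle. The sketch below shows the predict/update equations on a toy state of two wave-mode amplitudes; the matrices are placeholders, not the paper's wave model or Geosat geometry.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of a linear Kalman filter.
        x, P : state estimate and covariance;  z : new observation vector
        F, H : state-transition and observation matrices
        Q, R : model (forecast) and observation error covariances."""
        # Forecast
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: combine forecast and observation, weighted by their error covariances
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy example: track the slowly varying amplitudes of two wave modes from
    # noisy point observations (all matrices are illustrative placeholders).
    rng = np.random.default_rng(6)
    n_state, n_obs = 2, 5
    F = np.eye(n_state)                       # amplitudes persist between steps
    H = rng.normal(size=(n_obs, n_state))     # stands in for wave shapes at obs points
    Q, R = 0.01 * np.eye(n_state), 0.25 * np.eye(n_obs)

    x_true = np.array([1.0, -0.5])
    x, P = np.zeros(n_state), np.eye(n_state)
    for _ in range(50):
        z = H @ x_true + rng.normal(scale=0.5, size=n_obs)
        x, P = kalman_step(x, P, z, F, H, Q, R)
    print("estimated amplitudes:", np.round(x, 2))
    ```

    Tuning Q so that the forecast performance is optimized, as described in the abstract, amounts to choosing how much the filter trusts the wave model relative to the altimetric observations.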

  20. MEMLET: An Easy-to-Use Tool for Data Fitting and Model Comparison Using Maximum-Likelihood Estimation.

    PubMed

    Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael

    2016-07-26

    We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. PMID:27463130
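
    Two of the features listed above (unbinned MLE and log-likelihood-ratio model comparison) can be reproduced in a few lines with scipy; the sketch below is not MEMLET code, and the simulated dwell times and rates are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    rng = np.random.default_rng(7)
    # Simulated dwell times drawn from a two-component exponential mixture.
    dwells = np.concatenate([rng.exponential(0.1, 700), rng.exponential(1.0, 300)])

    def nll_single(params, t):
        k = params[0]
        return -np.sum(np.log(k) - k * t)

    def nll_double(params, t):
        a, k1, k2 = params
        pdf = a * k1 * np.exp(-k1 * t) + (1 - a) * k2 * np.exp(-k2 * t)
        return -np.sum(np.log(pdf + 1e-300))

    fit1 = minimize(nll_single, x0=[1.0], args=(dwells,), bounds=[(1e-6, None)])
    fit2 = minimize(nll_double, x0=[0.5, 5.0, 0.5], args=(dwells,),
                    bounds=[(1e-3, 1 - 1e-3), (1e-6, None), (1e-6, None)])

    # Log-likelihood ratio: is the extra complexity of the double exponential justified?
    llr = 2 * (fit1.fun - fit2.fun)
    p = chi2.sf(llr, df=2)          # 2 extra parameters (mixing fraction and second rate)
    print("LLR = %.1f, p = %.2g -> double exponential %s"
          % (llr, p, "justified" if p < 0.05 else "not justified"))
    ```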

  1. A new fit-for-purpose model testing framework: Decision Crash Tests

    NASA Astrophysics Data System (ADS)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have noted that a sound standard framework for model testing, the Klemeš Crash Tests (KCTs), the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) renamed as KCTs, has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing whether the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are: i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions; ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or do not upgrade the existing flood control structure) under two different sets of model building

  2. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters.

    PubMed

    Hui, Kerwin; Chai, Jeng-Da

    2016-01-28

    By incorporating the nonempirical strongly constrained and appropriately normed (SCAN) semilocal density functional [J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems and noncovalent interactions. In particular, SCAN0-2, which includes about 79% of Hartree-Fock exchange and 50% of second-order Møller-Plesset correlation, is shown to be reliably accurate for a very diverse range of applications, such as thermochemistry, kinetics, noncovalent interactions, and self-interaction problems. PMID:26827209

  3. Strain estimation in 3D by fitting linear and planar data to the March model

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Talbot, Christopher J.

    2016-08-01

    The probability density function associated with the March model is derived and used in a maximum likelihood method to estimate the best fit distribution and 3D strain parameters for a given set of linear or planar data. Typically it is assumed that, in the initial state (pre-strain), linear or planar data are uniformly distributed on the sphere, which means the number of strain parameters to be estimated needs to be reduced so that the numerical technique succeeds. Essentially this requires that the data are rotated into a suitable reference frame prior to analysis. The method has been applied to a suitable example from the Dalradian of SW Scotland and the results obtained are consistent with those from an independent method of strain analysis. Despite March theory having been incorporated deep into the fabric of geological strain analysis, its full potential as a simple, direct 3D strain analytical tool has not been realized. The method developed here may help remedy this situation.

  4. A 3D boundary-fitted barotropic hydrodynamic model for the New York Harbor region

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, S.

    2005-11-01

    A three-dimensional barotropic hydrodynamic model application to the New York Harbor Region is performed using the Boundary-Fitted HYDROdynamic model (BFHYDRO). The model forcing functions consist of surface elevations along the open boundaries, hourly winds, and fresh water flows from the rivers and sewage flows. A comprehensive skill assessment of the model predictions is done using observed surface elevations and three-dimensional currents. The model-predicted surface elevations compare well with the observed surface elevations at four stations. Mean errors in the model-predicted surface elevations are less than 4% and correlation coefficients exceed 0.985. Model-predicted three-dimensional currents at Verrazano Narrows show excellent comparison with the observations, with mean errors less than 11% and correlation coefficients exceeding 0.960. Model-predicted three-dimensional currents at Bergen Point compare well with the observations, with mean errors less than 15% and correlation coefficients exceeding 0.897. The surface elevation amplitudes and phases of the principal tidal constituents at nine tidal stations, obtained from a harmonic analysis of a 60-day simulation compare well with the observed data. The predicted amplitude and phase of the M2 tidal constituent at these stations are, respectively, within 5 cm and 6° of the observed data. The model-predicted tidal ellipse parameters for the major tidal constituents compare well with the observations at Verrazano Narrows and Bergen Point. The model-predicted along channel sub-tidal currents also compare well with the observations. The semi-diurnal tidal ranges and spring and neap tidal cycles of the surface elevations and currents are well reproduced in the model at all stations. The observed currents at Bergen Point were shown to be flood dominant through tidal distortion analysis. The model-predicted currents also showed Newark Bay and Arthur Kill to be flood dominant systems. The model predictions showed

  5. Molecular mechanisms of protein aggregation from global fitting of kinetic models.

    PubMed

    Meisl, Georg; Kirkegaard, Julius B; Arosio, Paolo; Michaels, Thomas C T; Vendruscolo, Michele; Dobson, Christopher M; Linse, Sara; Knowles, Tuomas P J

    2016-02-01

    The elucidation of the molecular mechanisms by which soluble proteins convert into their amyloid forms is a fundamental prerequisite for understanding and controlling disorders that are linked to protein aggregation, such as Alzheimer's and Parkinson's diseases. However, because of the complexity associated with aggregation reaction networks, the analysis of kinetic data of protein aggregation to obtain the underlying mechanisms represents a complex task. Here we describe a framework, using quantitative kinetic assays and global fitting, to determine and to verify a molecular mechanism for aggregation reactions that is compatible with experimental kinetic data. We implement this approach in a web-based software, AmyloFit. Our procedure starts from the results of kinetic experiments that measure the concentration of aggregate mass as a function of time. We illustrate the approach with results from the aggregation of the β-amyloid (Aβ) peptides measured using thioflavin T, but the method is suitable for data from any similar kinetic experiment measuring the accumulation of aggregate mass as a function of time; the input data are in the form of a tab-separated text file. We also outline general experimental strategies and practical considerations for obtaining kinetic data of sufficient quality to draw detailed mechanistic conclusions, and the procedure starts with instructions for extensive data quality control. For the core part of the analysis, we provide an online platform (http://www.amylofit.ch.cam.ac.uk) that enables robust global analysis of kinetic data without the need for extensive programming or detailed mathematical knowledge. The software automates repetitive tasks and guides users through the key steps of kinetic analysis: determination of constraints to be placed on the aggregation mechanism based on the concentration dependence of the aggregation reaction, choosing from several fundamental models describing assembly into linear aggregates and

  6. A Cautionary Note on Using G[squared](dif) to Assess Relative Model Fit in Categorical Data Analysis

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert; Cai, Li

    2006-01-01

    The likelihood ratio test statistic G[squared](dif) is widely used for comparing the fit of nested models in categorical data analysis. In large samples, this statistic is distributed as a chi-square with degrees of freedom equal to the difference in degrees of freedom between the tested models, but only if the least restrictive model is correctly…

  7. Applying the Bollen-Stine Bootstrap for Goodness-of-Fit Measures to Structural Equation Models with Missing Data.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2002-01-01

    Proposed a method for extending the Bollen-Stine bootstrap model (K. Bollen and R. Stine, 1992) fit to structural equation models with missing data. Developed a Statistical Analysis System macro program to implement this procedure, and assessed its usefulness in a simulation. The new method yielded model rejection rates close to the nominal 5%…

  8. On the Model-Based Bootstrap with Missing Data: Obtaining a "P"-Value for a Test of Exact Fit

    ERIC Educational Resources Information Center

    Savalei, Victoria; Yuan, Ke-Hai

    2009-01-01

    Evaluating the fit of a structural equation model via bootstrap requires a transformation of the data so that the null hypothesis holds exactly in the sample. For complete data, such a transformation was proposed by Beran and Srivastava (1985) for general covariance structure models and applied to structural equation modeling by Bollen and Stine…

  9. The Kunming CalFit study: modeling dietary behavioral patterns using smartphone data.

    PubMed

    Seto, Edmund; Hua, Jenna; Wu, Lemuel; Bestick, Aaron; Shia, Victor; Eom, Sue; Han, Jay; Wang, May; Li, Yan

    2014-01-01

    Human behavioral interventions aimed at improving health can benefit from objective wearable sensor data and mathematical models. Smartphone-based sensing is particularly practical for monitoring behavioral patterns because smartphones are fairly common, are carried by individuals throughout their daily lives, offer a variety of sensing modalities, and can facilitate various forms of user feedback for intervention studies. We describe our findings from a smartphone-based study, in which an Android-based application we developed called CalFit was used to collect information related to young adults' dietary behaviors. In addition to monitoring dietary patterns, we were interested in understanding contextual factors related to when and where an individual eats, as well as how their dietary intake relates to physical activity (which creates energy demand) and psychosocial stress. Twelve participants were asked to use CalFit to record videos of their meals over two 1-week periods, which were translated into nutrient intake by trained dietitians. During this same period, triaxial accelerometry was used to assess each subject's energy expenditure, and GPS was used to record time-location patterns. Ecological momentary assessment was also used to prompt subjects to respond to questions on their phone about their psychological state. The GPS data were processed through a web service we developed called Foodscoremap that is based on the Google Places API to characterize food environments that subjects were exposed to, which may explain and influence dietary patterns. Furthermore, we describe a modeling framework that incorporates all of this information to dynamically infer behavioral patterns that may be used for future intervention studies. PMID:25571578

  10. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    NASA Astrophysics Data System (ADS)

    Cantó, J.; Curiel, S.; Martínez-Gómez, E.

    2009-07-01

    Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, in twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method or more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets by taking a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm to the optimization of some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
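
    A minimal sketch of a mutation-only ("asexual") genetic search of the sort described, applied to a toy chi-square fit, is given below; the population size, mutation scale and shrinking search range are assumptions for illustration, not the authors' exact scheme.

      import numpy as np

      def aga_minimize(objective, bounds, n_pop=50, n_keep=5, n_iter=60, shrink=0.8, seed=0):
          """Keep the best individuals each generation and resample offspring around them
          (asexual reproduction = mutation only) within a search range that shrinks over time."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          width = hi - lo
          pop = rng.uniform(lo, hi, size=(n_pop, len(lo)))
          for _ in range(n_iter):
              scores = np.array([objective(p) for p in pop])
              parents = pop[np.argsort(scores)[:n_keep]]                      # best individuals survive
              offspring = np.repeat(parents, n_pop // n_keep, axis=0)
              offspring += rng.normal(scale=width / 2, size=offspring.shape)  # mutation
              pop = np.clip(offspring, lo, hi)
              width = width * shrink                                          # narrow the search region
          scores = np.array([objective(p) for p in pop])
          return pop[np.argmin(scores)]

      # Toy chi-square fit: recover the slope and intercept of noisy straight-line data.
      rng = np.random.default_rng(1)
      x = np.linspace(0, 1, 40)
      y = 2.0 * x - 1.0 + 0.05 * rng.normal(size=x.size)
      chi2 = lambda p: np.sum((y - (p[0] * x + p[1])) ** 2 / 0.05 ** 2)
      print(np.round(aga_minimize(chi2, bounds=[(-5, 5), (-5, 5)]), 2))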

  11. Ignoring imperfect detection in biological surveys is dangerous: a response to 'fitting and interpreting occupancy models'.

    PubMed

    Guillera-Arroita, Gurutzeta; Lahoz-Monfort, José J; MacKenzie, Darryl I; Wintle, Brendan A; McCarthy, Michael A

    2014-01-01

    In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating in detail their arguments, using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy, so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting to the naïve occupancy estimator in those instances where hierarchical occupancy models do not perform well does not provide a satisfactory solution. The aim should instead be to achieve better estimation, by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, considering model extensions where appropriate.
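
    The confounding of occupancy and detection mentioned above is easy to reproduce in a small simulation (a sketch with hypothetical numbers, not the authors' analysis): two very different true states yield essentially the same naive occupancy estimate.

      import numpy as np

      def naive_occupancy(psi, p, n_sites=20000, n_visits=3, seed=0):
          """Fraction of sites with at least one detection, ignoring imperfect detection.
          psi: true occupancy probability; p: per-visit detection probability."""
          rng = np.random.default_rng(seed)
          occupied = rng.random(n_sites) < psi
          detections = rng.random((n_sites, n_visits)) < p
          return (occupied & detections.any(axis=1)).mean()

      # Two very different true states produce nearly identical naive estimates (about 0.53):
      print(naive_occupancy(psi=0.8, p=0.3))   # high occupancy, poor detection
      print(naive_occupancy(psi=0.6, p=0.5))   # lower occupancy, better detection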

  12. Model inversion by parameter fit using NN emulating the forward model: evaluation of indirect measurements.

    PubMed

    Schiller, Helmut

    2007-05-01

    The usage of inverse models to derive parameters of interest from measurements is widespread in science and technology. The operational usage of many inverse models became feasible just by emulation of the inverse model via a neural net (NN). This paper shows how NNs can be used to improve inversion accuracy by minimizing the sum of error squares. The procedure is very fast as it takes advantage of the Jacobian which is a byproduct of the NN calculation. An example from remote sensing is shown. It is also possible to take into account a non-diagonal covariance matrix of the measurement to derive the covariance matrix of the retrieved parameters.

  13. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    PubMed

    Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.

  14. Assessing Performance of Bayesian State-Space Models Fit to Argos Satellite Telemetry Locations Processed with Kalman Filtering

    PubMed Central

    Silva, Mónica A.; Jonsen, Ian; Russell, Deborah J. F.; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F.

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to “true” GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6±5.6 km) was nearly half that of LS estimates (11.6±8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales’ behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates. PMID:24651252

  15. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    PubMed

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking.
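
    For readers unfamiliar with ABC, the bare-bones rejection variant below conveys the core idea (accept parameter draws whose simulated summary statistic lands near the observed one); MECCA's hybrid likelihood/ABC-MCMC machinery and phylogenetic simulations are not reproduced here, and all settings are illustrative.

      import numpy as np

      def abc_rejection(observed_stat, simulate, prior_sample, n_draws=20000, tol=0.05, seed=0):
          """Keep prior draws whose simulated summary statistic lies within tol of the observed value."""
          rng = np.random.default_rng(seed)
          accepted = []
          for _ in range(n_draws):
              theta = prior_sample(rng)
              if abs(simulate(theta, rng) - observed_stat) < tol:
                  accepted.append(theta)
          return np.array(accepted)

      # Toy example: infer the rate of a Brownian-motion-like trait model from the variance of tip values.
      def simulate(rate, rng, n_tips=50, t=1.0):
          return np.var(rng.normal(0.0, np.sqrt(rate * t), n_tips))

      obs = simulate(0.5, np.random.default_rng(42))               # "observed" summary statistic
      posterior = abc_rejection(obs, simulate, prior_sample=lambda rng: rng.uniform(0.01, 2.0))
      print(len(posterior), round(posterior.mean(), 2))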

  16. Modeling and fitting protein-protein complexes to predict change of binding energy

    PubMed Central

    Dourado, Daniel F.A.R.; Flores, Samuel Coulbourn

    2016-01-01

    It is possible to accurately and economically predict change in protein-protein interaction energy upon mutation (ΔΔG), when a high-resolution structure of the complex is available. This is of growing usefulness for design of high-affinity or otherwise modified binding proteins for therapeutic, diagnostic, industrial, and basic science applications. Recently the field has begun to pursue ΔΔG prediction for homology modeled complexes, but so far this has worked mostly for cases of high sequence identity. If the interacting proteins have been crystallized in free (uncomplexed) form, in a majority of cases it is possible to find a structurally similar complex which can be used as the basis for template-based modeling. We describe how to use MMB to create such models, and then use them to predict ΔΔG, using a dataset consisting of free target structures, co-crystallized template complexes with sequence identity with respect to the targets as low as 44%, and experimental ΔΔG measurements. We obtain similar results by fitting to a low-resolution Cryo-EM density map. Results suggest that other structural constraints may lead to a similar outcome, making the method even more broadly applicable. PMID:27173910

  17. A simple periodic-forced model for dengue fitted to incidence data in Singapore.

    PubMed

    Andraud, Mathieu; Hens, Niel; Beutels, Philippe

    2013-07-01

    Dengue is the world's major arbovirosis and therefore an important public health concern in endemic areas. The availability of weekly reports of dengue cases in Singapore offers the opportunity to analyze the transmission dynamics and the impact of vector control strategies. Based on a previous model studying the impact of vector control strategies in Singapore during the 2005 outbreak, a simple vector-host model accounting for seasonal fluctuation in vector density was developed to estimate the parameters governing the vector population dynamics using dengue fever incidence data from August 2003 to December 2007. The impact of vector control, which consisted principally of a systematic removal of actual and potential breeding sites during a six-week period in 2005, was also investigated. Although our approach does not account for the complex life cycle of the vector, the good fit between data and model outputs showed that the impact of seasonality on the transmission dynamics is highly important. Moreover, the periodic fluctuations of the vector population were found in phase with temperature variations, suggesting a strong climate effect on the vector density and, in turn, on the transmission dynamics.
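
    The structure of such a seasonally forced vector-host model can be sketched as a small ODE system (below); the parameter values and the sinusoidal recruitment term are placeholders for illustration, not the values fitted to the Singapore incidence data.

      import numpy as np
      from scipy.integrate import solve_ivp

      def vector_host(t, y, beta_hv, beta_vh, gamma, mu_v, k0, k1, period=365.0):
          """S_h, I_h: susceptible/infectious host fractions; S_v, I_v: vector densities.
          Vector recruitment k(t) is periodically forced to mimic seasonal fluctuations in vector density."""
          S_h, I_h, S_v, I_v = y
          k_t = k0 * (1.0 + k1 * np.sin(2 * np.pi * t / period))   # seasonal recruitment
          dS_h = -beta_hv * S_h * I_v
          dI_h = beta_hv * S_h * I_v - gamma * I_h
          dS_v = k_t - beta_vh * S_v * I_h - mu_v * S_v
          dI_v = beta_vh * S_v * I_h - mu_v * I_v
          return [dS_h, dI_h, dS_v, dI_v]

      # Illustrative parameters only; fitting would adjust these to the weekly incidence reports.
      params = (0.002, 0.002, 1 / 7, 1 / 14, 100.0, 0.4)
      sol = solve_ivp(vector_host, (0, 3 * 365), [0.99, 0.01, 1000.0, 0.0], args=params, max_step=1.0)
      print(round(float(sol.y[1, -1]), 4))   # infectious host fraction at the end of the run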

  18. Fitting host-parasitoid models with CV2 > 1 using hierarchical generalized linear models.

    PubMed Central

    Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K

    2000-01-01

    The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907
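
    The no-HDI case described above reduces to an ordinary binomial GLM, which can be sketched as follows with synthetic data; the hierarchical, extra-binomial HDI extension would require a random-effects/HGLM fit not shown here.

      import numpy as np
      import statsmodels.api as sm

      # Simulated patch data (hypothetical): parasitism probability depends on host density (HDD only).
      rng = np.random.default_rng(0)
      hosts = rng.integers(5, 80, size=60)                          # hosts per patch
      eta = -0.5 + 0.4 * np.log(hosts)                              # linear predictor on the logit scale
      parasitized = rng.binomial(hosts, 1 / (1 + np.exp(-eta)))

      # Binomial GLM of (parasitized, not parasitized) counts on log host density.
      X = sm.add_constant(np.log(hosts))
      glm = sm.GLM(np.column_stack([parasitized, hosts - parasitized]), X,
                   family=sm.families.Binomial()).fit()
      print(glm.params)                                             # intercept and host-density effect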

  19. Disentangling effects of induced plant defenses and food quantity on herbivores by fitting nonlinear models.

    PubMed

    Morris, W F

    1997-09-01

    Plants can respond to herbivore damage through both broad-scale (systemic) and localized induced responses. While many studies have quantified the impact of systemic responses on herbivores, measuring the impact of localized changes is difficult because plant tissues that have suffered direct damage may represent both a lower quality and a lower quantity of food. This article uses nonlinear models to disentangle the confounding effects of prior herbivory on food quantity and quality. The first (null) model assumes that herbivore performance is determined only by the quantity of food available to an average herbivore. Modified models allow two distinct effects of damage-induced defenses: an increase in the amount of food each herbivore is required to consume in order to achieve maximum performance and a reduction in the maximum performance even when herbivores are fed ad lib. Maximum likelihood methods were used to fit the models to data from field experiments in which Colorado potato beetle (Leptinotarsa decemlineata) larvae were reared on three varieties of potatoes that had been damaged to varying degrees by adult beetles. Prior damage reduced the mean mass of beetles at pupation, and this effect was due to both a decrease in food quantity and induced changes in food quality. In contrast, beetle survival was affected in some cases by reduced food quantity but showed no responses that could be attributed to induced defenses. I discuss this result in the context of previous studies of induced (mostly systemic) responses in the potato-potato beetle system, and I suggest that detailed studies of particular chemical responses and the proposed method of combining bioassays with quantitative models should be used as complementary approaches in future studies of herbivore-induced defenses in plants.

  20. Nonradial p-modes in the G9.5 giant ɛ Ophiuchi? Pulsation model fits to MOST photometry

    NASA Astrophysics Data System (ADS)

    Kallinger, T.; Guenther, D. B.; Matthews, J. M.; Weiss, W. W.; Huber, D.; Kuschnig, R.; Moffat, A. F. J.; Rucinski, S. M.; Sasselov, D.

    2008-02-01

    The G9.5 giant ɛ Oph shows evidence of radial p-mode pulsations in both radial velocity and luminosity. We re-examine the observed frequencies in the photometry and radial velocities and find a best model fit to 18 of the 21 most significant photometric frequencies. The observed frequencies are matched to both radial and nonradial modes in the best model fit. The small scatter of the frequencies about the model-predicted frequencies indicates that the average lifetimes of the modes could be as long as 10-20 d. The best fit model itself, constrained only by the observed frequencies, lies within ±1σ of ɛ Oph's position in the HR-diagram and the interferometrically determined radius. Based on data from the MOST satellite, a Canadian Space Agency mission jointly operated by Dynacon, Inc., the University of Toronto Institute of Aerospace Studies, and the University of British Columbia, with assistance from the University of Vienna, Austria.

  1. Multiple linear regression models to fit magnitude using rupture length, rupture width, rupture area, and surface displacement

    NASA Astrophysics Data System (ADS)

    Chu, A.; Zhuang, J.

    2015-12-01

    Wells and Coppersmith (1994) have used fault data to fit simple linear regression (SLR) models to explain linear relations between moment magnitude and logarithms of fault measurements such as rupture length, rupture width, rupture area and surface displacement. Our work extends their analyses to multiple linear regression (MLR) models by considering two or more predictors with updated data. Treating the quantitative variables (rupture length, rupture width, rupture area and surface displacement) as predictors to fit linear regression models on magnitude, we have discovered that the two-predictor model using rupture area and maximum displacement fits best. The next best alternative predictors are surface length and rupture area. Neither slip type nor slip direction is a significant predictor according to fits of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) models. The corrected Akaike information criterion (Burnham and Anderson, 2002) is used as a model assessment criterion. Comparisons between the simple linear regression models of Wells and Coppersmith (1994) and our multiple linear regression models are presented. Our work is done using fault data from Wells and Coppersmith (1994) and new data from Ellsworth (2000), Hanks and Bakun (2002, 2008), Shaw (2013), and the Finite-Source Rupture Model Database (http://equake-rc.info/SRCMOD/, 2015).
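
    A compact sketch of the model-comparison step (ordinary least squares plus corrected AIC) is shown below with synthetic data; the predictor names are stand-ins for the fault measurements, and the formula follows the usual AICc definition rather than any code from the study.

      import numpy as np

      def fit_and_aicc(X, y):
          """Ordinary least squares fit plus corrected AIC (AICc) for model comparison."""
          n, k = X.shape
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = np.sum((y - X @ beta) ** 2)
          p = k + 1                                         # parameters: coefficients + error variance
          aic = n * np.log(rss / n) + 2 * p
          return beta, aic + 2 * p * (p + 1) / (n - p - 1)  # small-sample correction

      # Hypothetical predictors standing in for log rupture area and log maximum displacement.
      rng = np.random.default_rng(0)
      n = 80
      log_area, log_disp = rng.normal(3, 1, n), rng.normal(0, 0.5, n)
      mag = 4.0 + 0.9 * log_area + 0.3 * log_disp + 0.1 * rng.normal(size=n)

      ones = np.ones(n)
      _, aicc_area = fit_and_aicc(np.column_stack([ones, log_area]), mag)
      _, aicc_both = fit_and_aicc(np.column_stack([ones, log_area, log_disp]), mag)
      print("AICc, area only:          ", round(aicc_area, 1))
      print("AICc, area + displacement:", round(aicc_both, 1))   # lower AICc indicates the preferred model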

  2. On Eigen's Quasispecies Model, Two-Valued Fitness Landscapes, and Isometry Groups Acting on Finite Metric Spaces.

    PubMed

    Semenov, Yuri S; Novozhilov, Artem S

    2016-05-01

    A two-valued fitness landscape is introduced for the classical Eigen's quasispecies model. This fitness landscape can be considered as a direct generalization of the so-called single- or sharply peaked landscape. A general, non-permutation invariant quasispecies model is studied, and therefore the dimension of the problem is [Formula: see text], where N is the sequence length. It is shown that if the fitness function is equal to [Formula: see text] on a G-orbit A and is equal to w elsewhere, then the mean population fitness can be found as the largest root of an algebraic equation of degree at most [Formula: see text]. Here G is an arbitrary isometry group acting on the metric space of sequences of zeroes and ones of the length N with the Hamming distance. An explicit form of this exact algebraic equation is given in terms of the spherical growth function of the G-orbit A. Motivated by the analysis of the two-valued fitness landscapes, an abstract generalization of Eigen's model is introduced such that the sequences are identified with the points of a finite metric space X together with a group of isometries acting transitively on X. In particular, a simplicial analog of the original quasispecies model is discussed, which can be considered as a mathematical model of the switching of the antigenic variants for some bacteria. PMID:27230609

  3. Evapotranspiration measurement and modeling without fitting parameters in high-altitude grasslands

    NASA Astrophysics Data System (ADS)

    Ferraris, Stefano; Previati, Maurizio; Canone, Davide; Dematteis, Niccolò; Boetti, Marco; Balocco, Jacopo; Bechis, Stefano

    2016-04-01

    Mountain grasslands are important, not least because one sixth of the world population lives in watersheds dominated by snowmelt, and because grasslands provide food for both domestic and wild animals. Global warming will probably accelerate the hydrological cycle and increase drought risk. The combination of measurements, modeling and remote sensing can provide knowledge of such remote areas (e.g., Brocca et al., 2013). A better knowledge of the water balance also makes it possible to optimize irrigation (e.g., Canone et al., 2015). This work builds a water balance model for mountain grasslands ranging between 1500 and 2300 m a.s.l. The main input is the digital terrain model, which is more reliable in grasslands than in either woods or the built environment, and it drives the spatial variability of shortwave solar radiation. The other atmospheric forcings, namely air temperature, wind and longwave radiation, are more problematic to estimate, and ad hoc routines have been written to interpolate their hourly variability in space. The soil hydraulic properties are less variable than in the plains, but soil depth estimation is still an open issue. The soil vertical variability has been modeled taking into account the main processes: soil evaporation, root uptake, and percolation into the fractured bedrock. The latent heat flux and soil moisture time series have been compared with data measured at an eddy covariance station, and the results are very good, given that the model has no fitting parameters. The spatial variability results have been compared with the results of a model based on Landsat 7 and 8 data, applied over an area of about 200 square kilometers; the spatial patterns of the two models are in good agreement. Brocca et al. (2013). "Soil moisture estimation in alpine catchments through modelling and satellite observations". Vadose Zone Journal, 12(3), 10 pp. Canone et al. (2015). "Field

  4. Model Order Selection for Short Data: An Exponential Fitting Test (EFT)

    NASA Astrophysics Data System (ADS)

    Quinlan, Angela; Barbot, Jean-Pierre; Larzabal, Pascal; Haardt, Martin

    2006-12-01

    High-resolution methods for estimating signal processing parameters such as bearing angles in array processing or frequencies in spectral analysis may be hampered by a poorly selected model order. As classical model order selection methods fail when the number of snapshots available is small, this paper proposes a method for noncoherent sources, which continues to work under such conditions, while maintaining low computational complexity. For white Gaussian noise and short data, we show that the profile of the ordered noise eigenvalues approximately follows an exponential law. This fact is used to provide a recursive algorithm which detects a mismatch between the observed eigenvalue profile and the theoretical noise-only eigenvalue profile, as such a mismatch indicates the presence of a source. Moreover, the proposed method allows the probability of false alarm to be controlled and predefined, which is a crucial point for systems such as radar. Results of simulations are provided in order to show the capabilities of the algorithm.
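
    The exponential-profile idea can be sketched as follows; the threshold used here is an ad hoc relative cutoff, not the calibrated false-alarm control of the actual EFT, and the recursion is simplified for illustration.

      import numpy as np

      def eft_order_estimate(eigvals, threshold=1.5, min_noise=3):
          """Fit a decaying exponential (linear in log) to the smallest, presumed noise-only
          eigenvalues and flag any larger eigenvalue that sits well above the extrapolated profile."""
          lam = np.sort(eigvals)[::-1]                        # eigenvalues in descending order
          p = len(lam)
          for k in range(p - min_noise - 1, -1, -1):          # test eigenvalues from small to large
              idx = np.arange(k + 1, p)
              slope, intercept = np.polyfit(idx, np.log(lam[k + 1:]), 1)
              predicted = np.exp(intercept + slope * k)       # expected value at index k if noise-only
              if lam[k] > threshold * predicted:
                  return k + 1                                # lam[0..k] stick out above the noise profile
          return 0

      # Toy eigenvalue profile: 2 source eigenvalues on top of an exponentially decaying noise floor.
      noise = 0.5 * np.exp(-0.3 * np.arange(8))
      print(eft_order_estimate(np.concatenate([[10.0, 4.0], noise])))   # expected output: 2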

  5. A differential equation for the asymptotic fitness distribution in the Bak-Sneppen model with five species.

    PubMed

    Schlemm, Eckhard

    2015-09-01

    The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3. PMID:26144945
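
    For readers unfamiliar with the model, a minimal simulation of the five-species case is sketched below; it samples the steady-state fitness distribution numerically, which is the quantity the differential-equation result describes analytically.

      import numpy as np

      def bak_sneppen(n_species=5, n_steps=200000, seed=0):
          """Bak-Sneppen dynamics on a ring: at each step the least-fit species and its two
          neighbours are assigned new random fitness values; returns the sampled fitness values."""
          rng = np.random.default_rng(seed)
          fitness = rng.random(n_species)
          samples = np.empty((n_steps, n_species))
          for t in range(n_steps):
              worst = np.argmin(fitness)
              for j in (worst - 1, worst, worst + 1):          # replace the weakest and its neighbours
                  fitness[j % n_species] = rng.random()
              samples[t] = fitness
          return samples

      samples = bak_sneppen()
      # Empirical steady-state fitness histogram, discarding an initial transient.
      hist, _ = np.histogram(samples[100000:].ravel(), bins=10, range=(0, 1), density=True)
      print(np.round(hist, 2))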

  6. Using Item Mean Squares To Evaluate Fit to the Rasch Model.

    ERIC Educational Resources Information Center

    Smith, Richard M.; And Others

    In the mid to late 1970s, considerable research was conducted on the properties of Rasch fit mean squares, resulting in transformations to convert the mean squares into approximate t-statistics. In the late 1980s and the early 1990s, the trend seems to have reversed, with numerous researchers using the untransformed fit mean squares as a means of…

  7. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    PubMed

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. PMID:25038234

  8. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    SciTech Connect

    Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed

  9. Do We Need Multiple Models of Auditory Verbal Hallucinations? Examining the Phenomenological Fit of Cognitive and Neurological Models

    PubMed Central

    Jones, Simon R.

    2010-01-01

    The causes of auditory verbal hallucinations (AVHs) are still unclear. The evidence for 2 prominent cognitive models of AVHs, one based on inner speech, the other on intrusions from memory, is briefly reviewed. The fit of these models, as well as neurological models, to the phenomenology of AVHs is then critically examined. It is argued that only a minority of AVHs, such as those with content clearly relating to verbalizations experienced surrounding previous trauma, are consistent with cognitive AVHs-as-memories models. Similarly, it is argued that current neurological models are only phenomenologically consistent with a limited subset of AVHs. In contrast, the phenomenology of the majority of AVHs, which involve voices attempting to regulate the ongoing actions of the voice hearer, are argued to be more consistent with inner speech–based models. It is concluded that subcategorizations of AVHs may be necessary, with each underpinned by different neurocognitive mechanisms. The need to study what is termed the dynamic developmental progression of AVHs is also highlighted. Future empirical research is suggested in this area. PMID:18820262

  10. Modeling, Simulation and Data Fitting of the Charge Injected Diodes (CID) for SLHC Tracking Applications

    SciTech Connect

    Li, Z.; Eremin, V.; Harkonen, J.; Luukka, P.; Tuominen, E.; Tuovinen, E.; Verbitskaya, E.

    2009-10-27

    Modeling and simulations have been performed for the charge injected diodes (CID) for the application in SLHC. MIP-induced current and charges have been calculated for segmented detectors with various radiation fluences, up to the highest SLHC fluence of 1 × 10^16 n_eq/cm^2. Although the main advantage of CID detectors is their virtual full depletion at any radiation fluence at a modest bias voltage (<600 V), the simulation of CID and fitting to the existing data have shown that the CID operation mode also reduces the free carrier trapping, resulting in a much higher charge collection at the SLHC fluence than that in a standard Si detector. The reduction in free carrier trapping by almost one order of magnitude is due to the fact that the CID mode also pre-fills the traps, making them neutral and not active in trapping. It has been found that electron traps can be pre-filled by injection of electrons from the n+ contact, and hole traps can be pre-filled by injection of holes from the p+ contact. The CID mode of detector operation can be achieved at a modestly low temperature of around -40 °C, achievable by the proposed CO2 cooling for detector upgrades in SLHC. High charge collection comparable to the 3D electrode Si detectors makes the CID Si detector a valuable alternative for SLHC detectors for its much easier fabrication process.

  11. Regulation of Neutrophil Degranulation and Cytokine Secretion: A Novel Model Approach Based on Linear Fitting

    PubMed Central

    Naegelen, Isabelle; Beaume, Nicolas; Plançon, Sébastien; Schenten, Véronique; Tschirhart, Eric J.; Bréchard, Sabrina

    2015-01-01

    Neutrophils participate in the maintenance of host integrity by releasing various cytotoxic proteins during degranulation. Due to recent advances, a major role has been attributed to neutrophil-derived cytokine secretion in the initiation, exacerbation, and resolution of inflammatory responses. Because the release of neutrophil-derived products orchestrates the action of other immune cells at the infection site and, thus, can contribute to the development of chronic inflammatory diseases, we aimed to investigate in more detail the spatiotemporal regulation of neutrophil-mediated release mechanisms of proinflammatory mediators. Purified human neutrophils were stimulated for different time points with lipopolysaccharide. Cells and supernatants were analyzed by flow cytometry techniques and used to establish secretion profiles of granules and cytokines. To analyze the link between cytokine release and degranulation time series, we propose an original strategy based on linear fitting, which may be used as a guideline, to (i) define the relationship of granule proteins and cytokines secreted to the inflammatory site and (ii) investigate the spatial regulation of neutrophil cytokine release. The model approach presented here aims to predict the correlation between neutrophil-derived cytokine secretion and degranulation and may easily be extrapolated to investigate the relationship between other types of time series of functional processes. PMID:26579547
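
    As a toy illustration of the linear-fitting guideline (synthetic time courses, not the study's measurements), the sketch below relates a cytokine-release series to a degranulation series by a straight-line fit and reports the goodness of fit.

      import numpy as np

      # Synthetic time courses (arbitrary units): cumulative granule-marker exposure and cytokine release.
      time_h = np.arange(0, 13)                                 # hours after stimulation (hypothetical)
      rng = np.random.default_rng(0)
      degranulation = 1 - np.exp(-0.3 * time_h) + 0.02 * rng.normal(size=time_h.size)
      cytokine = 0.8 * degranulation + 0.05 + 0.02 * rng.normal(size=time_h.size)

      # Linear fit of cytokine release against degranulation, as a guideline for linked secretion.
      slope, intercept = np.polyfit(degranulation, cytokine, 1)
      predicted = slope * degranulation + intercept
      r_squared = 1 - np.sum((cytokine - predicted) ** 2) / np.sum((cytokine - cytokine.mean()) ** 2)
      print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, R^2 = {r_squared:.3f}")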

  12. Facultative control of matrix production optimizes competitive fitness in Pseudomonas aeruginosa PA14 biofilm models.

    PubMed

    Madsen, Jonas S; Lin, Yu-Cheng; Squyres, Georgia R; Price-Whelan, Alexa; de Santiago Torio, Ana; Song, Angela; Cornell, William C; Sørensen, Søren J; Xavier, Joao B; Dietrich, Lars E P

    2015-12-01

    As biofilms grow, resident cells inevitably face the challenge of resource limitation. In the opportunistic pathogen Pseudomonas aeruginosa PA14, electron acceptor availability affects matrix production and, as a result, biofilm morphogenesis. The secreted matrix polysaccharide Pel is required for pellicle formation and for colony wrinkling, two activities that promote access to O2. We examined the exploitability and evolvability of Pel production at the air-liquid interface (during pellicle formation) and on solid surfaces (during colony formation). Although Pel contributes to the developmental response to electron acceptor limitation in both biofilm formation regimes, we found variation in the exploitability of its production and necessity for competitive fitness between the two systems. The wild type showed a competitive advantage against a non-Pel-producing mutant in pellicles but no advantage in colonies. Adaptation to the pellicle environment selected for mutants with a competitive advantage against the wild type in pellicles but also caused a severe disadvantage in colonies, even in wrinkled colony centers. Evolution in the colony center produced divergent phenotypes, while adaptation to the colony edge produced mutants with clear competitive advantages against the wild type in this O2-replete niche. In general, the structurally heterogeneous colony environment promoted more diversification than the more homogeneous pellicle. These results suggest that the role of Pel in community structure formation in response to electron acceptor limitation is unique to specific biofilm models and that the facultative control of Pel production is required for PA14 to maintain optimum benefit in different types of communities.

  13. Asbestos/NESHAP adequately wet guidance

    SciTech Connect

    Shafer, R.; Throwe, S.; Salgado, O.; Garlow, C.; Hoerath, E.

    1990-12-01

    The Asbestos NESHAP requires facility owners and/or operators involved in demolition and renovation activities to control emissions of particulate asbestos to the outside air because no safe concentration of airborne asbestos has ever been established. The primary method used to control asbestos emissions is to adequately wet the Asbestos Containing Material (ACM) with a wetting agent prior to, during and after demolition/renovation activities. The purpose of the document is to provide guidance to asbestos inspectors and the regulated community on how to determine if friable ACM is adequately wet as required by the Asbestos NESHAP.

  14. Model selection and validation of extreme distribution by goodness-of-fit test based on conditional position

    NASA Astrophysics Data System (ADS)

    Abidin, Nahdiya Zainal; Adam, Mohd Bakri

    2014-09-01

    In Extreme Value Theory, the important aspect of model extrapolation is to model the extreme behavior, because the choice of the extreme value distribution affects the prediction to be made. Thus, model validation by means of a Goodness-of-fit (GoF) test is necessary. In this study, GoF tests were used to assess the fit of the Generalized Extreme Value (GEV) Type-II model against simulated observed values. The parameters μ, σ and ξ were estimated by maximum likelihood. Critical values based on conditional positions were developed by Monte Carlo simulation, and the powers of the tests were identified in a power study. Data distributed according to the GEV Type-II distribution were used to test whether the critical values developed are able to confirm the fit between the GEV Type-II model and the data. To confirm the fit, the test statistic of the GoF test should be smaller than the critical value.
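
    The Monte Carlo construction of critical values can be sketched generically as below; the statistic used here is a standard Anderson-Darling statistic on a fitted GEV (via SciPy's genextreme, whose shape-parameter sign convention is opposite to ξ), standing in for the conditional-position statistics developed in the paper.

      import numpy as np
      from scipy import stats

      def gev_critical_value(shape, loc, scale, n, alpha=0.05, n_sim=500, seed=0):
          """Simulate GoF statistics under a GEV null (re-fitting each sample by ML) and return
          the (1 - alpha) quantile as the critical value."""
          rng = np.random.default_rng(seed)
          sim_stats = []
          for _ in range(n_sim):
              sample = stats.genextreme.rvs(shape, loc=loc, scale=scale, size=n, random_state=rng)
              fit = stats.genextreme.fit(sample)                   # re-estimate mu, sigma, xi by ML
              u = np.sort(stats.genextreme.cdf(sample, *fit))
              i = np.arange(1, n + 1)
              a2 = -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))   # Anderson-Darling
              sim_stats.append(a2)
          return np.quantile(sim_stats, 1 - alpha)

      # A fit is accepted when the observed test statistic is smaller than this critical value.
      print(round(gev_critical_value(shape=-0.3, loc=0.0, scale=1.0, n=50), 3))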

  15. Fitting hidden Markov models of protein domains to a target species: application to Plasmodium falciparum

    PubMed Central

    2012-01-01

    Background Hidden Markov Models (HMMs) are a powerful tool for protein domain identification. The Pfam database notably provides a large collection of HMMs which are widely used for the annotation of proteins in new sequenced organisms. In Pfam, each domain family is represented by a curated multiple sequence alignment from which a profile HMM is built. In spite of their high specificity, HMMs may lack sensitivity when searching for domains in divergent organisms. This is particularly the case for species with a biased amino-acid composition, such as P. falciparum, the main causal agent of human malaria. In this context, fitting HMMs to the specificities of the target proteome can help identify additional domains. Results Using P. falciparum as an example, we compare approaches that have been proposed for this problem, and present two alternative methods. Because previous attempts strongly rely on known domain occurrences in the target species or its close relatives, they mainly improve the detection of domains which belong to already identified families. Our methods learn global correction rules that adjust amino-acid distributions associated with the match states of HMMs. These rules are applied to all match states of the whole HMM library, thus enabling the detection of domains from previously absent families. Additionally, we propose a procedure to estimate the proportion of false positives among the newly discovered domains. Starting with the Pfam standard library, we build several new libraries with the different HMM-fitting approaches. These libraries are first used to detect new domain occurrences with low E-values. Second, by applying the Co-Occurrence Domain Discovery (CODD) procedure we have recently proposed, the libraries are further used to identify likely occurrences among potential domains with higher E-values. Conclusion We show that the new approaches allow identification of several domain families previously absent in the P. falciparum proteome

  16. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    NASA Technical Reports Server (NTRS)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume, and clearance between the suit and body surface at reduced time and cost.

  17. The fitting of general force-of-infection models to wildlife disease prevalence data

    USGS Publications Warehouse

    Heisey, D.M.; Joly, D.O.; Messier, F.

    2006-01-01

    Researchers and wildlife managers increasingly find themselves in situations where they must deal with infectious wildlife diseases such as chronic wasting disease, brucellosis, tuberculosis, and West Nile virus. Managers are often charged with designing and implementing control strategies, and researchers often seek to determine factors that influence and control the disease process. All of these activities require the ability to measure some indication of a disease's foothold in a population and evaluate factors affecting that foothold. The most common type of data available to managers and researchers is apparent prevalence data. Apparent disease prevalence, the proportion of animals in a sample that are positive for the disease, might seem like a natural measure of disease's foothold, but several properties, in particular, its dependency on age structure and the biasing effects of disease-associated mortality, make it less than ideal. In quantitative epidemiology, the "force of infection," or infection hazard, is generally the preferred parameter for measuring a disease's foothold, and it can be viewed as the most appropriate way to "adjust" apparent prevalence for age structure. The typical ecology curriculum includes little exposure to quantitative epidemiological concepts such as cumulative incidence, apparent prevalence, and the force of infection. The goal of this paper is to present these basic epidemiological concepts and resulting models in an ecological context and to illustrate how they can be applied to understand and address basic epidemiological questions. We demonstrate a practical approach to solving the heretofore intractable problem of fitting general force-of-infection models to wildlife prevalence data using a generalized regression approach. We apply the procedures to Mycobacterium bovis (bovine tuberculosis) prevalence in bison (Bison bison) in Wood Buffalo National Park, Canada, and demonstrate strong age dependency in the force of

  18. Flowering genes in Metrosideros fit a broad herbaceous model encompassing Arabidopsis and Antirrhinum.

    PubMed

    Sreekantan, Lekha; Clemens, John; McKenzie, Marian J.; Lenton, John R.; Croker, Steve J.; Jameson, Paula E.

    2004-05-01

    Molecular studies were conducted on Metrosideros excelsa to determine if the current genetic models for flowering with regard to inflorescence and floral meristem identity genes in annual plants were applicable to a woody perennial. MEL, MESAP1 and METFL1, the fragments of LEAFY (LFY), APETALA1 (AP1) and TERMINAL FLOWER1 (TFL1) equivalents, respectively, were isolated from M. excelsa. Temporal expression patterns showed that MEL and MESAP1 exhibited a bimodal pattern of expression. Expression during early floral initiation in autumn was followed by down-regulation during winter, and up-regulation in spring as floral organogenesis occurred. Spatial expression patterns of MEL showed that it had greater similarity to FLORICAULA (FLO) than to LFY, whereas MESAP1 was more similar to AP1 than to SQUAMOSA. The interaction between MEL and METFL1 was more similar to the interaction between FLO and CENTRORADIALIS than to that between LFY and TFL1. Consequently, the three genes from M. excelsa fit a broader herbaceous model encompassing Antirrhinum as well as Arabidopsis, but with differences, such as the bimodal pattern of expression seen with MEL and MESAP1. In mid-winter, at the time when both MEL and MESAP1 were down-regulated, GA(1) was below the level of detection in M. excelsa buds. Even though application of gibberellin inhibits flowering in members of the Myrtaceae, MEL was responsive to gibberellin, with expression in juvenile plants up-regulated by GA(3). However, MESAP1 was not up-regulated, indicating that meristem competence was also probably required to promote flowering in M. excelsa. PMID:15086830

  19. The fitting of general force-of-infection models to wildlife disease prevalence data.

    PubMed

    Heisey, Dennis M; Joly, Damien O; Messier, François

    2006-09-01

    Researchers and wildlife managers increasingly find themselves in situations where they must deal with infectious wildlife diseases such as chronic wasting disease, brucellosis, tuberculosis, and West Nile virus. Managers are often charged with designing and implementing control strategies, and researchers often seek to determine factors that influence and control the disease process. All of these activities require the ability to measure some indication of a disease's foothold in a population and evaluate factors affecting that foothold. The most common type of data available to managers and researchers is apparent prevalence data. Apparent disease prevalence, the proportion of animals in a sample that are positive for the disease, might seem like a natural measure of disease's foothold, but several properties, in particular, its dependency on age structure and the biasing effects of disease-associated mortality, make it less than ideal. In quantitative epidemiology, the "force of infection," or infection hazard, is generally the preferred parameter for measuring a disease's foothold, and it can be viewed as the most appropriate way to "adjust" apparent prevalence for age structure. The typical ecology curriculum includes little exposure to quantitative epidemiological concepts such as cumulative incidence, apparent prevalence, and the force of infection. The goal of this paper is to present these basic epidemiological concepts and resulting models in an ecological context and to illustrate how they can be applied to understand and address basic epidemiological questions. We demonstrate a practical approach to solving the heretofore intractable problem of fitting general force-of-infection models to wildlife prevalence data using a generalized regression approach. We apply the procedures to Mycobacterium bovis (bovine tuberculosis) prevalence in bison (Bison bison) in Wood Buffalo National Park, Canada, and demonstrate strong age dependency in the force of

  20. Development of a Stellar Model-Fitting Pipeline for Asteroseismic Data from the TESS Mission

    NASA Astrophysics Data System (ADS)

    Metcalfe, Travis

    The launch of NASA's Kepler space telescope in 2009 revolutionized the quality and quantity of observational data available for asteroseismic analysis. Prior to the Kepler mission, solar-like oscillations were extremely difficult to observe, and data only existed for a handful of the brightest stars in the sky. With the necessity of studying one star at a time, the traditional approach to extracting the physical properties of the star from the observations was an uncomfortably subjective process. A variety of experts could use similar tools but come up with significantly different answers. Not only did this subjectivity have the potential to undermine the credibility of the technique, it also hindered the compilation of a uniform sample that could be used to draw broader physical conclusions from the ensemble of results. During a previous award from NASA, we addressed these issues by developing an automated and objective stellar model-fitting pipeline for Kepler data, and making it available through the Asteroseismic Modeling Portal (AMP). This community modeling tool has allowed us to derive reliable asteroseismic radii, masses and ages for large samples of stars (Metcalfe et al. 2014), but the most recent observations are so precise that we are now limited by systematic uncertainties associated with our stellar models. With a huge archive of Kepler data available for model validation, and the next planet-hunting satellite already approved for an expected launch in 2017, now is the time to incorporate what we have learned into the next generation of AMP. We propose to improve the reliability of our estimates of stellar properties over the next 4 years by collaborating with two open-source development projects that will augment and ultimately replace the stellar evolution and pulsation models that we now use in AMP. Our current treatment of the oscillations does not include the effects of radiative or convective heat-exchange, nor does it account for the influence

  1. Supervision of Student Teachers: How Adequate?

    ERIC Educational Resources Information Center

    Dean, Ken

    This study attempted to ascertain how adequately student teachers are supervised by college supervisors and supervising teachers. Questions to be answered were as follows: a) How do student teachers rate the adequacy of supervision given them by college supervisors and supervising teachers? and b) Are there significant differences between ratings…

  2. Small Rural Schools CAN Have Adequate Curriculums.

    ERIC Educational Resources Information Center

    Loustaunau, Martha

    The small rural school's foremost and largest problem is providing an adequate curriculum for students in a changing world. Often the small district cannot or is not willing to pay the per-pupil cost of curriculum specialists, specialized courses using expensive equipment no more than one period a day, and remodeled rooms to accommodate new…

  3. Toward More Adequate Quantitative Instructional Research.

    ERIC Educational Resources Information Center

    VanSickle, Ronald L.

    1986-01-01

    Sets an agenda for improving instructional research conducted with classical quantitative experimental or quasi-experimental methodology. Includes guidelines regarding the role of a social perspective, adequate conceptual and operational definition, quality instrumentation, control of threats to internal and external validity, and the use of…

  4. An Adequate Education Defined. Fastback 476.

    ERIC Educational Resources Information Center

    Thomas, M. Donald; Davis, E. E. (Gene)

    Court decisions historically have dealt with educational equity; now they are helping to establish "adequacy" as a standard in education. Legislatures, however, have been slow to enact remedies. One debate over education adequacy, though, is settled: Schools are not financed at an adequate level. This fastback is divided into three sections.…

  5. Funding the Formula Adequately in Oklahoma

    ERIC Educational Resources Information Center

    Hancock, Kenneth

    2015-01-01

    This report is a long-term simulation study that looks at how the ratio of state support to local support affects the number of school districts that break the common school funding formula, which in turn affects the equity of distribution to the common schools. After nearly two decades of adequately supporting the funding formula, Oklahoma…

  6. Traceable Calibration of a Radiation Thermometer in the Range 100 °C to 300 °C by Model Fitting

    NASA Astrophysics Data System (ADS)

    Olsen, Åge Andreas Falnes; Bergerud, Reidun Anita

    2015-08-01

    The Norwegian Metrology Service (JV) offers calibration of blackbodies, thermal imagers, and radiation thermometers to national clients. The temperature measurements are traceable to the ITS-90 with a set of reference blackbodies covering the range from 10 °C to 1700 °C. However, between 100 °C and 300 °C we do not have a direct measurement of the cavity temperature from a traceable sensor, and rely instead on a pyrometer to provide the reference temperature. The pyrometer is regularly calibrated externally at a handful of predefined temperatures. In this work we present a calibration scheme for the pyrometer that allows calibration at the JV premises: the pyrometer is set to record the measured radiation level at predefined temperatures below 100 °C and just above 300 °C. The calibration data are used to fit a Sakuma-Hattori model, and subsequent readings of the radiation level can be input to the model to extract the corresponding temperature. We present uncertainty budgets for the calibration data, which are subsequently used to estimate a combined uncertainty at arbitrary measured temperatures between 100 °C and 300 °C. Finally, temperatures obtained with the described scheme are compared with recent calibration values obtained externally, and we show that this is a reasonable way to achieve traceable calibration of the pyrometer with adequate precision and low uncertainty. The model fitting has the added benefit of providing a continuous calibration curve throughout the relevant temperature range rather than calibration values at only a handful of arbitrary points.
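
    The Planck-form Sakuma-Hattori equation is a standard interpolation model for radiation thermometers. A minimal sketch of fitting it and inverting it for temperature, using made-up calibration points and parameter values (this is not JV's procedure or uncertainty budget):

```python
import numpy as np
from scipy.optimize import curve_fit

C2 = 1.4388e-2  # second radiation constant, m*K

def sakuma_hattori(T, A, B, C):
    """Planck-form Sakuma-Hattori interpolation equation: signal vs temperature
    (T in kelvin; A, B, C are instrument-specific parameters)."""
    return C / np.expm1(C2 / (A * T + B))

def signal_to_temperature(S, A, B, C):
    """Invert the fitted model to recover temperature from a measured signal."""
    return (C2 / np.log(C / S + 1.0) - B) / A

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Hypothetical calibration points below 100 C and just above 300 C
    T_cal = np.array([313.15, 333.15, 353.15, 373.15, 573.15, 593.15])
    true = (1.0e-5, 2.0e-4, 1.0)                       # roughly a 10 um pyrometer
    S_cal = sakuma_hattori(T_cal, *true) * (1 + 1e-3 * rng.normal(size=T_cal.size))

    popt, pcov = curve_fit(sakuma_hattori, T_cal, S_cal, p0=(9e-6, 1e-4, 0.8))
    S_meas = sakuma_hattori(473.15, *true)             # "reading" taken at 200 C
    print(signal_to_temperature(S_meas, *popt))        # close to 473.15 K
```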

  7. The effects of a peer modeling intervention on cardiorespiratory fitness parameters and self-efficacy in obese adolescents.

    PubMed

    De Jesus, Stefanie; Prapavessis, Harry

    2013-01-01

    Inconsistencies exist in the assessment and interpretation of peak VO2 in the pediatric obese population, as cardiorespiratory fitness assessments are effort-dependent and psychological variables prevalent in this population must be addressed. This study examined the effect of a peer modeling intervention on cardiorespiratory fitness performance and task self-efficacy in obese youth completing a maximal treadmill test. Forty-nine obese (BMI ≥ 95th percentile for age and sex) youth were randomized to an experimental group (which received the intervention) or to a control group. The outcome variables were mean and variability scores for cardiorespiratory fitness (peak VO2, heart rate, duration, respiratory exchange ratio), rating of perceived exertion, and task self-efficacy. Irrespective of whether a mean or variability score was used, receiving the intervention was associated with non-significant trends in fitness parameters and task self-efficacy over time, favoring the experimental group. Cardiorespiratory fitness and task self-efficacy were moderately correlated at both time points. To elucidate the aforementioned findings, psychosocial factors affecting obese youth and opportunities to modify the peer modeling intervention should be considered. Addressing these factors has the potential to improve the standard of care regarding pretest patient education in clinical settings.

  8. The Effect of Fitting a Unidimensional IRT Model to Multidimensional Data in Content-Balanced Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Song, Tian

    2010-01-01

    This study investigates the effect of fitting a unidimensional IRT model to multidimensional data in content-balanced computerized adaptive testing (CAT). Unconstrained CAT with the maximum information item selection method is chosen as the baseline, and the performances of three content balancing procedures, the constrained CAT (CCAT), the…

  9. Limited-Information Goodness-of-Fit Testing of Diagnostic Classification Item Response Theory Models. CRESST Report 840

    ERIC Educational Resources Information Center

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2014-01-01

    It is a well-known problem in testing the fit of models to multinomial data that the full underlying contingency table will inevitably be sparse for tests of reasonable length and for realistic sample sizes. Under such conditions, full-information test statistics such as Pearson's X[superscript 2] and the likelihood ratio statistic…

  10. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
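
    As a loose illustration of the general idea of comparing a chi-square fit statistic to its degrees of freedom (not the adjusted statistic studied here, and assuming, unrealistically, that examinee abilities are known), a naive sketch:

```python
import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def chi2_over_df(responses, theta, a, b, n_groups=10):
    """Naive X^2/df item-fit check: group examinees by (assumed known) ability,
    then compare observed and model-expected numbers correct in each group.
    Values near 1 suggest adequate fit; large values suggest misfit."""
    order = np.argsort(theta)
    chi2 = 0.0
    for g in np.array_split(order, n_groups):
        p = p_2pl(theta[g], a, b)
        observed = responses[g].sum()
        expected = p.sum()
        variance = (p * (1.0 - p)).sum()
        chi2 += (observed - expected) ** 2 / variance
    return chi2 / n_groups      # df taken loosely as the number of groups

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    theta = rng.normal(size=5000)
    responses = rng.binomial(1, p_2pl(theta, 1.2, 0.3))
    print(chi2_over_df(responses, theta, 1.2, 0.3))    # near 1: model fits
    print(chi2_over_df(responses, theta, 0.4, -1.0))   # inflated: misfit
```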

  11. Promoting Fitness and Safety in Elementary Students: A Randomized Control Study of the Michigan Model for Health

    ERIC Educational Resources Information Center

    O'Neill, James M.; Clark, Jeffrey K.; Jones, James A.

    2016-01-01

    Background: In elementary grades, comprehensive health education curricula have demonstrated effectiveness in addressing singular health issues. The Michigan Model for Health (MMH) was implemented and evaluated to determine its impact on nutrition, physical fitness, and safety knowledge and skills. Methods: Schools (N = 52) were randomly assigned…

  12. A Person-Centered Approach to P-E Fit Questions Using a Multiple-Trait Model.

    ERIC Educational Resources Information Center

    De Fruyt, Filip

    2002-01-01

    Employed college students (n=401) completed the Self-Directed Search and NEO Personality Inventory-Revised. Person-environment fit across Holland's six personality types predicted job satisfaction and skill development. Five-Factor Model traits significantly predicted intrinsic career outcomes. Use of the five-factor, person-centered approach to…

  13. The Use of the L[subscript z] and L[subscript z]* Person-Fit Statistics and Problems Derived from Model Misspecification

    ERIC Educational Resources Information Center

    Meijer, Rob R.; Tendeiro, Jorge N.

    2012-01-01

    We extend a recent didactic by Magis, Raiche, and Beland on the use of the l[subscript z] and l[subscript z]* person-fit statistics. We discuss a number of possibly confusing details and show that it is important to first investigate item response theory model fit before assessing person fit. Furthermore, it is argued that appropriate…

  14. Facultative Control of Matrix Production Optimizes Competitive Fitness in Pseudomonas aeruginosa PA14 Biofilm Models

    PubMed Central

    Madsen, Jonas S.; Lin, Yu-Cheng; Squyres, Georgia R.; Price-Whelan, Alexa; de Santiago Torio, Ana; Song, Angela; Cornell, William C.; Sørensen, Søren J.

    2015-01-01

    As biofilms grow, resident cells inevitably face the challenge of resource limitation. In the opportunistic pathogen Pseudomonas aeruginosa PA14, electron acceptor availability affects matrix production and, as a result, biofilm morphogenesis. The secreted matrix polysaccharide Pel is required for pellicle formation and for colony wrinkling, two activities that promote access to O2. We examined the exploitability and evolvability of Pel production at the air-liquid interface (during pellicle formation) and on solid surfaces (during colony formation). Although Pel contributes to the developmental response to electron acceptor limitation in both biofilm formation regimes, we found variation in the exploitability of its production and necessity for competitive fitness between the two systems. The wild type showed a competitive advantage against a non-Pel-producing mutant in pellicles but no advantage in colonies. Adaptation to the pellicle environment selected for mutants with a competitive advantage against the wild type in pellicles but also caused a severe disadvantage in colonies, even in wrinkled colony centers. Evolution in the colony center produced divergent phenotypes, while adaptation to the colony edge produced mutants with clear competitive advantages against the wild type in this O2-replete niche. In general, the structurally heterogeneous colony environment promoted more diversification than the more homogeneous pellicle. These results suggest that the role of Pel in community structure formation in response to electron acceptor limitation is unique to specific biofilm models and that the facultative control of Pel production is required for PA14 to maintain optimum benefit in different types of communities. PMID:26431965

  15. Linking the Fits, Fitting the Links: Connecting Different Types of PO Fit to Attitudinal Outcomes

    ERIC Educational Resources Information Center

    Leung, Aegean; Chaturvedi, Sankalp

    2011-01-01

    In this paper we explore the linkages among various types of person-organization (PO) fit and their effects on employee attitudinal outcomes. We propose and test a conceptual model which links various types of fits--objective fit, perceived fit and subjective fit--in a hierarchical order of cognitive information processing and relate them to…

  16. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
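
    The report's FORTRAN backward-elimination programs are not reproduced here; a rough sketch of the same idea, greedy removal of interior knots under an AIC-like score, using SciPy's least-squares B-spline fit:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def backward_knot_elimination(x, y, knots, k=3):
    """Greedy backward elimination of interior B-spline knots: repeatedly drop
    the knot whose removal most improves an AIC-like score, stopping when no
    removal helps. x must be sorted in increasing order."""
    def score(t):
        spline = LSQUnivariateSpline(x, y, t, k=k)
        rss = float(np.sum((spline(x) - y) ** 2))
        n_params = len(t) + k + 1                      # spline coefficients
        return len(x) * np.log(rss / len(x)) + 2 * n_params

    knots = list(knots)
    best = score(knots)
    improved = True
    while improved and len(knots) > 1:
        improved = False
        trials = [score(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
        i_best = int(np.argmin(trials))
        if trials[i_best] < best:
            best = trials[i_best]
            knots.pop(i_best)
            improved = True
    return knots, LSQUnivariateSpline(x, y, knots, k=k)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    x = np.linspace(0.0, 10.0, 200)
    y = np.sin(x) + 0.1 * rng.normal(size=x.size)
    start = list(np.linspace(1.0, 9.0, 15))            # deliberately too many knots
    kept, spline = backward_knot_elimination(x, y, start)
    print(len(start), "->", len(kept), "interior knots")
```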

  17. Model-based analysis of multi-shell diffusion MR data for tractography: How to get over fitting problems

    PubMed Central

    Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ

    2012-01-01

    In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
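
    A sketch in the spirit of the proposed extension: if the diffusivity follows a Gamma distribution, the expected mono-exponential attenuation exp(-b*d) becomes its Laplace transform, (1 + b*theta)^(-k). Parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def gamma_attenuation(b, shape_k, scale_theta):
    """Expected exp(-b*d) when diffusivity d ~ Gamma(shape_k, scale_theta):
    the Laplace transform (1 + b*theta)**(-k)."""
    return (1.0 + b * scale_theta) ** (-shape_k)

def ball_and_stick_gamma(b, g, v, f_stick, shape_k, scale_theta):
    """Ball-and-stick signal with a Gamma distribution of diffusivities.

    b : b-values, shape (n,)          g : unit gradient directions, shape (n, 3)
    v : unit fibre orientation, (3,)  f_stick : stick (intra-axonal) fraction
    """
    cos2 = (g @ v) ** 2
    stick = gamma_attenuation(b * cos2, shape_k, scale_theta)   # anisotropic part
    ball = gamma_attenuation(b, shape_k, scale_theta)           # isotropic part
    return f_stick * stick + (1.0 - f_stick) * ball

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    g = rng.normal(size=(64, 3))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    b = np.repeat([1000.0, 3000.0], 32)               # two shells, s/mm^2
    v = np.array([0.0, 0.0, 1.0])
    # mean diffusivity = shape * scale = 0.7e-3 mm^2/s (illustrative)
    signal = ball_and_stick_gamma(b, g, v, 0.6, 3.0, 0.7e-3 / 3.0)
    print(signal.min(), signal.max())
```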

  18. Use of evolutionary information in the fitting of atomic level protein models in low resolution cryo-EM map of a protein assembly improves the accuracy of the fitting.

    PubMed

    Joseph, Agnel P; Swapna, Lakshmipuram S; Rakesh, Ramachandran; Srinivasan, Narayanaswamy

    2016-09-01

    Protein-protein interface residues, especially those at the core of the interface, exhibit higher conservation than residues in solvent exposed regions. Here, we explore the ability of this differential conservation to evaluate fittings of atomic models in low-resolution cryo-EM maps and select models from the ensemble of solutions that are often proposed by different model fitting techniques. As a prelude, using a non-redundant and high-resolution structural dataset involving 125 permanent and 95 transient complexes, we confirm that core interface residues are conserved significantly better than nearby non-interface residues and this result is used in the cryo-EM map analysis. From the analysis of inter-component interfaces in a set of fitted models associated with low-resolution cryo-EM maps of ribosomes, chaperones and proteasomes we note that a few poorly conserved residues occur at interfaces. Interestingly a few conserved residues are not in the interface, though they are close to the interface. These observations raise the potential requirement of refitting the models in the cryo-EM maps. We show that sampling an ensemble of models and selection of models with high residue conservation at the interface and in good agreement with the density helps in improving the accuracy of the fit. This study indicates that evolutionary information can serve as an additional input to improve and validate fitting of atomic models in cryo-EM density maps. PMID:27444391

  19. Use of evolutionary information in the fitting of atomic level protein models in low resolution cryo-EM map of a protein assembly improves the accuracy of the fitting.

    PubMed

    Joseph, Agnel P; Swapna, Lakshmipuram S; Rakesh, Ramachandran; Srinivasan, Narayanaswamy

    2016-09-01

    Protein-protein interface residues, especially those at the core of the interface, exhibit higher conservation than residues in solvent exposed regions. Here, we explore the ability of this differential conservation to evaluate fittings of atomic models in low-resolution cryo-EM maps and select models from the ensemble of solutions that are often proposed by different model fitting techniques. As a prelude, using a non-redundant and high-resolution structural dataset involving 125 permanent and 95 transient complexes, we confirm that core interface residues are conserved significantly better than nearby non-interface residues and this result is used in the cryo-EM map analysis. From the analysis of inter-component interfaces in a set of fitted models associated with low-resolution cryo-EM maps of ribosomes, chaperones and proteasomes we note that a few poorly conserved residues occur at interfaces. Interestingly a few conserved residues are not in the interface, though they are close to the interface. These observations raise the potential requirement of refitting the models in the cryo-EM maps. We show that sampling an ensemble of models and selection of models with high residue conservation at the interface and in good agreement with the density helps in improving the accuracy of the fit. This study indicates that evolutionary information can serve as an additional input to improve and validate fitting of atomic models in cryo-EM density maps.

  20. Use of Selected Goodness-of-Fit Statistics to Assess the Accuracy of a Model of Henry Hagg Lake, Oregon

    NASA Astrophysics Data System (ADS)

    Rounds, S. A.; Sullivan, A. B.

    2004-12-01

    Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. Calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003, modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model
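
    The statistics named in the abstract are straightforward to compute; a generic sketch (using one common definition of the coefficient of determination), not the CE-QUAL-W2 calibration code:

```python
import numpy as np

def goodness_of_fit(observed, simulated):
    """Basic goodness-of-fit statistics for comparing model output with data."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    resid = sim - obs
    return {
        "ME": resid.mean(),                            # mean error (bias)
        "MAE": np.abs(resid).mean(),                   # mean absolute error
        "RMSE": np.sqrt((resid ** 2).mean()),          # root mean square error
        # Coefficient of determination, here in the Nash-Sutcliffe-style form
        "R2": 1.0 - (resid ** 2).sum() / ((obs - obs.mean()) ** 2).sum(),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    obs = 15 + 5 * np.sin(np.linspace(0, 6, 100))      # e.g. water temperature
    sim = obs + rng.normal(0, 0.4, size=obs.size) + 0.1  # model with slight bias
    print(goodness_of_fit(obs, sim))
```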

  1. Quasispecies on Fitness Landscapes.

    PubMed

    Schuster, Peter

    2016-01-01

    Selection-mutation dynamics is studied as adaptation and neutral drift on abstract fitness landscapes. Various models of fitness landscapes are introduced and analyzed with respect to the stationary mutant distributions adopted by populations upon them. The concept of quasispecies is introduced, and the error threshold phenomenon is analyzed. Complex fitness landscapes with large scatter of fitness values are shown to sustain error thresholds. The phenomenological theory of the quasispecies introduced in 1971 by Eigen is compared to approximation-free numerical computations. The concept of strong quasispecies, understood as mutant distributions that are especially stable against changes in mutation rates, is presented. The role of fitness-neutral genotypes in quasispecies is discussed.
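
    A minimal numerical sketch of the Eigen quasispecies model on short binary sequences with a single-peak fitness landscape: the stationary mutant distribution is the leading eigenvector of the mutation-selection matrix, and the master sequence's share shrinks as the per-site error rate grows toward the error threshold.

```python
import numpy as np
from itertools import product

def quasispecies_stationary(fitness, error_rate, seq_length):
    """Stationary mutant distribution of the Eigen model on binary sequences:
    leading eigenvector of Q @ diag(f), with per-site copying error `error_rate`."""
    seqs = list(product([0, 1], repeat=seq_length))
    n = len(seqs)
    u = error_rate
    # Q[i, j]: probability that replication of sequence j yields sequence i
    Q = np.empty((n, n))
    for i, si in enumerate(seqs):
        for j, sj in enumerate(seqs):
            d = sum(a != b for a, b in zip(si, sj))    # Hamming distance
            Q[i, j] = (u ** d) * ((1 - u) ** (seq_length - d))
    W = Q @ np.diag(fitness)
    eigvals, eigvecs = np.linalg.eig(W)
    lead = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    return lead / lead.sum()

if __name__ == "__main__":
    L = 4
    fitness = np.ones(2 ** L)
    fitness[0] = 10.0                 # single-peak landscape: all-zero master
    for u in (0.01, 0.1, 0.3):        # the master's share declines as u grows
        dist = quasispecies_stationary(fitness, u, L)
        print(u, round(dist[0], 3))
```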

  2. Testing the Youth Physical Activity Promotion Model: Fatness and Fitness as Enabling Factors

    ERIC Educational Resources Information Center

    Chen, Senlin; Welk, Gregory J.; Joens-Matre, Roxane R.

    2014-01-01

    As the prevalence of childhood obesity increases, it is important to examine possible differences in psychosocial correlates of physical activity between normal weight and overweight children. The study examined fatness (weight status) and (aerobic) fitness as Enabling factors related to youth physical activity within the Youth Physical Activity…

  3. Implementation of a Personal Fitness Unit Using the Personalized System of Instruction Model

    ERIC Educational Resources Information Center

    Prewitt, Steven; Hannon, James C.; Colquitt, Gavin; Brusseau, Timothy A.; Newton, Maria; Shaw, Janet

    2015-01-01

    Levels of physical activity and health-related fitness (HRF) are decreasing among adolescents in the United States. Several interventions have been implemented to reverse this downtrend. Traditionally, physical educators incorporate a direct instruction (DI) strategy, with teaching potentially leading students to disengage during class. An…

  4. Using Fit Indexes to Select a Covariance Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.

    2012-01-01

    This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…

  5. Gray Matter Correlates of Fluid, Crystallized, and Spatial Intelligence: Testing the P-FIT Model

    ERIC Educational Resources Information Center

    Colom, Roberto; Haier, Richard J.; Head, Kevin; Alvarez-Linera, Juan; Quiroga, Maria Angeles; Shih, Pei Chun; Jung, Rex E.

    2009-01-01

    The parieto-frontal integration theory (P-FIT) nominates several areas distributed throughout the brain as relevant for intelligence. This theory was derived from previously published studies using a variety of both imaging methods and tests of cognitive ability. Here we test this theory in a new sample of young healthy adults (N = 100) using a…

  6. CALORIMETRY OF GRB 030329: SIMULTANEOUS MODEL FITTING TO THE BROADBAND RADIO AFTERGLOW AND THE OBSERVED IMAGE EXPANSION RATE

    SciTech Connect

    Mesler, Robert A.; Pihlstroem, Ylva M.

    2013-09-01

    We perform calorimetry on the bright gamma-ray burst GRB 030329 by simultaneously fitting the broadband radio afterglow and the observed afterglow image size to a semi-analytic MHD and afterglow emission model. Our semi-analytic method is valid in both the relativistic and non-relativistic regimes, and incorporates a model of the interstellar scintillation that substantially affects the broadband afterglow below 10 GHz. The model is fitted to archival measurements of the afterglow flux from 1 day to 8.3 yr after the burst. Values for the initial burst parameters are determined and the nature of the circumburst medium is explored. Additionally, direct measurements of the lateral expansion rate of the radio afterglow image size allow us to estimate the initial Lorentz factor of the jet.

  7. Kompaneets Model Fitting of the Orion-Eridanus Superbubble. II. Thinking Outside of Barnard’s Loop

    NASA Astrophysics Data System (ADS)

    Pon, Andy; Ochsendorf, Bram B.; Alves, João; Bally, John; Basu, Shantanu; Tielens, Alexander G. G. M.

    2016-08-01

    The Orion star-forming region is the nearest active high-mass star-forming region and has created a large superbubble, the Orion–Eridanus superbubble. Recent work by Ochsendorf et al. has extended the accepted boundary of the superbubble. We fit Kompaneets models of superbubbles expanding in exponential atmospheres to the new larger shape of the Orion–Eridanus superbubble. We find that this larger morphology of the superbubble is consistent with the evolution of the superbubble being primarily controlled by expansion into the exponential Galactic disk ISM if the superbubble is oriented with the Eridanus side farther from the Sun than the Orion side. Unlike previous Kompaneets model fits that required abnormally small scale heights for the Galactic disk (<40 pc), we find morphologically consistent models with scale heights of 80 pc, similar to that expected for the Galactic disk.

  8. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    PubMed

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
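
    A stripped-down sketch of the arrival-latency idea behind such model-based psychometric functions, assuming exponential channel latencies and ignoring the decision threshold, response errors, and lapses handled by the published routines (so this is not their full model):

```python
import numpy as np
from scipy.optimize import curve_fit

def p_first(soa, rate_a, rate_b):
    """Probability that stimulus A is perceived before stimulus B when the two
    channel latencies are exponential with rates rate_a and rate_b, and B's
    onset lags A's by `soa` seconds (negative soa: B leads)."""
    soa = np.asarray(soa, dtype=float)
    total = rate_a + rate_b
    late_b = 1.0 - (rate_b / total) * np.exp(-rate_a * soa)    # B presented later
    early_b = (rate_a / total) * np.exp(rate_b * soa)          # B presented earlier
    return np.where(soa >= 0, late_b, early_b)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    soas = np.linspace(-0.2, 0.2, 13)                  # seconds
    true_rates = (25.0, 18.0)                          # per second (illustrative)
    n_trials = 200
    observed = rng.binomial(n_trials, p_first(soas, *true_rates)) / n_trials
    popt, _ = curve_fit(p_first, soas, observed, p0=(10.0, 10.0),
                        bounds=(1.0, 200.0))
    print(popt)                                        # close to (25, 18)
```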

  9. Modeling the Raman spectrum of graphitic material in rock samples with fluorescence backgrounds: accuracy of fitting and uncertainty estimation.

    PubMed

    Gasda, Patrick J; Ogliore, Ryan C

    2014-01-01

    We propose a robust technique called Savitzky-Golay second-derivative (SGSD) fitting for modeling the in situ Raman spectrum of graphitic materials in rock samples such as carbonaceous chondrite meteorites. In contrast to non-derivative techniques, with assumed locally linear or nth-order polynomial fluorescence backgrounds, SGSD produces consistently good fits of spectra with variable background fluorescence of any slowly varying form, without fitting or subtracting the background. In combination with a Monte Carlo technique, SGSD calculates Raman parameters (such as peak width and intensity) with robust uncertainties. To explain why SGSD fitting is more accurate, we compare how different background subtraction techniques model the background fluorescence with the wide and overlapping peaks present in a real Raman spectrum of carbonaceous material. Then, the utility of SGSD is demonstrated with a set of real and simulated data compared to commonly used linear background techniques. Researchers may find the SGSD technique useful if their spectra contain intense background interference with unknown functional form or wide overlapping peaks, and when the uncertainty of the spectral data is not well understood.
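
    The core of the SGSD idea, that a smoothed second derivative suppresses slowly varying fluorescence while preserving the curvature of comparatively narrow Raman bands, can be illustrated with SciPy's Savitzky-Golay filter on synthetic data (the paper's full fitting and Monte Carlo uncertainty estimation are not reproduced):

```python
import numpy as np
from scipy.signal import savgol_filter

def gaussian(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

if __name__ == "__main__":
    # Synthetic Raman-like spectrum: two broad, overlapping graphitic bands
    # (D- and G-like) on a strong, slowly varying fluorescence background.
    x = np.linspace(1000, 1800, 800)                   # Raman shift, cm^-1
    peaks = 400 * gaussian(x, 1350, 60) + 600 * gaussian(x, 1590, 40)
    background = 5000 + 3 * (x - 1000) + 2e-3 * (x - 1400) ** 2
    rng = np.random.default_rng(9)
    spectrum = peaks + background + rng.normal(0, 5, x.size)

    # Smoothed second derivative: constant and linear background terms vanish,
    # slowly varying curvature is strongly suppressed, while the curvature of
    # the narrower bands survives as pronounced negative dips at their centers.
    d2 = savgol_filter(spectrum, window_length=51, polyorder=3, deriv=2,
                       delta=x[1] - x[0])
    print(x[np.argmin(d2)])                            # near ~1590 cm^-1
```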

  10. Fitting mathematical models to lactation curves from Holstein cows in the southwestern region of the state of Parana, Brazil.

    PubMed

    Ferreira, Abílio G T; Henrique, Douglas S; Vieira, Ricardo A M; Maeda, Emilyn M; Valotto, Altair A

    2015-03-01

    The objective of this study was to evaluate four mathematical models with regard to their fit to lactation curves of Holstein cows from herds raised in the southwestern region of the state of Parana, Brazil. Initially, 42,281 milk production records from 2005 to 2011 were obtained from "Associação Paranaense de Criadores de Bovinos da Raça Holandesa (APCBRH)". Data lacking dates of drying and total milk production at 305 days of lactation were excluded, resulting in a remaining 15,142 records corresponding to 2,441 Holstein cows. Data were sorted according to the parity order (ranging from one to six), and within each parity order the animals were divided into quartiles (Q25%, Q50%, Q75% and Q100%) corresponding to 305-day lactation yield. Within each parity order, for each quartile, four mathematical models were fitted, two of which were predominantly empirical (Brody and Wood) whereas the other two presented more mechanistic characteristics (the Dijkstra and Pollott models). The quality of fit was evaluated by the corrected Akaike information criterion. The Wood model showed the best fit in almost all evaluated situations and, therefore, may be considered the most suitable model to describe, at least empirically, the lactation curves of Holstein cows raised in southwestern Parana.
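
    The Wood curve is the empirical model the study found to fit best; a sketch of fitting it to hypothetical daily-yield data and scoring the fit with a corrected AIC (the Brody, Dijkstra, and Pollott models are omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: daily milk yield as a function of days in milk."""
    return a * t ** b * np.exp(-c * t)

def aicc(y, y_hat, n_params):
    """Corrected Akaike information criterion for a least-squares fit."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    aic = n * np.log(rss / n) + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / (n - n_params - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    days = np.arange(5, 306, 10, dtype=float)
    true = (15.0, 0.25, 0.004)                         # made-up parameter values
    milk = wood(days, *true) + rng.normal(0, 1.0, days.size)
    popt, _ = curve_fit(wood, days, milk, p0=(12.0, 0.2, 0.003))
    print(popt, aicc(milk, wood(days, *popt), 3))
```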

  11. Spectral fits with TCAF model : A global understanding of both temporal and spectral properties of black hole sources

    NASA Astrophysics Data System (ADS)

    Debnath, Dipak; Sarathi Pal, Partha; Chakrabarti, Sandip Kumar; Mondal, Santanu; Jana, Arghajit; Chatterjee, Debjit; Molla, Aslam Ali

    2016-07-01

    There are many theoretical and phenomenological models in the literature that explain the physics of accretion around black holes (BHs). Some of these models assume ad hoc components, which do not necessarily follow from physical equations, to explain different timing and spectral aspects of black hole candidates (BHCs). Chakrabarti and his collaborators, on the other hand, have argued over the last two decades that the spectral and timing properties of BHCs should not be treated separately, since variations in these properties arise only from variations in the two-component (Keplerian and sub-Keplerian) accretion flow rates and the Compton cloud parameters. Recently, after the inclusion of the two-component advective flow (TCAF) model into HEASARC's spectral analysis software package XSPEC as an additive local model, we found that TCAF is quite capable of describing the underlying accretion flow dynamics around BHs through its spectrally fitted physical parameters. The properties of different spectral states and their transitions during an outburst of a transient BHC become clearer. A strong correlation between spectral and timing properties can also be seen in the Accretion Rate Ratio Intensity Diagram (ARRID), where transitions between different spectral states are prominent. One can also predict the frequency of the dominant quasi-periodic oscillation (QPO) from the TCAF-fitted shock parameters, and even predict the most probable mass range of an unknown BHC from TCAF fits. This gives us confidence that the description of the accretion process is clearer than ever before.

  12. Measures of relative fitness of social behaviors in finite structured population models.

    PubMed

    Tarnita, Corina E; Taylor, Peter D

    2014-10-01

    How should we measure the relative selective advantage of different behavioral strategies? The various approaches to this question have fallen into one of the following categories: the fixation probability of a mutant allele in a wild type population, some measures of gene frequency and gene frequency change, and a formulation of the inclusive fitness effect. Countless theoretical studies have examined the relationship between these approaches, and it has generally been thought that, under standard simplifying assumptions, they yield equivalent results. Most of this theoretical work, however, has assumed homogeneity of the population interaction structure--that is, that all individuals are equivalent. We explore the question of selective advantage in a general (heterogeneous) population and show that, although appropriate measures of fixation probability and gene frequency change are equivalent, they are not, in general, equivalent to the inclusive fitness effect. The latter does not reflect effects of selection acting via mutation, which can arise on heterogeneous structures, even for low mutation. Our theoretical framework provides a transparent analysis of the different biological factors at work in the comparison of these fitness measures and suggests that their theoretical and empirical use needs to be revised and carefully grounded in a more general theory.

  13. Modeling and Maximum Likelihood Fitting of Gamma-Ray and Radio Light Curves of Millisecond Pulsars Detected with Fermi

    NASA Technical Reports Server (NTRS)

    Johnson, T. J.; Harding, A. K.; Venter, C.

    2012-01-01

    Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outermagnetospheric emission models assuming the retarded vacuum dipole magnetic field using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.

  14. Reproductive fitness and dietary choice behavior of the genetic model organism Caenorhabditis elegans under semi-natural conditions.

    PubMed

    Freyth, Katharina; Janowitz, Tim; Nunes, Frank; Voss, Melanie; Heinick, Alexander; Bertaux, Joanne; Scheu, Stefan; Paul, Rüdiger J

    2010-10-01

    Laboratory breeding conditions of the model organism C. elegans do not correspond with the conditions in its natural soil habitat. To assess the consequences of the differences in environmental conditions, the effects of air composition, medium and bacterial food on reproductive fitness and/or dietary-choice behavior of C. elegans were investigated. The reproductive fitness of C. elegans was maximal under oxygen deficiency and not influenced by a high fractional share of carbon dioxide. In media approximating natural soil structure, reproductive fitness was much lower than in standard laboratory media. In seminatural media, the reproductive fitness of C. elegans was low with the standard laboratory food bacterium E. coli (γ-Proteobacteria), but significantly higher with C. arvensicola (Bacteroidetes) and B. tropica (β-Proteobacteria) as food. Dietary-choice experiments in semi-natural media revealed a low preference of C. elegans for E. coli but significantly higher preferences for C. arvensicola and B. tropica (among other bacteria). Dietary-choice experiments under quasi-natural conditions, which were feasible by fluorescence in situ hybridization (FISH) of bacteria, showed a high preference of C. elegans for Cytophaga-Flexibacter-Bacteroides, Firmicutes, and β-Proteobacteria, but a low preference for γ-Proteobacteria. The results show that data on C. elegans under standard laboratory conditions have to be carefully interpreted with respect to their biological significance.

  15. Goodness-of-fit tests for the additive risk model with (p > 2)-dimensional time-invariant covariates.

    PubMed

    Kim, J; Song, M S; Lee, S

    1998-01-01

    This paper presents methods for checking the goodness-of-fit of the additive risk model with (p > 2)-dimensional time-invariant covariates. The procedures are an extension of Kim and Lee (1996), who developed a test to assess the additive risk assumption for two-sample censored data. We apply the proposed tests to survival data from South Wales nickel refinery workers. Simulation studies are carried out to investigate the performance of the proposed tests for practical sample sizes. PMID:9880997

  16. Impaired Virulence and Fitness of a Colistin-Resistant Clinical Isolate of Acinetobacter baumannii in a Rat Model of Pneumonia

    PubMed Central

    Hraiech, Sami; Roch, Antoine; Lepidi, Hubert; Atieh, Thérèse; Audoly, Gilles; Rolain, Jean-Marc; Raoult, Didier; Brunel, Jean-Michel; Papazian, Laurent

    2013-01-01

    We compared the fitness and lung pathogenicity of two isogenic clinical isolates of Acinetobacter baumannii, one resistant (ABCR) and the other susceptible (ABCS) to colistin. In vitro, ABCR exhibited slower growth kinetics than ABCS. In a rat model of pneumonia, ABCR was associated with less pronounced signs of infection (lung bacterial count, systemic dissemination, and lung damage) and a better outcome (ABCR and ABCS mortality rates, 20 and 50%, respectively [P = 0.03]). PMID:23836181

  17. A Mixed Method Study Testing Data-Model Fit of a Retention Model for Latino/a Students at Urban Universities

    ERIC Educational Resources Information Center

    Torres, Vasti

    2006-01-01

    This study presents the conceptualization and subsequent model fit analysis of a retention model for Latino/a students at urban commuter universities. The three institutions involved in the study represent different environments for Latino/a students. Two are Hispanic Serving Institutions (HSI) and the third represents a predominantly White…

  18. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    PubMed Central

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  19. Comparison of model fitting and gated integration for pulse shape discrimination and spectral estimation of digitized lanthanum halide scintillator pulses

    NASA Astrophysics Data System (ADS)

    McFee, J. E.; Mosquera, C. M.; Faust, A. A.

    2016-08-01

    An analysis of digitized pulse waveforms from experiments with LaBr3(Ce) and LaCl3(Ce) detectors is presented. Pulse waveforms from both scintillator types were captured in the presence of 22Na and 60Co sources and also background alone. Two methods to extract pulse shape discrimination (PSD) parameters and estimate energy spectra were compared. The first involved least squares fitting of the pulse waveforms to a physics-based model of one or two exponentially modified Gaussian functions. The second was the conventional gated integration method. The model fitting method produced better PSD than gated integration for LaCl3(Ce) and higher resolution energy spectra for both scintillator types. A disadvantage to the model fitting approach is that it is more computationally complex and about 5 times slower. LaBr3(Ce) waveforms had a single decay component and showed no ability for alpha/electron PSD. LaCl3(Ce) was observed to have short and long decay components and alpha/electron discrimination was observed.

  20. Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models

    NASA Astrophysics Data System (ADS)

    Wong, K.; Ellul, C.

    2016-10-01

    Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
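
    The counting metrics listed above are simple to compute once a mesh representation is fixed; a sketch assuming each building is given as a vertex list and a face list (footprint area and minimum feature length are omitted):

```python
from statistics import mean

def building_metrics(buildings):
    """Compute simple geometry-based metrics for a set of building meshes.

    `buildings` is a list of (vertices, faces) pairs, where vertices is a list
    of (x, y, z) tuples and faces is a list of vertex-index tuples."""
    v_counts = [len(v) for v, _ in buildings]
    f_counts = [len(f) for _, f in buildings]
    return {
        "mean_vertices_per_building": mean(v_counts),
        "mean_faces_per_building": mean(f_counts),
        "vertex_face_ratio": sum(v_counts) / sum(f_counts),
    }

if __name__ == "__main__":
    # Toy dataset: a unit-cube "building" repeated twice
    cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    cube_f = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
              (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]
    print(building_metrics([(cube_v, cube_f), (cube_v, cube_f)]))
```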

  1. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain.

    PubMed

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  2. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain.

    PubMed

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction.

  3. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  4. Evaluation of RGP Contact Lens Fitting in Keratoconus Patients Using Hierarchical Fuzzy Model and Genetic Algorithms.

    PubMed

    Falahati Marvast, Fatemeh; Arabalibeik, Hossein; Alipour, Fatemeh; Sheikhtaheri, Abbas; Nouri, Leila; Soozande, Mehdi; Yarmahmoodi, Masood

    2016-01-01

    Keratoconus is a progressive non-inflammatory disease of the cornea. Rigid gas permeable contact lenses (RGPs) are prescribed when the disease progresses. Contact lens fitting and assessment is very difficult in these patients and is a concern of ophthalmologists and optometrists. In this study, a hierarchical fuzzy system is used to capture the expertise of experienced ophthalmologists during the lens evaluation phase of prescription. The system is fine-tuned using genetic algorithms. Sensitivity, specificity and accuracy of the final system are 88.9%, 94.4% and 92.6% respectively. PMID:27046564

  5. Fabrication of single-electron devices using dispersed nanoparticles and fitting experimental results to values calculated based on percolation model

    NASA Astrophysics Data System (ADS)

    Moriya, Masataka; Huong, Tran Thi Thu; Matsumoto, Kazuhiko; Shimada, Hiroshi; Kimura, Yasuo; Hirano-Iwata, Ayumi; Mizugaki, Yoshinao

    2016-08-01

    We calculated the connection probability, PC, between electrodes on the basis of a triangular-lattice percolation model to investigate the effect of variation in the electrode separation and the electrode width on fabricated capacitively coupled single-electron transistors. Single-electron devices were fabricated via the dispersion of gold nanoparticles (NPs). The NPs were dispersed by repeatedly dropping an NP solution onto a chip. The experimental results were fitted to the calculated values, and the fitting parameters were compared with the occupation probability, PO, estimated for one drop of the NP solution. Curves of the drain current versus the drain-source voltage (ID-VDS) measured at 77 K showed that the current was suppressed at approximately 0 V.
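
    A Monte Carlo sketch of estimating a connection probability between two electrodes from randomly occupied lattice sites; for simplicity it uses a square lattice and breadth-first search rather than the paper's triangular-lattice model:

```python
import numpy as np
from collections import deque

def connection_probability(width, length, p_occupy, n_trials=1000, seed=0):
    """Monte Carlo estimate of the probability that randomly occupied sites
    form a connected path between the two short sides of a lattice."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        occupied = rng.random((width, length)) < p_occupy
        seen = np.zeros_like(occupied, dtype=bool)
        # Start a breadth-first search from every occupied site in column 0
        queue = deque((i, 0) for i in range(width) if occupied[i, 0])
        for i, _j in queue:
            seen[i, 0] = True
        connected = False
        while queue:
            i, j = queue.popleft()
            if j == length - 1:
                connected = True
                break
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < width and 0 <= nj < length \
                        and occupied[ni, nj] and not seen[ni, nj]:
                    seen[ni, nj] = True
                    queue.append((ni, nj))
        hits += connected
    return hits / n_trials

if __name__ == "__main__":
    for p in (0.4, 0.6, 0.8):      # connection probability rises with coverage
        print(p, connection_probability(width=10, length=20, p_occupy=p))
```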

  6. Influence of a health-related physical fitness model on students' physical activity, perceived competence, and enjoyment.

    PubMed

    Fu, You; Gao, Zan; Hannon, James; Shultz, Barry; Newton, Maria; Sibthorp, Jim

    2013-12-01

    This study was designed to explore the effects of a health-related physical fitness physical education model on students' physical activity, perceived competence, and enjoyment. 61 students (25 boys, 36 girls; M age = 12.6 yr., SD = 0.6) were assigned to two groups (health-related physical fitness physical education group, and traditional physical education group), and participated in one 50-min. weekly basketball class for 6 wk. Students' in-class physical activity was assessed using NL-1000 pedometers. The physical subscale of the Perceived Competence Scale for Children was employed to assess perceived competence, and children's enjoyment was measured using the Sport Enjoyment Scale. The findings suggest that students in the intervention group increased their perceived competence, enjoyment, and physical activity over a 6-wk. intervention, while the comparison group simply increased physical activity over time. Children in the intervention group had significantly greater enjoyment.

  7. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    NASA Astrophysics Data System (ADS)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

    Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. Using an X-Ray microCT-derived soil microaggregate physical model combined

  8. Seasonal and nonseasonal dynamics of Aedes aegypti in Rio de Janeiro, Brazil: fitting mathematical models to trap data.

    PubMed

    Lana, Raquel M; Carneiro, Tiago G S; Honório, Nildimar A; Codeço, Cláudia T

    2014-01-01

    Mathematical models suggest that seasonal transmission and temporary cross-immunity between serotypes can determine the characteristic multi-year dynamics of dengue fever. Seasonal transmission is attributed to the effect of climate on mosquito abundance and within host virus dynamics. In this study, we validate a set of temperature and density dependent entomological models that are built-in components of most dengue models by fitting them to time series of ovitrap data from three distinct neighborhoods in Rio de Janeiro, Brazil. The results indicate that neighborhoods differ in the strength of the seasonal component and that commonly used models tend to assume more seasonal structure than found in data. Future dengue models should investigate the impact of heterogeneous levels of seasonality on dengue dynamics as it may affect virus maintenance from year to year, as well as the risk of disease outbreaks. PMID:23933186

  9. New model fit functions of the plasmapause location determined using THEMIS observations during the ascending phase of Solar Cycle 24

    NASA Astrophysics Data System (ADS)

    Cho, Junghee; Lee, Dae-Young; Kim, Jin-Hee; Shin, Dae-Kyu; Kim, Kyung-Chan; Turner, Drew

    2015-04-01

    It is well known that the plasmapause is influenced by the solar wind and magnetospheric conditions. Empirical models of its location have been previously developed such as those by O'Brien and Moldwin (2003) and Larsen et al. (2006). In this study, we identified the locations of the plasmapause using the plasma density data obtained from the Time History of Events and Macroscale Interactions during Substorms (THEMIS) satellites. We used the data for the period (2008-2012) corresponding to the ascending phase of Solar Cycle 24. Our database includes data from over a year of unusually weak solar wind conditions, correspondingly covering the plasmapause locations in a wider L range than those in previous studies. It also contains many coronal hole stream intervals during which the plasmasphere is eroded and recovers over a timescale of several days. The plasmapause was rigorously determined by requiring a density gradient by a factor of 15 within a radial distance of 0.5 L. We first determined the statistical correlation of the plasmapause locations with several solar wind parameters as well as geomagnetic indices. We found that the plasmapause locations are well correlated with the solar wind speed and the interplanetary magnetic field Bz, therefore the y component of the convective electric field, and some energy coupling functions such as the well-known Akasofu's epsilon parameter. The plasmapause locations are also highly correlated with the geomagnetic indices, Dst, AE, and Kp, as recognized previously. Finally, we suggest new model fit functions for the plasmapause locations in terms of the solar wind parameters and geomagnetic indices. When applied to a new data interval outside the model training interval, our model fit functions work better than existing ones. The new model fit functions developed here extend the range of conditions from those used in previous works.
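
    For illustration only, a minimal sketch of the kind of fit the study describes: a single-index, linear plasmapause-location model in the spirit of O'Brien and Moldwin (2003). The functional form, the Kp values, and the Lpp values below are hypothetical placeholders, not the authors' THEMIS database or their final fit functions.

        # Minimal sketch: fit a linear plasmapause-location model Lpp = a + b*Kp.
        # Functional form and data are illustrative, not the paper's actual model.
        import numpy as np

        # hypothetical plasmapause crossings: (Kp index, observed Lpp in Earth radii)
        kp  = np.array([1.0, 1.7, 2.3, 3.0, 3.7, 4.3, 5.0, 6.0])
        lpp = np.array([5.9, 5.6, 5.2, 4.8, 4.5, 4.1, 3.8, 3.3])

        b, a = np.polyfit(kp, lpp, 1)          # least-squares slope and intercept
        r = np.corrcoef(kp, lpp)[0, 1]         # correlation screening, as in the study
        print(f"Lpp = {a:.2f} {b:+.2f}*Kp   (r = {r:.2f})")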

  10. Fit Assessment of N95 Filtering-Facepiece Respirators in the U.S. Centers for Disease Control and Prevention Strategic National Stockpile

    PubMed Central

    Bergman, Michael; Zhuang, Ziqing; Brochu, Elizabeth; Palmiero, Andrew

    2016-01-01

    National Institute for Occupational Safety and Health (NIOSH)-approved N95 filtering-facepiece respirators (FFR) are currently stockpiled by the U.S. Centers for Disease Control and Prevention (CDC) for emergency deployment to healthcare facilities in the event of a widespread emergency such as an influenza pandemic. This study assessed the fit of N95 FFRs purchased for the CDC Strategic National Stockpile. The study addresses the question of whether the fit achieved by specific respirator sizes relates to facial size categories as defined by two NIOSH fit test panels. Fit test data were analyzed from 229 test subjects who performed a nine-donning fit test on seven N95 FFR models using a quantitative fit test protocol. An initial respirator model selection process was used to determine if the subject could achieve an adequate fit on a particular model; subjects then tested the adequately fitting model for the nine-donning fit test. Only data for models which provided an adequate initial fit (through the model selection process) for a subject were analyzed for this study. For the nine-donning fit test, six of the seven respirator models accommodated the fit of subjects (as indicated by geometric mean fit factor > 100) for not only the intended NIOSH bivariate and PCA panel sizes corresponding to the respirator size, but also for other panel sizes which were tested for each model. The model which showed poor performance may not be accurately represented because only two subjects passed the initial selection criteria to use this model. Findings are supportive of the current selection of facial dimensions for the new NIOSH panels. The various FFR models selected for the CDC Strategic National Stockpile provide a range of sizing options to fit a variety of facial sizes. PMID:26877587

  11. A Paradox between IRT Invariance and Model-Data Fit When Utilizing the One-Parameter and Three-Parameter Models

    ERIC Educational Resources Information Center

    Custer, Michael; Sharairi, Sid; Yamazaki, Kenji; Signatur, Diane; Swift, David; Frey, Sharon

    2008-01-01

    The present study compared item and ability invariance as well as model-data fit between the one-parameter (1PL) and three-parameter (3PL) Item Response Theory (IRT) models utilizing real data across five grades; second through sixth as well as simulated data at second, fourth and sixth grade. At each grade, the 1PL and 3PL IRT models were run…

  12. Is a vegetarian diet adequate for children.

    PubMed

    Hackett, A; Nathan, I; Burgess, L

    1998-01-01

    The number of people who avoid eating meat is growing, especially among young people. Benefits to health from a vegetarian diet have been reported in adults, but it is not clear to what extent these benefits are due to diet or to other aspects of lifestyle. In children, concern has been expressed about the adequacy of vegetarian diets, especially with regard to growth. The risks/benefits seem to be related to the degree of restriction of the diet; anaemia is probably both the main and the most serious risk, but this also applies to omnivores. Vegan diets are more likely to be associated with malnutrition, especially if the diets are the result of authoritarian dogma. Overall, lacto-ovo-vegetarian children consume diets closer to recommendations than omnivores, and their pre-pubertal growth is at least as good. The simplest strategy when becoming vegetarian may involve reliance on vegetarian convenience foods, which are not necessarily superior in nutritional composition. The vegetarian sector of the food industry could do more to produce foods closer to recommendations. Vegetarian diets can be, but are not necessarily, adequate for children, provided vigilance is maintained, particularly to ensure variety. Identical comments apply to omnivorous diets. Three threats to the diet of children are too much reliance on convenience foods, lack of variety and lack of exercise.

  13. Extensive fitness and human cooperation.

    PubMed

    van Hateren, J H

    2015-12-01

    Evolution depends on the fitness of organisms, the expected rate of reproducing. Directly getting offspring is the most basic form of fitness, but fitness can also be increased indirectly by helping genetically related individuals (such as kin) to increase their fitness. The combined effect is known as inclusive fitness. Here it is argued that a further elaboration of fitness has evolved, particularly in humans. It is called extensive fitness and it incorporates producing organisms that are merely similar in phenotype. The evolvability of this mechanism is illustrated by computations on a simple model combining heredity and behaviour. Phenotypes are driven into the direction of high fitness through a mechanism that involves an internal estimate of fitness, implicitly made within the organism itself. This mechanism has recently been conjectured to be responsible for producing agency and goals. In the model, inclusive and extensive fitness are both implemented by letting fitness increase nonlinearly with the size of subpopulations of similar heredity (for the indirect part of inclusive fitness) and of similar phenotype (for the phenotypic part of extensive fitness). Populations implementing extensive fitness outcompete populations implementing mere inclusive fitness. This occurs because groups with similar phenotype tend to be larger than groups with similar heredity, and fitness increases more when groups are larger. Extensive fitness has two components, a direct component where individuals compete in inducing others to become like them and an indirect component where individuals cooperate and help others who are already similar to them.

  14. Fit Indices Versus Test Statistics

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai

    2005-01-01

    Model evaluation is one of the most important aspects of structural equation modeling (SEM). Many model fit indices have been developed. It is not an exaggeration to say that nearly every publication using the SEM methodology has reported at least one fit index. Most fit indices are defined through test statistics. Studies and interpretation of…

  15. Geometric model-based fitting algorithm for orientation-selective PELDOR data

    NASA Astrophysics Data System (ADS)

    Abdullin, Dinar; Hagelueken, Gregor; Hunter, Robert I.; Smith, Graham M.; Schiemann, Olav

    2015-03-01

    Pulsed electron-electron double resonance (PELDOR or DEER) spectroscopy is frequently used to determine distances between spin centres in biomacromolecular systems. Experiments where mutual orientations of the spin pair are selectively excited provide the so-called orientation-selective PELDOR data. This data is characterised by the orientation dependence of the modulation depth parameter and of the dipolar frequencies. This dependence has to be taken into account in the data analysis in order to extract distance distributions accurately from the experimental time traces. In this work, a fitting algorithm for such data analysis is discussed. The approach is tested on PELDOR data-sets from the literature and is compared with the previous results.

  16. Unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance

    USGS Publications Warehouse

    Fiske, I.J.; Chandler, R.B.

    2011-01-01

    Ecological research uses data collection techniques that are prone to substantial and unique types of measurement error to address scientific questions about species abundance and distribution. These data collection schemes include a number of survey methods in which unmarked individuals are counted, or determined to be present, at spatially referenced sites. Examples include site occupancy sampling, repeated counts, distance sampling, removal sampling, and double observer sampling. To appropriately analyze these data, hierarchical models have been developed to separately model explanatory variables of both a latent abundance or occurrence process and a conditional detection process. Because these models have a straightforward interpretation paralleling mechanisms under which the data arose, they have recently gained immense popularity. The common hierarchical structure of these models is well-suited for a unified modeling interface. The R package unmarked provides such a unified modeling framework, including tools for data exploration, model fitting, model criticism, post-hoc analysis, and model comparison.
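
    unmarked is an R package; purely as a language-neutral illustration of the hierarchical structure it fits, here is a minimal Python sketch of the single-season site-occupancy likelihood (occupancy probability psi, detection probability p). The detection-history matrix is invented for the example.

        # Sketch of the single-season occupancy likelihood underlying models of this type:
        # each site is occupied with probability psi; detections at occupied sites occur
        # with probability p on each of J visits. (Data below are illustrative only.)
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        # detection histories: rows = sites, columns = repeat visits (1 = detected)
        y = np.array([[0, 1, 0],
                      [0, 0, 0],
                      [1, 1, 0],
                      [0, 0, 0],
                      [1, 0, 1]])

        def neg_log_lik(theta):
            psi, p = expit(theta)          # keep probabilities in (0, 1)
            det = y.sum(axis=1)
            J = y.shape[1]
            # site likelihood: occupied-and-detected-as-observed vs never detected
            lik = psi * p**det * (1 - p)**(J - det) + (1 - psi) * (det == 0)
            return -np.sum(np.log(lik))

        fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
        print("psi, p =", expit(fit.x))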

  17. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based 3D human shape reconstruction system that works from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of a hundred whole-body mesh models. Each mesh model is homologous, so all models share the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. Pose changes of the model are achieved by reconstructing the skeleton structure from joints implanted in the model. By applying pose change after body-type deformation, the model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model's silhouettes with those of the input silhouettes; we then use only the torso contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using a stochastic, derivative-free non-linear optimization method, CMA-ES.

  18. Exchange interactions in [2 × 2] Cu(II) grids: on the reliability of the fitting spin models.

    PubMed

    Calzado, Carmen J; Evangelisti, Stefano

    2014-02-21

    This paper reports a theoretical analysis of the electronic structure and magnetic properties of a ferromagnetic Cu(II) [2 × 2] grid. The calculations confirm a quintet (S = 2) ground state and an energy-level distribution of the magnetic states in accordance with Heisenberg behaviour. The whole set of first- and second-neighbour magnetic coupling constants has been evaluated, all in agreement with the structure and arrangement of the Cu 3dx(2) - y(2) magnetic orbitals. The results indicate that the dominant interaction in the system is the ferromagnetic coupling between the nearest Cu sites. The calculated J values suggest a C(2v) spin-spin interaction pattern, instead of the D(4h) model employed in the magnetic data fit. However, both spin models provide similar plots of the thermal dependence of the susceptibility and magnetic moment data. This study highlights the fact that the spin models resulting from the fittings can be just effective models, capable of correctly reproducing the macroscopic properties, although not always in accordance with the microscopic interactions governing these properties.

  19. Testing evolutionary models of senescence in a natural population: age and inbreeding effects on fitness components in song sparrows

    PubMed Central

    Keller, L.F; Reid, J.M; Arcese, P

    2008-01-01

    Mutation accumulation (MA) and antagonistic pleiotropy (AP) have each been hypothesized to explain the evolution of ‘senescence’ or deteriorating fitness in old age. These hypotheses make contrasting predictions concerning age dependence in inbreeding depression in traits that show senescence. Inbreeding depression is predicted to increase with age under MA but not under AP, suggesting one empirical means by which the two can be distinguished. We use pedigree and life-history data from free-living song sparrows (Melospiza melodia) to test for additive and interactive effects of age and individual inbreeding coefficient (f) on fitness components, and thereby assess the evidence for MA. Annual reproductive success (ARS) and survival (and therefore reproductive value) declined in old age in both sexes, indicating senescence in this short-lived bird. ARS declined with f in both sexes and survival declined with f in males, indicating inbreeding depression in fitness. We observed a significant age×f interaction for male ARS (reflecting increased inbreeding depression as males aged), but not for female ARS or survival in either sex. These analyses therefore provide mixed support for MA. We discuss the strengths and limitations of such analyses and therefore the value of natural pedigreed populations in testing evolutionary models of senescence. PMID:18211879

  20. Modelling the Factors that Affect Individuals' Utilisation of Online Learning Systems: An Empirical Study Combining the Task Technology Fit Model with the Theory of Planned Behaviour

    ERIC Educational Resources Information Center

    Yu, Tai-Kuei; Yu, Tai-Yi

    2010-01-01

    Understanding learners' behaviour, perceptions and influence in terms of learner performance is crucial to predict the use of electronic learning systems. By integrating the task-technology fit (TTF) model and the theory of planned behaviour (TPB), this paper investigates the online learning utilisation of Taiwanese students. This paper provides a…

  1. Numerical simulation of a relaxation test designed to fit a quasi-linear viscoelastic model for temporomandibular joint discs.

    PubMed

    Commisso, Maria S; Martínez-Reina, Javier; Mayo, Juana; Domínguez, Jaime

    2013-02-01

    The main objectives of this work are: (a) to introduce an algorithm for adjusting the quasi-linear viscoelastic model to fit a material using a stress relaxation test and (b) to validate a protocol for performing such tests in temporomandibular joint discs. This algorithm is intended for fitting the Prony series coefficients and the hyperelastic constants of the quasi-linear viscoelastic model by considering that the relaxation test is performed with an initial ramp loading at a certain rate. This algorithm was validated before being applied to achieve the second objective. Generally, the complete three-dimensional formulation of the quasi-linear viscoelastic model is very complex. Therefore, it is necessary to design an experimental test to ensure a simple stress state, such as uniaxial compression to facilitate obtaining the viscoelastic properties. This work provides some recommendations about the experimental setup, which are important to follow, as an inadequate setup could produce a stress state far from uniaxial, thus, distorting the material constants determined from the experiment. The test considered is a stress relaxation test using unconfined compression performed in cylindrical specimens extracted from temporomandibular joint discs. To validate the experimental protocol, the test was numerically simulated using finite-element modelling. The disc was arbitrarily assigned a set of quasi-linear viscoelastic constants (c1) in the finite-element model. Another set of constants (c2) was obtained by fitting the results of the simulated test with the proposed algorithm. The deviation of constants c2 from constants c1 measures how far the stresses are from the uniaxial state. The effects of the following features of the experimental setup on this deviation have been analysed: (a) the friction coefficient between the compression plates and the specimen (which should be as low as possible); (b) the portion of the specimen glued to the compression plates (smaller
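
    As a hedged illustration of the kind of fit the paper's algorithm performs, the sketch below fits a two-term Prony series to synthetic stress-relaxation data. The reduced-relaxation form and the two-term truncation are assumptions for the example, and the paper's algorithm additionally accounts for the finite-rate loading ramp, which this sketch ignores.

        # Sketch: fit a two-term Prony series G(t) = g_inf + g1*exp(-t/tau1) + g2*exp(-t/tau2)
        # to relaxation data, the kind of fit a quasi-linear viscoelastic model builds on.
        # (Synthetic data; the initial loading ramp is not modeled here.)
        import numpy as np
        from scipy.optimize import curve_fit

        def prony(t, g_inf, g1, tau1, g2, tau2):
            return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

        t = np.linspace(0, 100, 200)                       # s
        true = prony(t, 0.4, 0.35, 2.0, 0.25, 30.0)        # normalized stress
        noisy = true + 0.01 * np.random.default_rng(0).normal(size=t.size)

        p0 = [0.5, 0.3, 1.0, 0.2, 20.0]                    # initial guesses
        popt, _ = curve_fit(prony, t, noisy, p0=p0, bounds=(0, np.inf))
        print("g_inf, g1, tau1, g2, tau2 =", np.round(popt, 3))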

  2. Fitting Proportional Odds Models to Educational Data in Ordinal Logistic Regression Using Stata, SAS and SPSS

    ERIC Educational Resources Information Center

    Liu, Xing

    2008-01-01

    The proportional odds (PO) model, which is also called cumulative odds model (Agresti, 1996, 2002 ; Armstrong & Sloan, 1989; Long, 1997, Long & Freese, 2006; McCullagh, 1980; McCullagh & Nelder, 1989; Powers & Xie, 2000; O'Connell, 2006), is one of the most commonly used models for the analysis of ordinal categorical data and comes from the class…
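
    The record discusses fitting proportional odds models in Stata, SAS and SPSS; purely as an illustration of the likelihood those routines maximise, here is a minimal Python sketch of the cumulative-logit model for a three-category outcome with one predictor. The data are synthetic and the parameterisation (ordered cutpoints via an exponentiated gap) is an implementation convenience, not a feature of any particular package.

        # Sketch of the proportional-odds (cumulative logit) model for an ordinal outcome
        # with categories 0 < 1 < 2:  logit P(Y <= k | x) = alpha_k - beta * x.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(1)
        x = rng.normal(size=300)
        latent = 1.2 * x + rng.logistic(size=300)
        y = np.digitize(latent, bins=[-0.5, 1.0])          # ordinal outcome in {0, 1, 2}

        def neg_log_lik(theta):
            a1, gap, beta = theta
            a2 = a1 + np.exp(gap)                          # enforce ordered cutpoints a1 < a2
            c1 = expit(a1 - beta * x)                      # P(Y <= 0)
            c2 = expit(a2 - beta * x)                      # P(Y <= 1)
            probs = np.column_stack([c1, c2 - c1, 1 - c2])
            return -np.sum(np.log(probs[np.arange(y.size), y]))

        fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="BFGS")
        print("cutpoints:", fit.x[0], fit.x[0] + np.exp(fit.x[1]), " slope:", fit.x[2])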

  3. Fitting the Normal-Ogive Factor Analytic Model to Scores on Tests.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Lorenzo-Seva, Urbano

    2001-01-01

    Describes how the nonlinear factor analytic approach of R. McDonald to the normal ogive curve can be used to factor analyze test scores. Discusses the conditions in which this model is more appropriate than the linear model and illustrates the applicability of both models using an empirical example based on data from 1,769 adolescents who took the…

  4. The Body Micropolitic: Where Do Women Educational Leaders Fit in the Easton Model of Policy Analysis?

    ERIC Educational Resources Information Center

    Raveling, Joyce S.

    This paper asks where and how women and women's styles of leadership are situated in Easton's Model of Policy Process. The Easton Model provides a means of understanding policy development from a micropolitical perspective and offers analysis of environmental influences on policy decision making and implementation, allowing the model to…

  5. Culture and Parenting: Family Models Are Not One-Size-Fits-All. FPG Snapshot #67

    ERIC Educational Resources Information Center

    FPG Child Development Institute, 2012

    2012-01-01

    Family process models guide theories and research about family functioning and child development outcomes. Theory and research, in turn, inform policies and services aimed at families. But are widely accepted models valid across cultural groups? To address these gaps, FPG researchers examined the utility of two family process models for families…

  6. Fitting Procedures for Novel Gene-by-Measured Environment Interaction Models in Behavior Genetic Designs.

    PubMed

    Zheng, Hao; Rathouz, Paul J

    2015-07-01

    For quantitative behavior genetic (e.g., twin) studies, Purcell proposed a novel model for testing gene-by-measured environment (GxM) interactions while accounting for gene-by-environment correlation. Rathouz et al. expanded this model into a broader class of non-linear biometric models for quantifying and testing such interactions. In this work, we propose a novel factorization of the likelihood for this class of models, and adopt numerical integration techniques to achieve model estimation, especially for those without a closed-form likelihood. The validity of our procedures is established through numerical simulation studies. The new procedures are illustrated in a twin study analysis of the moderating effect of birth weight on the genetic influences on childhood anxiety. A second example is given in an online appendix. Both the extant GxM models and the new non-linear models critically assume normality of all structural components, which implies continuous, but not normal, manifest response variables.

  7. Fitting bevacizumab aggregation kinetic data with the Finke-Watzky two-step model: Effect of thermal and mechanical stress.

    PubMed

    Oliva, Alexis; Llabrés, Matías; Fariña, José B

    2015-09-18

    Size exclusion chromatography with light scattering detection (SEC-MALLS) was assessed as a means to characterize the type of bevacizumab aggregates that form under mechanical and thermal stress, quantitatively monitoring the aggregation kinetics. The analytical method was monitored and verified during routine use at two levels: (1) the "pre-study" validation shows that the method is specific, linear, accurate, precise, robust and stability indicating; (2) the "in-study" validation was verified by inserting quality control samples and the use of control charts, indicating that the analytical method is in statistical control and stable. The aggregation kinetics data were interpreted using a modified Lumry-Eyring model, but the quality of the fit can be considered poor (R(2)>0.96), especially at higher temperatures. This indicates that the order of the reaction could not be reliably determined, suggesting a different degradation mechanism. The kinetic data set also fit the minimalistic Finke-Watzky (F-W) 2-step model, with an excellent quality of fit (R(2)>0.99), yielding the first quantitative rate constant for the steps of nucleation and growth in bevacizumab aggregation. The bevacizumab pharmaceutical preparation contains (initially) dimers, approximately 1.6% of bevacizumab total concentration, and the effect on aggregation kinetics of seeding was analyzed using the F-W 2-step model assuming [B]0≠0 (for the seeded case). The results suggested that the seeding had no impact on aggregation kinetics. Furthermore, the Arrhenius equation cannot be used to extrapolate the shelf-life since no linear temperature dependence of the rate constant was found within the temperature range. Although the real-time stability data provides the basis for determining the product shelf-life, predictive methodologies such as Vogel-Tammann-Fulcher (VFT) or the Arrhenius approach can be misleading and result in overestimates of the product shelf-life. However, they can be successfully
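
    A hedged sketch of the core fit described: the integrated Finke-Watzky two-step rate law (nucleation A -> B with rate constant k1, autocatalytic growth A + B -> 2B with rate constant k2) fitted by nonlinear least squares. The closed-form expression below is the standard textbook solution for monomer loss, and the data are synthetic, not the bevacizumab SEC-MALLS measurements.

        # Sketch: fit the integrated Finke-Watzky two-step model (nucleation k1,
        # autocatalytic growth k2) to monomer-loss data.  Synthetic data only.
        import numpy as np
        from scipy.optimize import curve_fit

        A0 = 1.0                                       # initial monomer concentration (arb. units)

        def fw_monomer(t, k1, k2):
            # standard closed-form solution of d[A]/dt = -k1[A] - k2[A][B], with [B] = A0 - [A]
            return (k1 / k2 + A0) / (1 + (k1 / (k2 * A0)) * np.exp((k1 + k2 * A0) * t))

        t = np.linspace(0, 50, 60)                     # hours
        data = fw_monomer(t, 1e-3, 0.25)
        data += 0.01 * np.random.default_rng(2).normal(size=t.size)

        (k1, k2), _ = curve_fit(fw_monomer, t, data, p0=[1e-2, 0.1], bounds=(0, np.inf))
        print(f"k1 = {k1:.2e} 1/h, k2 = {k2:.3f} 1/(conc*h)")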

  8. MODELING THE NONLINEAR CLUSTERING IN MODIFIED GRAVITY MODELS. I. A FITTING FORMULA FOR THE MATTER POWER SPECTRUM OF f(R) GRAVITY

    SciTech Connect

    Zhao, Gong-Bo

    2014-04-01

    Based on a suite of N-body simulations of the Hu-Sawicki model of f(R) gravity with different sets of model and cosmological parameters, we develop a new fitting formula with a numeric code, MGHalofit, to calculate the nonlinear matter power spectrum P(k) for the Hu-Sawicki model. We compare the MGHalofit predictions at various redshifts (z ≤ 1) to the f(R) simulations and find that the relative error of the MGHalofit fitting formula of P(k) is no larger than 6% at k ≤ 1 h Mpc⁻¹ and 12% at k in (1, 10] h Mpc⁻¹, respectively. Based on a sensitivity study of an ongoing and a future spectroscopic survey, we estimate the detectability of a signal of modified gravity described by the Hu-Sawicki model using the power spectrum up to quasi-nonlinear scales.

  9. The fitting of radioactive decay data by covariance methods

    SciTech Connect

    Smith, D.L.; Osadebe, F.A.N.

    1994-04-01

    The fitting of radioactive decay data is examined when radiations from two or more processes are indistinguishable. The model is a nonlinear sum of exponentials which cannot be linearized by transformations. Simple and generalized least-squares procedures utilizing covariance matrices are applied. The validity of the midpoint approximation is demonstrated. Guidelines for acquiring adequate radioactive decay data are suggested. The relevance to activation cross section determination is discussed.
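
    A minimal sketch, under assumed data and covariance, of a generalized least-squares fit of a two-component exponential decay: the data covariance matrix is Cholesky-factored and used to whiten the residuals before ordinary nonlinear least squares. The covariance structure (counting errors plus a correlated normalization error) is illustrative only.

        # Sketch: generalized least squares for A1*exp(-l1*t) + A2*exp(-l2*t),
        # whitening residuals with the Cholesky factor of an assumed covariance matrix.
        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 10, 40)
        model = lambda p, t: p[0] * np.exp(-p[1] * t) + p[2] * np.exp(-p[3] * t)

        rng = np.random.default_rng(3)
        y = model([100.0, 1.5, 30.0, 0.2], t) + rng.normal(scale=2.0, size=t.size)

        # assumed covariance: independent counting errors plus a 1% fully correlated term
        sigma = 2.0 * np.ones_like(t)
        cov = np.diag(sigma**2) + np.outer(0.01 * y, 0.01 * y)
        L_inv = np.linalg.inv(np.linalg.cholesky(cov))

        def whitened_residuals(p):
            return L_inv @ (y - model(p, t))

        fit = least_squares(whitened_residuals, x0=[50.0, 1.0, 50.0, 0.5], bounds=(0, np.inf))
        print("A1, lambda1, A2, lambda2 =", np.round(fit.x, 3))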

  10. The universal Higgs fit

    NASA Astrophysics Data System (ADS)

    Giardino, Pier Paolo; Kannike, Kristjan; Masina, Isabella; Raidal, Martti; Strumia, Alessandro

    2014-05-01

    We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections, couplings and to analyse composite Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 ± 1.6 GeV.

  11. Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data.

    PubMed

    Foi, Alessandro; Trimeche, Mejdi; Katkovnik, Vladimir; Egiazarian, Karen

    2008-10-01

    We present a simple and usable noise model for the raw-data of digital imaging sensors. This signal-dependent noise model, which gives the pointwise standard-deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and a Gaussian part, for the remaining stationary disturbances in the output data. We further explicitly take into account the clipping of the data (over- and under-exposure), faithfully reproducing the nonlinear response of the sensor. We propose an algorithm for the fully automatic estimation of the model parameters given a single noisy image. Experiments with synthetic images and with real raw-data from various sensors prove the practical applicability of the method and the accuracy of the proposed model. PMID:18784024
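
    As an illustration of the model's form only (the paper's estimator is far more sophisticated and also handles clipping), the sketch below recovers the two parameters of var(y) = a*E[y] + b from local block statistics of a synthetic raw image in which the underlying signal is constant within each block.

        # Sketch: estimate (a, b) in the signal-dependent noise model var(y) = a*mean(y) + b
        # from per-block means and variances of a synthetic raw image.
        import numpy as np

        rng = np.random.default_rng(4)
        levels = rng.uniform(10, 200, size=(32, 32))         # one signal level per 8x8 block
        clean = np.kron(levels, np.ones((8, 8)))             # 256x256 piecewise-constant image
        a_true, b_true = 0.5, 4.0
        noisy = clean + rng.normal(scale=np.sqrt(a_true * clean + b_true))

        # non-overlapping 8x8 blocks -> (mean, variance) pairs
        blocks = noisy.reshape(32, 8, 32, 8).swapaxes(1, 2).reshape(-1, 64)
        means = blocks.mean(axis=1)
        variances = blocks.var(axis=1, ddof=1)

        a_hat, b_hat = np.polyfit(means, variances, 1)       # straight-line fit: var = a*mean + b
        print(f"estimated a = {a_hat:.2f}, b = {b_hat:.2f}")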

  12. Comparative Study on the Selection Criteria for Fitting Flood Frequency Distribution Models with Emphasis on Upper-Tail Behavior

    NASA Astrophysics Data System (ADS)

    Xiaohong, C.

    2014-12-01

    Many probability distributions have been proposed for flood frequency analysis, and several criteria have been used for selecting the distribution that best fits a data set observed or generated by some random process. The upper tail of the flood frequency distribution is of particular concern for flood control. However, different model selection criteria often identify different optimal distributions when the focus is on the upper tail of the flood frequency distribution. In this study, with emphasis on upper-tail behavior, 5 distribution selection criteria, including 2 hypothesis tests and 3 information-based criteria, are evaluated in selecting the best fitted distribution from 8 widely used distributions (Pearson 3, Log-Pearson 3, two-parameter lognormal, three-parameter lognormal, Gumbel, Weibull, Generalized extreme value and Generalized logistic distributions) using datasets from the Thames River (UK), Wabash River (USA), Beijiang River and Huai River (China), all of which lie between latitudes of 23.5 and 66.5 degrees north. The performance of the 5 selection criteria is verified using a composite criterion, defined in this study, that focuses on upper-tail events. This paper shows an approach for the optimal selection of suitable flood frequency distributions for different river basins. Results illustrate that (1) hypothesis tests and information-based criteria select different distributions for each river. (2) The information-based criteria perform better than hypothesis tests in most cases when the focus is on the goodness of predictions of the extreme upper-tail events. (3) In order to decide on a particular distribution to fit the high flows, it is better to use a combination of criteria, in which the information-based criteria are used first to rank the models and the results are then inspected by hypothesis testing methods. In addition, if the information-based criteria and hypothesis tests provide different results, the composite criterion will be taken for
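
    A minimal sketch of the information-based side of such a comparison: fit several candidate distributions by maximum likelihood with scipy and rank them by AIC. The annual-maximum series is synthetic, and the candidate list only mirrors part of the paper's set.

        # Sketch: maximum-likelihood fits of candidate flood-frequency distributions,
        # ranked by AIC (one of the information-based criteria compared in the study).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        annual_max = stats.genextreme.rvs(c=-0.1, loc=1000, scale=300, size=60,
                                          random_state=rng)   # synthetic annual maxima

        candidates = {
            "GEV": stats.genextreme,
            "Pearson III": stats.pearson3,
            "3-par lognormal": stats.lognorm,
            "Gumbel": stats.gumbel_r,
            "Weibull": stats.weibull_min,
            "Generalized logistic": stats.genlogistic,
        }

        for name, dist in candidates.items():
            params = dist.fit(annual_max)                      # maximum-likelihood fit
            loglik = np.sum(dist.logpdf(annual_max, *params))
            aic = 2 * len(params) - 2 * loglik
            print(f"{name:22s} AIC = {aic:8.1f}")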

  13. Effects of a Three-Tiered Intervention Model on Physical Activity and Fitness Levels of Elementary School Children.

    PubMed

    Dauenhauer, Brian; Keating, Xiaofen; Lambdin, Dolly

    2016-08-01

    Response to intervention (RtI) models are frequently used in schools to tailor academic instruction to the needs of students. The purpose of this study was to examine the effects of using RtI to promote physical activity (PA) and fitness in one urban elementary school. Ninety-nine students in grades 2-5 participated in up to three tiers of intervention throughout the course of one school year. Tier one included 150 min/week of physical education (increased from 90 min/week the previous year) and coordinated efforts to improve school health. Tier two consisted of 30 min/week of small group instruction based on goal setting and social support. Tier three included an after-school program for parents and children focused on healthy living. PA, cardiovascular fitness, and body composition were assessed before and after the interventions using pedometers, a 20-m shuttle run, and height/weight measurements. From pre- to post-testing, PA remained relatively stable in tier one and increased by 2349 steps/day in tier two. Cardiovascular fitness increased in tiers one and two by 1.17 and 1.35 ml/kg/min, respectively. Although body mass index did not change, 17 of the 99 students improved their weight status over the course of the school year, resulting in an overall decline in the prevalence of overweight/obesity from 59.6 to 53.5 %. Preliminary results suggest that the RtI model can be an effective way to structure PA/health interventions in an elementary school setting. PMID:27059849

  14. Structural model for gamma-aminobutyric acid receptor noncompetitive antagonist binding: widely diverse structures fit the same site.

    PubMed

    Chen, Ligong; Durkin, Kathleen A; Casida, John E

    2006-03-28

    Several major insecticides, including alpha-endosulfan, lindane, and fipronil, and the botanical picrotoxinin are noncompetitive antagonists (NCAs) for the GABA receptor. We showed earlier that human beta(3) homopentameric GABA(A) receptor recognizes all of the important GABAergic insecticides and reproduces the high insecticide sensitivity and structure-activity relationships of the native insect receptor. Despite large structural diversity, the NCAs are proposed to fit a single binding site in the chloride channel lumen lined by five transmembrane 2 segments. This hypothesis is examined with the beta(3) homopentamer by mutagenesis, pore structure studies, NCA binding, and molecular modeling. The 15 amino acids in the cytoplasmic half of the pore were mutated to cysteine, serine, or other residue for 22 mutants overall. Localization of A-1'C, A2'C, T6'C, and L9'C (index numbers for the transmembrane 2 region) in the channel lumen was established by disulfide cross-linking. Binding of two NCA radioligands [(3)H]1-(4-ethynylphenyl)-4-n-propyl-2,6,7-trioxabicyclo[2.2.2]octane and [(3)H] 3,3-bis-trifluoromethyl-bicyclo[2,2,1]heptane-2,2-dicarbonitrile was dramatically reduced with 8 of the 15 mutated positions, focusing attention on A2', T6', and L9' as proposed binding sites, consistent with earlier mutagenesis studies. The cytoplasmic half of the beta3 homopentamer pore was modeled as an alpha-helix. The six NCAs listed above plus t-butylbicyclophosphorothionate fit the 2' to 9' pore region forming hydrogen bonds with the T6' hydroxyl and hydrophobic interactions with A2', T6', and L9' alkyl substituents, thereby blocking the channel. Thus, widely diverse NCA structures fit the same GABA receptor beta subunit site with important implications for insecticide cross-resistance and selective toxicity between insects and mammals.

  15. Fitting HIV Prevalence 1981 Onwards for Three Indian States Using the Goals Model and the Estimation and Projection Package

    PubMed Central

    Bhatnagar, Tarun; Dutta, Tapati; Stover, John; Godbole, Sheela; Sahu, Damodar; Boopathi, Kangusamy; Bembalkar, Shilpa; Singh, Kh. Jitenkumar; Goyal, Rajat; Pandey, Arvind; Mehendale, Sanjay M.

    2016-01-01

    Models are designed to provide evidence for strategic program planning by examining the impact of different interventions on projected HIV incidence. We employed the Goals Model to fit the HIV epidemic curves in Andhra Pradesh, Maharashtra and Tamil Nadu states of India where HIV epidemic is considered to have matured and in a declining phase. Input data in the Goals Model consisted of demographic, epidemiological, transmission-related and risk group wise behavioral parameters. The HIV prevalence curves generated in the Goals Model for each risk group in the three states were compared with the epidemic curves generated by the Estimation and Projection Package (EPP) that the national program is routinely using. In all the three states, the HIV prevalence trends for high-risk populations simulated by the Goals Model matched well with those derived using state-level HIV surveillance data in the EPP. However, trends for the low- and medium-risk populations differed between the two models. This highlights the need to generate more representative and robust data in these sub-populations and consider some structural changes in the modeling equation and parameters in the Goals Model to effectively use it to assess the impact of future strategies of HIV control in various sub-populations in India at the sub-national level. PMID:27711212

  16. One size does not fit all: Adapting mark-recapture and occupancy models for state uncertainty

    USGS Publications Warehouse

    Kendall, W.L.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Multistate capture-recapture models continue to be employed with greater frequency to test hypotheses about metapopulation dynamics and life history, and more recently disease dynamics. In recent years efforts have begun to adjust these models for cases where there is uncertainty about an animal's state upon capture. These efforts can be categorized into models that permit misclassification between two states to occur in either direction or one direction, where state is certain for a subset of individuals or is always uncertain, and where estimation is based on one sampling occasion per period of interest or multiple sampling occasions per period. State uncertainty also arises in modeling patch occupancy dynamics. I consider several case studies involving bird and marine mammal studies that illustrate how misclassified states can arise, and outline model structures for properly utilizing the data that are produced. In each case misclassification occurs in only one direction (thus there is a subset of individuals or patches where state is known with certainty), and there are multiple sampling occasions per period of interest. For the cases involving capture-recapture data I allude to a general model structure that could include each example as a special case. However, this collection of cases also illustrates how difficult it is to develop a model structure that can be directly useful for answering every ecological question of interest and account for every type of data from the field.

  17. Total least squares fitting Michaelis-Menten enzyme kinetic model function

    NASA Astrophysics Data System (ADS)

    Jukic, Dragan; Sabo, Kristian; Scitovski, Rudolf

    2007-04-01

    The Michaelis-Menten enzyme kinetic model f(x; a, b) = ax/(b + x), a, b > 0, is widely used in biochemistry, pharmacology, biology and medical research. Given the data (p_i, x_i, y_i), i = 1, ..., m, m ≥ 3, we consider the total least squares (TLS) problem for the Michaelis-Menten model. We show that it is possible that the TLS estimate does not exist. As the main result, we show that the TLS estimate exists if the data satisfy some natural conditions. Some numerical examples are included.
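
    As a practical companion (not the paper's analysis), an errors-in-variables fit of the Michaelis-Menten model can be obtained with orthogonal distance regression via scipy.odr, a close relative of total least squares; the data and error levels below are synthetic.

        # Sketch: errors-in-variables fit of f(x; a, b) = a*x/(b + x) using orthogonal
        # distance regression, with noise assumed on both substrate and rate measurements.
        import numpy as np
        from scipy import odr

        rng = np.random.default_rng(6)
        x_true = np.linspace(0.5, 20, 12)
        a_true, b_true = 10.0, 3.0
        x = x_true + rng.normal(scale=0.2, size=x_true.size)            # noisy substrate
        y = a_true * x_true / (b_true + x_true) + rng.normal(scale=0.2, size=x_true.size)

        mm = odr.Model(lambda beta, x: beta[0] * x / (beta[1] + x))
        data = odr.RealData(x, y, sx=0.2, sy=0.2)                       # errors on both axes
        fit = odr.ODR(data, mm, beta0=[8.0, 2.0]).run()
        print("a, b =", np.round(fit.beta, 3))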

  18. Advantages of estimating parameters of photosynthesis model by fitting A-Ci curves at multiple subsaturating light intensities

    NASA Astrophysics Data System (ADS)

    Fu, W.; Gu, L.; Hoffman, F. M.

    2013-12-01

    The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely curves of net CO2 assimilation against intercellular CO2 concentration (A-Ci curves), made under saturating light conditions. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). In order to reliably estimate photosynthetic parameters, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural over-parameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves measured at subsaturating light intensities to generate more accurate estimates of some important parameters. Using subsaturating light intensities allows more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve fitting methods. Some fitted parameters, like the photocompensation point and day respiration, impose a significant limitation on modeling leaf CO2 exchange. Fitting multiple A-Ci curves can also improve on the so-called Laisk (1977) method, which was shown by a recent publication to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions to constrain measured A-Ci points to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of the resistance of the chloroplast and cell wall-plasmalemma and explore the effect of variable mesophyll conductance. The variable mesophyll conductance
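
    For reference, the standard Farquhar-von Caemmerer-Berry limiting rates that such A-Ci fits estimate are, in their textbook form (the authors' exact variant may differ; symbols: Vcmax, J, Rd, photocompensation point Γ*, Michaelis constants Kc and Ko, oxygen concentration O):

        A_c = V_{c\max}\,\frac{C_i - \Gamma^{*}}{C_i + K_c\,(1 + O/K_o)} - R_d,
        \qquad
        A_j = J\,\frac{C_i - \Gamma^{*}}{4\,C_i + 8\,\Gamma^{*}} - R_d,
        \qquad
        A = \min(A_c,\, A_j)

    Measuring at subsaturating light lowers J and therefore extends the RuBP-regeneration-limited (A_j) portion of the A-Ci curve, which is why the additional curves help constrain the parameters.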

  19. Does the first chaotic inflation model in supergravity provide the best fit to the Planck data?

    SciTech Connect

    Linde, Andrei

    2015-02-23

    I describe the first model of chaotic inflation in supergravity, which was proposed by Goncharov and the present author in 1983. The inflaton potential of this model has a plateau-type behavior V_0 (1 − (8/3) e^(−√6 |ϕ|)) at large values of the inflaton field. This model predicts n_s = 1 − 2/N ≈ 0.967 and r = 4/(3N²) ≈ 4×10⁻⁴, in good agreement with the Planck data. I propose a slight generalization of this model, which allows one to describe not only inflation but also dark energy and supersymmetry breaking.
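
    A quick numerical check of the quoted predictions, using only the formulas given in the abstract, for typical numbers of e-folds N:

        # n_s = 1 - 2/N and r = 4/(3 N^2), as quoted above, for typical e-fold numbers
        for N in (50, 55, 60):
            print(f"N = {N}: n_s = {1 - 2 / N:.3f}, r = {4 / (3 * N**2):.1e}")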

  20. Approximate Confidence Interval for Difference of Fit in Structural Equation Models.

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2001-01-01

    Discusses a method, based on bootstrap methodology, for obtaining an approximate confidence interval for the difference in root mean square error of approximation of two structural equation models. Illustrates the method using a numerical example. (SLD)

  1. Macrophage ion currents are fit by a fractional model and therefore are a time series with memory.

    PubMed

    Domínguez, Darío Manuel; Marín, Mariela; Camacho, Marcela

    2009-04-01

    We studied macroscopic ion currents from macrophages and compared their patterns of behavior using classical and fractal analysis. Peak and steady state currents were measured respectively at the beginning and end of a voltage-clamp pulse. Hurst coefficients H and fractional dimensions were calculated for the current fluctuations (I(H)) during the intervening interval; these fluctuations are usually assumed to be white noise. We show that I(H) is different from 0.5 and that the increments are stationary, indicating that the dynamic model has memory and that the intervening current fluctuations cannot be considered as white noise. I(H) is less than 0.5, implying an antipersistent pattern. In addition, we show that the relation between inactivation and I(H) versus voltage V fit an equation I(H)(V) = f(V, alpha, m, d), where alpha is associated with fractional calculus and m and d are free parameters. Fitting by a fractional model confirms that the phenomenon has memory.
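
    The record does not state the authors' exact estimator, but a common way to obtain a Hurst coefficient from a fluctuation series is rescaled-range (R/S) analysis; the sketch below is a generic illustration on white noise, for which H should come out near 0.5 (up to the well-known small-sample bias of R/S), whereas the antipersistent currents reported above give H < 0.5.

        # Sketch: rescaled-range (R/S) estimate of the Hurst coefficient H of a series.
        import numpy as np

        def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
            rs_means = []
            for n in window_sizes:
                rs_vals = []
                for start in range(0, len(x) - n + 1, n):
                    seg = x[start:start + n]
                    dev = np.cumsum(seg - seg.mean())
                    r = dev.max() - dev.min()             # range of cumulative deviations
                    s = seg.std(ddof=1)
                    if s > 0:
                        rs_vals.append(r / s)
                rs_means.append(np.mean(rs_vals))
            # H is the slope of log(R/S) against log(window size)
            slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
            return slope

        noise = np.random.default_rng(7).normal(size=4096)
        print(f"H (white noise) = {hurst_rs(noise):.2f}")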

  2. Cognitive fitness.

    PubMed

    Gilkey, Roderick; Kilts, Clint

    2007-11-01

    Recent neuroscientific research shows that the health of your brain isn't, as experts once thought, just the product of childhood experiences and genetics; it reflects your adult choices and experiences as well. Professors Gilkey and Kilts of Emory University's medical and business schools explain how you can strengthen your brain's anatomy, neural networks, and cognitive abilities, and prevent functions such as memory from deteriorating as you age. The brain's alertness is the result of what the authors call cognitive fitness: a state of optimized ability to reason, remember, learn, plan, and adapt. Certain attitudes, lifestyle choices, and exercises enhance cognitive fitness. Mental workouts are the key. Brain-imaging studies indicate that acquiring expertise in areas as diverse as playing a cello, juggling, speaking a foreign language, and driving a taxicab expands your neural systems and makes them more communicative. In other words, you can alter the physical makeup of your brain by learning new skills. The more cognitively fit you are, the better equipped you are to make decisions, solve problems, and deal with stress and change. Cognitive fitness will help you be more open to new ideas and alternative perspectives. It will give you the capacity to change your behavior and realize your goals. You can delay senescence for years and even enjoy a second career. Drawing from the rapidly expanding body of neuroscience research as well as from well-established research in psychology and other mental health fields, the authors have identified four steps you can take to become cognitively fit: understand how experience makes the brain grow, work hard at play, search for patterns, and seek novelty and innovation. Together these steps capture some of the key opportunities for maintaining an engaged, creative brain. PMID:18159786

  3. A Flexible, Computationally Efficient Method for Fitting the Proportional Hazards Model to Interval-Censored Data

    PubMed Central

    Wang, Lianming; Hudgens, Michael G.; Qureshi, Zaina P.

    2015-01-01

    Summary The proportional hazards model (PH) is currently the most popular regression model for analyzing time-to-event data. Despite its popularity, the analysis of interval-censored data under the PH model can be challenging using many available techniques. This paper presents a new method for analyzing interval-censored data under the PH model. The proposed approach uses a monotone spline representation to approximate the unknown nondecreasing cumulative baseline hazard function. Formulating the PH model in this fashion results in a finite number of parameters to estimate while maintaining substantial modeling flexibility. A novel expectation-maximization (EM) algorithm is developed for finding the maximum likelihood estimates of the parameters. The derivation of the EM algorithm relies on a two-stage data augmentation involving latent Poisson random variables. The resulting algorithm is easy to implement, robust to initialization, enjoys quick convergence, and provides closed-form variance estimates. The performance of the proposed regression methodology is evaluated through a simulation study, and is further illustrated using data from a large population-based randomized trial designed and sponsored by the United States National Cancer Institute. PMID:26393917

  4. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    SciTech Connect

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type IA supernova analysis, which determines the distance moduli based on the known absolute standard candle magnitude of the Type IA supernovae. We take a look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type IA supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
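
    A hedged sketch of the underlying idea: the SALT-II-style distance estimator mu = m_B - M + alpha*x1 - beta*c, with alpha and beta chosen to minimise Hubble-diagram residuals. The light-curve parameters, the reference distance moduli and the fixed intrinsic scatter below are synthetic stand-ins; the actual study uses the SNANA package and its simulations.

        # Sketch: recover the shape (alpha) and colour (beta) coefficients of the
        # distance estimator  mu = m_B - M + alpha*x1 - beta*c  from synthetic supernovae.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)
        n = 300
        z = rng.uniform(0.05, 0.4, n)
        mu_ref = 5 * np.log10(3e5 * z / 70.0) + 25     # crude Hubble-law distance modulus
        x1 = rng.normal(0, 1, n)                       # stretch
        c = rng.normal(0, 0.1, n)                      # colour
        alpha_t, beta_t, M_t, sig_int = 0.14, 3.1, -19.3, 0.1
        m_B = mu_ref + M_t - alpha_t * x1 + beta_t * c + rng.normal(0, sig_int, n)

        def chi2(params):
            alpha, beta, M = params
            mu_fit = m_B - M + alpha * x1 - beta * c
            return np.sum((mu_fit - mu_ref) ** 2 / sig_int**2)   # residuals about the reference

        best = minimize(chi2, x0=[0.1, 3.0, -19.0], method="Nelder-Mead")
        print("alpha, beta, M =", np.round(best.x, 3))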

  5. Heterogeneity in Genetic Diversity among Non-Coding Loci Fails to Fit Neutral Coalescent Models of Population History

    PubMed Central

    Peters, Jeffrey L.; Roberts, Trina E.; Winker, Kevin; McCracken, Kevin G.

    2012-01-01

    Inferring aspects of the population histories of species using coalescent analyses of non-coding nuclear DNA has grown in popularity. These inferences, such as divergence, gene flow, and changes in population size, assume that genetic data reflect simple population histories and neutral evolutionary processes. However, violating model assumptions can result in a poor fit between empirical data and the models. We sampled 22 nuclear intron sequences from at least 19 different chromosomes (a genomic transect) to test for deviations from selective neutrality in the gadwall (Anas strepera), a Holarctic duck. Nucleotide diversity among these loci varied by nearly two orders of magnitude (from 0.0004 to 0.029), and this heterogeneity could not be explained by differences in substitution rates alone. Using two different coalescent methods to infer models of population history and then simulating neutral genetic diversity under these models, we found that the observed among-locus heterogeneity in nucleotide diversity was significantly higher than expected for these simple models. Defining more complex models of population history demonstrated that a pre-divergence bottleneck was also unlikely to explain this heterogeneity. However, both selection and interspecific hybridization could account for the heterogeneity observed among loci. Regardless of the cause of the deviation, our results illustrate that violating key assumptions of coalescent models can mislead inferences of population history. PMID:22384117

  6. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.

    2015-12-01

    Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models that are aimed at understanding the effects of land surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed, to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations, and make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux tower sites. Calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to show how knowledge of parameter values is constrained by observations.

  7. Do telemonitoring projects of heart failure fit the Chronic Care Model?

    PubMed Central

    Willemse, Evi; Adriaenssens, Jef; Dilles, Tinne; Remmen, Roy

    2014-01-01

    This study describes the characteristics of extramural and transmural telemonitoring projects on chronic heart failure in Belgium. It describes to what extent these telemonitoring projects coincide with the Chronic Care Model of Wagner. Background: The Chronic Care Model describes essential components for high-quality health care. Telemonitoring can be used to optimise home care for chronic heart failure and provides a potential means of changing the current care organisation. Methods: This qualitative study describes seven non-invasive home-care telemonitoring projects in patients with heart failure in Belgium. A qualitative design, including interviews and a literature review, was used to describe the correspondence of these home-care telemonitoring projects with the dimensions of the Chronic Care Model. Results: The projects were situated in primary and secondary health care. Their primary goal was to reduce the number of readmissions for chronic heart failure. None of these projects succeeded in a final implementation of telemonitoring in home care after the pilot phase. Not all the projects were initiated to accomplish all of the dimensions of the Chronic Care Model, and a central role for the patient was sparse. Conclusion: Limited financial resources hampered continuation after the pilot phase. Cooperation and coordination in telemonitoring appear to be major barriers but are, within primary care as well as between the lines of care, important links in follow-up. This discrepancy can be prohibitive for the deployment of good chronic care. The Chronic Care Model is recommended as a basis for the future. PMID:25114664

  8. Fitting correlated residual error structures in nonlinear mixed-effects models using SAS PROC NLMIXED.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2014-06-01

    Nonlinear mixed-effects (NLME) models remain popular among practitioners for analyzing continuous repeated measures data taken on each of a number of individuals when interest centers on characterizing individual-specific change. Within this framework, variation and correlation among the repeated measurements may be partitioned into interindividual variation and intraindividual variation components. The covariance structure of the residuals is, in many applications, consigned to be independent with homogeneous variances, not because it is believed that intraindividual variation adheres to this structure, but because many software programs that estimate parameters of such models are not well-equipped to handle other, possibly more realistic, patterns. In this article, we describe how the programmatic environment within SAS may be utilized to model residual structures for serial correlation and variance heterogeneity. An empirical example is used to illustrate the capabilities of the module.

  9. A modified GM-estimation for robust fitting of mixture regression models

    NASA Astrophysics Data System (ADS)

    Booppasiri, Slun; Srisodaphol, Wuttichai

    2015-02-01

    In mixture regression models, the regression parameters are estimated by maximum likelihood estimation (MLE) via the EM algorithm. Generally, maximum likelihood estimation is sensitive to outliers and heavy-tailed error distributions. The robust M-estimation method can handle outliers in the dependent variable only when estimating regression coefficients in regression models. Moreover, GM-estimation can handle outliers in both the dependent and independent variables. In this study, modified GM-estimations for estimating the regression coefficients in mixture regression models are proposed. A Monte Carlo simulation is used to evaluate the efficiency of the proposed methods. The results show that the proposed modified GM-estimations approximate the MLE when there are no outliers and the error is normally distributed. Furthermore, our proposed methods are more efficient than the MLE when there are leverage points.

  10. Fitting and interpreting continuous-time latent Markov models for panel data.

    PubMed

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

    Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. Assuming the disease process progresses accordingly, a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. PMID:23740756

  11. A re-formulation of generalized linear mixed models to fit family data in genetic association studies

    PubMed Central

    Wang, Tao; He, Peng; Ahn, Kwang Woo; Wang, Xujing; Ghosh, Soumitra; Laud, Purushottam

    2015-01-01

    The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to specify using standard statistical software. In this study, we propose a Cholesky decomposition-based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently via “proc nlmixed” and “proc glimmix” in SAS, or in OpenBUGS via the R package BRugs. The performance of these procedures in fitting the re-formulated GLMM is examined through simulation studies. We also apply this re-formulated GLMM to analyze a real data set from the Type 1 Diabetes Genetics Consortium (T1DGC). PMID:25873936
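
    A minimal Python sketch of the Cholesky idea: correlated family-level random effects with covariance proportional to a kinship-based relationship matrix K can be generated (or, in model fitting, re-parameterized) as a Cholesky factor of K applied to independent standard-normal effects. The nuclear-family relationship matrix and the genetic standard deviation below are illustrative; the paper's actual implementation targets SAS and OpenBUGS.

        import numpy as np

        # Hypothetical relationship matrix for one nuclear family
        # (two unrelated parents, two full siblings).
        K = np.array([[1.0, 0.0, 0.5, 0.5],
                      [0.0, 1.0, 0.5, 0.5],
                      [0.5, 0.5, 1.0, 0.5],
                      [0.5, 0.5, 0.5, 1.0]])

        L = np.linalg.cholesky(K)                        # K = L @ L.T
        sigma_g = 0.8                                    # assumed genetic SD
        u = np.random.default_rng(1).standard_normal(4)  # iid N(0, 1) effects
        b = sigma_g * L @ u                              # cov(b) = sigma_g**2 * K
        print(b)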

  12. Testing Goodness-of-Fit for the Proportional Hazards Model based on Nested Case-Control Data

    PubMed Central

    Lu, Wenbin; Liu, Mengling; Chen, Yi-Hau

    2014-01-01

    Nested case-control sampling is a popular design for large epidemiological cohort studies due to its cost effectiveness. A number of methods have been developed for the estimation of the proportional hazards model with nested case-control data; however, the evaluation of modeling assumptions has received less attention. In this paper, we propose a class of goodness-of-fit test statistics for testing the proportional hazards assumption based on nested case-control data. The test statistics are constructed based on asymptotically mean-zero processes derived from Samuelsen’s maximum pseudo-likelihood estimation method. In addition, we develop an innovative resampling scheme to approximate the asymptotic distribution of the test statistics while accounting for the dependent sampling scheme of the nested case-control design. Numerical studies are conducted to evaluate the performance of our proposed approach, and an application to the Wilms’ Tumor Study is given to illustrate the methodology. PMID:25298193

  13. Modeling and estimation of replication fitness of human immunodeficiency virus type 1 in vitro experiments by using a growth competition assay.

    PubMed

    Wu, Hulin; Huang, Yangxin; Dykes, Carrie; Liu, Dacheng; Ma, Jingming; Perelson, Alan S; Demeter, Lisa M

    2006-03-01

    Growth competition assays have been developed to quantify the relative fitnesses of human immunodeficiency virus (HIV-1) mutants. In this article we develop mathematical models to describe viral/cellular dynamic interactions in the assay experiment, from which new competitive fitness indices or parameters are defined. These indices include the log fitness ratio (LFR), the log relative fitness (LRF), and the production rate ratio (PRR). From the population genetics perspective, we clarify the confusion and correct the inconsistency in the definition of relative fitness in the literature of HIV-1 viral fitness. The LFR and LRF are easier to estimate from the experimental data than the PRR, which was misleadingly defined as the relative fitness in recent HIV-1 research literature. Calculation and estimation methods based on two data points and multiple data points were proposed and were carefully studied. In particular, we suggest using both standard linear regression (method of least squares) and a measurement error model approach for more-accurate estimates of competitive fitness parameters from multiple data points. The developed methodologies are generally applicable to any growth competition assays. A user-friendly computational tool also has been developed and is publicly available on the World Wide Web at http://www.urmc.rochester.edu/bstools/vfitness/virusfitness.htm.
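
    One common way to quantify relative fitness in a growth competition assay is the slope of the log ratio of the two variants' frequencies against time, estimated by least squares over multiple data points; the sketch below shows that calculation in Python on hypothetical frequencies. Whether this slope corresponds exactly to the authors' log fitness ratio is an assumption of the illustration.

        import numpy as np

        # Hypothetical mutant frequencies over time (days) in a competition culture.
        t = np.array([0.0, 2.0, 4.0, 6.0])
        mutant = np.array([0.50, 0.42, 0.33, 0.26])
        wild_type = 1.0 - mutant

        log_ratio = np.log(mutant / wild_type)
        slope, intercept = np.polyfit(t, log_ratio, 1)  # least-squares line
        print("log fitness ratio per day (sketch):", slope)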

  14. Modeling and Estimation of Replication Fitness of Human Immunodeficiency Virus Type 1 In Vitro Experiments by Using a Growth Competition Assay

    PubMed Central

    Wu, Hulin; Huang, Yangxin; Dykes, Carrie; Liu, Dacheng; Ma, Jingming; Perelson, Alan S.; Demeter, Lisa M.

    2006-01-01

    Growth competition assays have been developed to quantify the relative fitnesses of human immunodeficiency virus (HIV-1) mutants. In this article we develop mathematical models to describe viral/cellular dynamic interactions in the assay experiment, from which new competitive fitness indices or parameters are defined. These indices include the log fitness ratio (LFR), the log relative fitness (LRF), and the production rate ratio (PRR). From the population genetics perspective, we clarify the confusion and correct the inconsistency in the definition of relative fitness in the literature of HIV-1 viral fitness. The LFR and LRF are easier to estimate from the experimental data than the PRR, which was misleadingly defined as the relative fitness in recent HIV-1 research literature. Calculation and estimation methods based on two data points and multiple data points were proposed and were carefully studied. In particular, we suggest using both standard linear regression (method of least squares) and a measurement error model approach for more-accurate estimates of competitive fitness parameters from multiple data points. The developed methodologies are generally applicable to any growth competition assays. A user-friendly computational tool also has been developed and is publicly available on the World Wide Web at http://www.urmc.rochester.edu/bstools/vfitness/virusfitness.htm. PMID:16474144

  15. ON THE ROBUSTNESS OF z = 0-1 GALAXY SIZE MEASUREMENTS THROUGH MODEL AND NON-PARAMETRIC FITS

    SciTech Connect

    Mosleh, Moein; Franx, Marijn; Williams, Rik J.

    2013-11-10

    We present the size-stellar mass relations of nearby (z = 0.01-0.02) Sloan Digital Sky Survey galaxies, for samples selected by color, morphology, Sérsic index n, and specific star formation rate. Several commonly employed size measurement techniques are used, including single Sérsic fits, two-component Sérsic models, and a non-parametric method. Through simple simulations, we show that the non-parametric and two-component Sérsic methods provide the most robust effective radius measurements, while those based on single Sérsic profiles are often overestimates, especially for massive red/early-type galaxies. Using our robust sizes, we show for all sub-samples that the mass-size relations are shallow at low stellar masses and steepen above ∼3-4 × 10{sup 10} M{sub ☉}. The mass-size relations for galaxies classified as late-type, low-n, and star-forming are consistent with each other, while blue galaxies follow a somewhat steeper relation. The mass-size relations of early-type, high-n, red, and quiescent galaxies all agree with each other but are somewhat steeper at the high-mass end than previous results. To test potential systematics at high redshift, we artificially redshifted our sample (including surface brightness dimming and degraded resolution) to z = 1 and re-fit the galaxies using single Sérsic profiles. The sizes of these galaxies before and after redshifting are consistent and we conclude that systematic effects in sizes and the size-mass relation at z ∼ 1 are negligible. Interestingly, since the poorer physical resolution at high redshift washes out bright galaxy substructures, single Sérsic fitting appears to provide more reliable and unbiased effective radius measurements at high z than for nearby, well-resolved galaxies.
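
    For reference, a single Sérsic profile can be fitted to a one-dimensional surface-brightness profile with a few lines of Python; the sketch below uses the common b_n ~ 2n - 1/3 approximation and synthetic noisy data, and ignores PSF convolution and two-dimensional fitting, both of which the actual measurements require.

        import numpy as np
        from scipy.optimize import curve_fit

        def sersic(r, I_e, r_e, n):
            # Single Sersic profile; b_n ~ 2n - 1/3 is a standard approximation.
            b_n = 2.0 * n - 1.0 / 3.0
            return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

        # Hypothetical noisy 1-D profile.
        r = np.linspace(0.5, 30.0, 60)
        rng = np.random.default_rng(0)
        I_obs = sersic(r, 10.0, 5.0, 2.5) * (1.0 + 0.05 * rng.standard_normal(r.size))

        popt, pcov = curve_fit(sersic, r, I_obs, p0=[5.0, 3.0, 1.5],
                               bounds=([0.0, 0.1, 0.3], [np.inf, np.inf, 10.0]))
        print("fitted (I_e, r_e, n):", popt)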

  16. Fitting ecological process models to spatial patterns using scalewise variances and moment equations.

    PubMed

    Detto, Matteo; Muller-Landau, Helene C

    2013-04-01

    Ecological spatial patterns are structured by a multiplicity of processes acting over a wide range of scales. We propose a new method, based on the scalewise variance--that is, the variance as a function of spatial scale, calculated here with wavelet kernel functions--to disentangle the signature of processes that act at different and similar scales on observed spatial patterns. We derive exact and approximate analytical solutions for the expected scalewise variance under different individual-based, spatially explicit models for sessile organisms (e.g., plants), using moment equations. We further determine the probability distribution of independently observed scalewise variances for a given expectation, including complete spatial randomness. Thus, we provide a new analytical test of the null model of spatial randomness to understand at which scales, if any, the variance departs significantly from randomness. We also derive the likelihood function that is needed to estimate parameters of spatial models and their uncertainties from observed patterns. The methods are demonstrated through numerical examples and case studies of four tropical tree species on Barro Colorado Island, Panama. The methods developed here constitute powerful new tools for investigating effects of ecological processes on spatial point patterns and for statistical inference of process models from spatial patterns. PMID:23535623
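
    A minimal one-dimensional analogue of the scalewise variance is sketched below: non-decimated Haar differences of quadrat counts at dyadic scales, with the mean squared detail coefficient at each scale. The authors' method uses specific wavelet kernels on two-dimensional point patterns and analytical expectations from moment equations; the Haar kernel, the 1-D setting, and the Poisson counts here are simplifying assumptions.

        import numpy as np

        def haar_scalewise_variance(counts, max_level=None):
            # Mean squared non-decimated Haar detail coefficient at dyadic scales.
            counts = np.asarray(counts, dtype=float)
            max_level = max_level or int(np.floor(np.log2(counts.size))) - 1
            variances = {}
            for j in range(1, max_level + 1):
                half = 2 ** (j - 1)
                kernel = np.concatenate([np.full(half, 1.0),
                                         np.full(half, -1.0)]) / (2.0 * half)
                detail = np.convolve(counts, kernel, mode="valid")
                variances[2 ** j] = np.mean(detail ** 2)
            return variances

        counts = np.random.default_rng(0).poisson(3.0, size=256)  # CSR-like counts
        print(haar_scalewise_variance(counts))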

  17. Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?

    ERIC Educational Resources Information Center

    Ghassib, Hisham B.

    2010-01-01

    The basic premise of this paper is the fact that science has become a major industry: the knowledge industry. The paper throws some light on the reasons for the transformation of science from a limited, constrained and marginal craft into a major industry. It, then, presents a productivist industrial model of knowledge production, which shows its…

  18. Are Earth System model software engineering practices fit for purpose? A case study.

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.; Johns, T. C.

    2009-04-01

    We present some analysis and conclusions from a case study of the culture and practices of scientists at the Met Office and Hadley Centre working on the development of software for climate and Earth System models using the MetUM infrastructure. The study examined how scientists think about software correctness, prioritize their requirements in making changes, and develop a shared understanding of the resulting models. We conclude that highly customized techniques driven strongly by scientific research goals have evolved for verification and validation of such models. In a formal software engineering context these represent costly, but invaluable, software integration tests with considerable benefits. The software engineering practices seen also exhibit recognisable features of both agile and open source software development projects - self-organisation of teams consistent with a meritocracy rather than top-down organisation, extensive use of informal communication channels, and software developers who are generally also users and science domain experts. We draw some general conclusions on whether these practices work well, and what new software engineering challenges may lie ahead as Earth System models become ever more complex and petascale computing becomes the norm.

  19. Combining IRT and SEM: A Hybrid Model for Fitting Responses and Response Certainties

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Anguiano-Carrasco, Cristina; Demestre, Josep

    2013-01-01

    This article proposes a model-based procedure, intended for personality measures, for exploiting the auxiliary information provided by the certainty with which individuals answer every item (response certainty). This information is used to (a) obtain more accurate estimates of individual trait levels, and (b) provide a more detailed assessment of…

  20. Assessing Fit of Item Response Models Using the Information Matrix Test

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jorg-Tobias

    2012-01-01

    The information matrix can equivalently be determined via the expectation of the Hessian matrix or the expectation of the outer product of the score vector. The identity of these two matrices, however, is only valid in case of a correctly specified model. Therefore, differences between the two versions of the observed information matrix indicate…
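
    The sketch below shows, for an ordinary logistic regression, the two ingredients the information matrix test compares: the negative Hessian and the outer product of per-observation scores, which should agree in expectation under correct specification. For brevity it evaluates both at the data-generating parameter rather than at a freshly computed MLE, and it is a generic illustration rather than the item-response-model version studied in the article.

        import numpy as np

        def scores_and_hessian(beta, X, y):
            # Per-observation score contributions and Hessian for logistic regression.
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            scores = (y - p)[:, None] * X
            hessian = -(X * (p * (1.0 - p))[:, None]).T @ X
            return scores, hessian

        rng = np.random.default_rng(2)
        X = np.column_stack([np.ones(500), rng.standard_normal(500)])
        beta = np.array([-0.3, 0.8])                    # data-generating value
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

        scores, H = scores_and_hessian(beta, X, y)
        print(-H)                 # Hessian-based information
        print(scores.T @ scores)  # outer-product-of-gradients information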

  1. Fitting the Mixed Rasch Model to a Reading Comprehension Test: Identifying Reader Types

    ERIC Educational Resources Information Center

    Baghaei, Purya; Carstensen, Claus H.

    2013-01-01

    Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…

  2. A Structural Model-Based Optimal Person-Fit Procedure for Identifying Faking

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Anguiano-Carrasco, Cristina

    2013-01-01

    This article proposes a two-stage procedure aimed at identifying faking in personality tests. The procedure, which can be considered as an extension and refinement of previous item response theory (IRT)-based proposals, combines the information provided by a structural equation model (SEM) in the first stage with that provided by an IRT-based…

  3. Understanding the Listening Process: Rethinking the "One Size Fits All" Model

    ERIC Educational Resources Information Center

    Wolvin, Andrew

    2013-01-01

    Robert Bostrom's seminal contributions to listening theory and research represent an impressive legacy and provide listening scholars with important perspectives on the complexities of listening cognition and behavior. Bostrom's work provides a solid foundation on which to build models that more realistically explain how listeners function…

  4. A Rat Model System to Study Complex Disease Risks, Fitness, Aging, and Longevity

    PubMed Central

    Koch, Lauren Gerard; Britton, Steven L.; Wisløff, Ulrik

    2012-01-01

    The association between low exercise capacity and all-cause morbidity and mortality is statistically strong yet mechanistically unresolved. By connecting clinical observation with a theoretical base, we developed a working hypothesis that variation in capacity for oxygen metabolism is the central mechanistic determinant between disease and health (aerobic hypothesis). As an unbiased test, we show that two-way artificial selective breeding of rats for low and high intrinsic endurance exercise capacity also produces rats that differ for numerous disease risks including the metabolic syndrome, cardiovascular complications, premature aging, and reduced longevity. This contrasting animal model system may prove to be translationally superior, relative to more widely-used simplistic models for understanding geriatric biology and medicine. PMID:22867966

  5. Modelled Group Fitted XAFS Debye-Waller factors for Zn metalloproteins

    NASA Astrophysics Data System (ADS)

    Dimakis, Nicholas; Bunker, Grant

    2003-03-01

    X-ray Absorption Fine Structure (XAFS) spectroscopy is one of the few direct methods for determining the structure of metalloprotein active sites that are applicable to noncrystalline proteins in solutions and membranes. Considerable progress has been made in calculating the photoelectron scattering aspects of XAFS, but calculation of the vibrational aspects has lagged because of the difficulty of accurate calculations. We previously presented initial results that enabled practical numerical evaluation of XAFS multiple-scattering Debye-Waller factors (MSDWFs) of Zn ions bound to histidines in metalloproteins. More recently we have refined our Zn-histidine model to provide more accurate first-shell single-scattering Debye-Waller parameters, and we have developed a Zn-cysteine model that describes the MSDWFs, enabling for the first time quantitative full single- and multiple-scattering XAFS data analysis of Zn/His/Cys sites at arbitrary temperatures, without the use of ad hoc assumptions. This opens up a wide class of important Zn proteins for study by these methods. Illustrative examples will be presented.

  6. Fitting Galaxies on GPUs

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2011-07-01

    Structural parameters are normally extracted from observed galaxies by fitting analytic light profiles to the observations. Obtaining accurate fits to high-resolution images is a computationally expensive task, requiring many model evaluations and convolutions with the imaging point spread function. While these algorithms contain high degrees of parallelism, current implementations do not exploit this property. With ever-growing volumes of observational data, an inability to make use of advances in computing power can act as a constraint on scientific outcomes. This is the motivation behind our work, which aims to implement the model-fitting procedure on a graphics processing unit (GPU). We begin by analysing the algorithms involved in model evaluation with respect to their suitability for modern many-core computing architectures like GPUs, finding them to be well-placed to take advantage of the high memory bandwidth offered by this hardware. Following our analysis, we briefly describe a preliminary implementation of the model fitting procedure using freely-available GPU libraries. Early results suggest a speed-up of around 10× over a CPU implementation. We discuss the opportunities such a speed-up could provide, including the ability to use more computationally expensive but better-performing fitting routines to increase the quality and robustness of fits.

  7. Assessment of a fictitious domain method for patient-specific biomechanical modelling of press-fit orthopaedic implantation.

    PubMed

    Kallivokas, L F; Na, S-W; Ghattas, O; Jaramaz, B

    2012-01-01

    In this article, we discuss an application of a fictitious domain method to the numerical simulation of the mechanical process induced by press-fitting cementless femoral implants in total hip replacement surgeries. Here, the primary goal is to demonstrate the feasibility of the method and its advantages over competing numerical methods for a wide range of applications for which the primary input originates from computed tomography-, magnetic resonance imaging- or other regular-grid medical imaging data. For this class of problems, the fictitious domain method is a natural choice, because it avoids the segmentation, surface reconstruction and meshing phases required by unstructured geometry-conforming simulation methods. We consider the implantation of a press-fit femoral artificial prosthesis as a prototype problem for sketching the application path of the methodology. Of concern is the assessment of the robustness and speed of the methodology, for both factors are critical if one were to consider patient-specific modelling. To this end, we report numerical results that exhibit optimal convergence rates and thus shed a favourable light on the approach. PMID:21424950

  8. The Candida albicans Pho4 Transcription Factor Mediates Susceptibility to Stress and Influences Fitness in a Mouse Commensalism Model

    PubMed Central

    Urrialde, Verónica; Prieto, Daniel; Pla, Jesús; Alonso-Monge, Rebeca

    2016-01-01

    The Pho4 transcription factor is required for growth under low environmental phosphate concentrations in Saccharomyces cerevisiae. A characterization of Candida albicans pho4 mutants revealed that these cells are more susceptible to both osmotic and oxidative stress and that this effect is diminished in the presence of 5% CO2 or anaerobiosis, reflecting the relevance of oxygen metabolism in the Pho4-mediated response. A pho4 mutant was as virulent as the wild type strain when assayed in the Galleria mellonella infection model and was even more resistant to murine macrophages in ex vivo killing assays. The lack of Pho4 neither impairs the ability to colonize the murine gut nor alters the localization in the gastrointestinal tract. However, we found that Pho4 influenced the colonization of C. albicans in the mouse gut in competition assays; pho4 mutants were unable to attain high colonization levels when inoculated simultaneously with an isogenic wild type strain. Moreover, pho4 mutants displayed a reduced adherence to the intestinal mucosa in competitive ex vivo assays with wild type cells. In vitro competitive assays also revealed defects in fitness for this mutant compared to the wild type strain. Thus, Pho4, a transcription factor involved in phosphate metabolism, is required for adaptation to stress and fitness in C. albicans. PMID:27458452

  9. The Candida albicans Pho4 Transcription Factor Mediates Susceptibility to Stress and Influences Fitness in a Mouse Commensalism Model.

    PubMed

    Urrialde, Verónica; Prieto, Daniel; Pla, Jesús; Alonso-Monge, Rebeca

    2016-01-01

    The Pho4 transcription factor is required for growth under low environmental phosphate concentrations in Saccharomyces cerevisiae. A characterization of Candida albicans pho4 mutants revealed that these cells are more susceptible to both osmotic and oxidative stress and that this effect is diminished in the presence of 5% CO2 or anaerobiosis, reflecting the relevance of oxygen metabolism in the Pho4-mediated response. A pho4 mutant was as virulent as the wild type strain when assayed in the Galleria mellonella infection model and was even more resistant to murine macrophages in ex vivo killing assays. The lack of Pho4 neither impairs the ability to colonize the murine gut nor alters the localization in the gastrointestinal tract. However, we found that Pho4 influenced the colonization of C. albicans in the mouse gut in competition assays; pho4 mutants were unable to attain high colonization levels when inoculated simultaneously with an isogenic wild type strain. Moreover, pho4 mutants displayed a reduced adherence to the intestinal mucosa in competitive ex vivo assays with wild type cells. In vitro competitive assays also revealed defects in fitness for this mutant compared to the wild type strain. Thus, Pho4, a transcription factor involved in phosphate metabolism, is required for adaptation to stress and fitness in C. albicans.

  10. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    SciTech Connect

    Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
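
    The principal-component idea for calibration uncertainty can be sketched in a few lines of Python: given an ensemble of plausible effective-area curves, an SVD of the mean-subtracted ensemble yields components whose random combinations generate new plausible curves. The synthetic ensemble, the number of retained components, and the coefficient scaling below are assumptions for illustration only.

        import numpy as np

        # Synthetic ensemble of effective-area curves (rows) on an energy grid.
        rng = np.random.default_rng(3)
        energy = np.linspace(0.3, 8.0, 200)
        nominal = 500.0 * np.exp(-0.5 * ((energy - 1.5) / 1.2) ** 2)
        ensemble = nominal + 10.0 * np.exp(-energy / 4.0) * rng.standard_normal((100, energy.size))

        mean_curve = ensemble.mean(axis=0)
        U, s, Vt = np.linalg.svd(ensemble - mean_curve, full_matrices=False)
        k = 5  # retain a handful of principal components

        def draw_effective_area(rng):
            # Mean curve plus a random combination of the leading components.
            coeff = rng.standard_normal(k) * s[:k] / np.sqrt(ensemble.shape[0])
            return mean_curve + coeff @ Vt[:k]

        sample_curve = draw_effective_area(rng)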

  11. A Fully Bayesian Method for Jointly Fitting Instrumental Calibration and X-Ray Spectral Models

    NASA Astrophysics Data System (ADS)

    Xu, Jin; van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Connors, Alanna; Drake, Jeremy; Meng, Xiao-Li; Ratzlaff, Pete; Yu, Yaming

    2014-10-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is "pragmatic" in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  12. Experimentally fitted biodynamic models for pedestrian-structure interaction in walking situations

    NASA Astrophysics Data System (ADS)

    Toso, Marcelo André; Gomes, Herbert Martins; da Silva, Felipe Tavares; Pimentel, Roberto Leal

    2016-05-01

    The interaction between moving humans and structures usually occurs in slender structures in which the level of vibration is potentially high. Furthermore, there is the addition of mass to the structural system due to the presence of people and an increase in damping due to the human body's ability to absorb vibrational energy. In this paper, a test campaign is presented to obtain parameters for a single degree of freedom (SDOF) biodynamic model that represents the action of a walking pedestrian in the vertical direction. The parameters of this model are the mass (m), damping (c) and stiffness (k). The measurements were performed on a force platform, and the inputs were the spectral acceleration amplitudes of the first three harmonics at the waist level of the test subjects and the corresponding amplitudes of the first three harmonics of the vertical ground reaction force. This leads to a system of nonlinear equations that is solved using a gradient-based optimization algorithm. A set of individuals took part in the tests to ensure inter-subject variability, and regression expressions and an artificial neural network (ANN) were used to relate the biodynamic parameters to the pacing rate and the body mass of the pedestrians. The results showed some scatter in damping and stiffness that could not be precisely correlated with the masses and pacing rates of the subjects. The use of the ANN resulted in significant improvements in the parameter expressions with a low uncertainty. Finally, the measured vertical accelerations on a prototype footbridge show the adequacy of the numerical model for the representation of the effects of walking pedestrians on a structure. The results are consistent for many crowd densities.

  13. Wind Magnetic Clouds for 2010-2012: Model Parameter Fittings, Associated Shock Waves, and Comparisons to Earlier Periods

    NASA Technical Reports Server (NTRS)

    Lepping, R. P.; Wu, C.-C.; Berdichevsky, D. B.; Szabo, A.

    2015-01-01

    We fitted the parameters of magnetic clouds (MCs) as identified in the Wind spacecraft data from early 2010 to the end of 2012 using the model of Lepping, Jones, and Burlaga (J. Geophys. Res. 95, 1195, 1990). The interval contains 48 MCs and 39 magnetic cloud-like (MCL) events. This work is a continuation of MC model fittings of the earlier Wind sets, including those in a recent publication, which covers 2007 to 2009. This period (2010 - 2012) mainly covers the maximum portion of Solar Cycle 24. Between the previous and current interval, we document 5.7 years of MCs observations. For this interval, the occurrence frequency of MCs markedly increased in the last third of the time. In addition, over approximately the last six years, the MC type (i.e. the profile of the magnetic-field direction within an MC, such as North-to-South, South-to-North, all South) dramatically evolved to mainly North-to-South types when compared to earlier years. Furthermore, this evolution of MC type is consistent with global solar magnetic-field changes predicted by Bothmer and Rust (Coronal Mass Ejections, 139, 1997). Model fit parameters for the MCs are listed for 2010 - 2012. For the 5.7 year interval, the observed MCs are found to be slower, weaker in estimated axial magnetic-field intensity, and shorter in duration than those of the earlier 12.3 years, yielding much lower axial magnetic-field fluxes. For about the first half of this 5.7 year period, i.e. up to the end of 2009, there were very few associated MC-driven shock waves (distinctly fewer than the long-term average of about 50 % of MCs). But since 2010, such driven shocks have increased markedly, reflecting similar statistics as the long-term averages. We estimate that 56 % of the total observed MCs have upstream shocks when the full interval of 1995 - 2012 is considered. However, only 28 % of the total number of MCLs have driven shocks over the same period. Some interplanetary shocks during the 2010 - 2012 interval are seen

  14. Wind Magnetic Clouds for 2010 - 2012: Model Parameter Fittings, Associated Shock Waves, and Comparisons to Earlier Periods

    NASA Astrophysics Data System (ADS)

    Lepping, R. P.; Wu, C.-C.; Berdichevsky, D. B.; Szabo, A.

    2015-08-01

    We fitted the parameters of magnetic clouds (MCs) as identified in the Wind spacecraft data from early 2010 to the end of 2012 using the model of Lepping, Jones, and Burlaga ( J. Geophys. Res. 95, 1195, 1990). The interval contains 48 MCs and 39 magnetic cloud-like (MCL) events. This work is a continuation of MC model fittings of the earlier Wind sets, including those in a recent publication, which covers 2007 to 2009. This period (2010 - 2012) mainly covers the maximum portion of Solar Cycle 24. Between the previous and current interval, we document 5.7 years of MCs observations. For this interval, the occurrence frequency of MCs markedly increased in the last third of the time. In addition, over approximately the last six years, the MC type ( i.e. the profile of the magnetic-field direction within an MC, such as North-to-South, South-to-North, all South) dramatically evolved to mainly North-to-South types when compared to earlier years. Furthermore, this evolution of MC type is consistent with global solar magnetic-field changes predicted by Bothmer and Rust ( Coronal Mass Ejections, 139, 1997). Model fit parameters for the MCs are listed for 2010 - 2012. For the 5.7 year interval, the observed MCs are found to be slower, weaker in estimated axial magnetic-field intensity, and shorter in duration than those of the earlier 12.3 years, yielding much lower axial magnetic-field fluxes. For about the first half of this 5.7 year period, i.e. up to the end of 2009, there were very few associated MC-driven shock waves (distinctly fewer than the long-term average of about 50 % of MCs). But since 2010, such driven shocks have increased markedly, reflecting similar statistics as the long-term averages. We estimate that 56 % of the total observed MCs have upstream shocks when the full interval of 1995 - 2012 is considered. However, only 28 % of the total number of MCLs have driven shocks over the same period. Some interplanetary shocks during the 2010 - 2012 interval are

  15. Putting structure into context: fitting of atomic models into electron microscopic and electron tomographic reconstructions.

    PubMed

    Volkmann, Niels

    2012-02-01

    A complete understanding of complex dynamic cellular processes such as cell migration or cell adhesion requires the integration of atomic level structural information into the larger cellular context. While direct atomic-level information at the cellular level remains inaccessible, electron microscopy, electron tomography and their associated computational image processing approaches have now matured to a point where sub-cellular structures can be imaged in three dimensions at the nanometer scale. Atomic-resolution information obtained by other means can be combined with this data to obtain three-dimensional models of large macromolecular assemblies in their cellular context. This article summarizes some recent advances in this field.

  16. M2M modelling of the Galactic disc via PRIMAL: fitting to Gaia error added data

    NASA Astrophysics Data System (ADS)

    Hunt, Jason A. S.; Kawata, Daisuke

    2014-09-01

    We have adapted our made-to-measure (M2M) algorithm PRIMAL to use mock Milky Way like data constructed from an N-body barred galaxy with a boxy bulge in a known dark matter potential. We use M0 giant stars as tracers, with the expected error of the ESA (European Space Agency) space astrometry mission Gaia. We demonstrate the process of constructing mock Gaia data from an N-body model, including the conversion of a galactocentric Cartesian coordinate N-body model into equatorial coordinates and how to add error to it for a single stellar type. We then describe the modifications made to PRIMAL to work with observational error. This paper demonstrates that PRIMAL can recover the radial profiles of the surface density, radial velocity dispersion, vertical velocity dispersion and mean rotational velocity of the target disc, along with the pattern speed of the bar, to a reasonable degree of accuracy despite the lack of accurate target data. We also construct mock data which take into account dust extinction and show that PRIMAL recovers the structure and kinematics of the disc reasonably well. In other words, the expected accuracy of the Gaia data is good enough for PRIMAL to recover these global properties of the disc, at least in a simplified condition, as used in this paper.
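
    Converting galactocentric Cartesian positions of N-body particles into equatorial coordinates, as described above, can be done with astropy; a minimal sketch is shown below. The particular position, the frame defaults (solar position and motion), and this astropy API usage are assumptions of the illustration, and the mock-catalogue construction additionally requires adding Gaia-like errors, which is omitted here.

        import astropy.units as u
        from astropy.coordinates import SkyCoord

        # Hypothetical galactocentric Cartesian position of one tracer star.
        star = SkyCoord(x=8.0 * u.kpc, y=1.0 * u.kpc, z=0.3 * u.kpc,
                        frame='galactocentric', representation_type='cartesian')
        eq = star.transform_to('icrs')
        print(eq.ra.deg, eq.dec.deg, eq.distance.to(u.kpc))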

  17. PC-based differential model fitting as a support for clinical research.

    PubMed

    De Gaetano, A; Castagneto, M; Mingrone, G; Coleman, W P; Sganga, G; Tataranni, P A; Gangeri, G; Greco, A V

    1994-02-01

    A PC-based minimisation software package written in the C language is described, which numerically solves both simple non-linear regression problems and problems expressed as systems of (unsolved) initial-value ordinary or partial differential equations. The software uses a second-order iterated Runge-Kutta algorithm to numerically approximate the solution curves. It uses a quasi-Newton algorithm to minimize either sums of squares (weighted or unweighted) or NONMEM loss functions. Inverse Hessian approximation to the parameter dispersion and Monte Carlo generation of artificial samples are offered to test the robustness of the parameter values obtained. A real test problem is described, involving the hydrolysis of plasma medium-chain triglycerides to free fatty acids and the uptake of these from plasma. Two competing models were evaluated, one involving linear terms for each transfer and one involving carrier-mediated, rate-limited hydrolysis and tissue absorption steps. The simpler linear model was found to be more robust and eventually used to describe the experimental data.
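
    A present-day Python analogue of the described workflow (numerical integration of an initial-value ODE system inside a least-squares parameter search) is sketched below for the simpler linear model: triglycerides hydrolysed to free fatty acids at rate k_h, and free fatty acids cleared at rate k_u. The model form, initial conditions, and data are hypothetical, and the original software used Runge-Kutta plus quasi-Newton minimisation rather than these SciPy routines.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def linear_model(t, y, k_h, k_u):
            # MCT hydrolysed to FFA at rate k_h; FFA cleared from plasma at rate k_u.
            mct, ffa = y
            return [-k_h * mct, k_h * mct - k_u * ffa]

        # Hypothetical plasma FFA concentrations (arbitrary units).
        t_obs = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])
        ffa_obs = np.array([0.10, 0.55, 0.80, 0.85, 0.55, 0.35])

        def residuals(params):
            k_h, k_u = params
            sol = solve_ivp(linear_model, (0.0, 60.0), [1.0, 0.1],
                            t_eval=t_obs, args=(k_h, k_u))
            return sol.y[1] - ffa_obs

        fit = least_squares(residuals, x0=[0.1, 0.05], bounds=(0.0, np.inf))
        print("estimated (k_h, k_u):", fit.x)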

  18. The conceptual basis of mathematics in cardiology IV: statistics and model fitting.

    PubMed

    Bates, Jason H T; Sobel, Burton E

    2003-06-01

    This is the fourth in a series of four articles developed for the readers of Coronary Artery Disease. Without language ideas cannot be articulated. What may not be so immediately obvious is that they cannot be formulated either. One of the essential languages of cardiology is mathematics. Unfortunately, medical education does not emphasize, and in fact, often neglects empowering physicians to think mathematically. Reference to statistics, conditional probability, multicompartmental modeling, algebra, calculus and transforms is common but often without provision of genuine conceptual understanding. At the University of Vermont College of Medicine, Professor Bates developed a course designed to address these deficiencies. The course covered mathematical principles pertinent to clinical cardiovascular and pulmonary medicine and research. It focused on fundamental concepts to facilitate formulation and grasp of ideas. This series of four articles was developed to make the material available for a wider audience. The articles will be published sequentially in Coronary Artery Disease. Beginning with fundamental axioms and basic algebraic manipulations they address algebra, function and graph theory, real and complex numbers, calculus and differential equations, mathematical modeling, linear system theory and integral transforms and statistical theory. The principles and concepts they address provide the foundation needed for in-depth study of any of these topics. Perhaps of even more importance, they should empower cardiologists and cardiovascular researchers to utilize the language of mathematics in assessing the phenomena of immediate pertinence to diagnosis, pathophysiology and therapeutics. The presentations are interposed with queries (by Coronary Artery Disease abbreviated as CAD) simulating the nature of interactions that occurred during the course itself. Each article concludes with one or more examples illustrating application of the concepts covered to

  19. Analysis and fitting of an SIR model with host response to infection load for a plant disease

    PubMed Central

    Gilligan, C. A.; Gubbins, S.; Simons, S. A.

    1997-01-01

    We reformulate a model for botanical epidemics into an SIR form for susceptible (S), infected (I) and removed (R) plant organs, in order to examine different models for the effect of host responses to infection load on the production of susceptible tissue. The new formulation also allows for a decline in host susceptibility with age. The model is analysed and tested for the stem canker disease of potatoes, caused by the soil-borne fungus, Rhizoctonia solani. Using a combination of model fitting to field data and analysis of model behaviour, we show that a function for host response to the amount (load) of parasite infection is critical in the description of the temporal dynamics of susceptible and infected stems in epidemics of R. solani. Several different types of host response to infection are compared, including two that allow for stimulation of the plant to produce more susceptible tissue at low levels of disease and inhibition at higher levels. We show that when the force of infection decays with time, due to increasing resistance of the host, the equilibrium density of susceptible stems depends on the parameters and initial conditions. The models differ in sensitivity to small changes in disease transmission, with some showing marked qualitative changes leading to a flush of susceptible stems at low levels of disease transmission. We conclude that there is no evidence to reject an SIR model with a simpler linear term for the effect of infection load on the production of healthy tissue, even though biological considerations suggest greater complexity in the relationship between disease and growth. We show that reduction in initial inoculum density, and hence in the force of infection, is effective in controlling disease when the simple model applies.
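
    The sketch below integrates one simple variant of such an SIR model for stems in Python: production of susceptible tissue declines linearly with infection load, and the force of infection decays with host age. The functional forms and parameter values are illustrative assumptions, not the fitted model from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def sir_stems(t, y, beta, mu, sigma, alpha):
            # Production of susceptible stems declines linearly with infection load I;
            # the force of infection decays with host age (increasing resistance).
            S, I, R = y
            production = max(sigma - alpha * I, 0.0)
            foi = beta * np.exp(-0.05 * t) * I
            return [production - foi * S, foi * S - mu * I, mu * I]

        sol = solve_ivp(sir_stems, (0.0, 60.0), [5.0, 0.1, 0.0],
                        args=(0.08, 0.1, 0.5, 0.3), dense_output=True)
        print(sol.y[:, -1])   # final densities of S, I, R stems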

  20. Adipose Tissue - Adequate, Accessible Regenerative Material

    PubMed Central

    Kolaparthy, Lakshmi Kanth.; Sanivarapu, Sahitya; Moogla, Srinivas; Kutcham, Rupa Sruthi

    2015-01-01

    The potential use of stem cell based therapies for the repair and regeneration of various tissues offers a paradigm shift that may provide alternative therapeutic solutions for a number of diseases. The use of either embryonic stem cells (ESCs) or induced pluripotent stem cells in clinical situations is limited due to cell regulations and to technical and ethical considerations involved in genetic manipulation of human ESCs, even though these cells are highly beneficial. Mesenchymal stem cells appear to be an ideal population of stem cells, in particular adipose-derived stem cells (ASCs), which can be obtained in large numbers and easily harvested from adipose tissue. Adipose tissue is ubiquitously available and has several advantages compared to other sources: it is easily accessible in large quantities with a minimally invasive harvesting procedure, and isolation of adipose-derived mesenchymal stem cells yields a high number of stem cells, which is essential for stem cell based therapies and tissue engineering. Recently, periodontal tissue regeneration using ASCs has been examined in some animal models. This method has potential in the regeneration of functional periodontal tissues because various secreted growth factors from ASCs might not only promote the regeneration of periodontal tissues but also encourage neovascularization of the damaged tissues. This review summarizes the sources, isolation and characteristics of adipose-derived stem cells, and their potential role in periodontal regeneration is discussed. PMID:26634060

  1. Adipose Tissue - Adequate, Accessible Regenerative Material.

    PubMed

    Kolaparthy, Lakshmi Kanth; Sanivarapu, Sahitya; Moogla, Srinivas; Kutcham, Rupa Sruthi

    2015-11-01

    The potential use of stem cell based therapies for the repair and regeneration of various tissues offers a paradigm shift that may provide alternative therapeutic solutions for a number of diseases. The use of either embryonic stem cells (ESCs) or induced pluripotent stem cells in clinical situations is limited due to cell regulations and to technical and ethical considerations involved in genetic manipulation of human ESCs, even though these cells are highly beneficial. Mesenchymal stem cells appear to be an ideal population of stem cells, in particular adipose-derived stem cells (ASCs), which can be obtained in large numbers and easily harvested from adipose tissue. Adipose tissue is ubiquitously available and has several advantages compared to other sources: it is easily accessible in large quantities with a minimally invasive harvesting procedure, and isolation of adipose-derived mesenchymal stem cells yields a high number of stem cells, which is essential for stem cell based therapies and tissue engineering. Recently, periodontal tissue regeneration using ASCs has been examined in some animal models. This method has potential in the regeneration of functional periodontal tissues because various secreted growth factors from ASCs might not only promote the regeneration of periodontal tissues but also encourage neovascularization of the damaged tissues. This review summarizes the sources, isolation and characteristics of adipose-derived stem cells, and their potential role in periodontal regeneration is discussed. PMID:26634060

  2. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

    In the present work an effort has been made to simultaneously estimate plasma parameters, namely the electron density, electron temperature, ground state atom density, ground state ion density and metastable state density, from the observed visible spectra of a Penning plasma discharge (PPD) source using least squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition in the analysis. With this condition the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable state species to the CR-model code analysis, improves the electron temperature estimation in the simultaneous measurement.

  3. A MULTIVARIATE FIT LUMINOSITY FUNCTION AND WORLD MODEL FOR LONG GAMMA-RAY BURSTS

    SciTech Connect

    Shahmoradi, Amir

    2013-04-01

    It is proposed that the luminosity function, the rest-frame spectral correlations, and distributions of cosmological long-duration (Type-II) gamma-ray bursts (LGRBs) may be very well described as a multivariate log-normal distribution. This result is based on careful selection, analysis, and modeling of LGRBs' temporal and spectral variables in the largest catalog of GRBs available to date: 2130 BATSE GRBs, while taking into account the detection threshold and possible selection effects. Constraints on the joint rest-frame distribution of the isotropic peak luminosity (L{sub iso}), total isotropic emission (E{sub iso}), the time-integrated spectral peak energy (E{sub p,z}), and duration (T{sub 90,z}) of LGRBs are derived. The presented analysis provides evidence for a relatively large fraction of LGRBs that have been missed by the BATSE detector with E{sub iso} extending down to {approx}10{sup 49} erg and observed spectral peak energies (E{sub p}) as low as {approx}5 keV. LGRBs with rest-frame duration T{sub 90,z} {approx}< 1 s or observer-frame duration T{sub 90} {approx}< 2 s appear to be rare events ({approx}< 0.1% chance of occurrence). The model predicts a fairly strong but highly significant correlation ({rho} = 0.58 {+-} 0.04) between E{sub iso} and E{sub p,z} of LGRBs. Also predicted are strong correlations of L{sub iso} and E{sub iso} with T{sub 90,z} and a moderate correlation between L{sub iso} and E{sub p,z}. The strength and significance of the correlations found encourage the search for underlying mechanisms, though they undermine their capabilities as probes of dark energy's equation of state at high redshifts. The presented analysis favors, but does not necessitate, a cosmic rate for BATSE LGRBs tracing metallicity evolution consistent with a cutoff Z/Z{sub Sun} {approx} 0.2-0.5, assuming no luminosity-redshift evolution.
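
    Sampling from a multivariate log-normal model of this kind is straightforward: draw from a multivariate normal in log space and exponentiate. The means, dispersions, and the common 0.5 correlation used below are placeholders, not the fitted BATSE values, and the sketch ignores the detection-threshold truncation that the actual analysis must model.

        import numpy as np

        # Placeholder log10 means, dispersions and a common 0.5 correlation for
        # (L_iso, E_iso, E_p,z, T_90,z); not the fitted BATSE values.
        mu = np.array([51.5, 52.5, 2.4, 1.2])
        sigma = np.array([0.6, 0.7, 0.4, 0.45])
        corr = np.full((4, 4), 0.5)
        np.fill_diagonal(corr, 1.0)
        cov = corr * np.outer(sigma, sigma)

        rng = np.random.default_rng(5)
        log_samples = rng.multivariate_normal(mu, cov, size=10000)
        samples = 10.0 ** log_samples        # back to linear units
        print(np.corrcoef(log_samples[:, 1], log_samples[:, 2])[0, 1])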

  4. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models.

    PubMed

    Lee, Tsair-Fwu; Lin, Wei-Chun; Wang, Hung-Yu; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Ting, Hui-Min; Chao, Pei-Ju

    2015-01-01

    To develop the logistic and the probit models to analyse electromyographic (EMG) equivalent uniform voltage- (EUV-) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, surface EMG (sEMG) signal is obtained by an innovative device with electrodes over forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3-169.7 mV), γ 50 = 0.84 (CI: 0.78-0.90) and TV50 = 155.6 mV (CI: 138.9-172.4 mV), m = 0.54 (CI: 0.49-0.59) for logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50% and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281
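
    The two diseased-probability curves can be written down compactly if one assumes the usual TD50/γ50 logistic form and the Lyman-type probit form with slope parameter m; a Python sketch using the fitted point estimates quoted above is given below. That these are exactly the parameterizations used by the authors is an assumption.

        import numpy as np
        from scipy.stats import norm

        def dp_logistic(euv, tv50=153.0, gamma50=0.84):
            # Logistic dose-response form parameterized by TV50 and gamma50.
            return 1.0 / (1.0 + (tv50 / euv) ** (4.0 * gamma50))

        def dp_probit(euv, tv50=155.6, m=0.54):
            # Lyman-type probit form with slope parameter m.
            return norm.cdf((euv - tv50) / (m * tv50))

        for v in (120.0, 153.0, 200.0):
            print(v, dp_logistic(v), dp_probit(v))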

  5. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation. PMID:27391255
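
    A rough Python sketch of the first stage of such two-stage approaches: estimate each trajectory's value, first and second derivatives by fitting a low-order polynomial in a sliding window (in the spirit of GLLA/GOLD, though not their exact embedding-matrix formulation). Window length, polynomial order, and the test signal are illustrative assumptions; the second stage would regress the estimated derivatives on the states to recover ODE parameters.

        import numpy as np

        def local_poly_derivatives(t, x, window=7, order=2):
            # Fit a low-order polynomial in a sliding window and read off the value,
            # first and second derivative at the window centre.
            half = window // 2
            est = np.full((len(x), 3), np.nan)
            for i in range(half, len(x) - half):
                ts = t[i - half:i + half + 1] - t[i]
                xs = x[i - half:i + half + 1]
                poly = np.poly1d(np.polyfit(ts, xs, order))
                est[i] = [poly(0.0), poly.deriv(1)(0.0), poly.deriv(2)(0.0)]
            return est

        t = np.linspace(0.0, 10.0, 201)
        x = np.sin(t) + 0.02 * np.random.default_rng(4).standard_normal(t.size)
        derivs = local_poly_derivatives(t, x)   # columns: x, dx/dt, d2x/dt2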

  6. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    PubMed Central

    Lee, Tsair-Fwu; Lin, Wei-Chun; Wang, Hung-Yu; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Ting, Hui-Min; Chao, Pei-Ju

    2015-01-01

    To develop the logistic and the probit models to analyse electromyographic (EMG) equivalent uniform voltage- (EUV-) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, surface EMG (sEMG) signal is obtained by an innovative device with electrodes over forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ50 = 0.84 (CI: 0.78–0.90) and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50% and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281

  7. Differential interferometry of QSO broad-line regions - I. Improving the reverberation mapping model fits and black hole mass estimates

    NASA Astrophysics Data System (ADS)

    Rakshit, Suvendu; Petrov, Romain G.; Meilland, Anthony; Hönig, Sebastian F.

    2015-03-01

    Reverberation mapping (RM) estimates the size and kinematics of broad-line regions (BLR) in quasars and type I AGNs. It yields size-luminosity relation to make QSOs standard cosmological candles, and mass-luminosity relation to study the evolution of black holes and galaxies. The accuracy of these relations is limited by the unknown geometry of the BLR clouds distribution and velocities. We analyse the independent BLR structure constraints given by super-resolving differential interferometry. We developed a three-dimensional BLR model to compute all differential interferometry and RM signals. We extrapolate realistic noises from our successful observations of the QSO 3C 273 with AMBER on the VLTI. These signals and noises quantify the differential interferometry capacity to discriminate and measure BLR parameters including angular size, thickness, spatial distribution of clouds, local-to-global and radial-to-rotation velocity ratios, and finally central black hole mass and BLR distance. A Markov Chain Monte Carlo model-fit, of data simulated for various VLTI instruments, gives mass accuracies between 0.06 and 0.13 dex, to be compared to 0.44 dex for RM mass-luminosity fits. We evaluate the number of QSOs accessible to observe with current (AMBER), upcoming (GRAVITY) and possible (OASIS with new generation fringe trackers) VLTI instruments. With available technology, the VLTI could resolve more than 60 BLRs, with a luminosity range larger than four decades, sufficient for a good calibration of RM mass-luminosity laws, from an analysis of the variation of BLR parameters with luminosity.

  8. The effects of coping on adjustment: Re-examining the goodness of fit model of coping effectiveness.

    PubMed

    Masel, C N; Terry, D J; Gribble, M

    1996-01-01

    The primary aim of the present study was to examine the extent to which the effects of coping on adjustment are moderated by levels of event controllability. Specifically, the research tested two revisions to the goodness of fit model of coping effectiveness. First, it was hypothesized that the effects of problem management coping (but not problem appraisal coping) would be moderated by levels of event controllability. Second, it was hypothesized that the effects of emotion-focused coping would be moderated by event controllability, but only in the acute phase of a stressful encounter. To test these predictions, a longitudinal study was undertaken (185 undergraduate students participated in all three stages of the research). Measures of initial adjustment (low depression and coping efficacy) were obtained at Time 1. Four weeks later (Time 2), coping responses to a current or a recent stressor were assessed. Based on subjects' descriptions of the event, objective and subjective measures of event controllability were also obtained. Measures of concurrent and subsequent adjustment were obtained at Times 2 and 3 (two weeks later), respectively. There was only weak support for the goodness of fit model of coping effectiveness. The beneficial effects of a high proportion of problem management coping (relative to total coping efforts) on Time 3 perceptions of coping efficacy were more evident in high control than in low control situations. Other results of the research revealed that, irrespective of the controllability of the event, problem appraisal coping strategies and emotion-focused strategies (escapism and self-denigration) were associated with high and low levels of concurrent adjustment, respectively. The effects of these coping responses on subsequent adjustment were mediated through concurrent levels of adjustment.

  9. Stress physiology in marine mammals: how well do they fit the terrestrial model?

    PubMed

    Atkinson, Shannon; Crocker, Daniel; Houser, Dorian; Mashburn, Kendall

    2015-07-01

    Stressors are commonly accepted as the causal factors, either internal or external, that evoke physiological responses to mediate the impact of the stressor. The majority of research on the physiological stress response, and costs incurred to an animal, has focused on terrestrial species. This review presents current knowledge on the physiology of the stress response in a lesser studied group of mammals, the marine mammals. Marine mammals are an artificial or pseudo grouping from a taxonomical perspective, as this group represents several distinct and diverse orders of mammals. However, they all are fully or semi-aquatic animals and have experienced selective pressures that have shaped their physiology in a manner that differs from terrestrial relatives. What these differences are and how they relate to the stress response is an efflorescent topic of study. The identification of the many facets of the stress response is critical to marine mammal management and conservation efforts. Anthropogenic stressors in marine ecosystems, including ocean noise, pollution, and fisheries interactions, are increasing and the dramatic responses of some marine mammals to these stressors have elevated concerns over the impact of human-related activities on a diverse group of animals that are difficult to monitor. This review covers the physiology of the stress response in marine mammals and places it in context of what is known from research on terrestrial mammals, particularly with respect to mediator activity that diverges from generalized terrestrial models. Challenges in conducting research on stress physiology in marine mammals are discussed and ways to overcome these challenges in the future are suggested. PMID:25913694

  10. Errors associated with three methods of assessing respirator fit.

    PubMed

    Coffey, Christopher C; Lawrence, Robert B; Zhuang, Ziqing; Duling, Matthew G; Campbell, Donald L

    2006-01-01

    Three fit test methods (Bitrex, saccharin, and TSI PortaCount Plus with the N95-Companion) were evaluated for their ability to identify wearers of respirators that do not provide adequate protection during a simulated workplace test. Thirty models of NIOSH-certified N95 half-facepiece respirators (15 filtering-facepiece models and 15 elastomeric models) were tested by a panel of 25 subjects using each of the three fit testing methods. Fit testing results were compared to 5th percentiles of simulated workplace protection factors. Alpha errors (the chance of failing a fit test in error) for all 30 respirators were 71% for the Bitrex method, 68% for the saccharin method, and 40% for the Companion method. Beta errors (the chance of passing a fit test in error) for all 30 respirator models combined were 8% for the Bitrex method, 8% for the saccharin method, and 9% for the Companion method. The three fit test methods had different error rates when assessed with filtering facepieces and when assessed with elastomeric respirators. For example, beta errors for the three fit test methods assessed with the 15 filtering facepiece respirators were ≤ 5% but ranged from 14% to 21% when assessed with the 15 elastomeric respirators. To predict what happens in a realistic fit testing program, the data were also used to estimate the alpha and beta errors for a simulated respiratory protection program in which a wearer is given up to three trials with one respirator model to pass a fit test before moving on to another model. A subject passing with any of the three methods was considered to have passed the fit test program. The alpha and beta errors for the fit testing in this simulated respiratory protection program were 29% and 19%, respectively. Thus, it is estimated, under the conditions of the simulation, that roughly one in three respirator wearers receiving the expected reduction in exposure (with a particular model) will fail to pass (with that particular model), and that
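
    The alpha and beta error rates quoted above are simple cross-tabulations of fit-test outcomes against whether a wearer/respirator combination actually provided adequate protection in the simulated workplace test. The sketch below uses invented pass probabilities and a single adequacy flag per donning; it only illustrates how the two error rates are defined and is not intended to reproduce the study's figures.

```python
# Illustrative alpha/beta error computation for a fit-test method.
# 'adequate' = the respirator actually protected the wearer (simulated
# workplace criterion); 'passed' = the wearer passed the fit test.
# All probabilities are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 750                                  # e.g., 30 models x 25 subjects
adequate = rng.random(n) < 0.7           # hypothetical rate of adequate protection
passed = np.where(adequate,
                  rng.random(n) < 0.6,   # pass probability when protection is adequate
                  rng.random(n) < 0.1)   # pass probability when it is not

alpha = np.mean(~passed[adequate])       # chance of failing a fit test in error
beta = np.mean(passed[~adequate])        # chance of passing a fit test in error
print(f"alpha error: {alpha:.0%}, beta error: {beta:.0%}")
```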

  11. An Investigation of the Performance of the Generalized S-X[superscript 2] Item-Fit Index for Polytomous IRT Models. ACT Research Report Series, 2007-1

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2007-01-01

    Orlando and Thissen (2000, 2003) proposed an item-fit index, S-X[superscript 2], for dichotomous item response theory (IRT) models, which has performed better than traditional item-fit statistics such as Yen's (1981) Q[subscript 1] and McKinley and Mills' (1985) G[superscript 2]. This study extends the utility of S-X[superscript 2] to polytomous…
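
    For orientation, the dichotomous statistic being generalized compares observed and model-expected proportions correct within summed-score groups. The form below is a paraphrase of the Orlando-Thissen statistic from memory, not a quotation from the report:

```latex
% Orlando-Thissen item-fit statistic for dichotomous item i, with
% summed-score groups k = 1, ..., n-1 (paraphrased for orientation).
S\text{-}X^2_i = \sum_{k=1}^{n-1} N_k\,
  \frac{\bigl(O_{ik} - E_{ik}\bigr)^2}{E_{ik}\bigl(1 - E_{ik}\bigr)}
```

    Here O_ik and E_ik are the observed and model-predicted proportions correct in score group k, and N_k is the number of examinees in that group; broadly speaking, the polytomous generalization replaces these with category-level frequencies within each score group.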

  12. How to measure inclusive fitness.

    PubMed

    Creel, S

    1990-09-22

    Although inclusive fitness (Hamilton 1964) is regarded as the basic currency of natural selection, difficulty in applying inclusive fitness theory to field studies persists, a quarter-century after its introduction (Grafen 1982, 1984; Brown 1987). For instance, strict application of the original (and currently accepted) definition of inclusive fitness predicts that no one should ever attempt to breed among obligately cooperative breeders. Much of this confusion may have arisen because Hamilton's (1964) original verbal definition of inclusive fitness was not in complete accord with his justifying model. By re-examining Hamilton's original model, a modified verbal definition of inclusive fitness can be justified.
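
    As background to the definitional point, the condition under which Hamilton's framework predicts a social behaviour to be favoured is usually summarized by the standard rule below; the paper's argument concerns how the inclusive-fitness quantity behind this rule should be defined and measured, not the inequality itself.

```latex
% Hamilton's rule (standard textbook form): an allele for a social act
% spreads when the relatedness-weighted benefit exceeds the actor's cost.
r\,b - c > 0
```

    Here r is the genetic relatedness of actor to recipient, b the fitness benefit to the recipient, and c the fitness cost to the actor.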

  13. Fitting characteristics of eighteen N95 filtering-facepiece respirators.

    PubMed

    Coffey, Christopher C; Lawrence, Robert B; Campbell, Donald L; Zhuang, Ziqing; Calvert, Catherine A; Jensen, Paul A

    2004-04-01

    Four performance measures were used to evaluate the fitting characteristics of 18 models of N95 filtering-facepiece respirators: (1) the 5th percentile simulated workplace protection factor (SWPF) value, (2) the shift average SWPF value, (3) the h-value, and (4) the assignment error. The effect of fit-testing on the level of protection provided by the respirators was also evaluated. The respirators were tested on a panel of 25 subjects with various face sizes. Simulated workplace protection factor values, determined from six total penetration (face-seal leakage plus filter penetration) tests with re-donning between each test, were used to indicate respirator performance. Five fit-tests were used: Bitrex, saccharin, generated aerosol corrected for filter penetration, PortaCount Plus corrected for filter penetration, and the PortaCount Plus with the N95-Companion accessory. Without fit-testing, the 5th percentile SWPF for all models combined was 2.9 with individual model values ranging from 1.3 to 48.0. Passing a fit-test generally resulted in an increase in protection. In addition, the h-value of each respirator was computed. The h-value has been determined to be the population fraction of individuals who will obtain an adequate level of protection (i.e., SWPF ≥ 10, which is the expected level of protection for half-facepiece respirators) when a respirator is selected and donned (including a user seal check) in accordance with the manufacturer's instructions without fit-testing. The h-value for all models combined was 0.74 (i.e., 74% of all donnings resulted in an adequate level of protection), with individual model h-values ranging from 0.31 to 0.99. Only three models had h-values above 0.95. Higher SWPF values were achieved by excluding SWPF values determined for test subject/respirator combinations that failed a fit-test. The improvement was greatest for respirator models with lower h-values. Using the concepts of shift average and assignment error to measure
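
    Both headline quantities in this abstract, the 5th-percentile SWPF and the h-value, are simple summaries of the distribution of simulated workplace protection factors for a model. The sketch below uses invented SWPF values and the abstract's adequacy threshold of SWPF ≥ 10; the percentile estimator used in the original study may differ.

```python
# Illustrative 5th-percentile SWPF and h-value for one respirator model.
# SWPF values are invented; the study's exact estimators may differ.
import numpy as np

rng = np.random.default_rng(2)
swpf = rng.lognormal(mean=3.0, sigma=1.0, size=150)  # hypothetical SWPFs across donnings

p5 = np.percentile(swpf, 5)        # 5th-percentile SWPF
h_value = np.mean(swpf >= 10)      # fraction of donnings giving adequate protection
print(f"5th percentile SWPF: {p5:.1f}, h-value: {h_value:.2f}")
```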

  14. Global fits of the two-loop renormalized Two-Higgs-Doublet model with soft Z_2 breaking

    NASA Astrophysics Data System (ADS)

    Chowdhury, Debtosh; Eberhardt, Otto

    2015-11-01

    We determine the next-to-leading order renormalization group equations for the Two-Higgs-Doublet model with a softly broken Z_2 symmetry and CP conservation in the scalar potential. We use them to identify the parameter regions which are stable up to the Planck scale and find that in this case the quartic couplings of the Higgs potential cannot be larger than 1 in magnitude and that the absolute values of the S-matrix eigenvalues cannot exceed 2.5 at the electroweak symmetry breaking scale. Interpreting the 125 GeV resonance as the light CP-even Higgs eigenstate, we combine stability constraints, electroweak precision and flavour observables with the latest ATLAS and CMS data on Higgs signal strengths and heavy Higgs searches in global parameter fits to all four types of Z_2 symmetry. We quantify the maximal deviations from the alignment limit and find that in type II and Y the mass of the heavy CP-even (CP-odd) scalar cannot be smaller than 340 GeV (360 GeV). Also, we pinpoint the physical parameter regions compatible with a stable scalar potential up to the Planck scale. Motivated by the question of how natural a Higgs mass of 125 GeV can be in the context of a Two-Higgs-Doublet model, we also address the hierarchy problem and find that the Two-Higgs-Doublet model does not offer a perturbative solution to it beyond 5 TeV.

  15. Expanding vaccine efficacy estimation with dynamic models fitted to cross-sectional prevalence data post-licensure.

    PubMed

    Gjini, Erida; Gomes, M Gabriela M

    2016-03-01

    The efficacy of vaccines is typically estimated prior to implementation, on the basis of randomized controlled trials. This does not preclude, however, subsequent assessment post-licensure, while mass-immunization and nonlinear transmission feedbacks are in place. In this paper we show how cross-sectional prevalence data post-vaccination can be interpreted in terms of pathogen transmission processes and vaccine parameters, using a dynamic epidemiological model. We advocate the use of such frameworks for model-based vaccine evaluation in the field, fitting trajectories of cross-sectional prevalence of pathogen strains before and after intervention. Using SI and SIS models, we illustrate how prevalence ratios in vaccinated and non-vaccinated hosts depend on true vaccine efficacy, the absolute and relative strength of competition between target and non-target strains, the time post follow-up, and transmission intensity. We argue that a mechanistic approach should be added to vaccine efficacy estimation against multi-type pathogens, because it naturally accounts for inter-strain competition and indirect effects, leading to a robust measure of individual protection per contact. Our study calls for systematic attention to epidemiological feedbacks when interpreting population level impact. At a broader level, our parameter estimation procedure provides a promising proof of principle for a generalizable framework to infer vaccine efficacy post-licensure. PMID:26972516
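
    To make the modelling idea concrete, the sketch below integrates a toy SIS model with vaccinated and unvaccinated hosts in which a leaky vaccine reduces the per-contact infection rate by a factor (1 - VE), then reports the equilibrium prevalence ratio between the two groups. The parameter values and the leaky-vaccine assumption are illustrative choices, not the authors' fitted model.

```python
# Toy SIS model with a leaky vaccine: vaccinated hosts acquire infection
# at a rate reduced by (1 - VE).  Parameters are illustrative only.
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.1   # transmission and recovery rates (per day)
p, VE = 0.6, 0.5         # vaccine coverage and per-contact efficacy

def sis(t, y):
    i_u, i_v = y                           # infected fractions (unvaccinated, vaccinated)
    s_u, s_v = (1 - p) - i_u, p - i_v      # susceptible fractions
    lam = beta * (i_u + i_v)               # force of infection
    return [lam * s_u - gamma * i_u,
            (1 - VE) * lam * s_v - gamma * i_v]

sol = solve_ivp(sis, (0, 2000), [0.01, 0.01], rtol=1e-8)
i_u, i_v = sol.y[0, -1], sol.y[1, -1]
prev_u, prev_v = i_u / (1 - p), i_v / p    # within-group prevalences
print(f"prevalence ratio (vaccinated / unvaccinated): {prev_v / prev_u:.2f}")
```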

  16. A global fit of the γ-ray galactic center excess within the scalar singlet Higgs portal model

    NASA Astrophysics Data System (ADS)

    Cuoco, Alessandro; Eiteneuer, Benedikt; Heisig, Jan; Krämer, Michael

    2016-06-01

    We analyse the excess in the γ-ray emission from the center of our galaxy observed by Fermi-LAT in terms of dark matter annihilation within the scalar Higgs portal model. In particular, we include the astrophysical uncertainties from the dark matter distribution and allow for unspecified additional dark matter components. We demonstrate through a detailed numerical fit that the strength and shape of the γ-ray spectrum can indeed be described by the model in various regions of dark matter masses and couplings. Constraints from invisible Higgs decays, direct dark matter searches, indirect searches in dwarf galaxies and for γ-ray lines, and constraints from the dark matter relic density reduce the parameter space to dark matter masses near the Higgs resonance. We find two viable regions: one where the Higgs-dark matter coupling is of O(10^-2), and an additional dark matter component beyond the scalar WIMP of our model is preferred, and one region where the Higgs-dark matter coupling may be significantly smaller, but where the scalar WIMP constitutes a significant fraction or even all of dark matter. Both viable regions are hard to probe in future direct detection and collider experiments.

  17. Analysis and Modeling of Threatening Factors of Workforce’s Health in Large-Scale Workplaces: Comparison of Four-Fitting Methods to select optimum technique

    PubMed Central

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Savareh, Behrouz Alizadeh

    2016-01-01

    Introduction: The workforce is one of the pillars of development in any country, so the workforce’s health is very important, and analyzing the factors that threaten it is a fundamental step in health planning. This study was the first part of a comprehensive study aimed at comparing fitting methods for analyzing and modeling the factors threatening health in occupational injuries. Methods: In this study, 980 human occupational injuries in 10 Iranian large-scale workplaces over 10 years (2005–2014) were analyzed and modeled with four fitting methods: linear regression, regression analysis, generalized linear model, and artificial neural networks (ANN), using IBM SPSS Modeler 14.2. Results: The Accident Severity Rate (ASR) of occupational injuries was 557.47 ± 397.87, and the mean age and work experience of injured workers were 27.82 ± 5.23 and 4.39 ± 3.65 years, respectively. Analysis of health-threatening factors showed that age, quality of provided H&S training, number of workers, hazard identification (HAZID), periodic risk assessment, and periodic H&S training were important factors affecting ASR. In addition, comparison of the four fitting methods showed that ANN had the highest correlation coefficient (R = 0.968) and the lowest relative error (R.E. = 0.063) among the methods. Conclusion: Although all fitting methods were suitable and effective for analyzing the severity of occupational injuries, ANN was the best fitting method for modeling the factors threatening a workforce’s health. All fitting methods, and ANN in particular, should be given greater consideration in analyzing and modeling occupational injuries and health-threatening factors, and in planning to protect and improve the workforce’s health. PMID:27053999
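
    The comparison reported above (correlation coefficient R and relative error R.E. across fitting methods) can be reproduced in outline with standard libraries. The sketch below fits a linear regression and a small neural network to synthetic data and computes both metrics; the predictors, data, and network settings are placeholders rather than the study's SPSS Modeler setup.

```python
# Illustrative comparison of two fitting methods on synthetic data, scored
# by Pearson correlation (R) and mean relative error (R.E.).  Features,
# data, and model settings are placeholders, not the study's.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 980
X = rng.normal(size=(n, 5))   # e.g., age, experience, training quality, ...
y = (5 + 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.4 * np.tanh(X[:, 2])
     + rng.normal(scale=0.5, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "linear regression": LinearRegression(),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(16,),
                                                 max_iter=5000, random_state=0)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r = np.corrcoef(y_te, pred)[0, 1]                      # correlation coefficient
    rel_err = np.mean(np.abs(pred - y_te) / np.abs(y_te))  # relative error
    print(f"{name}: R = {r:.3f}, R.E. = {rel_err:.3f}")
```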

  18. Fit Point-Wise Ab Initio Calculation Potential Energies to a Multi-Dimension Long-Range Model

    NASA Astrophysics Data System (ADS)

    Zhai, Yu; Li, Hui; Le Roy, Robert J.

    2016-06-01

    A potential energy surface (PES) is a fundamental tool and source of understanding for theoretical spectroscopy and for dynamical simulations. Making correct assignments for high-resolution rovibrational spectra of floppy polyatomic and van der Waals molecules often relies heavily on predictions generated from a high quality ab initio potential energy surface. Moreover, having an effective analytic model to represent such surfaces can be as important as the ab initio results themselves. For the one-dimensional potentials of diatomic molecules, the most successful such model to date is arguably the "Morse/Long-Range" (MLR) function developed by R. J. Le Roy and coworkers. It is very flexible and everywhere differentiable to all orders; it incorporates the correct predicted long-range behaviour, extrapolates sensibly at both large and small distances, and two of its defining parameters are always the physically meaningful well depth D_e and equilibrium distance r_e. Extensions of this model, called the Multi-Dimension Morse/Long-Range (MD-MLR) function, have been applied successfully to atom-plus-linear molecule, linear molecule-linear molecule, and atom-non-linear molecule systems. However, there are several technical challenges faced in modelling the interactions of general molecule-molecule systems, such as the absence of radial minima for some relative alignments, difficulties in fitting short-range potential energies, and challenges in determining relative-orientation-dependent long-range coefficients. This talk will illustrate some of these challenges and describe our ongoing work in addressing them. References: Mol. Phys. 105, 663 (2007); J. Chem. Phys. 131, 204309 (2009); Mol. Phys. 109, 435 (2011); Phys. Chem. Chem. Phys. 10, 4128 (2008); J. Chem. Phys. 130, 144305 (2009); J. Chem. Phys. 132, 214309 (2010); J. Chem. Phys. 140, 214309 (2010).
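
    For readers who have not met the MLR form, one commonly published version of the one-dimensional Morse/Long-Range potential is reproduced below from memory of the cited literature; notation varies slightly between papers, so treat it as an orientation aid rather than a quotation.

```latex
% A common form of the Morse/Long-Range (MLR) potential (notation varies
% between papers; shown only as an orientation aid).
V_{\mathrm{MLR}}(r) = D_e\left[1 -
  \frac{u_{\mathrm{LR}}(r)}{u_{\mathrm{LR}}(r_e)}\,
  e^{-\beta(r)\,y_p(r)}\right]^2,
\qquad
y_p(r) = \frac{r^{\,p} - r_e^{\,p}}{r^{\,p} + r_e^{\,p}}
```

    Here u_LR(r) is the theoretically known long-range tail (a sum of inverse-power dispersion terms), and the exponent function β(r) is constrained so that β(r) → ln[2 D_e / u_LR(r_e)] as r → ∞, which makes V_MLR(r) approach D_e - u_LR(r) at long range while keeping D_e and r_e as explicit parameters.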

  19. Power and Sample Size for the Root Mean Square Error of Approximation Test of Not Close Fit in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; Freeman, Mara J.

    2001-01-01

    Provides select power and sample size tables and interpolation strategies associated with the root mean square error of approximation test of not close fit under standard assumed conditions. The goal is to inform researchers conducting structural equation modeling about power limitations when testing a model. (SLD)
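
    The power values tabulated in such reports are typically computed with the noncentral chi-square approach of MacCallum, Browne, and Sugawara (1996). The sketch below shows that calculation for the test of not-close fit (H0: RMSEA ≥ 0.05, rejected for small test statistics), with illustrative df, N, and alternative RMSEA; it is an orientation aid, not a reproduction of the report's tables.

```python
# Power of the RMSEA test of not-close fit (H0: RMSEA >= eps0), using the
# noncentral chi-square approximation.  df, N, and the alternative RMSEA
# below are illustrative values, not taken from the report.
from scipy.stats import ncx2

def power_not_close_fit(df, n, eps0=0.05, eps1=0.01, alpha=0.05):
    """Probability of rejecting H0: RMSEA >= eps0 when the true RMSEA is eps1."""
    lam0 = (n - 1) * df * eps0 ** 2        # noncentrality at the null boundary
    lam1 = (n - 1) * df * eps1 ** 2        # noncentrality under the alternative
    crit = ncx2.ppf(alpha, df, lam0)       # reject when the statistic falls below this
    return ncx2.cdf(crit, df, lam1)

print(f"power = {power_not_close_fit(df=40, n=400):.3f}")
```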