Science.gov

Sample records for fitting atomic models

  1. Note: curve fit models for atomic force microscopy cantilever calibration in water.

    PubMed

    Kennedy, Scott J; Cole, Daniel G; Clark, Robert L

    2011-11-01

    Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can vary by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%.
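
The thermal noise method referenced above fits the cantilever's measured power spectral density with a simple harmonic oscillator (SHO) response plus a white noise floor. A minimal sketch of such a fit on synthetic data (all parameter values below are illustrative, not taken from the note):

```python
import numpy as np
from scipy.optimize import curve_fit

def sho_psd(f, amp, f0, q, white):
    # Simple harmonic oscillator thermal spectrum plus a white noise floor
    return amp * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / q)**2) + white

rng = np.random.default_rng(0)
f = np.linspace(1.0, 60.0, 2000)                  # frequency, kHz
true_params = (1.0, 30.0, 3.0, 0.05)              # amp, f0 (kHz), Q, noise floor
psd = sho_psd(f, *true_params) * rng.gamma(4.0, 0.25, f.size)  # spectral noise

popt, _ = curve_fit(sho_psd, f, psd, p0=(0.5, 25.0, 2.0, 0.01))
print(round(popt[1], 1))   # recovered resonance frequency in kHz
```

The note's point is that the choice of noise model and fitting strategy shifts the recovered parameters, and hence the stiffness calibration, between air and water.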

  2. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions.

    PubMed

    Siebert, Xavier; Navaza, Jorge

    2009-07-01

    Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30-10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/.
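
The reciprocal-space correlation computation that UROX relies on can be illustrated with Parseval's theorem: the real-space correlation coefficient between a model-derived density and a reconstruction equals the same quantity computed from Fourier coefficients. A toy sketch with random arrays standing in for the actual maps:

```python
import numpy as np

rng = np.random.default_rng(1)
em_map = rng.normal(size=(16, 16, 16))                        # stand-in reconstruction
model_density = em_map + 0.5 * rng.normal(size=em_map.shape)  # noisy "model" density

def real_space_cc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

def reciprocal_space_cc(a, b):
    # Parseval's theorem: the same correlation from Fourier coefficients
    A = np.fft.fftn(a - a.mean()); B = np.fft.fftn(b - b.mean())
    num = np.sum(A * np.conj(B)).real
    return float(num / np.sqrt(np.sum(np.abs(A) ** 2) * np.sum(np.abs(B) ** 2)))

print(abs(real_space_cc(em_map, model_density)
          - reciprocal_space_cc(em_map, model_density)) < 1e-8)
```

Working with Fourier coefficients lets only the terms affected by a moved component be updated, which is what makes real-time display of the correlation feasible.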

  3. Putting structure into context: fitting of atomic models into electron microscopic and electron tomographic reconstructions.

    PubMed

    Volkmann, Niels

    2012-02-01

    A complete understanding of complex dynamic cellular processes such as cell migration or cell adhesion requires the integration of atomic level structural information into the larger cellular context. While direct atomic-level information at the cellular level remains inaccessible, electron microscopy, electron tomography and their associated computational image processing approaches have now matured to a point where sub-cellular structures can be imaged in three dimensions at the nanometer scale. Atomic-resolution information obtained by other means can be combined with this data to obtain three-dimensional models of large macromolecular assemblies in their cellular context. This article summarizes some recent advances in this field.

  4. Use of evolutionary information in the fitting of atomic level protein models in low resolution cryo-EM map of a protein assembly improves the accuracy of the fitting.

    PubMed

    Joseph, Agnel P; Swapna, Lakshmipuram S; Rakesh, Ramachandran; Srinivasan, Narayanaswamy

    2016-09-01

    Protein-protein interface residues, especially those at the core of the interface, exhibit higher conservation than residues in solvent exposed regions. Here, we explore the ability of this differential conservation to evaluate fittings of atomic models in low-resolution cryo-EM maps and select models from the ensemble of solutions that are often proposed by different model fitting techniques. As a prelude, using a non-redundant and high-resolution structural dataset involving 125 permanent and 95 transient complexes, we confirm that core interface residues are conserved significantly better than nearby non-interface residues and this result is used in the cryo-EM map analysis. From the analysis of inter-component interfaces in a set of fitted models associated with low-resolution cryo-EM maps of ribosomes, chaperones and proteasomes we note that a few poorly conserved residues occur at interfaces. Interestingly a few conserved residues are not in the interface, though they are close to the interface. These observations raise the potential requirement of refitting the models in the cryo-EM maps. We show that sampling an ensemble of models and selection of models with high residue conservation at the interface and in good agreement with the density helps in improving the accuracy of the fit. This study indicates that evolutionary information can serve as an additional input to improve and validate fitting of atomic models in cryo-EM density maps. PMID:27444391
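
The selection criterion described above, preferring candidate fits whose interface residues are well conserved, can be sketched as a toy scoring function. The conservation values and candidate interfaces below are invented for illustration; the paper's actual procedure also weighs agreement with the density:

```python
# Per-residue conservation scores (hypothetical, normalized 0-1)
conservation = {1: 0.90, 2: 0.80, 3: 0.30, 4: 0.95, 5: 0.20, 6: 0.85}

candidate_interfaces = {        # residues buried at the inter-component interface
    "fit_A": [1, 2, 4],         # mostly conserved residues: plausible interface
    "fit_B": [3, 5, 6],         # poorly conserved residues: suspect fit
}

def interface_conservation(residues):
    # Mean conservation over the residues forming the candidate interface
    return sum(conservation[r] for r in residues) / len(residues)

best = max(candidate_interfaces,
           key=lambda k: interface_conservation(candidate_interfaces[k]))
print(best)  # fit_A
```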

  6. Improving the Accuracy of Fitted Atomic Models in Cryo-EM Density Maps of Protein Assemblies Using Evolutionary Information from Aligned Homologous Proteins.

    PubMed

    Rakesh, Ramachandran; Srinivasan, Narayanaswamy

    2016-01-01

    Cryo-Electron Microscopy (cryo-EM) has become an important technique to obtain structural insights into large macromolecular assemblies. However, the resolution of the density maps does not allow for their interpretation at the atomic level. Hence, they are combined with high-resolution structures along with information from other experimental or bioinformatics techniques to obtain pseudo-atomic models. Here, we describe the use of evolutionary conservation of residues as obtained from protein structures and alignments of homologous proteins to detect errors in the fitting of atomic structures as well as to improve the accuracy of the protein-protein interfacial regions in the cryo-EM density maps.

  7. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  8. Fitting and Interpreting Occupancy Models

    PubMed Central

    Welsh, Alan H.; Lindenmayer, David B.; Donnelly, Christine F.

    2013-01-01

    We show that occupancy models are more difficult to fit than is generally appreciated because the estimating equations often have multiple solutions, including boundary estimates which produce fitted probabilities of zero or one. The estimates are unstable when the data are sparse, making them difficult to interpret, and, even in ideal situations, highly variable. As a consequence, making accurate inference is difficult. When abundance varies over sites (which is the general rule in ecology because we expect spatial variance in abundance) and detection depends on abundance, the standard analysis suffers bias (attenuation in detection, biased estimates of occupancy and potentially finding misleading relationships between occupancy and other covariates), asymmetric sampling distributions, and slow convergence of the sampling distributions to normality. The key result of this paper is that the biases are of similar magnitude to those obtained when we ignore non-detection entirely. The fact that abundance is subject to detection error and hence is not directly observable, means that we cannot tell when bias is present (or, equivalently, how large it is) and we cannot adjust for it. This implies that we cannot tell which fit is better: the fit from the occupancy model or the fit ignoring the possibility of detection error. Therefore trying to adjust occupancy models for non-detection can be as misleading as ignoring non-detection completely. Ignoring non-detection can actually be better than trying to adjust for it. PMID:23326323
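
The single-season occupancy model analyzed above contributes ψ p^d (1-p)^(T-d) to the likelihood for a site with d detections in T visits, and ψ(1-p)^T + (1-ψ) for an all-zero history (occupied-but-missed, or truly unoccupied). A minimal maximum-likelihood sketch on made-up detection histories (illustrative only; it does not reproduce the paper's analysis):

```python
import numpy as np
from scipy.optimize import minimize

# Detection histories: rows = sites, columns = repeat visits (1 = detected)
y = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0], [0, 0, 0], [1, 0, 0], [0, 0, 0]])

def nll(params):
    psi, p = params
    T = y.shape[1]
    d = y.sum(axis=1)
    hit = d > 0
    # Sites with at least one detection are certainly occupied
    ll = np.sum(np.log(psi) + d[hit] * np.log(p) + (T - d[hit]) * np.log(1 - p))
    # All-zero histories: occupied but missed, or truly unoccupied
    ll += np.sum(~hit) * np.log(psi * (1 - p) ** T + (1 - psi))
    return -ll

res = minimize(nll, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
psi_hat, p_hat = res.x
print(round(psi_hat, 2), round(p_hat, 2))
```

The all-zero term is what produces the pathologies the paper describes: with sparse data its two components trade off, and the optimizer can drift to boundary estimates of ψ or p.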

  9. BCL::EM-Fit: rigid body fitting of atomic structures into density maps using geometric hashing and real space refinement.

    PubMed

    Woetzel, Nils; Lindert, Steffen; Stewart, Phoebe L; Meiler, Jens

    2011-09-01

    Cryo-electron microscopy (cryoEM) can visualize large macromolecular assemblies at resolutions often below 10 Å and recently as good as 3.8-4.5 Å. These density maps provide important insights into the biological functioning of molecular machineries such as viruses or the ribosome, in particular if atomic-resolution crystal structures or models of individual components of the assembly can be placed into the density map. The present work introduces a novel algorithm termed BCL::EM-Fit that accurately fits atomic-detail structural models into medium resolution density maps. In an initial step, a "geometric hashing" algorithm provides a short list of likely placements. In a follow-up Monte Carlo/Metropolis refinement step, the initial placements are optimized by their cross correlation coefficient. The resolution of density maps for a reliable fit was determined to be 10 Å or better using tests with simulated density maps. The algorithm was applied to fitting of capsid proteins into an experimental cryoEM density map of human adenovirus at resolutions of 6.8 and 9.0 Å, and fitting of the GroEL protein at 5.4 Å. In the process, the handedness of the cryoEM density map was unambiguously identified. The BCL::EM-Fit algorithm offers an alternative to the established Fourier/Real space fitting programs. BCL::EM-Fit is free for academic use and available from a web server or as downloadable binary file at http://www.meilerlab.org.
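
The refinement stage described above, Metropolis optimization of a rigid placement against the cross correlation coefficient, can be sketched in one dimension. This is a toy stand-in, not the BCL::EM-Fit implementation, which refines full rigid-body poses in 3-D maps after geometric hashing:

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.arange(100.0)

def blob(center):
    # Density of a rigid "component" placed at a trial position (1-D toy)
    return np.exp(-0.5 * ((grid - center) / 3.0) ** 2)

em_map = blob(60.0) + 0.05 * rng.normal(size=grid.size)   # noisy target map

def cc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Metropolis refinement of an initial placement from a coarse search
pos = 50.0
score = cc(blob(pos), em_map)
best_pos, best_score = pos, score
for _ in range(2000):
    trial = pos + rng.normal(0.0, 1.0)
    s = cc(blob(trial), em_map)
    if s > score or rng.random() < np.exp((s - score) / 0.05):
        pos, score = trial, s                 # accept the move
        if s > best_score:
            best_pos, best_score = trial, s   # remember the best placement seen
print(round(best_pos, 1))
```

Occasionally accepting worse placements (the second branch) lets the search escape local optima in the correlation landscape before settling near the true position.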

  11. Measured, modeled, and causal conceptions of fitness

    PubMed Central

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  12. Total force fitness: the military family fitness model.

    PubMed

    Bowles, Stephen V; Pollock, Liz Davenport; Moore, Monique; Wadsworth, Shelley MacDermid; Cato, Colanda; Dekle, Judith Ward; Meyer, Sonia Wei; Shriver, Amber; Mueller, Bill; Stephens, Mark; Seidler, Dustin A; Sheldon, Joseph; Picano, James; Finch, Wanda; Morales, Ricardo; Blochberger, Sean; Kleiman, Matthew E; Thompson, Daniel; Bates, Mark J

    2015-03-01

    The military lifestyle can create formidable challenges for military families. This article describes the Military Family Fitness Model (MFFM), a comprehensive model aimed at enhancing family fitness and resilience across the life span. This model is intended for use by Service members, their families, leaders, and health care providers but also has broader applications for all families. The MFFM has three core components: (1) family demands, (2) resources (including individual resources, family resources, and external resources), and (3) family outcomes (including related metrics). The MFFM proposes that resources from the individual, family, and external areas promote fitness, bolster resilience, and foster well-being for the family. The MFFM highlights each resource level for the purpose of improving family fitness and resilience over time. The MFFM both builds on existing family strengths and encourages the development of new family strengths through resource-acquiring behaviors. The purpose of this article is to (1) expand the military's Total Force Fitness (TFF) intent as it relates to families and (2) offer a family fitness model. This article will summarize relevant evidence, provide supportive theory, describe the model, and proffer metrics that support the dimensions of this model.

  14. Scaled models, scaled frequencies, and model fitting

    NASA Astrophysics Data System (ADS)

    Roxburgh, Ian W.

    2015-12-01

    I show that given a model star of mass M, radius R, and density profile ρ(x) [x = r/R], there exists a two-parameter family of models with masses M_k, radii R_k, density profiles ρ_k(x) = λρ(x), and frequencies ν_k,nℓ = λ^(1/2) ν_nℓ, where λ and R_k/R are scaling factors. These models have different internal structures, but all have the same value of separation ratios calculated at given radial orders n, and all exactly satisfy a frequency matching algorithm with an offset function determined as part of the fitting procedure. But they do not satisfy ratio matching at given frequencies nor phase shift matching. This illustrates that erroneous results may be obtained when model fitting with ratios at given n values or frequency matching. I give examples from scaled models and from non-scaled evolutionary models.
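
The invariance driving this result is easy to verify numerically: multiplying every frequency by the same factor λ^(1/2) leaves the separation ratios at given radial order unchanged. A sketch with toy frequencies (the asymptotic form and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = np.arange(10, 20)
# Toy frequencies nu[l] for degrees l = 0, 1, 2 (asymptotic spacing + small noise)
nu = {l: 100.0 * (n + l / 2.0 + 1.45) + rng.normal(0.0, 0.3, n.size)
      for l in (0, 1, 2)}

def ratios(freqs):
    # Separation ratios r02(n) = (nu_{n,0} - nu_{n-1,2}) / (nu_{n,1} - nu_{n-1,1})
    return (freqs[0][1:] - freqs[2][:-1]) / (freqs[1][1:] - freqs[1][:-1])

lam = 1.7                                        # density scaling factor lambda
scaled = {l: lam**0.5 * v for l, v in nu.items()}
print(bool(np.allclose(ratios(nu), ratios(scaled))))
```

Because the common factor cancels in every numerator and denominator, ratio agreement at given n cannot distinguish members of the scaled family, which is the abstract's warning about such fitting procedures.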

  15. Computer Modeling Of Atomization

    NASA Technical Reports Server (NTRS)

    Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.

    1994-01-01

    Improved mathematical models based on fundamental principles of conservation of mass, energy, and momentum developed for use in computer simulation of atomization of jets of liquid fuel in rocket engines. Models also used to study atomization in terrestrial applications; prove especially useful in designing improved industrial sprays - humidifier water sprays, chemical process sprays, and sprays of molten metal. Because present improved mathematical models based on first principles, they are minimally dependent on empirical correlations and better able to represent hot-flow conditions that prevail in rocket engines and are too severe to be accessible for detailed experimentation.

  16. Coaches as Fitness Role Models

    ERIC Educational Resources Information Center

    Nichols, Randall; Zillifro, Traci D.; Nichols, Ronald; Hull, Ethan E.

    2012-01-01

    The lack of physical activity, low fitness levels, and elevated obesity rates as high as 32% of today's youth are well documented. Many strategies and grants have been developed at the national, regional, and local levels to help counteract these current trends. Strategies have been developed and implemented for schools, households (parents), and…

  17. Semiclassical model for atoms

    PubMed Central

    Pearson, Ralph G.

    1981-01-01

    The energies of several two- and three-electron atoms, in both ground states and excited states, are calculated by a very simple semiclassical model. The only change from Bohr's original method is to replace definite orbits by probability distribution functions based on classical dynamics. The energies are better than Hartree-Fock values. There is still a need for an exchange-energy correction. Images PMID:16593047

  18. Sensitivity of Fit Indices to Model Misspecification and Model Types

    ERIC Educational Resources Information Center

    Fan, Xitao; Sivo, Stephen A.

    2007-01-01

    The search for cut-off criteria of fit indices for model fit evaluation (e.g., Hu & Bentler, 1999) assumes that these fit indices are sensitive to model misspecification, but not to different types of models. If fit indices were sensitive to different types of models that are misspecified to the same degree, it would be very difficult to establish…

  19. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  20. Are Physical Education Majors Models for Fitness?

    ERIC Educational Resources Information Center

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  1. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
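
The fitting problem described above, choosing model parameters so that a spiking model reproduces precisely timed spikes, can be sketched with a leaky integrate-and-fire neuron and a spike-coincidence score. This is a toy stand-in for the Brian-based toolbox; the "recorded" train is itself simulated:

```python
import numpy as np

def lif_spikes(tau, current, dt=0.1, v_th=1.0):
    # Euler-integrated leaky integrate-and-fire: tau dV/dt = -V + I(t)
    v, spikes = 0.0, []
    for i, inp in enumerate(current):
        v += dt / tau * (-v + inp)
        if v >= v_th:
            spikes.append(i * dt)   # spike time in ms
            v = 0.0                 # reset after the spike
    return np.array(spikes)

rng = np.random.default_rng(4)
current = 1.2 + 0.3 * rng.normal(size=5000)   # 500 ms of fluctuating input
target = lif_spikes(20.0, current)            # stand-in "recorded" train

def match_score(model, ref, window=2.0):
    # Precision x recall of spike-time matches within +/- window ms
    if model.size == 0:
        return 0.0
    hits = sum(bool(np.any(np.abs(ref - t) <= window)) for t in model)
    return (hits / model.size) * (hits / ref.size)

# Grid-search fit of the membrane time constant against the recorded train
taus = np.arange(5.0, 41.0, 5.0)
best_tau = max(taus, key=lambda tau: match_score(lif_spikes(tau, current), target))
print(best_tau)
```

Real fitting replaces the grid search with vectorized, parallel evaluation of many candidate parameter sets against the electrophysiological recordings.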

  2. Contrast Gain Control Model Fits Masking Data

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
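
The divisive gain-control structure described above (an excitatory power function divided by a pooled inhibitory power function) can be sketched as follows. The exponents 2.4 and 2 come from the abstract; the semisaturation constant and threshold criterion are invented for illustration:

```python
import numpy as np

def response(target, mask, p=2.4, q=2.0, sigma=1.0):
    # Divisive contrast gain control: excitation^p / (sigma^q + pooled inhibition^q)
    excitation = target
    inhibition = target + mask      # broad inhibitory pool includes the mask
    return excitation**p / (sigma**q + inhibition**q)

def threshold(mask, criterion=0.01):
    # Smallest target contrast whose response change exceeds a fixed criterion
    for c in np.linspace(1e-4, 1.0, 10000):
        if response(c, mask) - response(0.0, mask) >= criterion:
            return float(c)
    return float("nan")

print(threshold(0.0) < threshold(0.5))   # a high-contrast mask elevates threshold
```

Masking falls out of the division: the mask contributes to the inhibitory pool without exciting the target-tuned filter, so more target contrast is needed to reach the criterion response change.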

  4. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  5. Semi-exact concentric atomic density fitting: Reduced cost and increased accuracy compared to standard density fitting

    SciTech Connect

    Hollman, David S.; Schaefer, Henry F.; Valeev, Edward F.

    2014-02-14

    A local density fitting scheme is considered in which atomic orbital (AO) products are approximated using only auxiliary AOs located on one of the nuclei in that product. The possibility of variational collapse to an unphysical “attractive electron” state that can affect such density fitting [P. Merlot, T. Kjærgaard, T. Helgaker, R. Lindh, F. Aquilante, S. Reine, and T. B. Pedersen, J. Comput. Chem. 34, 1486 (2013)] is alleviated by including atom-wise semidiagonal integrals exactly. Our approach leads to a significant decrease in the computational cost of density fitting for Hartree–Fock theory while still producing results with errors 2–5 times smaller than standard, nonlocal density fitting. Our method allows for large Hartree–Fock and density functional theory computations with exact exchange to be carried out efficiently on large molecules, which we demonstrate by benchmarking our method on 200 of the most widely used prescription drug molecules. Our new fitting scheme leads to smooth and artifact-free potential energy surfaces and the possibility of relatively simple analytic gradients.
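
The least-squares machinery underlying density fitting, expanding an orbital product in an auxiliary basis kept on a single center, can be sketched on a 1-D grid. This is a toy model: real implementations solve the fit with analytic integrals in a Coulomb metric, not grid sums, and the centers, exponents, and basis below are invented:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
g = lambda c, a: np.exp(-a * (x - c) ** 2)

# An "orbital product" of Gaussians on two nuclei (centers 0.0 and 0.5)
product = g(0.0, 1.0) * g(0.5, 1.2)

# Auxiliary functions kept on a single nucleus: s-, p-, d-like Gaussians at 0.0
aux = np.stack([poly * g(0.0, a)
                for a in (0.7, 1.5, 3.0)
                for poly in (np.ones_like(x), x, x**2)], axis=1)

# Least-squares density fitting: expansion coefficients from the grid fit
coef, *_ = np.linalg.lstsq(aux, product, rcond=None)
err = np.max(np.abs(aux @ coef - product))
print(round(err, 3))   # worst-case pointwise fitting error
```

Restricting the auxiliary functions to one center is what makes the scheme local and cheap; the paper's semidiagonal exact integrals guard against the variational collapse that such one-center fits can otherwise suffer.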

  7. Fitness

    MedlinePlus

    Want to look and feel your best? ... What is physical fitness? Physical fitness means you can do everyday ... (Source: http://www.girlshealth.gov/)

  8. A predictive fitness model for influenza

    NASA Astrophysics Data System (ADS)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
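
The prediction step described above can be sketched directly: each strain's frequency is propagated by its inferred fitness and renormalized. The numbers below are invented for illustration:

```python
import numpy as np

# Illustrative strain frequencies this season and inferred fitness values
# (adaptive epitope gain minus deleterious load); all numbers are made up
freq = np.array([0.50, 0.30, 0.20])
fitness = np.array([0.2, 0.8, -0.5])   # strain index 1 carries adaptive changes

# Predicted next-season frequencies: x_i' = x_i exp(f_i) / sum_j x_j exp(f_j)
pred = freq * np.exp(fitness)
pred /= pred.sum()
print(int(pred.argmax()))   # the fittest lineage is predicted to expand
```

In the paper, the fitness components are inferred from the population-genetic record of past strains; the propagation rule itself is this simple exponential growth model.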

  9. Modeling and Fitting Exoplanet Transit Light Curves

    NASA Astrophysics Data System (ADS)

    Millholland, Sarah; Ruch, G. T.

    2013-01-01

    We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model is unique in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the
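For intuition about what any such model must compute, a generic uniform-disk transit (no limb darkening, circular orbit) follows from the overlap area of two circles. This is a textbook sketch, not the authors' model:

```python
import numpy as np

# Toy uniform-disk transit light curve (no limb darkening, circular
# orbit); a generic sketch, not the authors' model. Units: stellar
# radius = 1; p = planet/star radius ratio; d = projected
# center-to-center separation.

def overlap_fraction(d, p):
    """Fraction of the stellar disk blocked by an opaque disk of radius p."""
    if d >= 1.0 + p:          # no overlap
        return 0.0
    if d <= 1.0 - p:          # planet entirely in front of the star
        return p ** 2
    # partial overlap: standard two-circle lens-area formula
    k1 = np.arccos((d ** 2 + p ** 2 - 1.0) / (2.0 * d * p))
    k0 = np.arccos((d ** 2 + 1.0 - p ** 2) / (2.0 * d))
    area = (p ** 2 * k1 + k0
            - 0.5 * np.sqrt(4.0 * d ** 2 - (1.0 + d ** 2 - p ** 2) ** 2))
    return area / np.pi

def flux(d, p):
    return 1.0 - overlap_fraction(d, p)

p = 0.1                       # Jupiter-like radius ratio
for d in np.linspace(1.5, 0.0, 7):
    print(f"d={d:.2f}  flux={flux(d, p):.5f}")
```

A full model like the one described above replaces the fixed separations `d` with projected separations computed from the Keplerian orbit, and weights the blocked area by a limb-darkening profile.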

  10. Degeneracy and discreteness in cosmological model fitting

    NASA Astrophysics Data System (ADS)

    Teng, Huan-Yu; Huang, Yuan; Zhang, Tong-Jie

    2016-03-01

We explore the problems of degeneracy and discreteness in the standard cosmological model (ΛCDM), using the Observational Hubble Data (OHD) and type Ia supernovae (SNe Ia) data. To describe discreteness in the fitting of data, we define a factor G that quantifies the influence of each single data point, and we assess its effectiveness. Our results indicate that a larger absolute value of G corresponds to a better ability to distinguish models: the parameters are restricted to smaller confidence intervals with a larger figure-of-merit evaluation. We therefore argue that the factor G is an effective tool for model differentiation when fitting different models to the observational data.
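The abstract does not give the exact definition of G, but a generic way to "test the influence from each single data point" is a leave-one-out refit, sketched here for a weighted linear fit with a planted outlier (all data and the influence measure are illustrative, not the paper's):

```python
import numpy as np

# Generic leave-one-out influence diagnostic for a weighted least-squares
# fit. The paper's factor G is defined differently; this only illustrates
# measuring the influence of each single data point on a fit.

rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 20)
sigma = 0.2 * np.ones_like(x)
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)
y[5] += 2.0                               # plant one outlier

def fit_line(x, y, sigma):
    """Weighted least-squares slope and intercept."""
    w = np.sqrt(1.0 / sigma ** 2)
    A = np.vstack([x * w, w]).T
    coef, *_ = np.linalg.lstsq(A, y * w, rcond=None)
    return coef

full = fit_line(x, y, sigma)
influence = []
for i in range(len(x)):
    keep = np.arange(len(x)) != i         # drop point i and refit
    loo = fit_line(x[keep], y[keep], sigma[keep])
    influence.append(np.linalg.norm(loo - full))
influence = np.asarray(influence)

print("most influential point:", int(influence.argmax()))
```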

  11. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  12. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  14. Seeing Perfectly Fitting Factor Models That Are Causally Misspecified: Understanding That Close-Fitting Models Can Be Worse

    ERIC Educational Resources Information Center

    Hayduk, Leslie

    2014-01-01

    Researchers using factor analysis tend to dismiss the significant ill fit of factor models by presuming that if their factor model is close-to-fitting, it is probably close to being properly causally specified. Close fit may indeed result from a model being close to properly causally specified, but close-fitting factor models can also be seriously…

  15. Stochastic models for atomic clocks

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.

    1983-01-01

    For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements contribute no significant improvement to the model even at the expense of the added model complexity.
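The three-component model can be simulated directly: fractional frequency is the sum of white FM, random-walk FM, and a linear drift, and integrating it gives the accumulated time error. The noise magnitudes below are illustrative, not the NBS values:

```python
import numpy as np

# Sketch of the three-component clock model: fractional frequency
# y(t) = white FM + random-walk FM + linear drift, integrated to an
# accumulated time (phase) error. Noise levels are illustrative only.

rng = np.random.default_rng(1)
n, tau0 = 10_000, 1.0                    # samples, sampling interval (s)
sigma_wfm, sigma_rwfm, drift = 1e-12, 1e-15, 1e-16  # per-step magnitudes

white_fm = sigma_wfm * rng.standard_normal(n)
random_walk_fm = np.cumsum(sigma_rwfm * rng.standard_normal(n))
linear_drift = drift * np.arange(n) * tau0

y = white_fm + random_walk_fm + linear_drift   # fractional frequency
x = np.cumsum(y) * tau0                        # accumulated time error (s)

print(f"final time error: {x[-1]:.3e} s")
```

Parameter estimation for such a model (as in the paper) would then maximize the likelihood of the observed increments under this superposition.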

  16. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  17. Two algorithms for fitting constrained marginal models

    PubMed Central

    Evans, R.J.; Forcina, A.

    2013-01-01

    The two main algorithms that have been considered for fitting constrained marginal models to discrete data, one based on Lagrange multipliers and the other on a regression model, are studied in detail. It is shown that the updates produced by the two methods are identical, but that the Lagrangian method is more efficient in the case of identically distributed observations. A generalization is given of the regression algorithm for modelling the effect of exogenous individual-level covariates, a context in which the use of the Lagrangian algorithm would be infeasible for even moderate sample sizes. An extension of the method to likelihood-based estimation under L1-penalties is also considered. PMID:23794772

  18. Fitting and Modeling of AXAF Data with the ASC Fitting Application

    NASA Astrophysics Data System (ADS)

    Doe, S.; Ljungberg, M.; Siemiginowska, A.; Joye, W.

The AXAF mission will provide X-ray data with unprecedented spatial and spectral resolution. Because of the high quality of these data, the AXAF Science Center will provide a new data analysis system--including a new fitting application. Our intent is to enable users to do fitting that is too awkward with, or beyond, the scope of existing astronomical fitting software. Our main goals are: 1) to take advantage of the full capabilities of the AXAF, we intend to provide a more sophisticated modeling capability (i.e., models that are $f(x,y,E,t)$, models to simulate the response of AXAF instruments, and models that enable ``joint-mode'' fitting, i.e., combined spatial-spectral or spectral-temporal fitting); and 2) to provide users with a wide variety of models, optimization methods, and fit statistics. In this paper, we discuss the use of an object-oriented approach in our implementation, the current features of the fitting application, and the features scheduled to be added in the coming year of development. Current features include: an interactive, command-line interface; a modeling language, which allows users to build models from arithmetic combinations of base functions; a suite of optimization methods and fit statistics; the ability to perform fits to multiple data sets simultaneously; and an interface with SM and SAOtng to plot or image data, models, and/or residuals from a fit. We currently provide a modeling capability in one or two dimensions, and have recently made an effort to perform spectral fitting in a manner similar to XSPEC. We also allow users to dynamically link the fitting application to their own algorithms. Our goals for the coming year include incorporating the XSPEC model library as a subset of models available in the application, enabling ``joint-mode'' analysis and adding support for new algorithms.

  19. A Quantum Model of Atoms (the Energy Levels of Atoms).

    ERIC Educational Resources Information Center

    Rafie, Francois

    2001-01-01

    Discusses the model for all atoms which was developed on the same basis as Bohr's model for the hydrogen atom. Calculates the radii and the energies of the orbits. Demonstrates how the model obeys the de Broglie's hypothesis that the moving electron exhibits both wave and particle properties. (Author/ASK)

  20. The best-fit universe. [cosmological models

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1991-01-01

Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and most well motivated model, the current data point to a best-fit model with the following parameters: Omega(sub B) approximately equal to 0.03, Omega(sub CDM) approximately equal to 0.17, Omega(sub Lambda) approximately equal to 0.8, and H(sub 0) approximately equal to 70 km/(sec x Mpc), which improves significantly the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule out such a value.

  1. "Electronium": A Quantum Atomic Teaching Model.

    ERIC Educational Resources Information Center

    Budde, Marion; Niedderer, Hans; Scott, Philip; Leach, John

    2002-01-01

    Outlines an alternative atomic model to the probability model, the descriptive quantum atomic model Electronium. Discusses the way in which it is intended to support students in learning quantum-mechanical concepts. (Author/MM)

  2. Subshell fitting of relativistic atomic core electron densities for use in QTAIM analyses of ECP-based wave functions.

    PubMed

    Keith, Todd A; Frisch, Michael J

    2011-11-17

Scalar-relativistic, all-electron density functional theory (DFT) calculations were done for free, neutral atoms of all elements of the periodic table using the universal Gaussian basis set. Each core, closed-subshell contribution to a total atomic electron density distribution was separately fitted to a spherical electron density function: a linear combination of s-type Gaussian functions. The resulting core subshell electron densities are useful for systematically and compactly approximating total core electron densities of atoms in molecules, for any atomic core defined in terms of closed subshells. When used to augment the electron density from a wave function based on a calculation using effective core potentials (ECPs) in the Hamiltonian, the atomic core electron densities are sufficient to restore the otherwise-absent electron density maxima at the nuclear positions and eliminate spurious critical points in the neighborhood of the atom, thus enabling quantum theory of atoms in molecules (QTAIM) analyses to be done in the neighborhoods of atoms for which ECPs were used. Comparison of results from QTAIM analyses with all-electron, relativistic and nonrelativistic molecular wave functions validates the use of the atomic core electron densities for augmenting electron densities from ECP-based wave functions. For an atom in a molecule for which a small-core or medium-core ECP is used, simply representing the core using a simplistic, tightly localized electron density function is actually sufficient to obtain a correct electron density topology and perform QTAIM analyses to obtain at least semiquantitatively meaningful results, but this is often not true when a large-core ECP is used.
Comparison of QTAIM results from augmenting ECP-based molecular wave functions with the realistic atomic core electron densities presented here versus augmenting with the limiting case of tight core densities may be useful for diagnosing the reliability of large-core ECP models in

  3. Goodness-of-Fit Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  4. Modeling of atom-diatom scattering. Technical report

    SciTech Connect

    Sindoni, J.M.

    1992-05-30

This report describes work on modeling atom-diatom scattering processes using the Impulse Approach (IA). Results of the model, obtained with a computer code, have proven to be in remarkable agreement with laboratory measurements for several atom-diatom scattering systems. Two scattering systems in particular that were successfully modeled and compared to measurements were Ar-KBr and Ar-CsF. The IA model provided an explanation for the rapid deactivation evident in the Ar-KBr system. Experimental results in the Ar-CsF experiment that could not be explained by conventional models were also successfully modeled using the IA. Results fit the experimental observations.

  5. A liquid drop model for embedded atom method cluster energies

    NASA Technical Reports Server (NTRS)

    Finley, C. W.; Abel, P. B.; Ferrante, J.

    1996-01-01

    Minimum energy configurations for homonuclear clusters containing from two to twenty-two atoms of six metals, Ag, Au, Cu, Ni, Pd, and Pt have been calculated using the Embedded Atom Method (EAM). The average energy per atom as a function of cluster size has been fit to a liquid drop model, giving estimates of the surface and curvature energies. The liquid drop model gives a good representation of the relationship between average energy and cluster size. As a test the resulting surface energies are compared to EAM surface energy calculations for various low-index crystal faces with reasonable agreement.
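A liquid-drop fit of this kind is a linear least-squares problem: the average energy per atom is expanded as E(n) = E_bulk + a_s n^(-1/3) + a_c n^(-2/3), where the second and third terms carry the surface and curvature energies. A sketch with synthetic data standing in for the EAM cluster energies (coefficients below are made up):

```python
import numpy as np

# Liquid-drop fit sketch: average energy per atom vs. cluster size n,
# E(n) = E_bulk + a_s * n**(-1/3) + a_c * n**(-2/3).
# Synthetic data stands in for EAM cluster energies; values are made up.

n = np.arange(2, 23).astype(float)
E_bulk, a_s, a_c = -2.9, 1.9, 0.4           # illustrative values (eV)
rng = np.random.default_rng(2)
E = (E_bulk + a_s * n ** (-1 / 3) + a_c * n ** (-2 / 3)
     + 0.01 * rng.standard_normal(n.size))

# Linear least squares in the basis {1, n^(-1/3), n^(-2/3)}
A = np.vstack([np.ones_like(n), n ** (-1 / 3), n ** (-2 / 3)]).T
coef, *_ = np.linalg.lstsq(A, E, rcond=None)
print("fitted (E_bulk, a_s, a_c):", np.round(coef, 3))
```

Note that the two size-dependent basis functions are fairly collinear over small clusters, so the surface and curvature coefficients are recovered less precisely than the fit residual suggests.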

  6. Effectiveness of the Sport Education Fitness Model on Fitness Levels, Knowledge, and Physical Activity

    ERIC Educational Resources Information Center

    Pritchard, Tony; Hansen, Andrew; Scarboro, Shot; Melnic, Irina

    2015-01-01

    The purpose of this study was to investigate changes in fitness levels, content knowledge, physical activity levels, and participants' perceptions following the implementation of the sport education fitness model (SEFM) at a high school. Thirty-two high school students participated in 20 lessons using the SEFM. Aerobic capacity, muscular…

  7. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher's Geometric Model?

    PubMed

    Blanquart, François; Bataillon, Thomas

    2016-06-01

The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher's model was able to explain several statistical properties of the landscapes, including the mean and SD of selection and epistasis coefficients, it was often unable to explain the full structure of fitness landscapes.
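The core of rejection-style Approximate Bayesian Computation can be sketched in a few lines. This is a generic toy (estimating one parameter from a summary statistic), far simpler than the paper's pipeline:

```python
import numpy as np

# Generic rejection-ABC sketch (the paper builds a much richer ABC
# framework around phenotypic fitness models; this only illustrates the
# principle: keep parameter draws whose simulated data resemble the
# observations).

rng = np.random.default_rng(5)
n_obs = 50
observed = rng.normal(2.0, 1.0, n_obs)       # "data" with true mean 2.0
s_obs = observed.mean()                      # summary statistic

n_draws, eps = 100_000, 0.05
theta = rng.uniform(-5.0, 5.0, n_draws)      # draws from a flat prior
# simulate the same summary statistic for each draw
s_sim = rng.normal(theta, 1.0 / np.sqrt(n_obs))
accepted = theta[np.abs(s_sim - s_obs) < eps]

print(f"accepted {accepted.size} draws; posterior mean ~ {accepted.mean():.2f}")
```

The accepted draws approximate the posterior; in the paper the "summary statistics" are properties of the empirical landscape and the simulator is the fitness model.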

  8. Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Obreschkow, D.

    2015-09-01

    Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (http://github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (http://hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper, we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases, the hyper-fit solutions are in good agreement with published values, but uncover more information regarding the fitted model.
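For a 2D line with covariant Gaussian errors and intrinsic scatter, the likelihood reduces to an "effective variance" per point. Below is a sketch under the vertical-offset convention (the paper derives the general D-dimensional case, and the hyper-fit package uses proper optimizers rather than the crude grid search shown; all data are synthetic):

```python
import numpy as np

# Sketch of a heteroscedastic, covariant-error likelihood for the line
# y = m*x + b with intrinsic scatter s (vertical-offset convention).
# Each point i has its own 2x2 covariance matrix C_i.

def neg_log_like(params, x, y, covs):
    m, b, log_s = params
    s2 = np.exp(2.0 * log_s)
    # effective variance of the vertical residual for slope m
    var = covs[:, 1, 1] + m ** 2 * covs[:, 0, 0] - 2 * m * covs[:, 0, 1] + s2
    resid = y - (m * x + b)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# toy data: true line y = 2x + 1 with intrinsic scatter 0.3
rng = np.random.default_rng(3)
n = 200
x_true = rng.uniform(0.0, 5.0, n)
y_true = 2.0 * x_true + 1.0 + rng.normal(0.0, 0.3, n)
covs = np.tile(np.array([[0.05, 0.01], [0.01, 0.05]]), (n, 1, 1))
err = rng.multivariate_normal([0.0, 0.0], covs[0], n)
x_obs, y_obs = x_true + err[:, 0], y_true + err[:, 1]

# crude grid search over the slope (offset and scatter held fixed for brevity)
slopes = np.linspace(1.0, 3.0, 201)
nll = [neg_log_like((m, 1.0, np.log(0.3)), x_obs, y_obs, covs) for m in slopes]
best = slopes[int(np.argmin(nll))]
print(f"best-fit slope on the grid: {best:.2f}")
```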

  9. Can atom-surface potential measurements test atomic structure models?

    PubMed

    Lonij, Vincent P A; Klauss, Catherine E; Holmgren, William F; Cronin, Alexander D

    2011-06-30

    van der Waals (vdW) atom-surface potentials can be excellent benchmarks for atomic structure calculations. This is especially true if measurements are made with two different types of atoms interacting with the same surface sample. Here we show theoretically how ratios of vdW potential strengths (e.g., C₃(K)/C₃(Na)) depend sensitively on the properties of each atom, yet these ratios are relatively insensitive to properties of the surface. We discuss how C₃ ratios depend on atomic core electrons by using a two-oscillator model to represent the contribution from atomic valence electrons and core electrons separately. We explain why certain pairs of atoms are preferable to study for future experimental tests of atomic structure calculations. A well chosen pair of atoms (e.g., K and Na) will have a C₃ ratio that is insensitive to the permittivity of the surface, whereas a poorly chosen pair (e.g., K and He) will have a ratio of C₃ values that depends more strongly on the permittivity of the surface.

  10. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems.

    PubMed

    Li, Jun; Jiang, Bin; Guo, Hua

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.
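The symmetrization idea can be sketched for a triatomic A2B system (the paper treats four-atom systems; a triatomic keeps the example short). Pair distances are mapped to Morse-like variables and combined into polynomials that are invariant under exchange of the two identical atoms, so the network input cannot distinguish permuted geometries:

```python
import numpy as np

# Sketch of permutation-invariant polynomial (PIP) inputs for an A2B
# molecule (atoms 0 and 1 identical). Pair distances are mapped to Morse
# variables y = exp(-r/lam) and symmetrized so that exchanging the two
# identical atoms leaves the input vector unchanged. The range parameter
# LAM and the geometry are made up for illustration.

LAM = 1.0

def pip_inputs(coords):
    r01 = np.linalg.norm(coords[0] - coords[1])   # A-A distance
    r02 = np.linalg.norm(coords[0] - coords[2])   # A-B distances
    r12 = np.linalg.norm(coords[1] - coords[2])
    y01, y02, y12 = np.exp(-r01 / LAM), np.exp(-r02 / LAM), np.exp(-r12 / LAM)
    # low-order invariants under swapping the two A atoms:
    return np.array([y01, y02 + y12, y02 * y12])

geom = np.array([[0.0, 0.0, 0.0],    # A
                 [1.1, 0.0, 0.0],    # A
                 [0.5, 0.9, 0.0]])   # B

swapped = geom[[1, 0, 2]]            # exchange the identical atoms
print(pip_inputs(geom))
print(np.allclose(pip_inputs(geom), pip_inputs(swapped)))
```

A neural network fed these invariants automatically produces a permutationally invariant potential energy surface.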

  12. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  13. Goodness of Model-Data Fit and Invariant Measurement

    ERIC Educational Resources Information Center

    Engelhard, George, Jr.; Perkins, Aminah

    2013-01-01

In this commentary, Engelhard and Perkins remark that Maydeu-Olivares has presented a framework for evaluating the goodness of model-data fit for item response theory (IRT) models and correctly points out that overall goodness-of-fit evaluations of IRT models and data are not generally explored within most applications in educational and…

  14. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  15. ATOMIC AND MOLECULAR PHYSICS: Four-parameter analytical local model potential for atoms

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Sun, Jiu-Xun; Tian, Rong-Gang; Yang, Wei

    2009-10-01

Analytical local model potential for modeling the interaction in an atom reduces the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr; the values of the four parameters are shell-independent and obtained by fitting the results of the Xα method. At the same time, the energy eigenvalues, the radial wave functions and the total energies of electrons are obtained by solving the radial Schrödinger equation with the new form of potential function by Numerov's numerical method. The results show that our new form of potential function is suitable for high, medium and low Z atoms. A comparison among the new potential function and other analytical potential functions shows the greater flexibility and greater accuracy of the present new potential function.
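Numerov's method propagates u'' = f(r)u on a uniform grid with fourth-order global accuracy, and bound-state energies follow from shooting. A sketch that recovers the hydrogen 1s energy (the Coulomb potential is a stand-in: any analytical local model potential can be substituted for V):

```python
import numpy as np

# Numerov shooting for the radial Schroedinger equation in atomic units:
# u'' = f(r) u with f(r) = l(l+1)/r**2 + 2*(V(r) - E).

def numerov_outward(f, u0, u1, h):
    """Propagate u with the Numerov recurrence (O(h^4) global error)."""
    u = np.empty_like(f)
    u[0], u[1] = u0, u1
    w = 1.0 - h * h * f / 12.0
    for i in range(1, len(f) - 1):
        u[i + 1] = ((12.0 - 10.0 * w[i]) * u[i] - w[i - 1] * u[i - 1]) / w[i + 1]
    return u

l = 0
r = np.linspace(1e-6, 30.0, 6000)
h = r[1] - r[0]
V = -1.0 / r                                 # hydrogen: V(r) = -1/r

def tail(E):
    """u at the outer boundary; it vanishes at a bound-state energy."""
    f = l * (l + 1) / r ** 2 + 2.0 * (V - E)
    u = numerov_outward(f, 0.0, 1e-6, h)
    return u[-1] / np.abs(u).max()

lo, hi = -0.6, -0.4                          # bracket of the 1s energy
t_lo = tail(lo)
for _ in range(40):                          # bisect on the sign of the tail
    mid = 0.5 * (lo + hi)
    t_mid = tail(mid)
    if t_lo * t_mid <= 0.0:
        hi = mid
    else:
        lo, t_lo = mid, t_mid

E_1s = 0.5 * (lo + hi)
print(f"1s energy: {E_1s:.6f} Hartree (exact: -0.5)")
```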

  16. HDFITS: Porting the FITS data model to HDF5

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases) and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.

  17. Consequences of Fitting Nonidentified Latent Class Models

    ERIC Educational Resources Information Center

    Abar, Beau; Loken, Eric

    2012-01-01

    Latent class models are becoming more popular in behavioral research. When models with a large number of latent classes relative to the number of manifest indicators are estimated, researchers must consider the possibility that the model is not identified. It is not enough to determine that the model has positive degrees of freedom. A well-known…

  18. Fitting Value-Added Models in R

    ERIC Educational Resources Information Center

    Doran, Harold C.; Lockwood, J. R.

    2006-01-01

    Value-added models of student achievement have received widespread attention in light of the current test-based accountability movement. These models use longitudinal growth modeling techniques to identify effective schools or teachers based upon the results of changes in student achievement test scores. Given their increasing popularity, this…

  19. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

This research examines the utility of the S-X² statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  20. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
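The single-plus-pairwise regression can be sketched on a small synthetic landscape (not the RNA landscape of the study): fit main effects of single mutations, then add pairwise epistatic interaction terms, and compare how much of the fitness variance each model explains.

```python
import numpy as np
from itertools import product, combinations

# Sketch of regressing a fitness landscape on single and pairwise
# mutation effects. The toy landscape below is synthetic, with made-up
# additive effects, pairwise epistasis, and measurement noise.

L = 5
rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, L)                    # additive effects
B = np.triu(rng.normal(0.0, 0.5, (L, L)), 1)   # pairwise epistasis

genotypes = np.array(list(product([0, 1], repeat=L)), dtype=float)
fitness = (genotypes @ a
           + np.einsum('ni,ij,nj->n', genotypes, B, genotypes)
           + 0.05 * rng.standard_normal(len(genotypes)))

def design(g, pairwise):
    """Regression design matrix: intercept, main effects, optional pairs."""
    cols = [np.ones(len(g)), *g.T]
    if pairwise:
        cols += [g[:, i] * g[:, j] for i, j in combinations(range(L), 2)]
    return np.column_stack(cols)

r2s = {}
for pairwise in (False, True):
    X = design(genotypes, pairwise)
    coef, *_ = np.linalg.lstsq(X, fitness, rcond=None)
    resid = fitness - X @ coef
    r2s[pairwise] = 1.0 - resid.var() / fitness.var()
    print(f"pairwise={pairwise}:  R^2 = {r2s[pairwise]:.3f}")
```

On this toy landscape the pairwise model captures nearly all the variance; on real landscapes, as the abstract notes, how well such regressions work depends strongly on how the genotypes were sampled.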

  2. Relative and Absolute Fit Evaluation in Cognitive Diagnosis Modeling

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy; Zhang, Zao

    2013-01-01

    As with any psychometric model, the validity of inferences from cognitive diagnosis models (CDMs) determines the extent to which these models can be useful. For inferences from CDMs to be valid, it is crucial that the fit of the model to the data is ascertained. Based on a simulation study, this study investigated the sensitivity of various fit…

  3. Fitting ARMA Time Series by Structural Equation Models.

    ERIC Educational Resources Information Center

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
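For readers who want the estimation side without SEM software, maximum-likelihood ARMA fitting can be sketched directly. The toy below recovers ARMA(1,1) parameters from simulated data by minimizing a concentrated conditional Gaussian likelihood; the series length, true parameters, and optimizer are illustrative, and this is not the paper's structural-equation formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
phi_true, theta_true, n = 0.6, 0.3, 4000
eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + eps[t] + theta_true * eps[t - 1]

def neg_loglik(params):
    """Concentrated conditional Gaussian log-likelihood for ARMA(1,1)."""
    phi, theta = params
    e = np.zeros_like(y)
    for t in range(1, len(y)):
        e[t] = y[t] - phi * y[t - 1] - theta * e[t - 1]
    sigma2 = np.mean(e ** 2)
    return 0.5 * len(y) * np.log(sigma2)   # constants dropped

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
phi_hat, theta_hat = res.x
```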

  4. A New Tradition To Fit the Model.

    ERIC Educational Resources Information Center

    Darnell, D. Roe; Rosenthal, Donna McCrohan

    2001-01-01

    Discusses Cerro Coso Community College in Ridgecrest (California), where 80-85% of all local jobs are with one employer, the China Lake Naval Air Weapons Station (NAWS). States that massive layoffs at NAWS inspired creative ways of rethinking the community college model at Cerro Coso, such as creating the nation's first computer graphics imagery…

  5. Nagaoka's atomic model and hyperfine interactions.

    PubMed

    Inamura, Takashi T

    2016-01-01

    The prevailing view of Nagaoka's "Saturnian" atom is so misleading that today many people have an erroneous picture of Nagaoka's vision. They believe it to be a system involving a 'giant core' with electrons circulating just outside. Actually, though, in view of the Coulomb potential related to the atomic nucleus, Nagaoka's model is exactly the same as Rutherford's. This is true of the Bohr atom, too. To give proper credit, Nagaoka should be remembered together with Rutherford and Bohr in the history of the atomic model. It is also pointed out that Nagaoka was a pioneer in using hyperfine interactions to study nuclear structure.

  6. The Hydrogen Atom: The Rutherford Model

    NASA Astrophysics Data System (ADS)

    Tilton, Homer Benjamin

    1996-06-01

    Early this century Ernest Rutherford established the nuclear model of the hydrogen atom, presently taught as representing the best visual model after modification by Niels Bohr and Arnold Sommerfeld. It replaced the so-called "plum pudding" model of J. J. Thomson which held sway previously. While the Rutherford model represented a large step forward in our understanding of the hydrogen atom, questions remained, and still do.

  7. Transit Model Fitting in the Kepler Science Operations Center Pipeline

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, C. J.; Jenkins, J. M.; Quintana, E. V.; Rowe, J. F.; Seader, S. E.; Tenenbaum, P.; Twicken, J. D.

    2012-05-01

    We describe the algorithm and performance of the transit model fitting of the Kepler Science Operations Center (SOC) Pipeline. Light curves of long cadence targets are subjected to the Transiting Planet Search (TPS) component of the Kepler SOC Pipeline. Those targets for which a Threshold Crossing Event (TCE) is generated in the transit search are subsequently processed in the Data Validation (DV) component. The light curves may span one or more Kepler observing quarters, and data may not be available for any given target in all quarters. Transit model parameters are fitted in DV to transit-like signatures in the light curves of target stars with TCEs. The fitted parameters are used to generate a predicted light curve based on the transit model. The residual flux time series of the target star, with the predicted light curve removed, is fed back to TPS to search for additional TCEs. The iterative process of transit model fitting and transiting planet search continues until no TCE is generated from the residual flux time series or a planet candidate limit is reached. The transit model includes five parameters to be fitted: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. The initial values of the fit parameters are determined from the TCE values provided by TPS. A limb darkening model is included in the transit model to generate the predicted light curve. The transit model fitting results are used in the diagnostic tests in DV, such as the centroid motion test, eclipsing binary discrimination tests, etc., which helps to validate planet candidates and identify false positive detections. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
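A toy version of the fitting step can be written in a few lines. The sketch below fits only a depth and a duration-like width for a smooth, flat-bottomed dip, with epoch and period held fixed; the actual DV fitter uses a full limb-darkened geometric transit model with the five parameters listed above, so everything here is a stand-in.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
t = np.linspace(0.0, 30.0, 3000)         # time (days)
period, epoch = 10.0, 5.0                # taken as known (e.g. from TPS) here

def transit_flux(t, depth, width):
    """Smooth flat-bottomed dip centered on each transit."""
    phase = (t - epoch + 0.5 * period) % period - 0.5 * period
    return 1.0 - depth * np.exp(-(phase / width) ** 4)

y = transit_flux(t, 0.01, 0.15) + rng.normal(0.0, 0.001, t.size)

fit = least_squares(lambda p: transit_flux(t, *p) - y, x0=[0.005, 0.1])
depth_hat, width_hat = fit.x
```

Subtracting `transit_flux(t, depth_hat, width_hat)` from the light curve mirrors the residual-feedback loop described in the abstract.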

  8. Automatic fitting of spiking neuron models to electrophysiological recordings.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Platkiewicz, Jonathan; Brette, Romain

    2010-01-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
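The idea of fitting by simulation can be illustrated with a toy leaky integrate-and-fire model whose parameters are recovered from a target spike train by grid search. The parameter grid, noise current, and the crude missed/spurious-spike criterion below are invented stand-ins for the library's GPU-parallel optimization of a spike-coincidence measure.

```python
import numpy as np

dt, T = 1e-4, 1.0                        # 0.1 ms step, 1 s of data
rng = np.random.default_rng(3)
I = 1.5e-9 * (1 + 0.3 * rng.standard_normal(int(T / dt)))   # input current (A)

def lif_spikes(R, tau, v_th=-55e-3, v_reset=-65e-3, v_rest=-65e-3):
    """Simulate a leaky integrate-and-fire neuron; return spike times (s)."""
    v, spikes = v_rest, []
    for i, Ii in enumerate(I):
        v += dt / tau * (v_rest - v + R * Ii)
        if v >= v_th:
            v = v_reset
            spikes.append(i * dt)
    return np.asarray(spikes)

target = lif_spikes(R=2e7, tau=20e-3)    # the "recorded" spike train

def mismatch(target, model, window=1e-3):
    """Missed target spikes plus spurious model spikes (crude criterion)."""
    if len(model) == 0:
        return 2.0
    miss = np.mean([np.min(np.abs(model - s)) > window for s in target])
    extra = np.mean([np.min(np.abs(target - s)) > window for s in model])
    return miss + extra

grid = [(R, tau) for R in (1e7, 2e7, 3e7) for tau in (10e-3, 20e-3, 40e-3)]
best = min(grid, key=lambda p: mismatch(target, lif_spikes(*p)))
```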

  9. Towards solution and refinement of organic crystal structures by fitting to the atomic pair distribution function.

    PubMed

    Prill, Dragica; Juhás, Pavol; Billinge, Simon J L; Schmidt, Martin U

    2016-01-01

    A method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data. PMID:26697868
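The flavor of the `real-space' search can be conveyed with a deliberately stripped-down sketch: score trial orientations of a rigid molecule against a "measured" interatomic-distance histogram. A real PDF fit works with a properly weighted G(r) from powder data; the molecule, cell, and scoring below are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
mol = rng.normal(size=(12, 3))           # rigid "molecule" (arbitrary units)

def rotz(a):
    """Rotation matrix about z by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pair_hist(points, bins=np.linspace(0.0, 8.0, 81)):
    """Crude stand-in for a PDF: histogram of all interatomic distances."""
    h, _ = np.histogram(pdist(points), bins=bins, density=True)
    return h

# "Measured" pattern: two molecules related by a 40-degree rotation + shift
true_angle = 40.0 * np.pi / 180.0
cell = np.vstack([mol, mol @ rotz(true_angle).T + [3.0, 0.0, 0.0]])
measured = pair_hist(cell)

def score(angle):
    """Squared mismatch between trial and measured distance histograms."""
    trial = np.vstack([mol, mol @ rotz(angle).T + [3.0, 0.0, 0.0]])
    return np.sum((pair_hist(trial) - measured) ** 2)

angles = np.linspace(0.0, np.pi, 181)    # 1-degree grid over orientation
best_angle = angles[np.argmin([score(a) for a in angles])]
```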

  13. Goodness of Fit Criteria in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.

    Several goodness of fit (GOF) criteria have been developed to assist the researcher in interpreting structural equation models. However, the determination of GOF for structural equation models is not as straightforward as that for other statistical approaches in multivariate procedures. The four GOF criteria used across the commonly used…

  14. Twitter classification model: the ABC of two million fitness tweets.

    PubMed

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

    The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment. PMID:24073182

  15. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    PubMed

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å.
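The core idea, an atomic density obtained by Fourier transforming a form factor attenuated by a resolution-dependent envelope, can be sketched in one dimension: widening the envelope term broadens the real-space peak. The Gaussian "form factor", the B-like envelope, and the grid below are purely illustrative and are not RSRef's actual parameterization.

```python
import numpy as np

n, dx = 512, 0.1                          # grid points, Angstrom spacing
s = np.fft.fftfreq(n, d=dx)               # spatial frequency (1/Angstrom)

def density(B):
    """Inverse FT of a Gaussian 'form factor' times a B-like envelope."""
    f = np.exp(-0.25 * s ** 2) * np.exp(-B * s ** 2 / 4.0)
    return np.real(np.fft.ifft(f))        # peak at index 0 (fft ordering)

def width(rho):
    """Second-moment width of the peak, in the same fft ordering."""
    x = np.fft.fftfreq(n, d=1.0 / (n * dx))   # coordinates matching rho
    return np.sqrt(np.sum(rho * x ** 2) / np.sum(rho))

w_sharp = width(density(B=2.0))
w_broad = width(density(B=20.0))          # larger B: lower effective resolution
```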

  16. A Model-Fitting Approach to Characterizing Polymer Decomposition Kinetics

    SciTech Connect

    Burnham, A K; Weese, R K

    2004-07-20

    The use of isoconversional, sometimes called model-free, kinetic analysis methods has recently gained favor in the thermal analysis community. Although these methods are very useful and instructive, the conclusion that model fitting is a poor approach is largely due to improper use of the model-fitting approach, such as fitting each heating rate separately. The current paper shows the ability of model fitting to correlate reaction data over very wide time-temperature regimes, including simultaneous fitting of isothermal and constant-heating-rate data. A fit of recently published data on cellulose pyrolysis by Capart et al. (TCA, 2004), using a combination of an autocatalytic primary reaction and an nth-order char pyrolysis reaction, is given as one example. Fits for the thermal decomposition of Estane, Viton-A, and Kel-F over very wide ranges of heating rates are also presented. The Kel-F required two parallel reactions--one describing a small, early decomposition process, and a second autocatalytic reaction describing the bulk of pyrolysis. Viton-A and Estane also required two parallel reactions for primary pyrolysis, with the first Viton-A reaction also being a minor, early process. In addition, the yield of residue from these two polymers depends on the heating rate. This is an example of a competitive reaction between volatilization and char formation, which violates the basic tenet of the isoconversional approach and is an example of why it has limitations. Although more complicated models have been used in the literature for this type of process, we described our data well with a simple addition to the standard model in which the char yield is a function of the logarithm of the heating rate.
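As a minimal illustration of the model-fitting approach (much simpler than the multi-reaction schemes above), the sketch below fits a single nth-order isothermal decomposition model to synthetic conversion data; the rate constant, reaction order, and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def alpha_nth(t, k, n):
    """Closed-form conversion for d(alpha)/dt = k (1 - alpha)^n, n > 1."""
    return 1.0 - (1.0 + (n - 1.0) * k * t) ** (-1.0 / (n - 1.0))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 100.0, 200)         # isothermal run, time in minutes
y = alpha_nth(t, k=0.05, n=1.8) + rng.normal(0.0, 0.005, t.size)

# Bounded nonlinear least squares keeps n > 1 so the closed form is valid
(k_hat, n_hat), _ = curve_fit(alpha_nth, t, y, p0=[0.02, 1.5],
                              bounds=([1e-4, 1.05], [1.0, 4.0]))
```

In the paper's setting the same model would be fitted simultaneously across heating rates with an Arrhenius k(T), which this sketch omits.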

  17. Time-domain fitting of battery electrochemical impedance models

    NASA Astrophysics Data System (ADS)

    Alavi, S. M. M.; Birkl, C. R.; Howey, D. A.

    2015-08-01

    Electrochemical impedance spectroscopy (EIS) is an effective technique for diagnosing the behaviour of electrochemical devices such as batteries and fuel cells, usually by fitting data to an equivalent circuit model (ECM). The common approach in the laboratory is to measure the impedance spectrum of a cell in the frequency domain using a single sine sweep signal, then fit the ECM parameters in the frequency domain. This paper focuses instead on estimation of the ECM parameters directly from time-domain data. This may be advantageous for parameter estimation in practical applications such as automotive systems including battery-powered vehicles, where the data may be heavily corrupted by noise. The proposed methodology is based on the simplified refined instrumental variable for continuous-time fractional systems method ('srivcf'), provided by the Crone toolbox [1,2], combined with gradient-based optimisation to estimate the order of the fractional term in the ECM. The approach was tested first on synthetic data and then on real data measured from a 26650 lithium-ion iron phosphate cell with low-cost equipment. The resulting Nyquist plots from the time-domain fitted models match the impedance spectrum closely (much more accurately than when a Randles model is assumed), and the fitted parameters match those determined separately with a laboratory potentiostat using frequency-domain fitting to within 13%.
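A stripped-down version of time-domain ECM fitting, without the fractional-order element the paper estimates, is easy to sketch: fit a series resistance R0 and one parallel R1-C1 branch to a constant-current step response. All component values and the noise level below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

I_step = 2.0                             # constant discharge current (A)

def v_response(t, r0, r1, tau, ocv=3.6):
    """Terminal voltage for R0 + (R1 || C1) under a current step at t = 0."""
    return ocv - I_step * r0 - I_step * r1 * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(6)
t = np.linspace(0.0, 60.0, 600)          # seconds
y = v_response(t, r0=0.05, r1=0.03, tau=10.0) + rng.normal(0.0, 1e-3, t.size)

# With len(p0) == 3, curve_fit leaves ocv at its default value
(r0_hat, r1_hat, tau_hat), _ = curve_fit(v_response, t, y,
                                         p0=[0.01, 0.01, 5.0])
```

The instantaneous drop identifies R0, the asymptote identifies R0 + R1, and the transient identifies tau = R1*C1, which is why this simple case is well posed.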

  18. Learning local objective functions for robust face model fitting.

    PubMed

    Wimmer, Matthias; Stulp, Freek; Pietzsch, Sylvia; Radig, Bernd

    2008-08-01

    Model-based techniques have proven to be successful in interpreting the large amount of information contained in images. Associated fitting algorithms search for the global optimum of an objective function, which should correspond to the best model fit in a given image. Although fitting algorithms have been the subject of intensive research and evaluation, the objective function is usually designed ad hoc, based on implicit and domain-dependent knowledge. In this article, we address the root of the problem by learning more robust objective functions. First, we formulate a set of desirable properties for objective functions and give a concrete example function that has these properties. Then, we propose a novel approach that learns an objective function from training data generated by manual image annotations and this ideal objective function. In this approach, critical decisions such as feature selection are automated, and the remaining manual steps hardly require domain-dependent knowledge. Furthermore, an extensive empirical evaluation demonstrates that the obtained objective functions yield more robustness. Learned objective functions enable fitting algorithms to determine the best model fit more accurately than with designed objective functions. PMID:18566491

  19. Modeling Atom Probe Tomography: A review.

    PubMed

    Vurpillot, F; Oberdorfer, C

    2015-12-01

    Improving both the precision and the accuracy of Atom Probe Tomography reconstruction requires a correct understanding of the imaging process. To this end, numerical modeling approaches have been developed over the past 15 years. The ingredients of these modeling tools are related to the basic physics of the field evaporation mechanism. The interplay between the nature and structure of the analyzed sample and the artefacts of the reconstructed image has pushed these models to become gradually more sophisticated. This paper reviews the evolution of the modeling approach in Atom Probe Tomography and presents some potential future directions to further improve the method.

  20. On the accuracy and fitting of transversely isotropic material models.

    PubMed

    Feng, Yuan; Okamoto, Ruth J; Genin, Guy M; Bayly, Philip V

    2016-08-01

    Fiber reinforced structures are central to the form and function of biological tissues. Hyperelastic, transversely isotropic material models are used widely in the modeling and simulation of such tissues. Many of the most widely used models involve strain energy functions that include one or both pseudo-invariants (I4 or I5) to incorporate energy stored in the fibers. In a previous study we showed that both of these invariants must be included in the strain energy function if the material model is to reduce correctly to the well-known framework of transversely isotropic linear elasticity in the limit of small deformations. Even with such a model, fitting of parameters is a challenge. Here, by evaluating the relative roles of I4 and I5 in the responses to simple loadings, we identify loading scenarios in which previous models accounting for only one of these invariants can be expected to provide accurate estimation of material response, and identify mechanical tests that have special utility for fitting of transversely isotropic constitutive models. Results provide guidance for fitting of transversely isotropic constitutive models and for interpretation of the predictions of these models.
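The role of the two pseudo-invariants is easy to see in a concrete deformation. For simple shear with fibers lying along the shear direction, I4 is blind to the shear while I5 is not, which is one reason both are needed in general:

```python
import numpy as np

a0 = np.array([1.0, 0.0, 0.0])           # reference fiber direction
gamma = 0.2                               # simple-shear amount
F = np.eye(3)
F[0, 1] = gamma                           # deformation gradient

C = F.T @ F                               # right Cauchy-Green tensor
I4 = a0 @ C @ a0                          # squared fiber stretch
I5 = a0 @ (C @ C) @ a0                    # also carries fiber-shear content

# For this F: C = [[1, g, 0], [g, 1 + g^2, 0], [0, 0, 1]], so
# I4 = 1          (fibers along e1 are unstretched by this shear)
# I5 = 1 + g^2    (I5 distinguishes the sheared state; I4 alone cannot)
```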

  1. Multidimensional Rasch Model Information-Based Fit Index Accuracy

    ERIC Educational Resources Information Center

    Harrell-Williams, Leigh M.; Wolfe, Edward W.

    2013-01-01

    Most research on confirmatory factor analysis using information-based fit indices (Akaike information criterion [AIC], Bayesian information criterion [BIC], bias-corrected AIC [AICc], and consistent AIC [CAIC]) has used a structural equation modeling framework. Minimal research has been done concerning application of these indices to item response…

  2. Fuzzy Partition Models for Fitting a Set of Partitions.

    ERIC Educational Resources Information Center

    Gordon, A. D.; Vichi, M.

    2001-01-01

    Describes methods for fitting a fuzzy consensus partition to a set of partitions of the same set of objects. Describes and illustrates three models defining median partitions and compares these methods to an alternative approach to obtaining a consensus fuzzy partition. Discusses interesting differences in the results. (SLD)

  3. The Gold Medal Fitness Program: A Model for Teacher Change

    ERIC Educational Resources Information Center

    Wright, Jan; Konza, Deslea; Hearne, Doug; Okely, Tony

    2008-01-01

    Background: Following the 2000 Sydney Olympics, the NSW Premier, Mr Bob Carr, launched a school-based initiative in NSW government primary schools called the "Gold Medal Fitness Program" to encourage children to be fitter and more active. The Program was introduced into schools through a model of professional development, "Quality Teaching and…

  4. Statistical assessment of model fit for synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.; O'Sullivan, Joseph A.

    2001-08-01

    Parametric approaches to problems of inference from observed data often rely on assumed probabilistic models for the data which may be based on knowledge of the physics of the data acquisition. Given a rich enough collection of sample data, the validity of those assumed models can be assessed in a statistical hypothesis testing framework using any of a number of goodness-of-fit tests developed over the last hundred years for this purpose. Such assessments can be used both to compare alternate models for observed data and to help determine the conditions under which a given model breaks down. We apply three such methods, the χ² test of Karl Pearson, Kolmogorov's goodness-of-fit test, and the D'Agostino-Pearson test for normality, to quantify how well the data fit various models for synthetic aperture radar (SAR) images. The results of these tests are used to compare a conditionally Gaussian model for complex-valued SAR pixel values, a conditionally log-normal model for SAR pixel magnitudes, and a conditionally normal model for SAR pixel quarter-power values. Sample data for these tests are drawn from the publicly released MSTAR dataset.
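All three cited tests are available in scipy.stats and can be applied as below. The "sample" here is a deterministic stand-in built from normal quantiles rather than SAR data, so each test should accept the hypothesized normal model.

```python
import numpy as np
from scipy import stats

# Deterministic stand-in sample: normal quantiles at midpoint probabilities
n = 2000
x = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)

# Kolmogorov-Smirnov test against the hypothesized N(0, 1) model
ks_stat, ks_p = stats.kstest(x, "norm")

# D'Agostino-Pearson omnibus normality test (skewness + kurtosis)
da_stat, da_p = stats.normaltest(x)

# Pearson chi-square test on binned counts vs expected normal counts;
# expected is scaled so the two totals match exactly, as chisquare requires
edges = np.linspace(-3.0, 3.0, 13)
observed, _ = np.histogram(x, bins=edges)
cdf = stats.norm.cdf(edges)
expected = observed.sum() * np.diff(cdf) / (cdf[-1] - cdf[0])
chi2_stat, chi2_p = stats.chisquare(observed, expected)
```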

  5. Raindrop size distribution: Fitting performance of common theoretical models

    NASA Astrophysics Data System (ADS)

    Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.

    2016-10-01

    Modelling raindrop size distribution (DSD) is a fundamental issue to connect remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask indeed are how often and how well such models fit empirical data, given that the advances in both data availability and technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the comparison of statistical fitting of three common DSD models against 2D-Video Distrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. By the maximum likelihood method, we fit models based on lognormal, gamma and Weibull distributions to more than 42,000 1-minute drop-by-drop data records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. In order to check the adequacy between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov test. Then, we apply a specific model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking for each minute the statistical models that pass the KS test, it can be argued that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, seem to be adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that frequency distributions of empirical DSDs could be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
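The per-minute workflow, a maximum-likelihood fit followed by a Kolmogorov-Smirnov check, can be sketched with scipy on synthetic drop diameters. The gamma parameters and sample size are invented, and note that reusing fitted parameters in the KS test makes the test only approximate (its nominal critical values assume fixed parameters).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
d = stats.gamma.rvs(a=3.0, scale=0.5, size=1000, random_state=rng)  # mm

# Maximum-likelihood fit of the gamma model (location pinned at zero)
a_hat, loc_hat, scale_hat = stats.gamma.fit(d, floc=0)

# Kolmogorov-Smirnov adequacy check against the fitted distribution
ks_stat, ks_p = stats.kstest(d, "gamma", args=(a_hat, loc_hat, scale_hat))
```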

  6. Derivation of Distributed Models of Atomic Polarizability for Molecular Simulations.

    PubMed

    Soteras, Ignacio; Curutchet, Carles; Bidon-Chanal, Axel; Dehez, François; Ángyán, János G; Orozco, Modesto; Chipot, Christophe; Luque, F Javier

    2007-11-01

    The main thrust of this investigation is the development of models of distributed atomic polarizabilities for the treatment of induction effects in molecular mechanics simulations. The models are obtained within the framework of the induced dipole theory by fitting the induction energies computed via a fast but accurate MP2/Sadlej-adjusted perturbational approach in a grid of points surrounding the molecule. Particular care is paid to the examination of the atomic quantities obtained from models of implicitly and explicitly interacting polarizabilities. The appropriateness and accuracy of the distributed models are assessed by comparing the molecular polarizabilities recovered from the models with those obtained experimentally and from MP2/Sadlej calculations. The behavior of the models is further explored by computing the polarization energy for aromatic compounds in the context of cation-π interactions and for selected neutral compounds in a TIP3P aqueous environment. The present results suggest that the computational strategy described here constitutes a very effective tool for the development of distributed models of atomic polarizabilities and can be used in the generation of new polarizable force fields.
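The fitting idea reduces to linear least squares once the induction energy is written as a sum over isotropic atomic polarizabilities. The sketch below recovers synthetic polarizabilities from "reference" induction energies generated by a unit probe charge on a grid of points; the geometry, values, and near-point cutoff are all invented, and real reference energies would come from the perturbational QM calculation, not from the model itself.

```python
import numpy as np

rng = np.random.default_rng(10)
atoms = rng.normal(size=(5, 3))          # atomic positions (a.u.)
alpha_true = np.abs(rng.normal(1.0, 0.2, size=5))   # "true" polarizabilities

grid = rng.normal(scale=4.0, size=(400, 3))         # probe-charge positions

# Squared field magnitude at each atom from a unit probe charge at each point
diff = atoms[None, :, :] - grid[:, None, :]          # shape (grid, atom, 3)
r2 = np.sum(diff ** 2, axis=-1)
keep = np.sqrt(r2.min(axis=1)) > 2.0                 # drop probes too close
E2 = 1.0 / r2[keep] ** 2                             # |r / r^3|^2 = 1 / r^4

# Induction energy is linear in the polarizabilities: U = -1/2 sum_i a_i |E_i|^2
A = -0.5 * E2
U_ref = A @ alpha_true                   # synthetic "reference" energies

alpha_fit, *_ = np.linalg.lstsq(A, U_ref, rcond=None)
```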

  7. Broadband distortion modeling in Lyman-α forest BAO fitting

    NASA Astrophysics Data System (ADS)

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; Arinyo-i-Prats, Andreu; Busca, Nicolás G.; Miralda-Escudé, Jordi; Slosar, Anže; Font-Ribera, Andreu; Margala, Daniel; Schneider, Donald P.; Vazquez, Jose A.

    2015-11-01

    In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z = 2.3 are β_F = 1.39 (+0.11/-0.10 at 1σ, +0.24/-0.19 at 2σ, +0.38/-0.28 at 3σ) and b_F(1+β_F) = -0.374 (+0.007/-0.007 at 1σ, +0.013/-0.014 at 2σ, +0.020/-0.022 at 3σ). Our fitting software and the input files needed to reproduce our main results are publicly available.
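The shape of such a multiplicative k-space correction can be illustrated numerically: power is suppressed on scales larger than a typical forest length. The functional form, forest length, and input spectrum below are invented for illustration and are not the paper's fitted distortion model.

```python
import numpy as np

k = np.logspace(-3, 0, 200)              # line-of-sight wavenumber (h/Mpc)
P = k ** -1.5                            # stand-in input power spectrum

L_forest = 300.0                         # illustrative forest length (Mpc/h)
D = (k * L_forest / (1.0 + k * L_forest)) ** 2   # suppresses k << 1/L_forest
P_distorted = D * P                      # multiplicative broadband distortion
```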

  8. Students' Mental Models of Atomic Spectra

    ERIC Educational Resources Information Center

    Körhasan, Nilüfer Didis; Wang, Lu

    2016-01-01

    Mental modeling, which is a theory about knowledge organization, has been recently studied by science educators to examine students' understanding of scientific concepts. This qualitative study investigates undergraduate students' mental models of atomic spectra. Nine second-year physics students, who have already taken the basic chemistry and…

  9. Assessing the fit of site-occupancy models

    USGS Publications Warehouse

    MacKenzie, D.I.; Bailey, L.L.

    2004-01-01

    Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method in which a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, and hence estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers toward making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
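
    The Pearson-plus-parametric-bootstrap recipe is general. The sketch below applies it to a simple Poisson count model rather than the authors' occupancy model; the data, binning, and statistic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def chi2_stat(data, lam, k_max=8):
    """Pearson chi-square for a Poisson fit, binning counts 0..k_max-1
    plus one lumped tail bin for counts >= k_max."""
    counts = np.bincount(np.minimum(data, k_max), minlength=k_max + 1)
    probs = stats.poisson.pmf(np.arange(k_max), lam)
    probs = np.append(probs, 1.0 - probs.sum())          # tail bin probability
    expected = len(data) * probs
    return np.sum((counts - expected) ** 2 / expected)

# Pretend these are observed detection counts; fit the null (Poisson) model
data = rng.poisson(3.0, size=200)
lam_hat = data.mean()
t_obs = chi2_stat(data, lam_hat)

# Parametric bootstrap: simulate from the fitted model, refit, recompute
t_boot = np.array([
    chi2_stat(sim, sim.mean())
    for sim in (rng.poisson(lam_hat, size=len(data)) for _ in range(500))
])
p_value = np.mean(t_boot >= t_obs)       # is the observed statistic unusually large?
c_hat = t_obs / t_boot.mean()            # overdispersion factor, as in the abstract
print(p_value, c_hat)
```

A small p-value signals lack of fit; the ratio of the observed statistic to the bootstrap mean gives the overdispersion factor used to adjust model selection and inflate standard errors.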

  10. Power spectrum analysis with least-squares fitting: amplitude bias and its elimination, with application to optical tweezers and atomic force microscope cantilevers.

    PubMed

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2010-07-01

    Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
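
    The two endpoints of the quoted bias, −2/n and +1/n, can be reproduced numerically for a flat (white-noise) spectrum, where the fitted "model" is just a constant amplitude; this toy replaces the Lorentzian of a trapped bead but shows the same weighting effect. Minimizing Σ(y−A)²/y² (data-weighted) gives A = Σy⁻¹/Σy⁻², while minimizing Σ(y−A)²/A² (model-weighted) gives A = Σy²/Σy.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10          # number of power spectra averaged
N = 100_000     # number of frequency bins
A_true = 1.0    # flat true spectrum, so the "fit" is a single constant

# Periodogram values of white noise are exponential; averaging n spectra
# gives Gamma(n, A/n)-distributed points with mean equal to the amplitude.
y = rng.gamma(shape=n, scale=A_true / n, size=N)

# Weights from the data (w = 1/y^2): minimiser biased low by ~ 2/n
A_data_weighted = np.sum(1 / y) / np.sum(1 / y**2)

# Weights from the fitted model (w = 1/A^2): minimiser biased high by ~ 1/n
A_model_weighted = np.sum(y**2) / np.sum(y)

print(A_data_weighted)    # ~ 1 - 2/n = 0.8
print(A_model_weighted)   # ~ 1 + 1/n = 1.1
```

Both biases are simple multiplicative factors on the amplitude, which is why, as the abstract notes, they are straightforward to remove after the fit.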

  11. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    NASA Astrophysics Data System (ADS)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → q χ̃₂⁰ (→ l̃± l∓ q) → χ̃₁⁰ l⁺ l⁻ q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  12. Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit

    NASA Technical Reports Server (NTRS)

    Kopasakis, George (Inventor)

    2015-01-01

    An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. The fractional-order atmospheric turbulence may be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.

  13. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    ERIC Educational Resources Information Center

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  14. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher’s Geometric Model?

    PubMed Central

    Blanquart, François; Bataillon, Thomas

    2016-01-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher’s model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of the nine biological systems. More precisely, although Fisher’s model was able to explain several statistical properties of the landscapes—including the mean and SD of selection and epistasis coefficients—it was often unable to explain the full structure of fitness landscapes. PMID:27052568
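
    The core of Approximate Bayesian Computation is simple: draw parameters from a prior, simulate a landscape, and keep draws whose summary statistics fall close to the observed ones. The sketch below is a minimal rejection-ABC toy with a Fisher-like quadratic fitness function; the dimensions, prior, summaries, and tolerance are all illustrative assumptions, far simpler than the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_s(sigma, n_mut=50, n_dim=2):
    """Toy stand-in for Fisher's geometric model: fitness is -||z||^2 in an
    n_dim phenotype space, the wild type sits at distance 1 from the optimum,
    and mutations displace the phenotype by Gaussian steps of size sigma.
    Returns the selection coefficients of n_mut random mutations."""
    wild = np.zeros(n_dim)
    wild[0] = 1.0
    dz = rng.normal(0.0, sigma, size=(n_mut, n_dim))
    return np.sum(wild**2) - np.sum((wild + dz) ** 2, axis=1)

def summaries(s):
    return np.array([s.mean(), s.std()])

# "Observed" landscape generated with a known mutational effect size (0.3)
s_obs = summaries(simulate_s(0.3))

# Rejection ABC: draw sigma from the prior, simulate, keep draws whose
# summary statistics land within a tolerance of the observed summaries
accepted = []
for _ in range(20_000):
    sigma = rng.uniform(0.0, 1.0)          # flat prior on the effect size
    if np.linalg.norm(summaries(simulate_s(sigma)) - s_obs) < 0.1:
        accepted.append(sigma)
posterior = np.array(accepted)
print(len(posterior), posterior.mean())    # posterior mass concentrates near 0.3
```

The accepted draws approximate the posterior over the model parameter, and goodness of fit can then be judged by whether the observed summaries are typical of simulations from that posterior.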

  15. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  16. Auxiliary Basis Sets for Density Fitting in Explicitly Correlated Calculations: The Atoms H-Ar.

    PubMed

    Kritikou, Stella; Hill, J Grant

    2015-11-10

    Auxiliary basis sets specifically matched to the correlation consistent cc-pVnZ-F12 and cc-pCVnZ-F12 orbital basis sets for the elements H-Ar have been optimized at the density-fitted second-order Møller-Plesset perturbation theory level for use in explicitly correlated (F12) methods, which utilize density fitting for the evaluation of two-electron integrals. Calculations of the correlation energy for a test set of small to medium sized molecules indicate that the density fitting error when using these auxiliary sets is 2 to 3 orders of magnitude smaller than the F12 orbital basis set incompleteness error. The error introduced by the use of these fitting sets within the resolution-of-the-identity approximation of the many-electron integrals arising in F12 theory has also been assessed and is demonstrated to be negligible and well-controlled. General guidelines are proposed for the optimization of density fitting auxiliary basis sets for use with F12 methods for other elements.

  17. Equilibrium Distribution of Mutators in the Single Fitness Peak Model

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel; Deeds, Eric J.; Shakhnovich, Eugene I.

    2003-09-01

    This Letter develops an analytically tractable model for determining the equilibrium distribution of mismatch repair deficient strains in unicellular populations. The approach is based on the single fitness peak model, which has been used in Eigen’s quasispecies equations in order to understand various aspects of evolutionary dynamics. As with the quasispecies model, our model for mutator-nonmutator equilibrium undergoes a phase transition in the limit of infinite sequence length. This “repair catastrophe” occurs at a critical repair error probability of εr = Lvia/L, where Lvia denotes the length of the genome controlling viability, while L denotes the overall length of the genome. The repair catastrophe therefore occurs when the repair error probability exceeds the fraction of deleterious mutations. Our model also gives a quantitative estimate for the equilibrium fraction of mutators in Escherichia coli.

  18. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  19. Testing goodness of fit of parametric models for censored data.

    PubMed

    Nysen, Ruth; Aerts, Marc; Faes, Christel

    2012-09-20

    We propose and study a goodness-of-fit test for left-censored, right-censored, and interval-censored data assuming random censorship. Main motivation comes from dietary exposure assessment in chemical risk assessment, where the determination of an appropriate distribution for concentration data is of major importance. We base the new goodness-of-fit test procedure proposed in this paper on the order selection test. As part of the testing procedure, we extend the null model to a series of nested alternative models for censored data. Then, we use a modified AIC model selection to select the best model to describe the data. If a model with one or more extra parameters is selected, then we reject the null hypothesis. As an alternative to the use of the asymptotic null distribution of the test statistic, we define a bootstrap-based procedure. We illustrate the applicability of the test procedure on data of cadmium concentrations and on data from the Signal Tandmobiel study and demonstrate its performance characteristics through simulation studies. PMID:22714389

  20. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    PubMed

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. PMID:25680684

  2. A Comprehensive X-Ray Absorption Model for Atomic Oxygen

    NASA Technical Reports Server (NTRS)

    Gorczyca, T. W.; Bautista, M. A.; Hasoglu, M. F.; Garcia, J.; Gatuzz, E.; Kaastra, J. S.; Kallman, T. R.; Manson, S. T.; Mendoza, C.; Raassen, A. J. J.; de Vries, C. P.; Zatsarinny, O.

    2013-01-01

    An analytical formula is developed to accurately represent the photoabsorption cross section of atomic oxygen for all energies of interest in X-ray spectral modeling. In the vicinity of the K edge, a Rydberg series expression is used to fit R-matrix results, including important orbital relaxation effects, that accurately predict the absorption oscillator strengths below threshold and merge consistently and continuously to the above-threshold cross section. Further, minor adjustments are made to the threshold energies in order to reliably align the atomic Rydberg resonances after consideration of both experimental and observed line positions. At energies far below or above the K-edge region, the formulation is based on both outer- and inner-shell direct photoionization, including significant shake-up and shake-off processes that result in photoionization-excitation and double-photoionization contributions to the total cross section. The ultimate purpose for developing a definitive model for oxygen absorption is to resolve standing discrepancies between the astronomically observed and laboratory-measured line positions, and between the inferred atomic and molecular oxygen abundances in the interstellar medium from XSTAR and SPEX spectral models.

  3. Atomic Data Applications for Supernova Modeling

    NASA Astrophysics Data System (ADS)

    Fontes, Christopher J.

    2013-06-01

    The modeling of supernovae (SNe) incorporates a variety of disciplines, including hydrodynamics, radiation transport, nuclear physics and atomic physics. These efforts require numerical simulation of the final stages of a star's life, the supernova explosion phase, and the radiation that is subsequently emitted by the supernova remnant, which can occur over a time span of tens of thousands of years. While there are several different types of SNe, they all emit radiation in some form. The measurement and interpretation of these spectra provide important information about the structure of the exploding star and the supernova engine. In this talk, the role of atomic data is highlighted as it pertains to the modeling of supernova spectra. Recent applications [1,2] involve the Los Alamos OPLIB opacity database, which has been used to provide atomic opacities for modeling supernova plasmas under local thermodynamic equilibrium (LTE) conditions. Ongoing work includes the application of atomic data generated by the Los Alamos suite of atomic physics codes under more complicated, non-LTE conditions [3]. As a specific, recent example, a portion of the x-ray spectrum produced by Tycho's supernova remnant (SN 1572) will be discussed [4]. [1] C.L. Fryer et al, Astrophys. J. 707, 193 (2009). [2] C.L. Fryer et al, Astrophys. J. 725, 296 (2009). [3] C.J. Fontes et al, Conference Proceedings for ICPEAC XXVII, J. of Phys: Conf. Series 388, 012022 (2012). [4] K.A. Eriksen et al, Presentation at the 2012 AAS Meeting (Austin, TX). (This work was performed under the auspices of the U.S. Department of Energy by Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396.)

  4. Rapid world modeling: Fitting range data to geometric primitives

    SciTech Connect

    Feddema, J.; Little, C.

    1996-12-31

    For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is greatly reduced by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
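
    For the planar case, the least-squares fit has a closed form: the plane passes through the centroid of the points and its normal is the singular vector of the smallest singular value. The sketch below is a generic total-least-squares plane fit on synthetic range data, not the paper's implementation; the plane equation and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_plane(points):
    """Total least-squares plane fit: the plane passes through the centroid,
    with normal given by the right singular vector of the smallest
    singular value of the centered point cloud."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Synthetic range data: noisy samples of the plane z = 0.5x - 0.2y + 3
x, y = rng.uniform(-1, 1, (2, 500))
z = 0.5 * x - 0.2 * y + 3 + rng.normal(0, 0.01, 500)
pts = np.column_stack([x, y, z])

centroid, normal = fit_plane(pts)
normal = normal / normal[2]          # scale so the z component is 1
print(normal)                        # ~ [-0.5, 0.2, 1]
```

Note the data reduction the abstract describes: 500 range points collapse to four plane parameters (centroid offset plus normal), and quadric fits reduce the data similarly.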

  5. Partial Atomic Charges and Screened Charge Models of the Electrostatic Potential.

    PubMed

    Wang, Bo; Truhlar, Donald G

    2012-06-12

    We propose a new screened charge method for calculating partial atomic charges in molecules by electrostatic potential (ESP) fitting. The model, called full density screening (FDS), is used to approximate the screening effect of full charge densities of atoms in molecules. The results are compared to the conventional ESP fitting method based on point charges and to our previously proposed outer density screening (ODS) method, in which the parameters are reoptimized for the present purpose. In ODS, the charge density of an atom is represented by the sum of a point charge and a smeared negative charge distributed in a Slater-type orbital (STO). In FDS, the charge density of an atom is taken to be the sum of the charge density of the neutral atom and a partial atomic charge (of either sign) distributed in an STO. The ζ values of the STOs used in these two models are optimized in the present study to best reproduce the electrostatic potentials. The quality of the fit to the electrostatics is improved in the screened charge methods, especially for the regions that are within one van der Waals radius of the centers of atoms. It is also found that the charges derived by fitting electrostatic potentials with screened charges are less sensitive to the positions of the fitting points than are those derived with conventional electrostatic fitting. Moreover, we found that the electrostatic-potential-fitted (ESP) charges from the screened charge methods are similar to those from the point-charge method except for molecules containing the methyl group, where we have explored the use of restraints on nonpolar H atoms. We recommend the FDS model if the only goal is ESP fitting to obtain partial atomic charges or a fit to the ESP field. However, the ODS model is more accurate for electronic embedding in combined quantum mechanical and molecular mechanical (QM/MM) modeling and is more accurate than point-charge models for ESP fitting, and it is recommended for applications
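
    The conventional point-charge ESP fit that the screened-charge models are compared against is a constrained linear least-squares problem: minimize the squared error between model and reference potentials on grid points, subject to the charges summing to the molecular charge. The sketch below shows that baseline with a Lagrange-multiplier (KKT) solve; the geometry, charges, and grid are synthetic, and real ESP fitting adds Boltzmann-style point selection, restraints, and units.

```python
import numpy as np

rng = np.random.default_rng(5)

def esp_fit(atom_xyz, grid_xyz, v_grid, total_charge=0.0):
    """Fit atomic point charges to electrostatic potential values on a grid,
    constraining the charges to sum to the total molecular charge.
    Solves the normal equations augmented with one Lagrange multiplier."""
    # Design matrix: potential at grid point j from a unit charge on atom i
    d = np.linalg.norm(grid_xyz[:, None, :] - atom_xyz[None, :, :], axis=2)
    A = 1.0 / d
    n = atom_xyz.shape[0]
    # KKT system: [[A^T A, 1], [1^T, 0]] [q, lam] = [A^T V, Q]
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = A.T @ A
    kkt[:n, n] = 1.0
    kkt[n, :n] = 1.0
    rhs = np.append(A.T @ v_grid, total_charge)
    return np.linalg.solve(kkt, rhs)[:n]

# Synthetic check: a potential generated by known charges is fit exactly
atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [-0.4, 1.0, 0.0]])
q_true = np.array([-0.6, 0.35, 0.25])               # sums to zero
grid = rng.normal(0.0, 3.0, size=(300, 3))
grid = grid[np.min(np.linalg.norm(grid[:, None] - atoms[None], axis=2), axis=1) > 1.5]
v = (1.0 / np.linalg.norm(grid[:, None] - atoms[None], axis=2)) @ q_true

q_fit = esp_fit(atoms, grid, v, total_charge=0.0)
print(q_fit)                                        # ~ [-0.6, 0.35, 0.25]
```

The screened-charge models of the abstract change only the design matrix (replacing the bare 1/r column for each atom with the potential of a screened charge distribution); the constrained least-squares machinery is the same.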

  6. Effect of the Number of Variables on Measures of Fit in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Kenny, David A.; McCoach, D. Betsy

    2003-01-01

    Used three approaches to understand the effect of the number of variables in the model on model fit in structural equation modeling through computer simulation. Developed a simple formula for the theoretical value of the comparative fit index. (SLD)

  7. Atomic forces for geometry-dependent point multipole and gaussian multipole models.

    PubMed

    Elking, Dennis M; Perera, Lalith; Duke, Robert; Darden, Thomas; Pedersen, Lee G

    2010-11-30

    In standard treatments of atomic multipole models, interaction energies, total molecular forces, and total molecular torques are given for multipolar interactions between rigid molecules. However, if the molecules are assumed to be flexible, two additional multipolar atomic forces arise because of (1) the transfer of torque between neighboring atoms and (2) the dependence of multipole moment on internal geometry (bond lengths, bond angles, etc.) for geometry-dependent multipole models. In this study, atomic force expressions for geometry-dependent multipoles are presented for use in simulations of flexible molecules. The atomic forces are derived by first proposing a new general expression for the Wigner function derivatives ∂D^l_{m'm}/∂Ω. The force equations can be applied to electrostatic models based on atomic point multipoles or gaussian multipole charge density. Hydrogen-bonded dimers are used to test the intermolecular electrostatic energies and atomic forces calculated by geometry-dependent multipoles fit to the ab initio electrostatic potential. The electrostatic energies and forces are compared with their reference ab initio values. It is shown that both static and geometry-dependent multipole models are able to reproduce total molecular forces and torques with respect to ab initio, whereas geometry-dependent multipoles are needed to reproduce ab initio atomic forces. The expressions for atomic force can be used in simulations of flexible molecules with atomic multipoles. In addition, the results presented in this work should lead to further development of next generation force fields composed of geometry-dependent multipole models.

  8. A Commentary on the Relationship between Model Fit and Saturated Path Models in Structural Equation Modeling Applications

    ERIC Educational Resources Information Center

    Raykov, Tenko; Lee, Chun-Lung; Marcoulides, George A.; Chang, Chi

    2013-01-01

    The relationship between saturated path-analysis models and their fit to data is revisited. It is demonstrated that a saturated model need not fit perfectly or even well a given data set when fit to the raw data is examined, a criterion currently frequently overlooked by researchers utilizing path analysis modeling techniques. The potential of…

  9. Assessing Model Data Fit of Unidimensional Item Response Theory Models in Simulated Data

    ERIC Educational Resources Information Center

    Kose, Ibrahim Alper

    2014-01-01

    The purpose of this paper is to give an example of how to assess the model-data fit of unidimensional IRT models in simulated data. The present research also aims to explain the importance of fit and the consequences of misfit by using simulated data sets. Responses of 1000 examinees to a dichotomously scored 20-item test were simulated with 25…

  10. Atomic model of supersymmetric Hubbard operators

    NASA Astrophysics Data System (ADS)

    Hopkinson, J.; Coleman, P.

    2003-02-01

    We apply the recently proposed supersymmetric Hubbard operators [P. Coleman, C. Pépin, and J. Hopkinson, Phys. Rev. B 63, 140411(R) (2001)] to an atomic model. In the limiting case of free spins, we derive exact results for the entropy which are compared with a mean-field + Gaussian corrections description. We show how these results can be extended to the case of charge fluctuations and calculate exact results for the partition function, free energy, and heat capacity of an atomic model for some simple examples. Wave-functions of possible states are listed. We compare the accuracy of large N expansions of the susy spin operators [P. Coleman, C. Pépin, and A. M. Tsvelik, Phys. Rev. B 62, 3852 (2000); Nucl. Phys. B 586, 641 (2000)] with those obtained using “Schwinger bosons” and “Abrikosov pseudofermions.” For the atomic model, we compare results of slave boson, slave fermion, and susy Hubbard operator approximations in the physically interesting but uncontrolled limiting case of N→2. For a mixed representation of spins, we estimate the accuracy of large N expansions of the atomic model. In the single box limit, we find that the lowest-energy susy saddle point reduces to simply either slave bosons or slave fermions, while for higher boxes this is not the case. The highest energy saddle point solution has the interesting feature that it admits a small region of a mixed representation, which bears a superficial resemblance to that observed experimentally close to an antiferromagnetic quantum critical point.

  11. Atomic Data For Core And Edge Modeling

    SciTech Connect

    O'Mullane, M. G.; Foster, A. R.; Whiteford, A. D.; Summers, H. P.; Loch, S. D.; Lauro-Taroni, L.

    2009-09-10

    Future magnetic fusion energy devices, which will have both very high-Z (tungsten) and low-Z (beryllium) plasma-facing components, are setting the agenda for current atomic data needs. Data for the light species are in good shape, but the heavy species present some challenges. We outline an approach for systematic heavy-element data production for fusion applications, in addition to techniques for efficiently handling the large amount of data in modeling codes.

  12. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    ERIC Educational Resources Information Center

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  13. HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting

    NASA Astrophysics Data System (ADS)

    Zwart, Jonathan T. L.; Price, Daniel; Bernardi, Gianni

    2016-06-01

    HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
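
    The gaussian-signal-plus-polynomial-foreground parameterization reduces to a model function and a likelihood that the sampler evaluates at each live point. The sketch below is an illustrative stand-in, not HIBAYES's actual API: the log-frequency foreground basis, parameter ordering, and noise model are assumptions.

```python
import numpy as np

def model_spectrum(freq_mhz, amp, nu0, width, poly_coeffs):
    """Toy sky-averaged 21-cm model: a gaussian feature (the HI signal)
    plus a polynomial foreground in log-frequency (illustrative choice)."""
    signal = amp * np.exp(-0.5 * ((freq_mhz - nu0) / width) ** 2)
    x = np.log(freq_mhz / freq_mhz.mean())
    return signal + np.polyval(poly_coeffs, x)

def log_likelihood(params, freq_mhz, data, sigma):
    """Gaussian log-likelihood of the kind a nested sampler such as
    PyMultiNest would call for each proposed parameter vector."""
    amp, nu0, width, *coeffs = params
    resid = data - model_spectrum(freq_mhz, amp, nu0, width, coeffs)
    return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

# Quick check on noiseless synthetic data: the generating parameters
# yield a higher likelihood than a perturbed parameter vector
freq = np.linspace(40.0, 120.0, 200)
true = [-0.5, 78.0, 10.0, 2.0, 1000.0]          # amp (K), nu0, width, fg coeffs
data = model_spectrum(freq, -0.5, 78.0, 10.0, [2.0, 1000.0])
print(log_likelihood(true, freq, data, 0.02) >
      log_likelihood([-0.4, 75.0, 8.0, 2.0, 1000.0], freq, data, 0.02))  # True
```

Jointly sampling signal and foreground parameters, as the code does, is what lets the Bayesian evidence arbitrate between models with and without a 21-cm feature.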

  14. Empirical fitness models for hepatitis C virus immunogen design

    NASA Astrophysics Data System (ADS)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%–3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for an HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.
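
    Spin-glass fitness landscapes of this kind assign each sequence an energy built from single-site fields and pairwise couplings, with lower energy corresponding to higher replicative capacity. The sketch below evaluates such a model for binary (mutated/consensus) sequences; the sequence length, field values, and couplings are random placeholders, not parameters inferred from HCV data.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(seq, h, J):
    """Ising-type fitness proxy: -E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j,
    where s_i in {0, 1} flags a mutation away from the consensus residue.
    J is symmetric with zero diagonal, so the double sum is 0.5 * s @ J @ s."""
    return h @ seq + 0.5 * seq @ J @ seq

L = 20
h = rng.normal(-1.0, 0.3, L)                 # most single mutations deleterious
J = rng.normal(0.0, 0.1, (L, L))
J = np.triu(J, 1)
J = J + J.T                                  # symmetrise, keep zero diagonal

consensus = np.zeros(L)
mutant = np.zeros(L)
mutant[3] = 1.0                              # one mutation at site 3

print(fitness(consensus, h, J))              # 0.0 by construction
print(fitness(mutant, h, J))                 # equals h[3], typically negative
```

Once the fields and couplings are inferred from a multiple sequence alignment, scoring any candidate immunogen is just this kind of cheap evaluation, which is what makes exhaustive in silico screening of millions of candidates feasible.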

  16. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    USGS Publications Warehouse

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magnusson, Arni; Martell, Steve; Nash, John; Nielsen, Anders; Regetz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
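
The generic workflow the paper compares across R, AD Model Builder, and BUGS is: simulate or load data, define a negative log-likelihood for a nonlinear model, and optimize it. A hedged Python/SciPy sketch of that workflow, using an invented logistic-growth example (the paper itself works in R/ADMB/BUGS, not Python):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: logistic population growth with Gaussian observation
# noise, fitted by minimizing the negative log-likelihood. All values invented.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
K_true, r_true, n0, sigma = 100.0, 0.8, 5.0, 2.0

def logistic(t, K, r, n0):
    return K / (1 + (K / n0 - 1) * np.exp(-r * t))

y = logistic(t, K_true, r_true, n0) + rng.normal(0, sigma, t.size)

def nll(params):
    """Negative log-likelihood; initial size n0 treated as known for brevity."""
    K, r, s = params
    if K <= 0 or r <= 0 or s <= 0:
        return np.inf
    resid = y - logistic(t, K, r, n0)
    return 0.5 * np.sum((resid / s) ** 2) + t.size * np.log(s)

fit = minimize(nll, x0=[80.0, 0.5, 1.0], method="Nelder-Mead")
K_hat, r_hat, sigma_hat = fit.x
```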

  17. A Green's function quantum average atom model

    SciTech Connect

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.
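
The numerical trick described can be illustrated on a toy level structure: instead of hunting for sharp features on the real axis, evaluate the Green's function slightly off it, G(E + iη), so each state appears as a Lorentzian of half-width η. A sketch with invented eigenvalues:

```python
import numpy as np

# Toy density of states from the imaginary part of a broadened Green's function:
# DOS(E) = -(1/pi) Im Tr G(E + i*eta), for a diagonal toy Hamiltonian.
# Each level contributes a Lorentzian of half-width eta; level positions invented.
levels = np.array([-2.0, -0.5, 1.3])   # toy eigenvalues
eta = 0.05                             # broadening chosen for numerical convenience

def dos(E, levels, eta):
    return (eta / np.pi) / ((E - levels[:, None]) ** 2 + eta ** 2)

E = np.linspace(-10, 10, 40001)
total = dos(E, levels, eta).sum(axis=0)
# Integrating the broadened DOS recovers the number of states (3 here),
# with no resonance tracking needed.
integrated = total.sum() * (E[1] - E[0])
```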

  18. Atomic Layer Deposition - Process Models and Metrologies

    SciTech Connect

    Burgess, D.R. Jr.; Maslar, J.E.; Hurst, W.S.; Moore, E.F.; Kimes, W.A.; Fink, R.R.; Nguyen, N.V.

    2005-09-09

    We report on the status of a combined experimental and modeling study for atomic layer deposition (ALD) of HfO2 and Al2O3. Hafnium oxide films were deposited from tetrakis(dimethylamino)hafnium and water. Aluminum oxide films from trimethyl aluminum and water are being studied through simulations. In this work, both in situ metrologies and process models are being developed. Optically-accessible ALD reactors have been constructed for in situ, high-sensitivity Raman and infrared absorption spectroscopic measurements to monitor gas phase and surface species. A numerical model using computational fluid dynamics codes has been developed to simulate the gas flow and temperature profiles in the experimental reactor. Detailed chemical kinetic models are being developed with assistance from quantum chemical calculations to explore reaction pathways and energetics. This chemistry is then incorporated into the overall reactor models.

  19. The FIT Model - Fuel-cycle Integration and Tradeoffs

    SciTech Connect

    Steven J. Piet; Nick R. Soelberg; Samuel E. Bays; Candido Pereira; Layne F. Pincock; Eric L. Shaber; Melissa C. Teague; Gregory M. Teske; Kurt G. Vedros

    2010-09-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These criteria must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010] are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the “system losses study” was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that this is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles on common bases to determine how chemical performance changes in one part of a fuel cycle (say, used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for “minimum fuel treatment” approaches such as melt refining and AIROX, FIT can help estimate how performance would have to change to achieve feasibility.

  20. Atom-Role-Based Access Control Model

    NASA Astrophysics Data System (ADS)

    Cai, Weihong; Huang, Richeng; Hou, Xiaoli; Wei, Gang; Xiao, Shui; Chen, Yindong

    Role-based access control (RBAC) has been widely recognized as an efficient access control model and is currently a hot research topic in information security. However, in large-scale enterprise application environments, the traditional RBAC model based on the role hierarchy has the following deficiencies. Firstly, it cannot effectively reflect role relationships in complicated cases, which does not accord with practical applications. Secondly, a senior role unconditionally inherits all permissions of its junior roles, so a user holding a supervisor role may accumulate all permissions; this easily causes permission abuse and violates the principle of least privilege, one of the main security principles. To deal with these problems, after analyzing permission types and role relationships we propose the concept of the atom role and build an atom-role-based access control model, called ATRBAC, by dividing the permission set of each regular role according to inheritance path relationships. Application-specific analysis shows that this model can meet access control requirements well.
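
The core idea can be sketched in a few lines: split a regular role's permission set into atom roles, and let a senior role inherit only selected atoms rather than everything. All role and permission names below are invented for illustration; this is not the paper's formal model:

```python
# Hypothetical partition of a "clerk" role's permissions into atom roles.
clerk_atoms = {
    "clerk.read": {"read_record"},
    "clerk.write": {"create_record", "update_record"},
    "clerk.approve": {"approve_refund"},
}

def permissions(atom_names, atoms=clerk_atoms):
    """Union of the permissions granted by the chosen atom roles."""
    granted = set()
    for name in atom_names:
        granted |= atoms[name]
    return granted

# A supervisor inherits only the read/write atoms, not approval, avoiding
# the blanket permission accumulation of plain hierarchical inheritance.
supervisor_perms = permissions(["clerk.read", "clerk.write"])
```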

  1. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
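
The moment-preserving idea can be shown at lowest order: atomic charges alone fix the molecular monopole, and a cumulative atomic dipole term absorbs whatever part of the reference molecular dipole the charges miss, so the molecular dipole is preserved for any charge model. All numbers below are illustrative, not computed CO values:

```python
import numpy as np

# Toy two-atom "CO" along z (positions in arbitrary units); one hypothetical
# atomic charge model and a made-up reference ("ab initio") molecular dipole.
positions = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.13]])
charges = np.array([0.2, -0.2])
mu_reference = np.array([0.0, 0.0, 0.06])

mu_from_charges = (charges[:, None] * positions).sum(axis=0)
# Distribute the residual dipole over atoms as cumulative atomic dipoles:
atomic_dipoles = np.tile((mu_reference - mu_from_charges) / len(charges), (2, 1))
mu_total = mu_from_charges + atomic_dipoles.sum(axis=0)
```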

  2. Fitting of Parametric Building Models to Oblique Aerial Images

    NASA Astrophysics Data System (ADS)

    Panday, U. S.; Gerke, M.

    2011-09-01

    In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Such building models are used, for instance, in city modeling or in a cadastral context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which not only has a negative influence on visual impression but, more seriously, also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail and show an object from several directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses least squares to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, field survey, and ALS data. Against the ortho-image, planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and a standard deviation of 0.96°. Overhang parameters agreed with the field survey to approximately 10 cm. The ground and roof heights were accurate to means of -9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, respectively, against ALS. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for completeness of
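
The least-squares core of such fitting can be illustrated on a toy problem: adjust line parameters so a model edge matches edge pixels extracted from an image. Here the "edge" is a synthetic 2D line with noise; the actual method fits full 3D building wire frames, which this sketch does not attempt:

```python
import numpy as np

# Synthetic edge pixels scattered around a line y = a*x + b (values invented).
rng = np.random.default_rng(2)
a_true, b_true = 0.35, 12.0
x = np.linspace(0, 100, 60)
y = a_true * x + b_true + rng.normal(0, 0.5, x.size)

# Linear least squares for [a, b] via the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
```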

  3. Making It Visual: Creating a Model of the Atom

    ERIC Educational Resources Information Center

    Pringle, Rose M.

    2004-01-01

    This article describes a lesson in which students construct Bohr's planetary model of the atom. Niels Bohr's atomic model provides a framework for discussing with middle and high school students the historical development of our understanding of the structure of the atom. The model constructed in this activity will enable students to visualize the…

  4. A comprehensive X-ray absorption model for atomic oxygen

    SciTech Connect

    Gorczyca, T. W.; Bautista, M. A.; Mendoza, C.; Hasoglu, M. F.; García, J.; Gatuzz, E.; Kaastra, J. S.; Raassen, A. J. J.; De Vries, C. P.; Kallman, T. R.; Manson, S. T.; Zatsarinny, O.

    2013-12-10

    An analytical formula is developed to accurately represent the photoabsorption cross section of O I for all energies of interest in X-ray spectral modeling. In the vicinity of the K edge, a Rydberg series expression is used to fit R-matrix results, including important orbital relaxation effects, that accurately predict the absorption oscillator strengths below threshold and merge consistently and continuously to the above-threshold cross section. Further, minor adjustments are made to the threshold energies in order to reliably align the atomic Rydberg resonances after consideration of both experimental and observed line positions. At energies far below or above the K-edge region, the formulation is based on both outer- and inner-shell direct photoionization, including significant shake-up and shake-off processes that result in photoionization-excitation and double-photoionization contributions to the total cross section. The ultimate purpose for developing a definitive model for oxygen absorption is to resolve standing discrepancies between the astronomically observed and laboratory-measured line positions, and between the inferred atomic and molecular oxygen abundances in the interstellar medium from XSTAR and SPEX spectral models.
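
The below-threshold part of such a model follows a Rydberg series, E_n = E_th - R_eff/(n - μ)². As a hedged sketch of fitting one: for a fixed quantum defect μ the model is linear in (E_th, R_eff), so a grid search over μ combined with exact linear least squares recovers all three parameters. The energies below are synthetic, not the O I values:

```python
import numpy as np

# Synthetic resonance positions from a made-up series (eV-like numbers).
n = np.arange(3, 9, dtype=float)
E_th_true, R_true, mu_true = 544.0, 13.6, 1.1
E_obs = E_th_true - R_true / (n - mu_true) ** 2

best = None
for mu in np.linspace(0.5, 1.5, 1001):
    # For fixed mu, E_n = E_th + R_eff * (-1/(n-mu)^2) is linear in (E_th, R_eff).
    A = np.column_stack([np.ones_like(n), -1.0 / (n - mu) ** 2])
    coef = np.linalg.lstsq(A, E_obs, rcond=None)[0]
    rss = float(np.sum((A @ coef - E_obs) ** 2))
    if best is None or rss < best[0]:
        best = (rss, float(mu), coef)

rss_best, mu_hat, (E_th_hat, R_hat) = best
```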

  5. Big Atoms for Small Children: Building Atomic Models from Common Materials to Better Visualize and Conceptualize Atomic Structure

    ERIC Educational Resources Information Center

    Cipolla, Laura; Ferrari, Lia A.

    2016-01-01

    A hands-on approach to introduce the chemical elements and the atomic structure to elementary/middle school students is described. The proposed classroom activity presents Bohr models of atoms using common and inexpensive materials, such as nested plastic balls, colored modeling clay, and small-sized pasta (or small plastic beads).

  6. Computer Model Of Fragmentation Of Atomic Nuclei

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; KHAN FERDOUS; Badavi, Francis F.

    1995-01-01

    High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.

  7. YUP.SCX: coaxing atomic models into medium resolution electron density maps.

    PubMed

    Tan, Robert K-Z; Devkota, Batsal; Harvey, Stephen C

    2008-08-01

    The structures of large macromolecular complexes in different functional states can be determined by cryo-electron microscopy, which yields electron density maps of low to intermediate resolutions. The maps can be combined with high-resolution atomic structures of components of the complex, to produce a model for the complex that is more accurate than the formal resolution of the map. To this end, methods have been developed to dock atomic models into density maps rigidly or flexibly, and to refine a docked model so as to optimize the fit of the atomic model into the map. We have developed a new refinement method called YUP.SCX. The electron density map is converted into a component of the potential energy function to which terms for stereochemical restraints and volume exclusion are added. The potential energy function is then minimized (using simulated annealing) to yield a stereochemically-restrained atomic structure that fits into the electron density map optimally. We used this procedure to construct an atomic model of the 70S ribosome in the pre-accommodation state. Although some atoms are displaced by as much as 33 Å, they divide themselves into nearly rigid fragments along natural boundaries with smooth transitions between the fragments.
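
The refinement scheme described (map-derived energy term plus restraints, minimized by simulated annealing) can be sketched in one dimension. Here a single "atom" seeks the peak of a toy Gaussian "map" while a spring restrains it near a reference position; all parameters and the cooling schedule are invented, not those of YUP.SCX:

```python
import math
import random

random.seed(3)
MAP_CENTER, REF = 2.0, 1.5

def energy(x):
    density = math.exp(-(x - MAP_CENTER) ** 2)   # map-derived term (to maximize)
    restraint = 0.1 * (x - REF) ** 2             # toy stereochemical restraint
    return -density + restraint

# Metropolis simulated annealing with geometric cooling.
x, e = 0.0, energy(0.0)
T = 1.0
for _ in range(20000):
    x_new = x + random.uniform(-0.1, 0.1)
    e_new = energy(x_new)
    if e_new < e or random.random() < math.exp(-(e_new - e) / T):
        x, e = x_new, e_new
    T = max(1e-3, T * 0.9995)
```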

  8. Simulations of Statistical Model Fits to RHIC Data

    NASA Astrophysics Data System (ADS)

    Llope, W. J.

    2013-04-01

    The application of statistical model fits to experimentally measured particle multiplicity ratios allows inferences of the average values of temperatures, T, baryochemical potentials, μB, and other quantities at chemical freeze-out. The location of the boundary between the hadronic and partonic regions in the (μB,T) phase diagram, and the possible existence of a critical point, remains largely speculative. The search for a critical point using the moments of the particle multiplicity distributions in tightly centrality constrained event samples makes the tacit assumption that the variances in the (μB,T) values in these samples are sufficiently small to tightly localize the events in the phase diagram. This and other aspects were explored in simulations by coupling the UrQMD transport model to the statistical model code Thermus. The phase diagram trajectories of individual events versus the time in fm/c were calculated versus the centrality and beam energy. The variances of the (μB,T) values at freeze-out, even in narrow centrality bins, are seen to be relatively large. This suggests that a new way to constrain the events on the phase diagram may lead to more sensitive searches for the possible critical point.

  9. Statistical modelling of network panel data: goodness of fit.

    PubMed

    Schweinberger, Michael

    2012-05-01

    Networks of relationships between individuals influence individual and collective outcomes and are therefore of interest in social psychology, sociology, the health sciences, and other fields. We consider network panel data, a common form of longitudinal network data. In the framework of estimating functions, which includes the method of moments as well as the method of maximum likelihood, we propose score-type tests. The score-type tests share with other score-type tests, including the classic goodness-of-fit test of Pearson, the property that the score-type tests are based on comparing the observed value of a function of the data to values predicted by a model. The score-type tests are most useful in forward model selection and as tests of homogeneity assumptions, and possess substantial computational advantages. We derive one-step estimators which are useful as starting values of parameters in forward model selection and therefore complement the usefulness of the score-type tests. The finite-sample behaviour of the score-type tests is studied by Monte Carlo simulation and compared to t-type tests.
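
The score-type tests share their basic shape with Pearson's classic goodness-of-fit test: compare observed values of a statistic to values predicted by the model. A minimal sketch of that comparison with invented counts (not the network statistics the paper actually tests):

```python
import numpy as np

# Pearson-type comparison: X^2 = sum (O - E)^2 / E against a chi-square
# critical value. Observed counts and model probabilities are made up.
observed = np.array([48, 102, 55, 45])
model_probs = np.array([0.2, 0.4, 0.2, 0.2])   # predicted by the fitted model
expected = model_probs * observed.sum()

x2 = float(np.sum((observed - expected) ** 2 / expected))
CRIT_95_DF3 = 7.815   # chi-square 95th percentile, 3 degrees of freedom
reject = x2 > CRIT_95_DF3
```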

  10. Caloric curves fitted by polytropic distributions in the HMF model

    NASA Astrophysics Data System (ADS)

    Campa, Alessandro; Chavanis, Pierre-Henri

    2013-04-01

    We perform direct numerical simulations of the Hamiltonian mean field (HMF) model starting from non-magnetized initial conditions with a velocity distribution that is (i) Gaussian, (ii) semi-elliptical, and (iii) waterbag. Below a critical energy E_c, depending on the initial condition, this distribution is Vlasov dynamically unstable. The system undergoes a process of violent relaxation and quickly reaches a quasi-stationary state (QSS). We find that the distribution function of this QSS can be conveniently fitted by a polytrope with index (i) n = 2, (ii) n = 1, and (iii) n = 1/2. Using the values of these indices, we are able to determine the physical caloric curve T_kin(E) and explain the negative kinetic specific heat region C_kin = dE/dT_kin < 0 observed in the numerical simulations. At low energies, we find that the system has a "core-halo" structure. The core corresponds to the pure polytrope discussed above, but it is now surrounded by a halo of particles. In case (iii), we recover the "uniform" core-halo structure previously found by Pakter and Levin [Phys. Rev. Lett. 106, 200603 (2011)]. We also consider unsteady initial conditions with magnetization M_0 = 1 and an isotropic waterbag velocity distribution, and report the complex dynamics of the system creating phase-space holes and dense filaments. We show that the kinetic caloric curve is approximately constant, corresponding to a polytrope with index n_0 ≃ 3.56 (we also mention the presence of an unexpected hump). Finally, we consider the collisional evolution of an initially Vlasov-stable distribution, and show that the time-evolving distribution function f(θ,v,t) can be fitted by a sequence of polytropic distributions with a time-dependent index n(t) in both the non-magnetized and magnetized regimes. These numerical results show that polytropic distributions (also called Tsallis distributions) provide in many cases a good fit of the QSSs. They may even be the rule rather than the exception.
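
A hedged sketch of fitting a polytropic profile f(v) = C (1 - v²/vm²)^n to a measured velocity distribution: if the cutoff vm is treated as known, taking logs makes the index n a linear-regression slope. The data below are synthetic, not HMF simulation output:

```python
import numpy as np

# Synthetic "measured" distribution from a known polytrope plus small
# multiplicative noise (all parameter values invented).
rng = np.random.default_rng(4)
vm, n_true, C = 2.0, 1.0, 0.6
v = np.linspace(-1.8, 1.8, 41)
f = C * (1 - (v / vm) ** 2) ** n_true * rng.lognormal(0.0, 0.02, v.size)

# Log-linearize: ln f = ln C + n * ln(1 - v^2/vm^2), so the slope estimates n.
u = np.log(1 - (v / vm) ** 2)
slope, intercept = np.polyfit(u, np.log(f), 1)
```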

  11. Modelling age and secular differences in fitness between basketball players.

    PubMed

    Drinkwater, Eric J; Hopkins, Will G; McKenna, Michael J; Hunt, Patrick H; Pyne, David B

    2007-06-01

    Concerns about the value of physical testing and apparently declining test performance in junior basketball players prompted this retrospective study of trends in anthropometric and fitness test scores related to recruitment age and recruitment year. The participants were 1011 females and 1087 males entering Basketball Australia's State and National programmes (1862 and 236 players, respectively). Players were tested on 2.6 +/- 2.0 (mean +/- s) occasions over 0.8 +/- 1.0 year. Test scores were adjusted to recruitment age (14-19 years) and recruitment year (1996-2003) using mixed modelling. Effects were estimated by log transformation and expressed as standardized (Cohen) differences in means. National players scored more favourably than State players on all tests, with the differences being generally small (standardized differences, 0.2-0.6) or moderate (0.6-1.2). On all tests, males scored more favourably than females, with large standardized differences (>1.2). Athletes entering at age 16 performed at least moderately better than athletes entering at age 14 on most tests (standardized differences, 0.7-2.1), but test scores often plateaued or began to deteriorate at around 17 years. Some fitness scores deteriorated over the 8-year period, most notably a moderate increase in sprint time and moderate (National male) to large (National female) declines in shuttle run performance. Variation in test scores between National players was generally less than that between State players (ratio of standard deviations, 0.83-1.18). More favourable means and lower variability in athletes of a higher standard highlight the potential utility of these tests in junior basketball programmes, although secular declines should be a major concern of Australian basketball coaches.
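
The effect-size machinery used here (log transformation, then standardized Cohen-type differences in means) is easy to sketch. The scores below are invented lognormal samples standing in for two player groups; this is not the paper's mixed-model adjustment, only the standardized-difference step:

```python
import numpy as np

rng = np.random.default_rng(7)
state_scores = rng.lognormal(mean=3.0, sigma=0.15, size=200)     # e.g. State
national_scores = rng.lognormal(mean=3.1, sigma=0.15, size=60)   # e.g. National

# Log-transform, then standardize the mean difference by the pooled SD.
ls, ln = np.log(state_scores), np.log(national_scores)
pooled_var = ((ls.size - 1) * ls.var(ddof=1) + (ln.size - 1) * ln.var(ddof=1)) \
             / (ls.size + ln.size - 2)
cohen_d = (ln.mean() - ls.mean()) / np.sqrt(pooled_var)
```

With the conventional thresholds quoted in the abstract, a value in the 0.6-1.2 range would be read as a "moderate" difference.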

  12. RNA Virus Evolution via a Fitness-Space Model

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev S.; Levine, Herbert; Kessler, David A.

    1996-06-01

    We present a mean-field theory for the evolution of RNA virus populations. The theory operates with a distribution of the population in a one-dimensional fitness space, and is valid for sufficiently smooth fitness landscapes. Our approach explains naturally the recent experimental observation [I. S. Novella et al., Proc. Natl. Acad. Sci. U.S.A. 92, 5841-5844 (1995)] of two distinct stages in the growth of virus fitness.

  13. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require.
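
The simplest model fit in this family is mono-exponential T2 relaxometry, S(TE) = S0·exp(-TE/T2), which can be linearized by taking logs. A sketch with synthetic echo times and signals (not NiftyFit's actual fitting routines, which handle richer models):

```python
import numpy as np

# Synthetic single-voxel decay data (ms and arbitrary signal units, invented).
rng = np.random.default_rng(5)
TE = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])
S0_true, T2_true = 1000.0, 75.0
S = S0_true * np.exp(-TE / T2_true) * rng.lognormal(0.0, 0.01, TE.size)

# log S = log S0 - TE / T2, so a straight-line fit recovers both parameters.
slope, logS0 = np.polyfit(TE, np.log(S), 1)
T2_hat, S0_hat = -1.0 / slope, float(np.exp(logS0))
```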

  15. A Comparison of Four Estimators of a Population Measure of Model Fit in Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Zhang, Wei

    2008-01-01

    A major issue in the utilization of covariance structure analysis is model fit evaluation. Recent years have witnessed increasing interest in various test statistics and so-called fit indexes, most of which are actually based on or closely related to F[subscript 0], a measure of model fit in the population. This study aims to provide a systematic…

  16. Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2011-01-01

    The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having various conditions…

  17. Atomic Models for Motional Stark Effects Diagnostics

    SciTech Connect

    Gu, M F; Holcomb, C; Jayakumar, J; Allen, S; Pablant, N A; Burrell, K

    2007-07-26

    We present detailed atomic physics models for the motional Stark effect (MSE) diagnostic on magnetic fusion devices. Excitation and ionization cross sections of the hydrogen or deuterium beam traveling in a magnetic field in collisions with electrons, ions, and neutral gas are calculated in the first Born approximation. The density matrices and polarization states of individual Stark-Zeeman components of the Balmer α line are obtained for both beam-into-plasma and beam-into-gas models. A detailed comparison of the model calculations and the MSE polarimetry and spectral intensity measurements obtained at the DIII-D tokamak is carried out. Although our beam-into-gas models provide a qualitative explanation for the larger π/σ intensity ratios and represent significant improvements over the statistical population models, empirical adjustment factors ranging from 1.0-2.0 must still be applied to individual line intensities to bring the calculations into full agreement with the observations. Nevertheless, we demonstrate that beam-into-gas measurements can be used successfully as calibration procedures for measuring the magnetic pitch angle through π/σ intensity ratios. The analyses of the filter-scan polarization spectra from the DIII-D MSE polarimetry system indicate unknown channel- and time-dependent light contamination in the beam-into-gas measurements. Such contamination may be the main reason for the failure of beam-into-gas calibration on MSE polarimetry systems.

  18. Refinement of atomic models in high resolution EM reconstructions using Flex-EM and local assessment

    PubMed Central

    Joseph, Agnel Praveen; Malhotra, Sony; Burnley, Tom; Wood, Chris; Clare, Daniel K.; Winn, Martyn; Topf, Maya

    2016-01-01

    As the resolutions of Three Dimensional Electron Microscopic reconstructions of biological macromolecules are being improved, there is a need for better fitting and refinement methods at high resolutions and robust approaches for model assessment. Flex-EM/MODELLER has been used for flexible fitting of atomic models in intermediate-to-low resolution density maps of different biological systems. Here, we demonstrate the suitability of the method to successfully refine structures at higher resolutions (2.5–4.5 Å) using both simulated and experimental data, including a newly processed map of Apo-GroEL. A hierarchical refinement protocol was adopted where the rigid body definitions are relaxed and atom displacement steps are reduced progressively at successive stages of refinement. For the assessment of local fit, we used the SMOC (segment-based Manders’ overlap coefficient) score, while the model quality was checked using the Qmean score. Comparison of SMOC profiles at different stages of refinement helped in detecting regions that are poorly fitted. We also show how initial model errors can have significant impact on the goodness-of-fit. Finally, we discuss the implementation of Flex-EM in the CCP-EM software suite. PMID:26988127
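
The SMOC local-fit score is built on a Manders'-style overlap coefficient between a model-derived density m and the experimental map e: MOC = Σ(m·e) / sqrt(Σm² · Σe²). A sketch computing it globally on toy 3D arrays (SMOC itself evaluates this per segment along the chain, which this sketch omits):

```python
import numpy as np

def overlap(m, e):
    """Manders'-style overlap coefficient between two density arrays."""
    m, e = m.ravel(), e.ravel()
    return float(m @ e / np.sqrt((m @ m) * (e @ e)))

# Toy "experimental" map and a noisier "model-derived" map (values invented).
rng = np.random.default_rng(6)
exp_map = rng.random((8, 8, 8))
perfect = overlap(exp_map, exp_map)                               # identical maps
noisy = overlap(exp_map, exp_map + rng.normal(0, 0.2, exp_map.shape))
```

By the Cauchy-Schwarz inequality the score is at most 1, reached only for perfectly proportional densities, which is what makes it usable for flagging poorly fitted regions.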

  19. Refinement of atomic models in high resolution EM reconstructions using Flex-EM and local assessment.

    PubMed

    Joseph, Agnel Praveen; Malhotra, Sony; Burnley, Tom; Wood, Chris; Clare, Daniel K; Winn, Martyn; Topf, Maya

    2016-05-01

    As the resolution of three-dimensional electron microscopy (EM) reconstructions of biological macromolecules improves, there is a need for better fitting and refinement methods at high resolution and for robust approaches to model assessment. Flex-EM/MODELLER has been used for flexible fitting of atomic models into intermediate-to-low resolution density maps of different biological systems. Here, we demonstrate the suitability of the method for refining structures at higher resolutions (2.5–4.5 Å) using both simulated and experimental data, including a newly processed map of Apo-GroEL. A hierarchical refinement protocol was adopted in which the rigid-body definitions are relaxed and atom displacement steps are reduced progressively at successive stages of refinement. For the assessment of local fit, we used the SMOC (segment-based Manders' overlap coefficient) score, while model quality was checked using the Qmean score. Comparison of SMOC profiles at different stages of refinement helped in detecting regions that are poorly fitted. We also show how initial model errors can have a significant impact on the goodness-of-fit. Finally, we discuss the implementation of Flex-EM in the CCP-EM software suite.

  1. Atomic force microscopy of model lipid membranes.

    PubMed

    Morandat, Sandrine; Azouzi, Slim; Beauvais, Estelle; Mastouri, Amira; El Kirat, Karim

    2013-02-01

    Supported lipid bilayers (SLBs) are biomimetic model systems that are now widely used to address the biophysical and biochemical properties of biological membranes. Two main methods are usually employed to form SLBs: the transfer of two successive monolayers by Langmuir-Blodgett or Langmuir-Schaefer techniques, and the fusion of preformed lipid vesicles. The transfer of lipid films on flat solid substrates offers the possibility to apply a wide range of surface analytical techniques that are very sensitive. Among them, atomic force microscopy (AFM) has opened new opportunities for determining the nanoscale organization of SLBs under physiological conditions. In this review, we first focus on the different protocols generally employed to prepare SLBs. Then, we describe AFM studies on the nanoscale lateral organization and mechanical properties of SLBs. Lastly, we survey recent developments in the AFM monitoring of bilayer alteration, remodeling, or digestion, by incubation with exogenous agents such as drugs, proteins, peptides, and nanoparticles.

  2. Operation of the computer model for microenvironment atomic oxygen exposure

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.

  3. Project Physics Text 5, Models of the Atom.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Basic atomic theories are presented in this fifth unit of the Project Physics text for use by senior high students. The chemical basis of atomic models in the early years of the 19th century is discussed in connection with Dalton's theory, atomic properties, and periodic tables. The discovery of electrons is described by using cathode rays, Millikan's…

  4. Atomic Oscillator Strengths for Stellar Atmosphere Modeling

    NASA Astrophysics Data System (ADS)

    Ruffoni, Matthew; Pickering, Juliet C.

    2015-08-01

    In order to correctly model stellar atmospheres, fundamental atomic data must be available to describe the atomic lines observed in their spectra. Accurate, laboratory-measured oscillator strengths (f-values) for Fe-peak elements in neutral or low-ionisation states are particularly important for determining chemical abundances. However, advances in astronomical spectroscopy in recent decades have outpaced those in laboratory astrophysics, with the latter frequently being overlooked at the planning stages of new projects. As a result, numerous big-budget astronomy projects have been, and continue to be, hindered by a lack of suitable, accurately measured reference data to permit the analysis of expensive astronomical spectra; a problem only likely to worsen in the coming decades as spectrographs at new facilities increasingly move to infrared wavelengths. At Imperial College London, in collaboration with NIST, Wisconsin University and Lund University, we have been working with the astronomy community in an effort to provide new accurately measured f-values for a range of projects. In particular, we have been working closely with the Gaia-ESO (GES) and SDSS-III/APOGEE surveys, both of which have discovered that many lines that would make ideal candidates for inclusion in their analyses have poorly defined f-values, or are simply absent from the database. Using high-resolution Fourier transform spectroscopy (R ~ 2,000,000) to provide atomic branching fractions, and combining these with level lifetimes measured with laser-induced fluorescence, we have provided new laboratory-measured f-values for a range of Fe-peak elements, most recently including Fe I, Fe II, and V I. For strong, unblended lines, uncertainties are as low as ±0.02 dex. In this presentation, I will describe how experimental f-values are obtained in the laboratory and present our recent work for GES and APOGEE. In particular, I will also discuss the strengths and limitations of current laboratory
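    The combination of branching fractions and lifetimes described above follows standard relations: the transition probability is the branching fraction divided by the upper-level lifetime, and the oscillator strength follows with the usual constant 1.499e-16 for wavelengths in Å and A in s⁻¹. A sketch (function names are illustrative):

    ```python
    import math

    def f_value(branching_fraction, lifetime_s, wavelength_angstrom,
                g_upper, g_lower):
        # Transition probability A_ul from the line's branching fraction and
        # the laser-induced-fluorescence lifetime of the upper level.
        A_ul = branching_fraction / lifetime_s                 # s^-1
        # Standard conversion to the absorption oscillator strength.
        return 1.499e-16 * wavelength_angstrom ** 2 * (g_upper / g_lower) * A_ul

    def log_gf(branching_fraction, lifetime_s, wavelength_angstrom,
               g_upper, g_lower):
        # Abundance analyses usually quote log(gf) = log10(g_lower * f).
        return math.log10(g_lower * f_value(branching_fraction, lifetime_s,
                                            wavelength_angstrom,
                                            g_upper, g_lower))
    ```

    For example, a line carrying the full decay of a 10 ns level at 5000 Å with g_upper/g_lower = 3 gives an f-value near 1.12.
    
    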

  5. An atomic model for neutral and singly ionized uranium

    NASA Technical Reports Server (NTRS)

    Maceda, E. L.; Miley, G. H.

    1979-01-01

    A model for the atomic levels above ground state in neutral, U(0), and singly ionized, U(+), uranium is described based on identified atomic transitions. Some 168 states in U(0) and 95 in U(+) are found. A total of 1581 atomic transitions are used to complete this process. Also discussed are the atomic inverse lifetimes and line widths for the radiative transitions as well as the electron collisional cross sections.

  6. Convergence, Admissibility, and Fit of Alternative Confirmatory Factor Analysis Models for MTMM Data

    ERIC Educational Resources Information Center

    Lance, Charles E.; Fan, Yi

    2016-01-01

    We compared six different analytic models for multitrait-multimethod (MTMM) data in terms of convergence, admissibility, and model fit to 258 samples of previously reported data. Two well-known models, the correlated trait-correlated method (CTCM) and the correlated trait-correlated uniqueness (CTCU) models, were fit for reference purposes in…

  7. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  8. An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models

    ERIC Educational Resources Information Center

    Liu, Yanlou; Tian, Wei; Xin, Tao

    2016-01-01

    The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…

  9. Atomic Forces for Geometry-Dependent Point Multipole and Gaussian Multipole Models

    PubMed Central

    Elking, Dennis M.; Perera, Lalith; Duke, Robert; Darden, Thomas; Pedersen, Lee G.

    2010-01-01

    In standard treatments of atomic multipole models, interaction energies, total molecular forces, and total molecular torques are given for multipolar interactions between rigid molecules. However, if the molecules are assumed to be flexible, two additional multipolar atomic forces arise due to (1) the transfer of torque between neighboring atoms, and (2) the dependence of the multipole moments on internal geometry (bond lengths, bond angles, etc.) for geometry-dependent multipole models. In the current study, atomic force expressions for geometry-dependent multipoles are presented for use in simulations of flexible molecules. The atomic forces are derived by first proposing a new general expression for the Wigner function derivatives ∂D^l_{m′m}/∂Ω. The force equations can be applied to electrostatic models based on atomic point multipoles or Gaussian multipole charge density. Hydrogen-bonded dimers are used to test the intermolecular electrostatic energies and atomic forces calculated by geometry-dependent multipoles fit to the ab initio electrostatic potential (ESP). The electrostatic energies and forces are compared to their reference ab initio values. It is shown that both static and geometry-dependent multipole models are able to reproduce total molecular forces and torques with respect to ab initio, while geometry-dependent multipoles are needed to reproduce ab initio atomic forces. The expressions for atomic force can be used in simulations of flexible molecules with atomic multipoles. In addition, the results presented in this work should lead to further development of next generation force fields composed of geometry-dependent multipole models. PMID:20839297
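    The bookkeeping above rests on the fact that multipolar atomic forces are gradients of the interaction energy. A toy charge-plus-point-dipole pair in Gaussian units (deliberately much simpler than the paper's Gaussian-multipole and geometry-dependent terms) shows the analytic force and how it can be checked against a finite difference:

    ```python
    import math

    def energy(q1, p1, q2, r):
        """Energy of a site carrying charge q1 and point dipole p1 interacting
        with a point charge q2 at separation vector r = r2 - r1 (Gaussian
        units): U = q1*q2/|r| + q2*(p1.r)/|r|^3."""
        d = math.sqrt(sum(x * x for x in r))
        pdotr = sum(a * b for a, b in zip(p1, r))
        return q1 * q2 / d + q2 * pdotr / d ** 3

    def force_on_2(q1, p1, q2, r):
        """Analytic force on site 2, F = -grad_r U:
        q1*q2*r/|r|^3 - q2*p1/|r|^3 + 3*q2*(p1.r)*r/|r|^5."""
        d = math.sqrt(sum(x * x for x in r))
        pdotr = sum(a * b for a, b in zip(p1, r))
        return [q1 * q2 * x / d ** 3 - q2 * p / d ** 3
                + 3.0 * q2 * pdotr * x / d ** 5
                for x, p in zip(r, p1)]
    ```

    The same gradient-consistency check (analytic force versus central finite difference of the energy) is the standard sanity test for any new multipolar force expression.
    
    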

  10. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Finch, Maria E. Hernandez

    2016-01-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…

  11. Cardiorespiratory Fitness. Role Modeling by P.E. Instructors.

    ERIC Educational Resources Information Center

    Whitley, Jim D.; And Others

    1988-01-01

    A survey determining the extent to which high school physical education teachers offered cardiorespiratory instruction found that more teachers than not regularly provided such instruction, with female teachers more likely to offer instruction than males. Physical fitness levels of the teachers did not appear to affect the amount of instruction…

  12. A Comparison of Model-Data Fit for Parametric and Nonparametric Item Response Theory Models Using Ordinal-Level Ratings

    ERIC Educational Resources Information Center

    Dyehouse, Melissa A.

    2009-01-01

    This study compared the model-data fit of a parametric item response theory (PIRT) model to a nonparametric item response theory (NIRT) model to determine the best-fitting model for use with ordinal-level alternate assessment ratings. The PIRT Generalized Graded Unfolding Model (GGUM) was compared to the NIRT Mokken model. Chi-square statistics…

  13. The Search for "Optimal" Cutoff Properties: Fit Index Criteria in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Sivo, Stephen A.; Xitao, Fan; Witta, E. Lea; Willse, John T.

    2006-01-01

    This study is a partial replication of L. Hu and P. M. Bentler's (1999) fit criteria work. The purpose of this study was twofold: (a) to determine whether cut-off values vary according to which model is the true population model for a dataset and (b) to identify which of 13 fit indexes behave optimally by retaining all of the correct models while…

  14. Perturbed atoms in molecules and solids: The PATMOS model.

    PubMed

    Røeggen, Inge; Gao, Bin

    2013-09-01

    A new computational method for electronic-structure studies of molecules and solids is presented. The key element in the new model - denoted the perturbed atoms in molecules and solids model - is the concept of a perturbed atom in a complex. The basic approximation of the new model is unrestricted Hartree Fock (UHF). The UHF orbitals are localized by the Edmiston-Ruedenberg procedure. The perturbed atoms are defined by distributing the orbitals among the nuclei in such a way that the sum of the intra-atomic UHF energies has a minimum. Energy corrections with respect to the UHF energy, are calculated within the energy incremental scheme. The most important three- and four-electron corrections are selected by introducing a modified geminal approach. Test calculations are performed on N2, Li2, and parallel arrays of hydrogen atoms. The character of the perturbed atoms is illustrated by calculations on H2, CH4, and C6H6.

  15. Modelling the atomic structure of Al92U8 metallic glass.

    PubMed

    Michalik, S; Bednarcik, J; Jóvári, P; Honkimäki, V; Webb, A; Franz, H; Fazakas, E; Varga, L K

    2010-10-13

    The local atomic structure of the glassy Al92U8 alloy was modelled by the reverse Monte Carlo (RMC) method, fitting x-ray diffraction (XRD) and extended x-ray absorption fine structure (EXAFS) signals. The final structural model was analysed by means of partial pair correlation functions, coordination number distributions and Voronoi tessellation. In our study we found that the most probable atomic separations between Al-Al and U-Al pairs in the glassy Al92U8 alloy are 2.7 Å and 3.1 Å with coordination numbers 11.7 and 17.1, respectively. The Voronoi analysis did not support the existence of well-defined building blocks directly embedded in the amorphous matrix. The dense-random-packing model seems to be adequate for describing the connection between solvent and solute atoms. PMID:21386570
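    Reverse Monte Carlo perturbs a particle configuration until a signal computed from it matches the measured one. A toy 1D sketch of the loop (matching a pair-distance histogram rather than XRD/EXAFS signals; greedy acceptance for brevity, whereas full RMC also accepts uphill moves with probability exp(-Δχ²/2); all names illustrative):

    ```python
    import random

    def chi2(model_hist, target_hist):
        return sum((m - t) ** 2 for m, t in zip(model_hist, target_hist))

    def histogram(positions, bins, box):
        """Pair-distance histogram on a 1D periodic box of length `box`."""
        h = [0] * bins
        n = len(positions)
        for i in range(n):
            for j in range(i + 1, n):
                d = abs(positions[i] - positions[j])
                d = min(d, box - d)                      # nearest periodic image
                h[min(int(d / (box / 2) * bins), bins - 1)] += 1
        return h

    def rmc(positions, target_hist, box, steps=2000, delta=0.1, seed=1):
        """Move random particles; keep a move only if the histogram gets
        closer to the target."""
        rng = random.Random(seed)
        bins = len(target_hist)
        cost = chi2(histogram(positions, bins, box), target_hist)
        for _ in range(steps):
            i = rng.randrange(len(positions))
            old = positions[i]
            positions[i] = (old + rng.uniform(-delta, delta)) % box
            new_cost = chi2(histogram(positions, bins, box), target_hist)
            if new_cost <= cost:
                cost = new_cost          # accept move
            else:
                positions[i] = old       # reject move
        return positions, cost
    ```

    In the actual study the "histogram" is replaced by simultaneously computed XRD and EXAFS signals, so a single 3D configuration is driven to be consistent with both data sets at once.
    
    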

  16. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    ERIC Educational Resources Information Center

    Farmer, Jim

    2010-01-01

    In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days. It is…

  17. Atomic level modeling of the HIV capsid

    PubMed Central

    Pornillos, Owen; Ganser-Pornillos, Barbie K.; Yeager, Mark

    2010-01-01

    The mature capsids of human immunodeficiency virus type 1 (HIV-1) and other retroviruses are fullerene shells, composed of the viral CA protein, that enclose the viral genome and facilitate its delivery into new host cells1. Retroviral CA proteins contain independently-folded N-terminal and C-terminal domains (NTD and CTD) that are connected by a flexible linker2–4. The NTD forms either hexameric or pentameric rings, whereas the CTD forms symmetric homodimers that connect the rings into a hexagonal lattice3,5–13. We previously used a disulfide crosslinking strategy to enable isolation and crystallization of soluble HIV-1 CA hexamers11,14. By the same approach, we have now determined the X-ray structure of the HIV-1 CA pentamer at 2.5 Å resolution. Two mutant CA proteins with engineered disulfides at different positions (P17C/T19C and N21C/A22C) converged onto the same quaternary structure, indicating that the disulfide-crosslinked proteins recapitulate the structure of the native pentamer. Assembly of the quasi-equivalent hexamers and pentamers requires remarkably subtle rearrangements in subunit interactions, and appears to be controlled by an electrostatic switch that favors hexamers over pentamers. This study completes the gallery of sub-structures describing the components of the HIV-1 capsid and enables atomic level modeling of the complete capsid. Rigid-body rotations around two assembly interfaces appear sufficient to generate the full range of continuously varying lattice curvature in the fullerene cone. PMID:21248851

  18. Corrections to the paper "Fitting the Armitage-Doll model to radiation-exposed cohorts and implications for population cancer risks"

    SciTech Connect

    Little, M.P.; Hawkins, M.M.; Charles, M.W.; Hildreth, N.G.

    1994-01-01

    A recent paper analyzed patterns of cancer in the Japanese atomic bomb survivors and three other groups exposed to radiation by fitting the so-called multistage model of Armitage and Doll. The paper concluded that the incidence of solid cancer could be described adequately by a model in which up to two stages affected by radiation were assumed but that the data for leukemia within the bomb survivors might not be so well fitted. This was in part because of a failure to account for the observed linear-quadratic dose response that has been observed in the Japanese cohort. It has recently come to our attention that there was a mistake in the fits of the model with two adjacent radiation-affected stages, whereby the quadratic coefficient in dose was being set to zero in all the fits. This paper provides corrections in the calculations for the model and discusses the results.
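    In the Armitage-Doll picture the hazard grows as a power of age, and radiation multiplies the mutation rates of the affected stages; the mistake corrected here amounted to dropping the quadratic dose coefficient in the two-adjacent-stage fits. A schematic sketch of those two ingredients (illustrative only, not the paper's fitted model or parameter values):

    ```python
    import math

    def armitage_doll_hazard(t, k, mu_product):
        """k-stage Armitage-Doll hazard at age t:
        h(t) ~ mu_1*...*mu_k * t**(k-1) / (k-1)!"""
        return mu_product * t ** (k - 1) / math.factorial(k - 1)

    def radiation_relative_risk(dose, alpha, beta, n_affected_stages):
        """Each radiation-affected stage's mutation rate is scaled by a
        linear-quadratic dose factor (1 + alpha*D + beta*D**2); setting
        beta = 0 by accident is the error the correction addresses."""
        return (1.0 + alpha * dose + beta * dose ** 2) ** n_affected_stages
    ```

    With two affected stages the dose response picks up cross terms up to D⁴, which is why forcing beta to zero visibly changes the leukemia fits.
    
    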

  19. "Piekara's Chair": Mechanical Model for Atomic Energy Levels.

    ERIC Educational Resources Information Center

    Golab-Meyer, Zofia

    1991-01-01

    Uses the teaching method of models or analogies, specifically the model called "Piekara's chair," to show how teaching classical mechanics can familiarize students with the notion of energy levels in atomic physics. (MDH)

  20. Proposed reference models for atomic oxygen in the terrestrial atmosphere

    NASA Technical Reports Server (NTRS)

    Llewellyn, E. J.; Mcdade, I. C.; Lockerbie, M. D.

    1989-01-01

    A provisional Atomic Oxygen Reference model was derived from average monthly ozone profiles and the MSIS-86 reference model atmosphere. The concentrations are presented in tabular form for the altitude range 40 to 130 km.

  1. The Quantum Atomic Model "Electronium": A Successful Teaching Tool.

    ERIC Educational Resources Information Center

    Budde, Marion; Niedderer, Hans; Scott, Philip; Leach, John

    2002-01-01

    Focuses on the quantum atomic model Electronium. Outlines the Bremen teaching approach in which this model is used, and analyzes the learning of two students as they progress through the teaching unit. (Author/MM)

  2. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  3. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  4. Performance of the Generalized S-X[Superscript 2] Item Fit Index for Polytomous IRT Models

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2008-01-01

    Orlando and Thissen's S-X[superscript 2] item fit index has performed better than traditional item fit statistics such as Yen' s Q[subscript 1] and McKinley and Mill' s G[superscript 2] for dichotomous item response theory (IRT) models. This study extends the utility of S-X[superscript 2] to polytomous IRT models, including the generalized partial…

  5. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  6. Developing Models: What is the Atom Really Like?

    ERIC Educational Resources Information Center

    Records, Roger M.

    1982-01-01

    Five atomic theory activities feasible for high school students to perform are described based on the following models: (1) Dalton's Uniform Sphere Model; (2) Thomson's Raisin Pudding Model; (3) Rutherford's Nuclear Model; (4) Bohr's Energy Level Model, and (5) Orbital Model from quantum mechanics. (SK)

  7. TRANSIT MODEL FITTING IN THE KEPLER SCIENCE OPERATIONS CENTER PIPELINE: NEW FEATURES AND PERFORMANCE

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, C. J.; Jenkins, J. M.; Quintana, E. V.; Rowe, J. F.; Seader, S. E.; Tenenbaum, P.; Twicken, J. D.

    2013-10-01

    We describe new transit model fitting features and performance of the latest release (9.1, July 2013) of the Kepler Science Operations Center (SOC) Pipeline. The targets for which a Threshold Crossing Event (TCE) is generated in the Transiting Planet Search (TPS) component of the pipeline are subsequently processed in the Data Validation (DV) component. Transit model parameters are fitted in DV to transit-like signatures in the light curves of the targets with TCEs. The transit model fitting results are used in diagnostic tests in DV, which help to validate planet candidates and identify false positive detections. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. Light curves for many targets do not contain enough information to uniquely determine the impact parameter, which results in poor convergence performance of the fitter. In the latest release of the Kepler SOC pipeline, a reduced parameter fit is included in DV: the impact parameter is set to a fixed value and the four remaining parameters are fitted. The standard transit model fit is implemented after a series of reduced parameter fits in which the impact parameter is varied between 0 and 1. Initial values for the standard transit model fit parameters are determined by the reduced parameter fit with the minimum chi-square metric. With reduced parameter fits, the robustness of the transit model fit is improved significantly. Diagnostic plots of the chi-square metrics and reduced parameter fit results illustrate how the fitted parameters vary as a function of impact parameter. Essentially, a family of transiting planet characteristics is determined in DV for each Pipeline TCE. 
Transit model fitting performance of release 9.1 of the Kepler SOC pipeline is demonstrated with the results of the processing of 16 quarters of flight data
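    The reduced-parameter strategy described above (hold the impact parameter fixed, fit the remaining parameters, then seed the full fit from the scan point with minimum chi-square) can be sketched with a toy box-shaped transit. This is illustrative only: the pipeline fits a full limb-darkened transit model, and here the impact parameter enters only through a toy chord-length relation for the duration:

    ```python
    def box_transit(t, t0, dur, depth):
        """Toy box-shaped transit: unit flux with a flat dip of given depth."""
        return 1.0 - depth if abs(t - t0) < dur / 2 else 1.0

    def chi2(params, times, flux):
        t0, dur, depth = params
        return sum((f - box_transit(t, t0, dur, depth)) ** 2
                   for t, f in zip(times, flux))

    def reduced_parameter_scan(times, flux, t0, b_values, central_dur):
        """For each fixed impact parameter b, fit the remaining free parameter
        (here just the depth, in closed form) and keep the b with the minimum
        chi-square; that point seeds the full fit."""
        best = None
        for b in b_values:
            dur = central_dur * (1.0 - b * b) ** 0.5   # toy chord length
            in_transit = [f for t, f in zip(times, flux)
                          if abs(t - t0) < dur / 2]
            depth = (1.0 - sum(in_transit) / len(in_transit)
                     if in_transit else 0.0)
            c = chi2((t0, dur, depth), times, flux)
            if best is None or c < best[0]:
                best = (c, b, depth)
        return best  # (chi2, impact parameter, fitted depth)
    ```

    Because each fixed-b fit is well conditioned even when the data cannot constrain b itself, the scan is far more robust than letting all five parameters float from the start, which is the point of the release 9.1 change.
    
    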

  8. Early atomic models - from mechanical to quantum (1904-1913)

    NASA Astrophysics Data System (ADS)

    Baily, C.

    2013-01-01

    A complete history of early atomic models would fill volumes, but a reasonably coherent tale of the path from mechanical atoms to the quantum can be told by focusing on the relevant work of three great contributors to atomic physics, in the critically important years between 1904 and 1913: J.J. Thomson, Ernest Rutherford and Niels Bohr. We first examine the origins of Thomson's mechanical atomic models, from his ethereal vortex atoms in the early 1880s, to the myriad "corpuscular" atoms he proposed following the discovery of the electron in 1897. Beyond qualitative predictions for the periodicity of the elements, the application of Thomson's atoms to problems in scattering and absorption led to quantitative predictions that were confirmed by experiments with high-velocity electrons traversing thin sheets of metal. Still, the much more massive and energetic α-particles being studied by Rutherford were better suited for exploring the interior of the atom, and careful measurements on the angular dependence of their scattering eventually allowed him to infer the existence of an atomic nucleus. Niels Bohr was particularly troubled by the radiative instability inherent to any mechanical atom, and succeeded in 1913 where others had failed in the prediction of emission spectra, by making two bold hypotheses that were in contradiction to the laws of classical physics, but necessary in order to account for experimental facts.

  9. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  10. 100th anniversary of Bohr's model of the atom.

    PubMed

    Schwarz, W H Eugen

    2013-11-18

    In the fall of 1913 Niels Bohr formulated his atomic models at the age of 27. This Essay traces Bohr's fundamental reasoning regarding atomic structure and spectra, the periodic table of the elements, and chemical bonding. His enduring insights and superseded suppositions are also discussed.

  11. Project Physics Tests 5, Models of the Atom.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 5 are presented in this booklet. Included are 70 multiple-choice and 23 problem-and-essay questions. Concepts of atomic model are examined on aspects of relativistic corrections, electron emission, photoelectric effects, Compton effect, quantum theories, electrolysis experiments, atomic number and mass,…

  13. Assessing Fit of Cognitive Diagnostic Models: A Case Study

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Almond, Russell G.

    2007-01-01

    A cognitive diagnostic model uses information from educational experts to describe the relationships between item performances and posited proficiencies. When the cognitive relationships can be described using a fully Bayesian model, Bayesian model checking procedures become available. Checking models tied to cognitive theory of the domains…

  14. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. 
Our analysis revealed that model parameters could be constrained to a standard

  15. Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2

    SciTech Connect

    MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.

    1999-11-01

This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model (exponential or Weibull) is fit.
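The tail-fitting step described above can be sketched in a few lines. This is an illustrative sketch, not the actual FITS implementation: for a shifted exponential fit above a lower-bound threshold, the maximum-likelihood scale parameter is simply the mean exceedance.

```python
import numpy as np

def fit_shifted_exponential(data, x_low):
    """Fit a shifted exponential F(x) = 1 - exp(-(x - x_low) / beta)
    to the observations exceeding the lower-bound threshold x_low;
    the maximum-likelihood scale is the mean exceedance."""
    exceed = np.asarray(data, dtype=float)
    exceed = exceed[exceed > x_low] - x_low
    if exceed.size == 0:
        raise ValueError("no data above threshold")
    return exceed.mean()

# Synthetic check: exceedances drawn from an exponential of scale 2.0
rng = np.random.default_rng(0)
sample = 5.0 + rng.exponential(scale=2.0, size=20000)
beta_hat = fit_shifted_exponential(sample, x_low=5.0)
```

The same pattern extends to a shifted Weibull by numerically maximizing the two-parameter likelihood over the exceedances.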

  16. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

In the present work an effort has been made to simultaneously estimate plasma parameters (electron density, electron temperature, ground state atom density, ground state ion density and metastable state density) from the observed visible spectra of a Penning plasma discharge (PPD) source using least square fitting. The analysis is performed for the prominently observed neutral helium lines. The atomic data and analysis structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition in the analysis. With this condition the estimated plasma temperature from the PPD is found to be rather high. It is seen that the inclusion of opacity in the observed spectral lines through the PECs and the addition of diffusion of neutrals and metastable state species in the CR-model code analysis improves the electron temperature estimation in the simultaneous measurement.

  17. A simple model of group selection that cannot be analyzed with inclusive fitness.

    PubMed

    van Veelen, Matthijs; Luo, Shishi; Simon, Burton

    2014-11-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models, we show two distinct limitations that prevent recasting in terms of inclusive fitness. The first is a limitation across models. We show that if inclusive fitness is to always give the correct prediction, the definition of relatedness needs to change, continuously, along with changes in the parameters of the model. This results in infinitely many different definitions of relatedness - one for every parameter value - which strips relatedness of its meaning. The second limitation is across time. We show that one can find the trajectory for the group selection model by solving a partial differential equation, and that it is mathematically impossible to do this using inclusive fitness.

  18. A simple model of group selection that cannot be analyzed with inclusive fitness.

    PubMed

    van Veelen, Matthijs; Luo, Shishi; Simon, Burton

    2014-11-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models, we show two distinct limitations that prevent recasting in terms of inclusive fitness. The first is a limitation across models. We show that if inclusive fitness is to always give the correct prediction, the definition of relatedness needs to change, continuously, along with changes in the parameters of the model. This results in infinitely many different definitions of relatedness - one for every parameter value - which strips relatedness of its meaning. The second limitation is across time. We show that one can find the trajectory for the group selection model by solving a partial differential equation, and that it is mathematically impossible to do this using inclusive fitness. PMID:25034338

  19. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a convenient…

  20. The Relation among Fit Indexes, Power, and Sample Size in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Kim, Kevin H.

    2005-01-01

    The relation among fit indexes, power, and sample size in structural equation modeling is examined. The noncentrality parameter is required to compute power. The 2 existing methods of computing power have estimated the noncentrality parameter by specifying an alternative hypothesis or alternative fit. These methods cannot be implemented easily and…

  1. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

The purpose of this study was to assess the model fit of a 2PL through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs that are similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…

  2. Modeling chromatic instrumental effects for a better model fitting of optical interferometric data

    NASA Astrophysics Data System (ADS)

    Tallon, M.; Tallon-Bosc, I.; Chesneau, O.; Dessart, L.

    2014-07-01

Current interferometers often collect data simultaneously in many spectral channels by using dispersed fringes. Such polychromatic data provide powerful insights into various physical properties, where the observed objects show particular spectral features. Furthermore, one can measure spectral differential visibilities that do not directly depend on any calibration by a reference star. But such observations may be sensitive to instrumental artifacts that must be taken into account in order to fully exploit the polychromatic information of interferometric data. As a test case, we consider here an observation of P Cygni with the VEGA visible combiner on the CHARA interferometer. Indeed, although P Cygni is particularly well modeled by the radiative transfer code CMFGEN, we observe questionable discrepancies between expected and actual interferometric data. The problem is to determine their origin and disentangle possible instrumental effects from the astrophysical information. By using an expanded model fitting, which includes several instrumental features, we show that the differential visibilities are well explained by instrumental effects that could otherwise be attributed to the object. Although this approach leads to more reliable results, it assumes a fit specific to a particular instrument, and makes it more difficult to develop a generic model fitting independent of any instrument.

  3. Molecule-specific determination of atomic polarizabilities with the polarizable atomic multipole model.

    PubMed

    Woo Kim, Hyun; Rhee, Young Min

    2012-07-30

    Recently, many polarizable force fields have been devised to describe induction effects between molecules. In popular polarizable models based on induced dipole moments, atomic polarizabilities are the essential parameters and should be derived carefully. Here, we present a parameterization scheme for atomic polarizabilities using a minimization target function containing both molecular and atomic information. The main idea is to adopt reference data only from quantum chemical calculations, to perform atomic polarizability parameterizations even when relevant experimental data are scarce as in the case of electronically excited molecules. Specifically, our scheme assigns the atomic polarizabilities of any given molecule in such a way that its molecular polarizability tensor is well reproduced. We show that our scheme successfully works for various molecules in mimicking dipole responses not only in ground states but also in valence excited states. The electrostatic potential around a molecule with an externally perturbing nearby charge also exhibits a near-quantitative agreement with the reference data from quantum chemical calculations. The limitation of the model with isotropic atoms is also discussed to examine the scope of its applicability.
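As a drastically simplified, hypothetical sketch of the idea of fitting atomic parameters to molecular response data: isotropic per-element polarizabilities can be assigned by linear least squares over a set of molecules. The atom counts and molecular polarizabilities below are made up for illustration only (the actual scheme fits per-molecule polarizability tensors from quantum chemical reference data).

```python
import numpy as np

# Made-up atom counts and molecular polarizabilities (alkane-like,
# chosen for illustration only, NOT reference data)
elements = ["C", "H"]
molecules = [
    ({"C": 1, "H": 4}, 2.6),   # CH4-like
    ({"C": 2, "H": 6}, 4.4),   # C2H6-like
    ({"C": 3, "H": 8}, 6.2),   # C3H8-like
]
X = np.array([[counts.get(e, 0) for e in elements]
              for counts, _ in molecules], dtype=float)
y = np.array([alpha for _, alpha in molecules])

# Least-squares assignment of per-element isotropic polarizabilities
alpha_atom, *_ = np.linalg.lstsq(X, y, rcond=None)
```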

  4. Fringe Fitting

    NASA Astrophysics Data System (ADS)

    Cotton, W. D.

    Fringe Fitting Theory; Correlator Model Delay Errors; Fringe Fitting Techniques; Baseline; Baseline with Closure Constraints; Global; Solution Interval; Calibration Sources; Source Structure; Phase Referencing; Multi-band Data; Phase-Cals; Multi- vs. Single-band Delay; Sidebands; Filtering; Establishing a Common Reference Antenna; Smoothing and Interpolating Solutions; Bandwidth Synthesis; Weights; Polarization; Fringe Fitting Practice; Phase Slopes in Time and Frequency; Phase-Cals; Sidebands; Delay and Rate Fits; Signal-to-Noise Ratios; Delay and Rate Windows; Details of Global Fringe Fitting; Multi- and Single-band Delays; Phase-Cal Errors; Calibrator Sources; Solution Interval; Weights; Source Model; Suggested Procedure; Bandwidth Synthesis

  5. Diploid biological evolution models with general smooth fitness landscapes and recombination.

    PubMed

    Saakian, David B; Kirakosyan, Zara; Hu, Chin-Kun

    2008-06-01

    Using a Hamilton-Jacobi equation approach, we obtain analytic equations for steady-state population distributions and mean fitness functions for Crow-Kimura and Eigen-type diploid biological evolution models with general smooth hypergeometric fitness landscapes. Our numerical solutions of diploid biological evolution models confirm the analytic equations obtained. We also study the parallel diploid model for the simple case of recombination and calculate the variance of distribution, which is consistent with numerical results. PMID:18643300

  6. An Experimentally Determined Evolutionary Model Dramatically Improves Phylogenetic Fit

    PubMed Central

    Bloom, Jesse D.

    2014-01-01

    All modern approaches to molecular phylogenetics require a quantitative model for how genes evolve. Unfortunately, existing evolutionary models do not realistically represent the site-heterogeneous selection that governs actual sequence change. Attempts to remedy this problem have involved augmenting these models with a burgeoning number of free parameters. Here, I demonstrate an alternative: Experimental determination of a parameter-free evolutionary model via mutagenesis, functional selection, and deep sequencing. Using this strategy, I create an evolutionary model for influenza nucleoprotein that describes the gene phylogeny far better than existing models with dozens or even hundreds of free parameters. Emerging high-throughput experimental strategies such as the one employed here provide fundamentally new information that has the potential to transform the sensitivity of phylogenetic and genetic analyses. PMID:24859245

  7. Fitting Partially Nonlinear Random Coefficient Models as SEMs

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Cudeck, Robert; du Toit, Stephen H. C.

    2006-01-01

The nonlinear random coefficient model has become increasingly popular as a method for describing individual differences in longitudinal research. Although promising, the nonlinear model is not utilized as often as it might be because software options are still somewhat limited. In this article we show that a specialized version of the model…

  8. Fitting and Testing Conditional Multinormal Partial Credit Models

    ERIC Educational Resources Information Center

    Hessen, David J.

    2012-01-01

    A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item scores,…

  9. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope.

    PubMed

    Sader, John E; Yousefi, Morteza; Friend, James R

    2014-02-01

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
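A minimal sketch of such a fit using SciPy; the damped-harmonic-oscillator spectrum plus white-noise floor below is one common parameterization, not the authors' exact formulas.

```python
import numpy as np
from scipy.optimize import curve_fit

def psd_model(f, A, f0, Q, white):
    """Damped-harmonic-oscillator (Lorentzian-type) thermal noise
    spectrum plus a white-noise floor."""
    return A * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / Q)**2) + white

# Synthetic spectrum with known resonant frequency, Q, and floor
rng = np.random.default_rng(1)
f = np.linspace(1e3, 3e4, 2000)
true = dict(A=1e-4, f0=1.5e4, Q=5.0, white=2e-5)
psd = psd_model(f, **true) * (1 + 0.02 * rng.standard_normal(f.size))

popt, pcov = curve_fit(psd_model, f, psd, p0=[5e-5, 1.4e4, 3.0, 1e-5])
perr = np.sqrt(np.diag(pcov))   # one-sigma sampling uncertainties
```

The diagonal of the fitted covariance gives the kind of parameter uncertainties the formulas in this paper predict analytically.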

  10. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope

    SciTech Connect

    Sader, John E.; Yousefi, Morteza; Friend, James R.

    2014-02-15

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.

  11. Models for identification of erroneous atom-to-atom mapping of reactions performed by automated algorithms.

    PubMed

    Muller, Christophe; Marcou, Gilles; Horvath, Dragos; Aires-de-Sousa, João; Varnek, Alexandre

    2012-12-21

Machine learning (SVM and JRip rule learner) methods have been used in conjunction with the Condensed Graph of Reaction (CGR) approach to identify errors in the atom-to-atom mapping of chemical reactions produced by an automated mapping tool by ChemAxon. The modeling has been performed on the first three enzymatic classes of metabolic reactions from the KEGG database. Each reaction has been converted into a CGR representing a pseudomolecule with conventional (single, double, aromatic, etc.) bonds and dynamic bonds characterizing chemical transformations. The ChemAxon tool was used to automatically detect the matching atom pairs in reagents and products. These automated mappings were analyzed by a human expert and classified as "correct" or "wrong". ISIDA fragment descriptors generated from the CGRs for both correct and wrong mappings were used as attributes in machine learning. The learned models have been validated by n-fold cross-validation on the training set, followed by a challenge to detect correct and wrong mappings within an external test set of reactions never used for learning. Results show that both SVM and JRip models detect most of the wrongly mapped reactions. We believe that this approach could be used to identify erroneous atom-to-atom mappings performed by any automated algorithm.

  12. Fitting degradation of shoreline scarps by a nonlinear diffusion model

    USGS Publications Warehouse

    Andrews, D.J.; Buckna, R.C.

    1987-01-01

    The diffusion model of degradation of topographic features is a promising means by which vertical offsets on Holocene faults might be dated. In order to calibrate the method, we have examined present-day profiles of wave-cut shoreline scarps of late Pleistocene lakes Bonneville and Lahontan. A table is included that allows easy application of the model to scarps with simple initial shape. -from Authors
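A linear-diffusion sketch of scarp degradation; the cited work calibrates a nonlinear variant, and the diffusivity-age product below is an assumed illustrative value.

```python
import numpy as np

def degrade_scarp(profile, dx, kappa_t, n_steps=2000):
    """Evolve a topographic profile under linear diffusion
    dz/dt = kappa * d2z/dx2, integrated to a total 'degradation age'
    kappa*t (an explicit finite-difference sketch; the cited study
    calibrates a nonlinear variant)."""
    z = np.array(profile, dtype=float)
    dt = kappa_t / n_steps          # time measured in units of kappa*t
    r = dt / dx**2
    assert r < 0.5, "explicit scheme stability limit"
    for _ in range(n_steps):
        z[1:-1] += r * (z[2:] - 2 * z[1:-1] + z[:-2])
    return z

# A sharp 2 m shoreline scarp degraded to kappa*t = 10 m^2
x = np.linspace(-50.0, 50.0, 201)
dx = x[1] - x[0]
z0 = np.where(x < 0, 0.0, 2.0)
z = degrade_scarp(z0, dx, kappa_t=10.0)
max_slope = float(np.max(np.gradient(z, dx)))
```

Dating then amounts to inverting the observed maximum scarp slope for kappa*t given an independently estimated diffusivity kappa.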

  13. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2015-01-01

Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in frequency domain to approximate the fractional order with the products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
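The idea of approximating a fractional-order magnitude slope with a product of first-order transfer functions can be sketched as follows. The Oustaloup-style pole/zero spacing used here is a common textbook construction under illustrative parameters, not necessarily the paper's exact formulation.

```python
import numpy as np

def fractional_slope_filter(alpha, w_lo, w_hi, n=10):
    """Approximate |H(jw)| ~ w**(-alpha) over [w_lo, w_hi] with a
    product of first-order pole/zero sections, geometrically spaced
    so each section contributes a fraction alpha of a full -20 dB/dec
    roll-off (an Oustaloup-style construction)."""
    k = np.arange(n)
    span = w_hi / w_lo
    wp = w_lo * span ** ((k + 0.5 - alpha / 2) / n)   # poles (lower)
    wz = w_lo * span ** ((k + 0.5 + alpha / 2) / n)   # zeros (higher)
    def H(w):
        jw = 1j * np.asarray(w, dtype=float)
        out = np.ones_like(jw)
        for p, z in zip(wp, wz):
            out *= (1 + jw / z) / (1 + jw / p)
        return out
    return H

# Kolmogorov-like amplitude slope alpha = 5/6 over four decades
H = fractional_slope_filter(alpha=5/6, w_lo=1.0, w_hi=1e4)
w1, w2 = 10.0, 1000.0
slope = 20 * np.log10(abs(H(w2)) / abs(H(w1))) / np.log10(w2 / w1)
```

The mid-band magnitude slope approaches -alpha*20 dB/decade, with ripple that shrinks as more first-order sections are used.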

  14. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2010-01-01

Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in frequency domain to approximate the fractional order with the products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  15. Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.

    2014-06-01

    We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, which were collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search of Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to take into account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and
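A minimal sketch of the trapezoidal fit stage described above; the parameterization and least-squares setup are illustrative, not the SOC pipeline code.

```python
import numpy as np
from scipy.optimize import least_squares

def trapezoid_transit(t, epoch, depth, duration, ingress):
    """Trapezoidal transit light curve: unit out-of-transit flux,
    linear ingress/egress of length `ingress`, and a flat bottom
    at 1 - depth."""
    dt = np.abs(t - epoch)
    flux = np.ones_like(dt)
    full = duration / 2 - ingress    # half-width of the flat bottom
    flux[dt <= full] = 1 - depth
    ramp = (dt > full) & (dt < duration / 2)
    flux[ramp] = 1 - depth * (duration / 2 - dt[ramp]) / ingress
    return flux

# Synthetic light curve fit by Levenberg-Marquardt minimization
rng = np.random.default_rng(2)
t = np.linspace(-0.5, 0.5, 1000)
truth = (0.01, 0.005, 0.2, 0.04)     # epoch, depth, duration, ingress
flux = trapezoid_transit(t, *truth) + 2e-4 * rng.standard_normal(t.size)

fit = least_squares(lambda p: trapezoid_transit(t, *p) - flux,
                    x0=[0.0, 0.004, 0.25, 0.05], method="lm")
epoch_hat, depth_hat = fit.x[0], fit.x[1]
```

The converged trapezoid parameters then serve as initial values for the five-parameter standard transit model, as the abstract describes.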

  16. A no-scale inflationary model to fit them all

    SciTech Connect

Ellis, John; García, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V.

    2014-08-01

The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic m²φ²/2 potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio r that is highly consistent with the Starobinsky R + R² model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction n_s ≅ 0.96.

  17. Fitness model for the Italian interbank money market

    NASA Astrophysics Data System (ADS)

    de Masi, G.; Iori, G.; Caldarelli, G.

    2006-12-01

We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging on a daily basis loans and debts of liquidity. Through topological analysis and by means of a model of network growth we can determine the formation of different groups of banks characterized by different business strategies. The model based on Pareto’s law makes no use of growth or preferential attachment and it correctly reproduces all the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies in the market of liquidity.

  18. Fitness model for the Italian interbank money market.

    PubMed

    De Masi, G; Iori, G; Caldarelli, G

    2006-12-01

We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging on a daily basis loans and debts of liquidity. Through topological analysis and by means of a model of network growth we can determine the formation of different groups of banks characterized by different business strategies. The model based on Pareto's law makes no use of growth or preferential attachment and it correctly reproduces all the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies in the market of liquidity.

  19. Using proper regression methods for fitting the Langmuir model to sorption data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Langmuir model, originally developed for the study of gas sorption to surfaces, is one of the most commonly used models for fitting phosphorus sorption data. There are good theoretical reasons, however, against applying this model to describe P sorption to soils. Nevertheless, the Langmuir model...
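A hedged sketch of fitting the Langmuir isotherm by direct nonlinear regression on synthetic sorption data; the parameter names and values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, Smax, K):
    """Langmuir isotherm: sorbed amount S versus equilibrium
    concentration C, with capacity Smax and affinity K."""
    return Smax * K * C / (1 + K * C)

# Synthetic sorption data with multiplicative scatter; fitting the
# nonlinear form directly avoids the error-structure distortion of
# linearized (e.g. double-reciprocal) regressions
rng = np.random.default_rng(3)
C = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
S = langmuir(C, 50.0, 0.5) * (1 + 0.03 * rng.standard_normal(C.size))

(Smax_hat, K_hat), pcov = curve_fit(langmuir, C, S, p0=[30.0, 1.0])
```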

  20. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  1. Design of spatial experiments: Model fitting and prediction

    SciTech Connect

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  2. Tokamak plasma modelling and atomic processes

    NASA Astrophysics Data System (ADS)

    Kawamura, T.

    1986-06-01

Topics addressed include: particle control in a tokamak device; ionizing and recombining plasmas; effects of data accuracy on tokamak impurity transport modeling; plasma modeling of tokamaks; and ultraviolet and X-ray spectroscopy of tokamak plasmas.

  3. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    PubMed

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.

  4. On assessing model fit for distribution-free longitudinal models under missing data.

    PubMed

    Wu, P; Tu, X M; Kowalski, J

    2014-01-15

The generalized estimating equation (GEE), a distribution-free, or semi-parametric, approach for modeling longitudinal data, is used in a wide range of behavioral, psychotherapy, pharmaceutical drug safety, and healthcare-related research studies. Most popular methods for assessing model fit are based on the likelihood function for parametric models, rendering them inappropriate for distribution-free GEE. One rare exception is a score statistic initially proposed by Tsiatis (1980) for logistic regression and later extended to GEE by Barnhart and Williamson (1998). Because GEE only provides valid inference under the missing completely at random assumption and missing values arising in most longitudinal studies do not follow such a restricted mechanism, this GEE-based score test has very limited applications in practice. We propose extensions of this goodness-of-fit test to address missing data under the missing at random assumption, a more realistic model that applies to most studies in practice. We examine the performance of the proposed tests using simulated data and demonstrate the utilities of such tests with data from a real study on geriatric depression and associated medical comorbidities. PMID:23897653

  5. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

A difficult issue in the synthesis of piano tones by physical models is to choose the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesized sounds to be unrealistic. An original approach is proposed that estimates the parameters of a piano model from measurements of the string vibration by minimizing a perceptual criterion. The minimization process used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in the case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process has been run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.
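The combination of simulated annealing with a gradient-based refinement to escape local minima can be sketched on a toy one-dimensional criterion (not the paper's perceptual criterion); the function below is purely illustrative.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

# Toy criterion with two local minima; a pure gradient descent started
# in the wrong basin would stall, which is why annealing is combined
# with a local (gradient) polish
def criterion(p):
    x = p[0]
    return (x**2 - 1)**2 + 0.3 * x    # global minimum near x = -1.04

coarse = dual_annealing(criterion, bounds=[(-2.0, 2.0)], seed=4)
polished = minimize(criterion, coarse.x)   # gradient-based refinement
```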

  6. Nagaoka’s atomic model and hyperfine interactions

    PubMed Central

    INAMURA, Takashi T.

    2016-01-01

    The prevailing view of Nagaoka’s “Saturnian” atom is so misleading that today many people have an erroneous picture of Nagaoka’s vision. They believe it to be a system involving a ‘giant core’ with electrons circulating just outside. Actually, though, in view of the Coulomb potential related to the atomic nucleus, Nagaoka’s model is exactly the same as Rutherford’s. This is true of the Bohr atom, too. To give proper credit, Nagaoka should be remembered together with Rutherford and Bohr in the history of the atomic model. It is also pointed out that Nagaoka was a pioneer of understanding hyperfine interactions in order to study nuclear structure. PMID:27063182

  7. Goodness-of-fit test for proportional subdistribution hazards model.

    PubMed

    Zhou, Bingqing; Fine, Jason; Laird, Glen

    2013-09-30

    This paper concerns using modified weighted Schoenfeld residuals to test the proportionality of subdistribution hazards for the Fine-Gray model, similar to the tests proposed by Grambsch and Therneau for independently censored data. We develop a score test for time-varying coefficients based on the modified Schoenfeld residuals, derived assuming a certain form of non-proportionality. The methods perform well in simulations and in an analysis of real breast cancer data, where the treatment effect exhibits non-proportional hazards.
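    The underlying Grambsch-Therneau idea, that (scaled) Schoenfeld-type residuals show no time trend under proportional hazards, can be sketched with a bare-bones slope test on synthetic residuals. This is only a schematic stand-in, not the paper's modified weighted residuals for the Fine-Gray model:

```python
import random

def trend_test(times, residuals):
    # score-type test for a linear time trend in residuals: under
    # proportional hazards there should be no trend; returns a 1-df
    # chi-square-style statistic for the fitted slope
    n = len(times)
    tbar = sum(times) / n
    rbar = sum(residuals) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    sxy = sum((t - tbar) * (r - rbar) for t, r in zip(times, residuals))
    slope = sxy / sxx
    resid_var = sum((r - rbar - slope * (t - tbar)) ** 2
                    for t, r in zip(times, residuals)) / (n - 2)
    return slope ** 2 * sxx / resid_var

rng = random.Random(1)
t = [i / 100 for i in range(1, 101)]                  # event times
flat = [rng.gauss(0.0, 1.0) for _ in t]               # proportional hazards
drift = [rng.gauss(0.0, 1.0) + 3.0 * ti for ti in t]  # time-varying effect
```

A large statistic for the drifting residuals flags non-proportionality, which is the behavior the paper's score test formalizes for competing-risks data.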

  8. CPOPT : optimization for fitting CANDECOMP/PARAFAC models.

    SciTech Connect

    Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim

    2008-10-01

    Tensor decompositions (i.e., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications, such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
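    For intuition about the ALS baseline that CPOPT is compared against, a rank-1, three-way CP fit can be written in plain Python: each factor update is the exact least-squares solution with the other two factors held fixed. Real CP solvers handle higher ranks via matricized tensors, so this is only a sketch:

```python
import random

def outer3(a, b, c):
    # rank-1 three-way tensor a ∘ b ∘ c as nested lists
    return [[[ai * bj * ck for ck in c] for bj in b] for ai in a]

def als_rank1(X, iters=50, seed=0):
    # alternating least squares for X ≈ a ∘ b ∘ c
    I, J, K = len(X), len(X[0]), len(X[0][0])
    rng = random.Random(seed)
    a = [rng.random() for _ in range(I)]
    b = [rng.random() for _ in range(J)]
    c = [rng.random() for _ in range(K)]
    for _ in range(iters):
        nb, nc = sum(x * x for x in b), sum(x * x for x in c)
        a = [sum(X[i][j][k] * b[j] * c[k] for j in range(J) for k in range(K))
             / (nb * nc) for i in range(I)]
        na = sum(x * x for x in a)
        b = [sum(X[i][j][k] * a[i] * c[k] for i in range(I) for k in range(K))
             / (na * nc) for j in range(J)]
        nb = sum(x * x for x in b)
        c = [sum(X[i][j][k] * a[i] * b[j] for i in range(I) for j in range(J))
             / (na * nb) for k in range(K)]
    return a, b, c

X = outer3([1, 2, 3], [4, 5], [6, 7, 8, 9])  # exactly rank 1
a, b, c = als_rank1(X)
Xhat = outer3(a, b, c)
```

For exact rank-1 data one ALS sweep already reproduces the tensor; the paper's point is that for noisy, higher-rank problems all-at-once optimization can behave better than this alternating scheme.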

  9. Adaptation in tunably rugged fitness landscapes: the rough Mount Fuji model.

    PubMed

    Neidhart, Johannes; Szendro, Ivan G; Krug, Joachim

    2014-10-01

    Much of the current theory of adaptation is based on Gillespie's mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage.
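    A minimal RMF landscape is straightforward to simulate: fitness is an additive slope toward a reference genotype plus i.i.d. random roughness. The sketch below (hypothetical parameters, Gaussian roughness rather than any specific distribution from the article) counts local fitness maxima in the House-of-Cards limit (c = 0) and in a strongly additive landscape:

```python
import random

def rmf_landscape(L, c, seed=0):
    # Rough Mount Fuji: F(sigma) = -c * d(sigma, sigma*) + eta(sigma),
    # with reference genotype sigma* = 00...0, Hamming distance d,
    # and i.i.d. standard-Gaussian roughness eta
    rng = random.Random(seed)
    return {g: -c * bin(g).count("1") + rng.gauss(0.0, 1.0)
            for g in range(2 ** L)}

def local_maxima(F, L):
    # a genotype is a local maximum if it beats all L single-bit mutants
    return sum(1 for g, f in F.items()
               if all(f > F[g ^ (1 << i)] for i in range(L)))

L = 8
rough = local_maxima(rmf_landscape(L, c=0.0), L)    # House-of-Cards limit
smooth = local_maxima(rmf_landscape(L, c=10.0), L)  # additive term dominates
```

At c = 0 the expected number of maxima is 2^L/(L+1) (about 28 for L = 8), while a large additive slope leaves essentially a single peak at the reference genotype, illustrating the tunable ruggedness.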

  10. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507

  11. Are pollination "syndromes" predictive? Asian Dalechampia fit Neotropical models.

    PubMed

    Armbruster, W Scott; Gong, Yan-Bing; Huang, Shuang-Quan

    2011-07-01

    Using pollination syndrome parameters and pollinator correlations with floral phenotype from the Neotropics, we predicted that Dalechampia bidentata Blume (Euphorbiaceae) in southern China would be pollinated by female resin-collecting bees between 12 and 20 mm in length. Observations in southwestern Yunnan Province, China, revealed pollination primarily by resin-collecting female Megachile (Callomegachile) faceta Bingham (Hymenoptera: Megachilidae). These bees, at 14 mm in length, were in the predicted size range, confirming the utility of syndromes and models developed in distant regions. Phenotypic selection analyses and estimation of adaptive surfaces and adaptive accuracies together suggest that the blossoms of D. bidentata are well adapted to pollination by their most common floral visitors. PMID:21670584

  12. Fitting measurement models to vocational interest data: are dominance models ideal?

    PubMed

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
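    The contrast between the two response processes can be pictured with toy item response functions: a dominance (2PL) IRF is monotone in the latent trait, whereas an ideal-point IRF peaks where the trait matches the item location. The unfolding curve below is a simplified squared-distance kernel, not the actual GGUM:

```python
import math

def irf_2pl(theta, a, b):
    # dominance (2PL) item response function: monotone in theta
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def irf_ideal_point(theta, a, b):
    # simple ideal-point (unfolding) IRF: endorsement probability peaks
    # where the latent trait theta matches the item location b
    return math.exp(-a * (theta - b) ** 2)

thetas = [-3.0 + 0.5 * i for i in range(13)]
dom = [irf_2pl(t, 1.5, 0.0) for t in thetas]
unf = [irf_ideal_point(t, 0.5, 0.0) for t in thetas]
```

Under the dominance curve, ever-higher trait levels always endorse more; under the ideal-point curve, respondents far above the item location endorse less, which is the behavior the study found to fit interest-inventory data better.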

  13. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  14. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X[superscript 2]. This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X[superscript 2] are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  15. Revisiting a Statistical Shortcoming When Fitting the Langmuir Model to Sorption Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Langmuir model is commonly used for describing sorption behavior of reactive solutes to surfaces. Fitting the Langmuir model to sorption data requires either the use of nonlinear regression or, alternatively, linear regression using one of the linearized versions of the model. Statistical limit...
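    The linearization at issue is easy to demonstrate. The sketch below fits the Hanes-Woolf linearized form C/q = C/q_max + 1/(K·q_max) by ordinary least squares, with hypothetical q_max and K; on noise-free synthetic data the parameters are recovered exactly, but on real noisy data the transformation distorts the error structure, which is the statistical shortcoming at issue:

```python
def langmuir(C, qmax, K):
    # Langmuir isotherm: sorbed amount q as a function of concentration C
    return qmax * K * C / (1.0 + K * C)

def fit_langmuir_linearized(Cs, qs):
    # Hanes-Woolf linearization: C/q = C/qmax + 1/(K*qmax), so ordinary
    # least squares on (C, C/q) gives slope = 1/qmax and
    # intercept = 1/(K*qmax)
    ys = [c / q for c, q in zip(Cs, qs)]
    n = len(Cs)
    cbar, ybar = sum(Cs) / n, sum(ys) / n
    slope = (sum((c - cbar) * (y - ybar) for c, y in zip(Cs, ys))
             / sum((c - cbar) ** 2 for c in Cs))
    intercept = ybar - slope * cbar
    qmax = 1.0 / slope
    K = 1.0 / (intercept * qmax)
    return qmax, K

Cs = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
qs = [langmuir(c, qmax=3.0, K=2.0) for c in Cs]  # noise-free synthetic data
qmax_hat, K_hat = fit_langmuir_linearized(Cs, qs)
```

With measurement noise added to qs, the same procedure weights the transformed errors unevenly, which is why nonlinear regression on the untransformed model is generally preferred.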

  16. Surface Adsorption in Nonpolarizable Atomic Models.

    PubMed

    Whitmer, Jonathan K; Joshi, Abhijeet A; Carlton, Rebecca J; Abbott, Nicholas L; de Pablo, Juan J

    2014-12-01

    Many ionic solutions exhibit species-dependent properties, including surface tension and the salting-out of proteins. These effects may be loosely quantified in terms of the Hofmeister series, first identified in the context of protein solubility. Here, our interest is to develop atomistic models capable of capturing Hofmeister effects rigorously. Importantly, we aim to capture this dependence in computationally cheap "hard" ionic models, which do not exhibit dynamic polarization. To do this, we have performed an investigation detailing the effects of the water model on these properties. Though incredibly important, the role of water models in simulation of ionic solutions and biological systems is essentially unexplored. We quantify this via the ion-dependent surface attraction of the halide series (Cl, Br, I) and, in so doing, determine the relative importance of various hypothesized contributions to ionic surface free energies. Importantly, we demonstrate that surface adsorption can arise in hard ionic models when combined with a thermodynamically accurate representation of the water molecule (TIP4Q). The effect observed in simulations of iodide is commensurate with previous calculations of the surface potential of mean force in rigid molecular dynamics and polarizable density-functional models. Our calculations are direct simulation evidence of the subtle but sensitive role of water thermodynamics in atomistic simulations.

  17. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes

    NASA Astrophysics Data System (ADS)

    Shekhar, Karthik; Ruberman, Claire F.; Ferguson, Andrew L.; Barton, John P.; Kardar, Mehran; Chakraborty, Arup K.

    2013-12-01

    Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses.
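    The spin models in question assign each sequence an energy built from inferred fields and couplings, and the rank order of mutant fitness follows from comparing these energies. A sketch with small, purely hypothetical parameters (not inferred from any sequence data):

```python
def ising_fitness(seq, h, J):
    # spin-model "fitness" of a binary sequence:
    # E = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j
    L = len(seq)
    e = sum(h[i] * seq[i] for i in range(L))
    e += sum(J[i][j] * seq[i] * seq[j]
             for i in range(L) for j in range(i + 1, L))
    return e

# hypothetical fields and couplings for a length-4 region
h = [1.0, 0.5, -0.2, 0.8]
J = [[0, 0.3, 0, 0],
     [0, 0, -0.4, 0],
     [0, 0, 0, 0.2],
     [0, 0, 0, 0]]
wild_type = [1, 1, 0, 1]
mutant    = [1, 0, 0, 1]  # single substitution at site 1
```

The paper's claim is precisely about this comparison: even though the sequences used for inference reflect immune selection and phylogeny, the inferred model usually ranks mutant fitness correctly.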

  18. Transit Model Fitting in Processing Four Years of Kepler Science Data: New Features and Performance

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher; Jenkins, Jon Michael; Quintana, Elisa; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph

    2015-08-01

    We present new transit model fitting features and performance of the latest release (9.3, March 2015) of the Kepler Science Operations Center (SOC) Pipeline, which will be used for the final processing of four years of Kepler science data later this year. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate the planet detections. The standard limb-darkened transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is also fitted by a trapezoidal transit model with four parameters: transit epoch time, depth, duration and ratio of ingress time to duration. The fitted trapezoidal transit model is used in the diagnostic tests when the fit with the standard transit model fails or when the fit is not performed, e.g. for suspected eclipsing binaries. Additional parameters, such as the equilibrium temperature and effective stellar flux (i.e. insolation) of the planet candidate, are derived from the transit model fit parameters to characterize pipeline candidates for the search of Earth-size planets in the habitable zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. 
The results of the transit model fitting for the TCEs identified by the Kepler SOC Pipeline are included in the DV reports and one-page report summaries, which are accessible to the science community at the NASA Exoplanet Archive.
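    The four-parameter trapezoidal model is simple enough to write down directly. The single-transit sketch below uses the parameters named in the abstract (epoch, depth, duration, ingress-to-duration ratio); the pipeline's actual implementation handles periodic transits and the fitting itself, which this omits:

```python
def trapezoid_transit(t, epoch, depth, duration, ingress_ratio):
    # trapezoidal transit model: normalized flux at time t for a single
    # transit centered on `epoch`, with total `duration`, fractional
    # `depth`, and ingress time = ingress_ratio * duration
    t_in = ingress_ratio * duration
    x = abs(t - epoch)
    if x >= duration / 2.0:
        return 1.0                                    # out of transit
    flat_half = duration / 2.0 - t_in
    if x <= flat_half:
        return 1.0 - depth                            # flat bottom
    return 1.0 - depth * (duration / 2.0 - x) / t_in  # ingress/egress ramp
```

Such a shape-only model is robust when the limb-darkened physical fit fails or is skipped, which is how the pipeline uses it for diagnostics.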

  19. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    PubMed

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  20. Phenomenological model of spin crossover in molecular crystals as derived from atom-atom potentials.

    PubMed

    Sinitskiy, Anton V; Tchougréeff, Andrei L; Dronskowski, Richard

    2011-08-01

    The method of atom-atom potentials, previously applied to the analysis of pure molecular crystals formed by either low-spin (LS) or high-spin (HS) forms (spin isomers) of Fe(II) coordination compounds (Sinitskiy et al., Phys. Chem. Chem. Phys., 2009, 11, 10983), is used to estimate the lattice enthalpies of mixed crystals containing different fractions of the spin isomers. The crystals under study were formed by LS and HS isomers of Fe(phen)(2)(NCS)(2) (phen = 1,10-phenanthroline), Fe(btz)(2)(NCS)(2) (btz = 5,5',6,6'-tetrahydro-4H,4'H-2,2'-bi-1,3-thiazine), and Fe(bpz)(2)(bipy) (bpz = dihydrobis(1-pyrazolyl)borate, and bipy = 2,2'-bipyridine). For the first time the phenomenological parameters Γ pertinent to the Slichter-Drickamer model (SDM) of several materials were independently derived from the microscopic model of the crystals using atom-atom potentials of intermolecular interaction. The accuracy of the SDM was checked against the numerical data on the enthalpies of mixed crystals. Fair semiquantitative agreement with the experimental dependence of the HS fraction on temperature was achieved using these values. Prediction of trends in Γ values as a function of chemical composition and geometry of the crystals is possible with the proposed approach, which opens a way to rational design of spin crossover materials with desired properties.
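    The Slichter-Drickamer model itself is compact: the equilibrium high-spin fraction x(T) satisfies ln((1-x)/x) = (ΔH + Γ(1-2x) - TΔS)/(RT). The sketch below solves this by damped fixed-point iteration, with illustrative parameters rather than the Γ values derived in the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def hs_fraction(T, dH, dS, Gamma, iters=200):
    # high-spin fraction x(T) from the Slichter-Drickamer equilibrium
    # condition ln((1-x)/x) = (dH + Gamma*(1 - 2x) - T*dS) / (R*T),
    # solved by damped fixed-point iteration
    x = 0.5
    for _ in range(iters):
        arg = (dH + Gamma * (1.0 - 2.0 * x) - T * dS) / (R * T)
        x_new = 1.0 / (1.0 + math.exp(arg))
        x = 0.5 * x + 0.5 * x_new  # damping stabilizes steep transitions
    return x

# illustrative parameters: dH in J/mol, dS in J/(mol K), Gamma in J/mol;
# the transition temperature is dH/dS = 200 K
curve = [(T, hs_fraction(T, dH=12000.0, dS=60.0, Gamma=2000.0))
         for T in range(100, 401, 50)]
```

For Γ below 2RT₁/₂ the crossover is gradual, as here; larger Γ values steepen the curve and eventually produce hysteresis, which is why deriving Γ from atom-atom potentials matters for materials design.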

  1. Evaluating the use of 'goodness-of-fit' measures in hydrologic and hydroclimatic model validation

    USGS Publications Warehouse

    Legates, D.R.; McCabe, G.J.

    1999-01-01

    Correlation and correlation-based measures (e.g., the coefficient of determination) have been widely used to evaluate the 'goodness-of-fit' of hydrologic and hydroclimatic models. These measures are oversensitive to extreme values (outliers) and are insensitive to additive and proportional differences between model predictions and observations. Because of these limitations, correlation-based measures can indicate that a model is a good predictor, even when it is not. In this paper, useful alternative goodness-of-fit or relative error measures (including the coefficient of efficiency and the index of agreement) that overcome many of the limitations of correlation-based measures are discussed. Modifications to these statistics to aid in interpretation are presented. It is concluded that correlation and correlation-based measures should not be used to assess the goodness-of-fit of a hydrologic or hydroclimatic model and that additional evaluation measures (such as summary statistics and absolute error measures) should supplement model evaluation tools.
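    The alternative measures discussed take only a few lines to compute. The sketch below implements the coefficient of efficiency (Nash-Sutcliffe) and Willmott's index of agreement, and the example data illustrate the paper's point: a simulation with a constant bias correlates perfectly with the observations yet has a negative coefficient of efficiency:

```python
def nse(obs, sim):
    # Nash-Sutcliffe coefficient of efficiency: 1 - SSE / (SS about the
    # observed mean); 1 is a perfect fit, values <= 0 mean the model is
    # no better than predicting the observed mean
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sso = sum((o - m) ** 2 for o in obs)
    return 1.0 - sse / sso

def index_of_agreement(obs, sim):
    # Willmott's index of agreement d, bounded in [0, 1]
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    pot = sum((abs(s - m) + abs(o - m)) ** 2 for o, s in zip(obs, sim))
    return 1.0 - sse / pot

obs = [1.0, 3.0, 2.0, 5.0, 4.0]
biased = [o + 2.0 for o in obs]  # r = 1 exactly, but constantly offset
```

The correlation of `biased` with `obs` is exactly 1, yet its NSE is negative, which is the failure mode the paper warns about.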

  2. Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling

    PubMed Central

    Lovell, Nigel H.; Dokos, Socrates

    2013-01-01

    A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue. PMID:23935704

  3. Physically representative atomistic modeling of atomic-scale friction

    NASA Astrophysics Data System (ADS)

    Dong, Yalin

    Nanotribology is a research field that studies friction, adhesion, wear and lubrication occurring between two sliding interfaces at the nanoscale. This study is motivated by the demand for miniaturized mechanical components in Micro Electro Mechanical Systems (MEMS), improved durability in magnetic storage systems, and other industrial applications. Overcoming tribological failure and finding ways to control friction at small scales have become keys to commercializing MEMS with sliding components, as well as to stimulating the technological innovation associated with the development of MEMS. In addition to the industrial applications, such research is also scientifically fascinating because it opens a door to understanding macroscopic friction from the atomic level up, and therefore serves as a bridge between science and engineering. This thesis focuses on solid/solid atomic friction and its associated energy dissipation through theoretical analysis, atomistic simulation, transition state theory, and close collaboration with experimentalists. Reduced-order models have the advantages of simplicity and the capacity to simulate long-time events. We apply Prandtl-Tomlinson models and their extensions to interpret dry atomic-scale friction. We begin with the fundamental equations and build on them step by step, from the simple quasistatic one-spring, one-mass model for predicting transitions between friction regimes to two-dimensional and multi-atom models for describing the effect of contact area. Theoretical analysis, numerical implementation, and predicted physical phenomena are all discussed. In the process, we demonstrate the significant potential for this approach to yield new fundamental understanding of atomic-scale friction. Atomistic modeling can never be overemphasized in the investigation of atomic friction, in which each single atom can play a significant role yet is hard to capture experimentally. In atomic friction, the
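    The quasistatic one-spring, one-mass Prandtl-Tomlinson model mentioned above can be sketched in a few lines: the tip relaxes in a sinusoidal substrate potential plus a spring to the moving support, and the lateral spring force is recorded. Parameter values here are illustrative; the corrugation-to-stiffness ratio controls whether sliding is smooth or stick-slip:

```python
import math

def pt_force_trace(V0, k, a=1.0, steps=200, relax_iters=1000, dx=1e-3):
    # quasistatic 1D Prandtl-Tomlinson sketch: at each support position X,
    # relax the tip coordinate x to a local minimum of the total energy
    # U(x) = -V0*cos(2*pi*x/a) + (k/2)*(x - X)**2 by overdamped gradient
    # descent, then record the lateral spring force k*(X - x)
    x = 0.0
    forces = []
    for i in range(steps):
        X = 2.0 * a * i / steps  # drag the support over two lattice periods
        for _ in range(relax_iters):
            grad = (2.0 * math.pi * V0 / a) * math.sin(2.0 * math.pi * x / a) \
                   + k * (x - X)
            x -= dx * grad
        forces.append(k * (X - x))
    return forces

# the dimensionless corrugation eta = 4*pi^2*V0/(k*a^2) sets the regime
smooth = pt_force_trace(V0=0.01, k=5.0)  # eta << 1: smooth sliding
stick = pt_force_trace(V0=1.0, k=5.0)    # eta >> 1: the tip sticks, then slips
```

Weak corrugation gives small, nearly reversible lateral forces (near-frictionless sliding), while strong corrugation produces the sawtooth force trace and finite mean friction characteristic of atomic stick-slip.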

  4. Atomic-accuracy models from 4.5-Å cryo-electron microscopy data with density-guided iterative local refinement.

    PubMed

    DiMaio, Frank; Song, Yifan; Li, Xueming; Brunner, Matthias J; Xu, Chunfu; Conticello, Vincent; Egelman, Edward; Marlovits, Thomas C; Cheng, Yifan; Baker, David

    2015-04-01

    We describe a general approach for refining protein structure models on the basis of cryo-electron microscopy maps with near-atomic resolution. The method integrates Monte Carlo sampling with local density-guided optimization, Rosetta all-atom refinement and real-space B-factor fitting. In tests on experimental maps of three different systems with 4.5-Å resolution or better, the method consistently produced models with atomic-level accuracy largely independently of starting-model quality, and it outperformed the molecular dynamics-based MDFF method. Cross-validated model quality statistics correlated with model accuracy over the three test systems.

  5. Modeling of Turbulence Effects on Liquid Jet Atomization and Breakup

    NASA Technical Reports Server (NTRS)

    Trinh, Huu; Chen, C. P.

    2004-01-01

    Recent experimental investigations and physical modeling studies have indicated that turbulence behavior within a liquid jet has considerable effects on the atomization process. For certain flow regimes, it has been observed that the liquid jet surface is highly turbulent. This turbulence characteristic plays a key role in the breakup of the liquid jet near the injector exit. Other experiments have also shown that the breakup length of the liquid core is sharply shortened as the liquid jet changes from laminar to turbulent flow conditions. In the numerical and physical modeling arena, most commonly used atomization models do not include the turbulence effect. Limited attempts have been made to model turbulence phenomena in liquid jet disintegration, and the available correlations and models treat turbulence either as the only source or as the primary driver of the breakup process. This study aims to model the turbulence effect in the atomization of a cylindrical liquid jet. In the course of this study, two widely used models, Reitz's primary atomization (blob) model and the Taylor Analogy Breakup (TAB) secondary droplet breakup model of O'Rourke et al., are examined. Additional terms are derived and implemented into these two models to account for the turbulence effect on the atomization process. Since this enhancement is based on the framework of the two existing atomization models, it is appropriate to denote the two present models T-blob and T-TAB for primary and secondary atomization predictions, respectively. In the primary breakup model, the level of the turbulence effect on the liquid breakup depends on the characteristic time scales and the initial flow conditions. This treatment offers a balance of the contributions of individual physical phenomena to the liquid breakup process. For the secondary breakup, an additional turbulence force acting on parent drops is modeled and integrated into the TAB governing equation.
The drop size

  6. Is Model Fitting Necessary for Model-Based fMRI?

    PubMed

    Wilson, Robert C; Niv, Yael

    2015-06-01

    Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models. PMID:26086934
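    The paper's central observation, that model-based regressors built with the wrong parameter can still correlate almost perfectly with the true generative regressor, is easy to reproduce for a Rescorla-Wagner learner (hypothetical reward probabilities and learning rates):

```python
import math, random

def pe_series(rewards, alpha):
    # Rescorla-Wagner prediction errors for a given learning rate alpha
    v, pes = 0.0, []
    for r in rewards:
        pe = r - v
        pes.append(pe)
        v += alpha * pe
    return pes

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

rng = random.Random(0)
rewards = [1.0 if rng.random() < 0.7 else 0.0 for _ in range(500)]
true_pe = pe_series(rewards, alpha=0.3)   # "true" generative regressor
model_pe = pe_series(rewards, alpha=0.6)  # grossly wrong learning rate
r = pearson(true_pe, model_pe)
```

Doubling the learning rate barely changes the regressor, which explains both why model-based fMRI is robust to mis-set parameters and why it struggles to arbitrate between them.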

  7. Is Model Fitting Necessary for Model-Based fMRI?

    PubMed

    Wilson, Robert C; Niv, Yael

    2015-06-01

    Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.

  8. Atomic-scale modeling of cellulose nanocrystals

    NASA Astrophysics Data System (ADS)

    Wu, Xiawa

    Cellulose nanocrystals (CNCs), the most abundant nanomaterials in nature, are recognized as one of the most promising candidates to meet the growing demand for green, biodegradable and sustainable nanomaterials for future applications. CNCs draw significant interest due to their high axial elasticity and low density-to-elasticity ratio, both of which have been extensively researched over the years. In spite of the great potential of CNCs as functional nanoparticles for nanocomposite materials, a fundamental understanding of CNC properties and their role in composite property enhancement is not available. In this work, CNCs are studied using the molecular dynamics simulation method to predict their material behavior at the nanoscale. (a) Mechanical properties, including tensile deformation in the elastic and plastic regions, are studied using molecular mechanics, molecular dynamics and nanoindentation methods. This allows comparisons between the methods and closer connectivity to experimental measurement techniques. The elastic moduli in the axial and transverse directions are obtained and the results are found to be in good agreement with previous research. The ultimate properties in plastic deformation are reported for the first time and failure mechanisms are analyzed in detail. (b) The thermal expansion of CNC crystals and films is studied. It is proposed that CNC film thermal expansion is due primarily to single-crystal expansion and CNC-CNC interfacial motion. The relative contributions of inter- and intra-crystal responses to heating are explored. (c) Friction at cellulose-CNC and diamond-CNC interfaces is studied. The effects of sliding velocity, normal load, and relative angle between sliding surfaces are predicted. The cellulose-CNC model is analyzed in terms of the hydrogen bonding effect, and the diamond-CNC model complements some of the discussion of the previous model. In summary, both CNC material properties and molecular models are studied in this research, contributing to

  9. Soft X-ray spectral fits of Geminga with model neutron star atmospheres

    NASA Technical Reports Server (NTRS)

    Meyer, R. D.; Pavlov, G. G.; Meszaros, P.

    1994-01-01

    The spectrum of the soft X-ray pulsar Geminga consists of two components, a softer one which can be interpreted as thermal-like radiation from the surface of the neutron star, and a harder one interpreted as radiation from a polar cap heated by relativistic particles. We have fitted the soft spectrum using a detailed magnetized hydrogen atmosphere model. The fitting parameters are the hydrogen column density, the effective temperature T(sub eff), the gravitational redshift z, and the distance to radius ratio, for different values of the magnetic field B. The best fits for this model are obtained when B less than or approximately 1 x 10(exp 12) G and z lies on the upper boundary of the explored range (z = 0.45). The values of T(sub eff) approximately = (2-3) x 10(exp 5) K are a factor of 2-3 lower than the value of T(sub eff) obtained for blackbody fits with the same z. The lower T(sub eff) increases the compatibility with some proposed schemes for fast neutrino cooling of neutron stars (NSs) by the direct Urca process or by exotic matter, but conventional cooling cannot be excluded. The hydrogen atmosphere fits also imply a smaller distance to Geminga than that inferred from a blackbody fit. An accurate evaluation of the distance would require a better knowledge of the ROSAT Position Sensitive Proportional Counter (PSPC) response to the low-energy region of the incident spectrum. Our modeling of the soft component with a cooler magnetized atmosphere also implies that the hard-component fit requires a characteristic temperature which is higher (by a factor of approximately 2-3) and a surface area which is smaller (by a factor of 10(exp 3)), compared to previous blackbody fits.

  10. A model to predict image formation in Atom Probe Tomography.

    PubMed

    Vurpillot, F; Gaillard, A; Da Costa, G; Deconihout, B

    2013-09-01

    A model of the field evaporation of a tip is presented in this paper. The influence of length scales from the atomic scale to the macroscopic scale is taken into account in this approach. The evolution of the tip shape is modelled at the atomic scale in a three-dimensional geometry with cylindrical symmetry. The projection law of ions is determined using a realistic representation of the tip geometry, including the presence of electrodes in the area surrounding the specimen. This realistic modelling gives direct access to the voltage required for field evaporation, to the evolving magnification in the microscope, and to the understanding of reconstruction artefacts when phases with different evaporation fields and/or different dielectric permittivities are modelled. This model has been applied to understand the field evaporation behaviour in bulk dielectric materials. In particular, the role of the residual conductivity of dielectric materials is addressed.

  11. Effective microscopic models for sympathetic cooling of atomic gases

    NASA Astrophysics Data System (ADS)

    Onofrio, Roberto; Sundaram, Bala

    2015-09-01

    Thermalization of a system in the presence of a heat bath has been the subject of many theoretical investigations especially in the framework of solid-state physics. In this setting, the presence of a large bandwidth for the frequency distribution of the harmonic oscillators schematizing the heat bath is crucial, as emphasized in the Caldeira-Leggett model. By contrast, ultracold gases in atomic traps oscillate at well-defined frequencies and therefore seem to lie outside the Caldeira-Leggett paradigm. We introduce interaction Hamiltonians which allow us to adapt the model to an atomic physics framework. The intrinsic nonlinearity of these models differentiates them from the original Caldeira-Leggett model and calls for a nontrivial stability analysis to determine effective ranges for the model parameters. These models allow for molecular-dynamics simulations of mixtures of ultracold gases, which is of current relevance for optimizing sympathetic cooling in degenerate Bose-Fermi mixtures.

  12. Multiple likelihood estimation for calibration: tradeoffs in goodness-of-fit metrics for watershed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Price, K.; Purucker, T.; Kraemer, S.; Babendreier, J. E.

    2011-12-01

    Four nested sub-watersheds (21 to 10100 km^2) of the Neuse River in North Carolina are used to investigate calibration tradeoffs in goodness-of-fit metrics using multiple likelihood methods. Calibration of watershed hydrologic models is commonly achieved by optimizing a single goodness-of-fit metric to characterize simulated versus observed flows (e.g., R^2 and Nash-Sutcliffe Efficiency Coefficient, or NSE). However, each of these objective functions heavily weights a particular aspect of streamflow. For example, NSE and R^2 both emphasize high flows in evaluating simulation fit, while the Modified Nash-Sutcliffe Efficiency Coefficient (MNSE) emphasizes low flows. Other metrics, such as the ratio of the simulated versus observed flow standard deviations (SDR), prioritize overall flow variability. In this comparison, we use informal likelihood methods to investigate the tradeoffs of calibrating streamflow on three standard goodness-of-fit metrics (NSE, MNSE, and SDR), as well as an index metric that equally weights these three objective functions to address a range of flow characteristics. We present a flexible method that allows calibration targets to be determined by modeling goals. In this process, we begin by using Latin Hypercube Sampling (LHS) to reduce the simulations required to explore the full parameter space. The correlation structure of a large suite of goodness-of-fit metrics is explored to select metrics for use in an index function that incorporates a range of flow characteristics while avoiding redundancy. An iterative informal likelihood procedure is used to narrow parameter ranges after each simulation set to areas of the range with the most support from the observed data. A stopping rule is implemented to characterize the overall goodness-of-fit associated with the parameter set for each pass, with the best-fit pass distributions used as the calibrated set for the next simulation set. This process allows a great deal of flexibility. 
The process is
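
    The three goodness-of-fit metrics named above have standard forms; a minimal sketch in Python follows. The equal-weight index function is a hypothetical form for illustration, since the abstract does not give its exact definition.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 is a perfect fit; squared errors
        make it emphasize high flows."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def mnse(obs, sim):
        """Modified NSE with absolute errors (exponent 1), which weights
        low flows more evenly than the squared form."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum(np.abs(obs - sim)) / np.sum(np.abs(obs - obs.mean()))

    def sdr(obs, sim):
        """Ratio of simulated to observed standard deviation
        (1 means overall flow variability is matched)."""
        return float(np.std(sim) / np.std(obs))

    def index_metric(obs, sim):
        """Hypothetical equal-weight index: 0 for a perfect simulation,
        growing as any of the three metrics departs from its ideal value."""
        return (abs(1.0 - nse(obs, sim))
                + abs(1.0 - mnse(obs, sim))
                + abs(1.0 - sdr(obs, sim))) / 3.0
    ```

    A perfect simulation scores nse = mnse = sdr = 1 and index_metric = 0; any constant bias lowers NSE and MNSE while leaving SDR at 1, which is why the index combines all three.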

  13. FITS: A Framework for ITS--A Computational Model of Tutoring.

    ERIC Educational Resources Information Center

    Ikeda, Mitsuru; Mizoguchi, Riichiro

    1994-01-01

    Summarizes research activities concerning FITS, a Framework for Intelligent Tutoring Systems, and discusses the major results obtained thus far. Topics include system architecture; domain independent framework; student model module; expertise module; tutoring strategies; and a model of tutor's decision making, including knowledge sources and…

  14. Genetic Model Fitting in IQ, Assortative Mating & Components of IQ Variance.

    ERIC Educational Resources Information Center

    Capron, Christiane; Vetta, Adrian R.; Vetta, Atam

    1998-01-01

    The biometrical school of scientists who fit models to IQ data traces its intellectual ancestry to R. Fisher (1918), but its genetic models have no predictive value. Fisher himself was critical of the concept of heritability because assortative mating, such as for IQ, introduces complexities into the study of a genetic trait. (SLD)

  15. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  16. Posterior predictive checks to quantify lack-of-fit in admixture models of latent population structure

    PubMed Central

    Mimno, David; Blei, David M.; Engelhardt, Barbara E.

    2015-01-01

    Admixture models are a ubiquitous approach to capture latent population structure in genetic samples. Despite the widespread application of admixture models, little thought has been devoted to the quality of the model fit or the accuracy of the estimates of parameters of interest for a particular study. Here we develop methods for validating admixture models based on posterior predictive checks (PPCs), a Bayesian method for assessing the quality of fit of a statistical model to a specific dataset. We develop PPCs for five population-level statistics of interest: within-population genetic variation, background linkage disequilibrium, number of ancestral populations, between-population genetic variation, and the downstream use of admixture parameters to correct for population structure in association studies. Using PPCs, we evaluate the quality of the admixture model fit to four qualitatively different population genetic datasets: the population reference sample (POPRES) European individuals, the HapMap phase 3 individuals, continental Indians, and African American individuals. We found that the same model fitted to different genomic studies resulted in highly study-specific results when evaluated using PPCs, illustrating the utility of PPCs for model-based analyses in large genomic studies. PMID:26071445
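
    The PPC recipe the authors build on is generic: draw replicate datasets from the posterior predictive distribution and compare a discrepancy statistic against its observed value. A toy sketch with a conjugate Poisson-Gamma model (not the admixture model itself; the statistic and model here are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Observed data, assumed Poisson; the check asks whether the fitted
    # model reproduces the sample variance.
    data = rng.poisson(4.0, size=200)

    # Conjugate posterior for the Poisson rate under a Gamma(1, 1) prior.
    post_shape = 1.0 + data.sum()
    post_rate = 1.0 + len(data)

    # Draw replicate datasets from the posterior predictive and record
    # the discrepancy statistic for each.
    obs_stat = data.var()
    rep_stats = np.empty(1000)
    for i in range(rep_stats.size):
        lam = rng.gamma(post_shape, 1.0 / post_rate)  # posterior draw of the rate
        rep_stats[i] = rng.poisson(lam, size=data.size).var()

    # Posterior predictive p-value: values near 0 or 1 signal lack of fit.
    ppp = float(np.mean(rep_stats >= obs_stat))
    ```

    For the population-level statistics in the paper (background LD, between-population variation, etc.) the same loop applies with the admixture model's posterior draws in place of the Gamma posterior.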

  17. Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses

    ERIC Educational Resources Information Center

    Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu

    2011-01-01

    Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…

  18. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  19. Comparing Indirect Effects in SEM: A Sequential Model Fitting Method Using Covariance-Equivalent Specifications

    ERIC Educational Resources Information Center

    Chan, Wai

    2007-01-01

    In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…

  20. A Short Commentary on "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    Gentry, Marcia

    2010-01-01

    This article presents the author's brief comment on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib (2010) takes the reader through an interesting history of human innovation and processes and situates his theory within a productivist model. The deliberate attention to…

  1. The Expected Fitness Cost of a Mutation Fixation under the One-Dimensional Fisher Model

    NASA Astrophysics Data System (ADS)

    Zhang, Liqing; Watson, Layne T.

    This paper employs Fisher's model of adaptation to understand the expected fitness effect of fixing a mutation in a natural population. Fisher's model in one dimension admits a closed-form solution for this expected fitness effect. A combination of different parameters, including the distribution of mutation lengths, population sizes, and the initial state of the population, is examined to see how they affect the expected fitness effect of state transitions. The results show that the expected fitness change due to the fixation of a mutation is always positive, regardless of the distributional shape of mutation lengths, the effective population size, and the initial state of the population. The further the initial state of a population is from the optimal state, the more slowly the population returns to the optimal state. Effective population size (except when very small) has little effect on the expected fitness change due to mutation fixation. The always-positive expected fitness change suggests that small populations may not necessarily be doomed by a runaway process of fixation of deleterious mutations.

  2. Modeling and quantifying frequency-dependent fitness in microbial populations with cross-feeding interactions.

    PubMed

    Ribeck, Noah; Lenski, Richard E

    2015-05-01

    Coexistence of two or more populations by frequency-dependent selection is common in nature, and it often arises even in well-mixed experiments with microbes. If ecology is to be incorporated into models of population genetics, then it is important to represent accurately the functional form of frequency-dependent interactions. However, measuring this functional form is problematic for traditional fitness assays, which assume a constant fitness difference between competitors over the course of an assay. Here, we present a theoretical framework for measuring the functional form of frequency-dependent fitness by accounting for changes in abundance and relative fitness during a competition assay. Using two examples of ecological coexistence that arose in a long-term evolution experiment with Escherichia coli, we illustrate accurate quantification of the functional form of frequency-dependent relative fitness. Using a Monod-type model of growth dynamics, we show that a typical cross-feeding interaction between two ecotypes, such as when one bacterial population uses a byproduct generated by another, yields relative fitness that is linear in relative frequency.
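
    The constant-fitness summary that this framework relaxes is the conventional competition-assay measure: the ratio of realized Malthusian growth rates of the two competitors over the assay. A minimal sketch (the abundance values are hypothetical inputs):

    ```python
    import math

    def relative_fitness(a0, af, b0, bf):
        """Conventional relative fitness of competitor A versus B from a
        single competition assay: ln(A_final/A_initial) / ln(B_final/B_initial),
        i.e. the ratio of realized Malthusian growth rates. Under
        frequency-dependent selection this number changes with the starting
        frequency, which is what the paper's framework captures."""
        return math.log(af / a0) / math.log(bf / b0)
    ```

    If both competitors double, the relative fitness is 1.0; if A quadruples while B doubles, it is 2.0. Repeating the assay across a range of starting frequencies traces out the frequency dependence.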

  3. Development and design of a late-model fitness test instrument based on LabView

    NASA Astrophysics Data System (ADS)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating the nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical exercise plans according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy the defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, on the basis of LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featured by modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  4. MEAMfit: A reference-free modified embedded atom method (RF-MEAM) energy and force-fitting code

    NASA Astrophysics Data System (ADS)

    Duff, Andrew Ian

    2016-06-01

    MEAMfit v1.02. Changes: various bug fixes; speed of single-shot energy and force calculations (not optimization) increased by a factor of 10; elements up to Cn (Z = 112) now correctly read from vasprun.xml files; EAM fits now produce Camelion output files; maximum number of vasprun.xml files changed to 10,000 (the previous version imposed an unnecessarily low limit of 10).

  5. Model based control of dynamic atomic force microscope

    SciTech Connect

    Lee, Chibum; Salapaka, Srinivasa M.

    2015-04-15

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H{sub ∞} control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and verified by experiments.

  6. Model based control of dynamic atomic force microscope.

    PubMed

    Lee, Chibum; Salapaka, Srinivasa M

    2015-04-01

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H(∞) control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and verified by experiments.

  7. Modelling metabolic evolution on phenotypic fitness landscapes: a case study on C4 photosynthesis.

    PubMed

    Heckmann, David

    2015-12-01

    How did the complex metabolic systems we observe today evolve through adaptive evolution? The fitness landscape is the theoretical framework to answer this question. Since experimental data on natural fitness landscapes is scarce, computational models are a valuable tool to predict landscape topologies and evolutionary trajectories. Careful assumptions about the genetic and phenotypic features of the system under study can simplify the design of such models significantly. The analysis of C4 photosynthesis evolution provides an example for accurate predictions based on the phenotypic fitness landscape of a complex metabolic trait. The C4 pathway evolved multiple times from the ancestral C3 pathway and models predict a smooth 'Mount Fuji' landscape accordingly. The modelled phenotypic landscape implies evolutionary trajectories that agree with data on modern intermediate species, indicating that evolution can be predicted based on the phenotypic fitness landscape. Future directions will have to include structural changes of metabolic fitness landscape structure with changing environments. This will not only answer important evolutionary questions about reversibility of metabolic traits, but also suggest strategies to increase crop yields by engineering the C4 pathway into C3 plants. PMID:26614656

  8. Modelling metabolic evolution on phenotypic fitness landscapes: a case study on C4 photosynthesis.

    PubMed

    Heckmann, David

    2015-12-01

    How did the complex metabolic systems we observe today evolve through adaptive evolution? The fitness landscape is the theoretical framework to answer this question. Since experimental data on natural fitness landscapes is scarce, computational models are a valuable tool to predict landscape topologies and evolutionary trajectories. Careful assumptions about the genetic and phenotypic features of the system under study can simplify the design of such models significantly. The analysis of C4 photosynthesis evolution provides an example for accurate predictions based on the phenotypic fitness landscape of a complex metabolic trait. The C4 pathway evolved multiple times from the ancestral C3 pathway and models predict a smooth 'Mount Fuji' landscape accordingly. The modelled phenotypic landscape implies evolutionary trajectories that agree with data on modern intermediate species, indicating that evolution can be predicted based on the phenotypic fitness landscape. Future directions will have to include structural changes of metabolic fitness landscape structure with changing environments. This will not only answer important evolutionary questions about reversibility of metabolic traits, but also suggest strategies to increase crop yields by engineering the C4 pathway into C3 plants.

  9. Curve fitting toxicity test data: Which comes first, the dose response or the model?

    SciTech Connect

    Gully, J.; Baird, R.; Bottomley, J.

    1995-12-31

    The probit model frequently does not fit the concentration-response curve of NPDES toxicity test data, and non-parametric models must be used instead. The non-parametric models, trimmed Spearman-Karber, IC{sub p}, and linear interpolation, all require a monotonic concentration-response. Any deviation from a monotonic response is smoothed to obtain the desired concentration-response characteristics. Inaccurate point estimates may result from such procedures and can contribute to imprecision in replicate tests. The following study analyzed reference toxicant and effluent data from giant kelp (Macrocystis pyrifera), purple sea urchin (Strongylocentrotus purpuratus), red abalone (Haliotis rufescens), and fathead minnow (Pimephales promelas) bioassays using commercially available curve-fitting software. The purpose was to search for alternative parametric models which would reduce the use of non-parametric models for point estimate analysis of toxicity data. Two non-linear models, power and logistic dose-response, were selected as possible alternatives to the probit model based upon their toxicological plausibility and ability to model most data sets examined. Unlike non-parametric procedures, these and all parametric models can be statistically evaluated for fit and significance. The use of the power or logistic dose-response models increased the percentage of parametric model fits for each protocol and toxicant combination examined. The precision of the selected non-linear models was also compared with the EPA-recommended point estimation models at several effect levels. In general, precision of the alternative models was equal to or better than that of the traditional methods. Finally, use of the alternative models usually produced more plausible point estimates in data sets where the effects of smoothing and non-parametric modeling made the point estimate results suspect.
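
    As a sketch of the kind of parametric alternative discussed, a two-parameter logistic dose-response model can be fitted by linear regression on the logit transform. This is a simplification for illustration; the study used commercial curve-fitting software, presumably with nonlinear least squares.

    ```python
    import numpy as np

    def fit_logistic_ec50(conc, resp):
        """Fit the two-parameter logistic dose-response model
               r = 1 / (1 + (c / EC50)**b)
        by linear regression on the logit transform:
               log(1/r - 1) = b*log(c) - b*log(EC50).
        Assumes responses are strictly between 0 and 1 and concentrations
        are positive. Returns (b, EC50)."""
        conc = np.asarray(conc, float)
        resp = np.asarray(resp, float)
        y = np.log(1.0 / resp - 1.0)
        x = np.log(conc)
        b, a = np.polyfit(x, y, 1)   # slope b, intercept a = -b*log(EC50)
        ec50 = np.exp(-a / b)
        return float(b), float(ec50)
    ```

    Unlike the non-parametric smoothing procedures, a fit like this yields parameters (slope and EC50) whose standard errors and goodness of fit can be evaluated statistically.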

  10. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    NASA Astrophysics Data System (ADS)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the software Matlab. The validity of the preventing-increase model is confirmed through numerical experiments. The experimental results show that the precision of the preventing-increase model is ideal.

  11. Atomic diffusion in metal poor stars. The influence on the Main Sequence fitting distance scale, subdwarfs ages and the value of Delta Y/ Delta Z

    NASA Astrophysics Data System (ADS)

    Salaris, M.; Groenewegen, M. A. T.; Weiss, A.

    2000-03-01

    The effect of atomic diffusion on the Main Sequence (MS) of metal-poor low mass stars is investigated. Since diffusion alters the stellar surface chemical abundances with respect to their initial values, one must ensure - by calibrating the initial chemical composition of the theoretical models - that the surface abundances of the models match the observed ones of the stellar population under scrutiny. When properly calibrated, our models with diffusion reproduce well within the errors the Hertzsprung-Russell diagram of Hipparcos subdwarfs with empirically determined T_eff values and high resolution spectroscopical [Fe/H] determinations. Since the observed surface abundances of subdwarfs are different from the initial ones due to the effect of diffusion, while the globular clusters stellar abundances are measured in Red Giants, which have practically recovered their initial abundances after the dredge-up, the isochrones to be employed for studying globular clusters and Halo subdwarfs with the same observational value of [Fe/H] are different and do not coincide. This is at odds with the basic assumption of the MS-fitting technique for distance determinations. However, the use of the rather large sample of Hipparcos lower MS subdwarfs with accurate parallaxes keeps at minimum the effect of these differences, for two reasons. First, it is possible to use subdwarfs with observed [Fe/H] values close to the cluster one; this minimizes the colour corrections (which are derived from the isochrones) needed to reduce all the subdwarfs to a mono-metallicity sequence having the same [Fe/H] than the cluster. Second, one can employ objects sufficiently faint so that the differences between the subdwarfs and cluster MS with the same observed value of [Fe/H] are small (they increase for increasing luminosity). We find therefore that the distances based on standard isochrones are basically unaltered when diffusion is taken properly into account. 
On the other hand, the absolute ages

  12. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    PubMed

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included.

  13. A goodness-of-fit test for occupancy models with correlated within-season revisits

    USGS Publications Warehouse

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys of sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., a first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
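
    In one dimension, the join count idea from spatial statistics reduces to counting like-valued adjacent pairs in each detection history. A minimal sketch (the authors' exact statistic and its reference distribution are more involved):

    ```python
    def join_count(history):
        """Count adjacent 1-1 joins in a binary detection history, e.g.
        nightly detections at one site. Values well above the expectation
        under independent Bernoulli trials suggest serial correlation in
        detections that the occupancy model should account for."""
        return sum(1 for a, b in zip(history, history[1:]) if a == 1 and b == 1)
    ```

    For example, the history [1, 1, 0, 1, 1, 1] contains three 1-1 joins, whereas an alternating history like [0, 1, 0, 1, 0] contains none; comparing the observed count to counts simulated from the fitted model gives the goodness-of-fit check.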

  14. Learning atomic human actions using variable-length Markov models.

    PubMed

    Liang, Yu-Ming; Shih, Sheng-Wen; Shih, Arthur Chun-Chieh; Liao, Hong-Yuan Mark; Lin, Cheng-Chung

    2009-02-01

    Visual analysis of human behavior has generated considerable interest in the field of computer vision because of its wide spectrum of potential applications. Human behavior can be segmented into atomic actions, each of which indicates a basic and complete movement. Learning and recognizing atomic human actions are essential to human behavior analysis. In this paper, we propose a framework for handling this task using variable-length Markov models (VLMMs). The framework comprises the following two modules: a posture labeling module and a VLMM atomic action learning and recognition module. First, a posture template selection algorithm, based on a modified shape context matching technique, is developed. The selected posture templates form a codebook that is used to convert input posture sequences into discrete symbol sequences for subsequent processing. Then, the VLMM technique is applied to learn the training symbol sequences of atomic actions. Finally, the constructed VLMMs are transformed into hidden Markov models (HMMs) for recognizing input atomic actions. This approach combines the advantages of the excellent learning function of a VLMM and the fault-tolerant recognition ability of an HMM. Experiments on realistic data demonstrate the efficacy of the proposed system.

  15. Atmospheric turbulence optical model (ATOM) based on fractal theory

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Scoggins, Jim; Carroll, Marvin P.

    1994-06-01

    An Atmospheric Turbulence Optical Model (ATOM) is presented that uses cellular automata (CA) rules as the basis for modeling synthetic phase sheets. This method allows image fracture, scintillation and blur to be correctly modeled using the principle of convolution with a complex kernel derived from CA rule interactions. The model takes into account the changing distribution of turbules, from micro-turbule domination at low altitudes to macro-turbule domination at high altitudes. The wavelength of propagating images (such as a coherent laser beam) and the range are taken into account. The ATOM model is written in standard FORTRAN 77 and enables high-speed in-line calculation of atmospheric effects without resorting to computationally intensive solutions of the Navier-Stokes equations or Cn2 profiles.

  16. Fit Point-Wise AB Initio Calculation Potential Energies to a Multi-Dimension Long-Range Model

    NASA Astrophysics Data System (ADS)

    Zhai, Yu; Li, Hui; Le Roy, Robert J.

    2016-06-01

    A potential energy surface (PES) is a fundamental tool and source of understanding for theoretical spectroscopy and for dynamical simulations. Making correct assignments for high-resolution rovibrational spectra of floppy polyatomic and van der Waals molecules often relies heavily on predictions generated from a high-quality ab initio potential energy surface. Moreover, having an effective analytic model to represent such surfaces can be as important as the ab initio results themselves. For the one-dimensional potentials of diatomic molecules, the most successful such model to date is arguably the ``Morse/Long-Range'' (MLR) function developed by R. J. Le Roy and coworkers. It is very flexible and is everywhere differentiable to all orders. It incorporates the correct predicted long-range behaviour, extrapolates sensibly at both large and small distances, and two of its defining parameters are always the physically meaningful well depth {D}_e and equilibrium distance r_e. Extensions of this model, called the Multi-Dimensional Morse/Long-Range (MD-MLR) function, have been applied successfully to atom-plus-linear-molecule, linear-molecule-linear-molecule and atom-plus-nonlinear-molecule systems. However, there are several technical challenges faced in modelling the interactions of general molecule-molecule systems, such as the absence of radial minima for some relative alignments, difficulties in fitting short-range potential energies, and challenges in determining relative-orientation-dependent long-range coefficients. This talk will illustrate some of these challenges and describe our ongoing work in addressing them. Mol. Phys. 105, 663 (2007); J. Chem. Phys. 131, 204309 (2009); Mol. Phys. 109, 435 (2011); Phys. Chem. Chem. Phys. 10, 4128 (2008); J. Chem. Phys. 130, 144305 (2009); J. Chem. Phys. 132, 214309 (2010); J. Chem. Phys. 140, 214309 (2010).
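
    For reference, the diatomic MLR function mentioned above has the following published form (notation as used by Le Roy and coworkers; quoted from memory here, so the original papers should be consulted for the exact definitions of the exponent function beta(r) and the powers p):

    ```latex
    V_{\mathrm{MLR}}(r) = \mathfrak{D}_e
      \left[ 1 - \frac{u_{\mathrm{LR}}(r)}{u_{\mathrm{LR}}(r_e)}\,
      e^{-\beta(r)\, y_p^{\mathrm{eq}}(r)} \right]^2,
    \qquad
    y_p^{\mathrm{eq}}(r) = \frac{r^p - r_e^p}{r^p + r_e^p},
    \qquad
    u_{\mathrm{LR}}(r) = \sum_{n} \frac{C_n}{r^n}
    ```

    The long-range tail u_LR(r) builds the theoretically known inverse-power coefficients C_n directly into the function, which is what gives the MLR form its sensible extrapolation at large r.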

  17. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope.

    PubMed

    Quan, Wei; Lv, Lin; Liu, Baiqi

    2014-11-01

    In order to improve the atomic spin gyroscope's operational accuracy and compensate for the random error caused by the nonlinear and weakly stable character of the atomic spin gyroscope (ASG) drift, a hybrid random-drift error model based on autoregressive (AR) and genetic programming (GP) plus genetic algorithm (GA) techniques is established. The time series of random ASG drift, acquired by analyzing and preprocessing the measured ASG data, is taken as the study object. The linear section of the model is established with the AR technique. The nonlinear section is then built with the GP technique, and GA is used to optimize the coefficients of the mathematical expression produced by GP in order to obtain a more accurate model. The simulation results indicate that this hybrid model effectively reflects the characteristics of the ASG's random drift: the square error of the random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique alone, the random drift is reduced by a further 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate for the ASG's random drift and improve the stability of the system.
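
    The linear (AR) stage of the hybrid model described above amounts to an ordinary least-squares fit of an autoregressive model to the drift time series. A minimal sketch for the AR(2) case follows, with a synthetic "drift" series in place of real gyroscope data; the GP/GA nonlinear stage is not shown.

    ```python
    import random

    def fit_ar2(x):
        """Least-squares estimates (a1, a2) for x[t] = a1*x[t-1] + a2*x[t-2] + e."""
        s11 = s22 = s12 = b1 = b2 = 0.0
        for t in range(2, len(x)):
            x1, x2 = x[t - 1], x[t - 2]
            s11 += x1 * x1
            s22 += x2 * x2
            s12 += x1 * x2
            b1 += x1 * x[t]
            b2 += x2 * x[t]
        det = s11 * s22 - s12 * s12   # 2x2 normal equations, solved directly
        a1 = (b1 * s22 - b2 * s12) / det
        a2 = (b2 * s11 - b1 * s12) / det
        return a1, a2

    # Synthetic drift series with known coefficients and small noise.
    random.seed(0)
    series = [0.1, 0.2]
    for _ in range(2000):
        series.append(0.6 * series[-1] - 0.2 * series[-2]
                      + random.gauss(0.0, 0.01))

    a1_hat, a2_hat = fit_ar2(series)   # should recover roughly (0.6, -0.2)
    ```

    In the paper's scheme, the residual left over after this linear fit is what the GP expression (with GA-tuned coefficients) is asked to explain.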

  20. A Nonlinear Model for Fuel Atomization in Spray Combustion

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey (Technical Monitor); Ibrahim, Essam A.; Sree, Dave

    2003-01-01

    Most gas turbine combustion codes rely on ad-hoc statistical assumptions regarding the outcome of fuel atomization processes. The modeling effort proposed in this project is aimed at developing a realistic model that produces accurate predictions of fuel atomization parameters. The model applies nonlinear stability theory to analyze the instability and subsequent disintegration of the liquid fuel sheet produced by fuel injection nozzles in gas turbine combustors. The fuel sheet is atomized into a multiplicity of small drops of large surface-area-to-volume ratio to enhance the evaporation rate and combustion performance. The proposed model will yield predictions of fuel sheet atomization parameters such as drop size, velocity, and orientation, as well as sheet penetration depth, breakup time, and thickness. These parameters are essential for combustion simulation codes to perform a controlled and optimized design of gas turbine fuel injectors. Optimizing fuel injection processes is crucial to improving combustion efficiency and hence reducing fuel consumption and pollutant emissions.

  1. ATOMIC DATA AND SPECTRAL MODEL FOR Fe III

    SciTech Connect

    Bautista, Manuel A.; Ballance, Connor P.; Quinet, Pascal

    2010-08-01

    We present new atomic data (radiative transition rates and collision strengths) from large-scale calculations and a non-LTE spectral model for Fe III. This model is in very good agreement with observed astronomical emission spectra, in contrast with previous models that yield large discrepancies with observations. The present atomic computations employ a combination of atomic physics methods: relativistic Hartree-Fock, the Thomas-Fermi-Dirac potential, and Dirac-Fock computations of A-values, together with the R-matrix with intermediate-coupling frame transformation and the Dirac R-matrix for collision strengths. We study the advantages and shortcomings of each method. It is found that the Dirac R-matrix collision strengths yield excellent agreement with observations, much improved over previously available models. By contrast, the transformation of the LS-coupling R-matrix fails to yield accurate effective collision strengths at around 10^4 K, despite using very large configuration expansions, owing to the limited treatment of spin-orbit effects in the near-threshold resonances of the collision strengths. The present work demonstrates that accurate atomic data for low-ionization iron-peak species are now within reach.

  2. Soluble Model of Evolution and Extinction Dynamics in a Rugged Fitness Landscape

    NASA Astrophysics Data System (ADS)

    Sibani, Paolo

    1997-08-01

    We consider a continuum version of a previously introduced and numerically studied model of macroevolution [P. Sibani, M. R. Schmidt, and P. Alstrøm, Phys. Rev. Lett. 75, 2055 (1995)] in which agents evolve by an optimization process in a rugged fitness landscape and die due to their competitive interactions. We first formulate dynamical equations for the fitness distribution and the survival probability. Secondly, we analytically derive the t^{-2} law which characterizes the lifetime distribution of biological genera. Thirdly, we discuss other dynamical properties of the model, such as the rate of extinction, and conclude with a brief discussion.

  3. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    PubMed

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large.
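
    The paper's examples use SAS's MSTRUCT syntax, but the core idea of fitting a direct covariance pattern can be sketched in a few lines. The following stand-in (not the CALIS procedure) fits the compound-symmetry pattern Sigma_ij = sigma^2*rho for i != j and sigma^2 on the diagonal by simple moment matching; the matrix entries are invented for illustration.

    ```python
    def fit_compound_symmetry(S):
        """Moment-matching estimates (sigma2, rho) for a covariance matrix S."""
        p = len(S)
        # sigma^2: average diagonal element.
        sigma2 = sum(S[i][i] for i in range(p)) / p
        # rho: average off-diagonal element divided by sigma^2.
        off = sum(S[i][j] for i in range(p) for j in range(p) if i != j)
        off /= p * (p - 1)
        return sigma2, off / sigma2

    # An exact compound-symmetry matrix (sigma2 = 4.0, rho = 0.3).
    S = [[4.0 if i == j else 1.2 for j in range(4)] for i in range(4)]
    sigma2_hat, rho_hat = fit_compound_symmetry(S)
    ```

    The SEM machinery the paper describes goes well beyond this: it supplies likelihood-based fit statistics and standard errors for such patterns, which simple moment matching does not.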

  4. Exactly solvable models for atom-molecule Hamiltonians.

    PubMed

    Dukelsky, J; Dussel, G G; Esebbag, C; Pittel, S

    2004-07-30

    We present a family of exactly solvable generalizations of the Jaynes-Cummings model involving the interaction of an ensemble of SU(2) or SU(1,1) quasispins with a single boson field. They are obtained from the trigonometric Richardson-Gaudin models by replacing one of the SU(2) or SU(1,1) degrees of freedom by an ideal boson. The application to a system of bosonic atoms and molecules is reported.

  5. Atomic Data and Modelling for Fusion: the ADAS Project

    NASA Astrophysics Data System (ADS)

    Summers, H. P.; O'Mullane, M. G.

    2011-05-01

    The paper is an update on the Atomic Data and Analysis Structure, ADAS, since ICAM-DATA06 and a forward look to its evolution in the next five years. ADAS is an international project supporting principally magnetic confinement fusion research. It has participant laboratories throughout the world, including ITER and all its partner countries. In parallel with ADAS, the ADAS-EU Project provides enhanced support for fusion research at Associated Laboratories and Universities in Europe and ITER. OPEN-ADAS, sponsored jointly by the ADAS Project and IAEA, is the mechanism for open access to principal ADAS atomic data classes and facilitating software for their use. EXTENDED-ADAS comprises a variety of special, integrated application software, beyond the purely atomic bounds of ADAS, tuned closely to specific diagnostic analyses and plasma models. The current scientific content and scope of these various ADAS and ADAS related activities are briefly reviewed. These span a number of themes including heavy element spectroscopy and models, charge exchange spectroscopy, beam emission spectroscopy and special features which provide a broad baseline of atomic modelling and support. Emphasis will be placed on `lifting the fundamental data baseline'—a principal ADAS task for the next few years. This will include discussion of ADAS and ADAS-EU coordinated and shared activities and some of the methods being exploited.

  6. Testing the Fitness Consequences of the Thermoregulatory and Parental Care Models for the Origin of Endothermy

    PubMed Central

    Clavijo-Baque, Sabrina; Bozinovic, Francisco

    2012-01-01

    The origin of endothermy is a puzzling phenomenon in the evolution of vertebrates. To address this issue several explicative models have been proposed. The main models proposed for the origin of endothermy are the aerobic capacity, the thermoregulatory and the parental care models. Our main proposal is that to compare the alternative models, a critical aspect is to determine how strongly natural selection was influenced by body temperature, and basal and maximum metabolic rates during the evolution of endothermy. We evaluate these relationships in the context of three main hypotheses aimed at explaining the evolution of endothermy, namely the parental care hypothesis and two hypotheses related to the thermoregulatory model (thermogenic capacity and higher body temperature models). We used data on basal and maximum metabolic rates and body temperature from 17 rodent populations, and used intrinsic population growth rate (Rmax) as a global proxy of fitness. We found greater support for the thermogenic capacity model of the thermoregulatory model. In other words, greater thermogenic capacity is associated with increased fitness in rodent populations. To our knowledge, this is the first test of the fitness consequences of the thermoregulatory and parental care models for the origin of endothermy. PMID:22606328

  7. A Comparison of Isoconversional and Model-Fitting Approaches to Kinetic Parameter Estimation and Application Predictions

    SciTech Connect

    Burnham, A K

    2006-05-17

    Chemical kinetic modeling has been used for many years in process optimization, estimating real-time material performance, and lifetime prediction. Chemists have tended towards developing detailed mechanistic models, while engineers have tended towards global or lumped models. Many, if not most, applications use global models by necessity, since it is impractical or impossible to develop a rigorous mechanistic model. Model fitting acquired a bad name in the thermal analysis community after that community realized, a decade after other disciplines, that deriving kinetic parameters for an assumed model from a single heating rate produces unreliable and sometimes nonsensical results. In its place, advanced isoconversional methods (1), which have their roots in the Friedman (2) and Ozawa-Flynn-Wall (3) methods of the 1960s, have become increasingly popular. In fact, as pointed out by the ICTAC kinetics project in 2000 (4), valid kinetic parameters can be derived by both isoconversional and model-fitting methods as long as a diverse set of thermal histories is used to derive the kinetic parameters. The current paper extends the understanding from that project to give a better appreciation of the strengths and weaknesses of isoconversional and model-fitting approaches. Examples are given from a variety of sources, including the former and current ICTAC round-robin exercises, data sets for materials of interest, and simulated data sets.
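
    The Friedman method cited above rests on the identity ln(dα/dt) = ln(A·f(α)) − E/(RT) at fixed conversion α, so a straight-line fit of ln(rate) against 1/T across several thermal histories recovers the activation energy without assuming a reaction model. A minimal sketch with synthetic first-order data (all kinetic parameters invented):

    ```python
    import math

    R = 8.314        # gas constant, J/(mol K)
    E_TRUE = 150e3   # assumed activation energy, J/mol
    A = 1.0e12       # assumed pre-exponential factor, 1/s
    ALPHA = 0.5      # fixed conversion level for the isoconversional cut

    # Rates at alpha = 0.5 for several temperatures (one per thermal
    # history), from a first-order model: rate = A*(1-alpha)*exp(-E/(R*T)).
    temps = [600.0, 620.0, 640.0, 660.0]
    rates = [A * (1 - ALPHA) * math.exp(-E_TRUE / (R * T)) for T in temps]

    # Least-squares slope of ln(rate) vs 1/T; then E_hat = -slope * R.
    xs = [1.0 / T for T in temps]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    e_hat = -slope * R   # recovers E_TRUE for this noise-free data
    ```

    Repeating the fit at many values of α gives E as a function of conversion, which is how isoconversional analyses expose multi-step kinetics that a single global model would smear together.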

  8. The FIT 2.0 Model - Fuel-cycle Integration and Tradeoffs

    SciTech Connect

    Steven J. Piet; Nick R. Soelberg; Layne F. Pincock; Eric L. Shaber; Gregory M Teske

    2011-06-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage requirements (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010b] are steps by the Fuel Cycle Technology program toward an analysis that accounts for the requirements and capabilities of each fuel cycle component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. This report describes FIT 2, an update of the original FIT model [Piet2010c]. FIT is a method to analyze different fuel cycles; in particular, to determine how changes in one part of a fuel cycle (say, fuel burnup, cooling, or separation efficiencies) chemically affect other parts of the fuel cycle. FIT provides the following: a rough estimate of the physics and mass-balance feasibility of combinations of technologies (and, if feasibility is an issue, an estimate of how performance would have to change to achieve it), and an estimate of impurities in fuel and in waste as a function of separation performance, fuel fabrication, reactor, uranium source, etc.
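
    The "impurities as a function of separation performance" idea can be illustrated with a toy recycle mass balance. This is a hypothetical sketch, not the FIT model itself: it tracks a single impurity through repeated recycle passes, with an assumed removal efficiency in separation and an assumed fraction of recycled material in fabricated fuel.

    ```python
    def impurity_after_passes(feed_ppm, removal_eff, recycle_fraction, passes):
        """Impurity level in fabricated fuel after repeated recycle passes.

        Each pass blends fresh feed at feed_ppm with recycled material whose
        impurity has been reduced by the separation step's removal_eff.
        """
        level = feed_ppm
        for _ in range(passes):
            recycled = recycle_fraction * (1.0 - removal_eff) * level
            fresh = (1.0 - recycle_fraction) * feed_ppm
            level = fresh + recycled
        return level

    # Invented numbers: 100 ppm impurity in feed, 99% removal, 90% recycle.
    level_50 = impurity_after_passes(100.0, 0.99, 0.9, 50)

    # Closed-form steady state of the same recurrence, for comparison:
    # L = (1-f)*feed / (1 - f*(1-e))
    steady = (1.0 - 0.9) * 100.0 / (1.0 - 0.9 * (1.0 - 0.99))
    ```

    Even this toy version shows the qualitative point the report makes: the steady-state impurity burden is set jointly by separation performance and flowsheet structure, so changing one part of the cycle propagates chemically to the others.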

  9. Aeroelastic modeling for the FIT team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.

  10. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    PubMed Central

    Velasco, Jose; Pizarro, Daniel; Macias-Guarasa, Javier

    2012-01-01

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies. PMID:23202021
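
    A toy version of the generative-model fitting idea: model the acoustic power map as a single Gaussian "blob" centred on the source and estimate the source position by least-squares grid search. The real method fits SRP maps with sparsity and subspace constraints; the map, grid, and blob width here are all invented for illustration.

    ```python
    import math

    def blob(x, y, sx, sy, width=0.5):
        """Generative model: power predicted at (x, y) for a source at (sx, sy)."""
        return math.exp(-((x - sx) ** 2 + (y - sy) ** 2) / (2 * width ** 2))

    # Synthetic "observed" power map on a 2D grid, true source at (1.0, 2.0).
    grid = [(0.25 * i, 0.25 * j) for i in range(17) for j in range(17)]
    observed = {pt: blob(pt[0], pt[1], 1.0, 2.0) for pt in grid}

    # Fit: pick the candidate source position minimising squared error
    # between the observed map and the model-predicted map.
    best = min(grid, key=lambda c: sum(
        (observed[pt] - blob(pt[0], pt[1], c[0], c[1])) ** 2 for pt in grid))
    ```

    Fitting the whole map, rather than just taking its maximum, is what gives the approach robustness to noise and unmodelled effects; the paper's sparse constraints extend this to multiple simultaneous sources.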

  11. Conducting Tetrad Tests of Model Fit and Contrasts of Tetrad-Nested Models: A New SAS Macro

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.; Bollen, Kenneth A.

    2005-01-01

    This article describes a SAS macro to assess model fit of structural equation models by employing a test of the model-implied vanishing tetrads. Use of this test has been limited in the past, in part due to the lack of software that fully automates the test in a user-friendly way. The current SAS macro provides a straightforward method for…

  12. IRT Model Fit Evaluation from Theory to Practice: Progress and Some Unanswered Questions

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2013-01-01

    In this commentary, the authors congratulate Professor Alberto Maydeu-Olivares on his article [EJ1023617: "Goodness-of-Fit Assessment of Item Response Theory Models, Measurement: Interdisciplinary Research and Perspectives," this issue] as it provides a much needed overview on the mathematical underpinnings of the theory behind the…

  13. Longitudinal Changes in Physical Fitness Performance in Youth: A Multilevel Latent Growth Curve Modeling Approach

    ERIC Educational Resources Information Center

    Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong

    2013-01-01

    Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e. four years) in young adolescents aged from 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…

  14. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  15. Universal Screening for Emotional and Behavioral Problems: Fitting a Population-Based Model

    ERIC Educational Resources Information Center

    Schanding, G. Thomas, Jr.; Nowell, Kerri P.

    2013-01-01

    Schools have begun to adopt a population-based method to conceptualizing assessment and intervention of students; however, little empirical evidence has been gathered to support this shift in service delivery. The present study examined the fit of a population-based model in identifying students' behavioral and emotional functioning using a…

  16. Critique of "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    Harris, Carole Ruth

    2010-01-01

    This article presents the author's comments on Hisham Ghassib's article entitled "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" In his article, Ghassib (2010) provides an overview of the philosophical foundations that led to exact science, its role in what was later to become a driving force in the modern…

  17. On Fitting Nonlinear Latent Curve Models to Multiple Variables Measured Longitudinally

    ERIC Educational Resources Information Center

    Blozis, Shelley A.

    2007-01-01

    This article shows how nonlinear latent curve models may be fitted for simultaneous analysis of multiple variables measured longitudinally using Mx statistical software. Longitudinal studies often involve observation of several variables across time with interest in the associations between change characteristics of different variables measured…

  18. Super Kids--Superfit. A Comprehensive Fitness Intervention Model for Elementary Schools.

    ERIC Educational Resources Information Center

    Virgilio, Stephen J.; Berenson, Gerald S.

    1988-01-01

    Objectives and activities of the cardiovascular (CV) fitness program Super Kids--Superfit are related in this article. This exercise program is one component of the Heart Smart Program, a CV health intervention model for elementary school students. Program evaluation, parent education, and school and community intervention strategies are…

  19. Small-Sample Robust Estimators of Noncentrality-Based and Incremental Model Fit

    ERIC Educational Resources Information Center

    Herzog, Walter; Boomsma, Anne

    2009-01-01

    Traditional estimators of fit measures based on the noncentral chi-square distribution (root mean square error of approximation [RMSEA], Steiger's [gamma], etc.) tend to overreject acceptable models when the sample size is small. To handle this problem, it is proposed to employ Bartlett's (1950), Yuan's (2005), or Swain's (1975) correction of the…

  20. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    PubMed

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
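
    The standardized residual of the Hambleton et al. type compares, within an ability group, the observed proportion correct against the model-based item characteristic curve. A minimal sketch for a 2PL item follows; the item parameters and counts are invented, and this is the classical standardized residual, not the ratio-estimate variant the paper proposes.

    ```python
    import math

    def icc_2pl(theta, a, b):
        """2PL item characteristic curve: P(correct | theta)."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def item_fit_residual(n_correct, n_total, theta, a, b):
        """Standardized residual (observed - expected) for one ability group."""
        p_model = icc_2pl(theta, a, b)
        p_obs = n_correct / n_total
        se = math.sqrt(p_model * (1.0 - p_model) / n_total)
        return (p_obs - p_model) / se

    # Example: 62 of 100 examinees near theta = 0.5 answer correctly;
    # assumed item parameters a = 1.2 (discrimination), b = 0.0 (difficulty).
    z = item_fit_residual(62, 100, 0.5, 1.2, 0.0)
    ```

    If the model fits, residuals of this form across ability groups should behave approximately like standard normal draws, which is the large-sample property the paper proves for its own residual.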

  1. Comments on Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    McCluskey, Ken W.

    2010-01-01

    This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…

  2. Review of Hisham Ghassib: Where Does Creativity Fit into the Productivist Industrial Model of Knowledge Production?

    ERIC Educational Resources Information Center

    Neber, Heinz

    2010-01-01

    In this article, the author presents his comments on Hisham Ghassib's article entitled "Where Does Creativity Fit into the Productivist Industrial Model of Knowledge Production?" Ghassib (2010) describes historical transformations of science from a marginal and non-autonomous activity which had been constrained by traditions to a self-autonomous,…

  3. Fitting multilevel models with ordinal outcomes: performance of alternative specifications and methods of estimation.

    PubMed

    Bauer, Daniel J; Sterba, Sonya K

    2011-12-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ when instead fitting multilevel cumulative logit models to ordinal data, maximum likelihood (ML), or penalized quasi-likelihood (PQL). ML and PQL are compared across variations in sample size, magnitude of variance components, number of outcome categories, and distribution shape. Fitting a multilevel linear model to ordinal outcomes is shown to be inferior in virtually all circumstances. PQL performance improves markedly with the number of ordinal categories, regardless of distribution shape. In contrast to binary data, PQL often performs as well as ML when used with ordinal data. Further, the performance of PQL is typically superior to ML when the data include a small to moderate number of clusters (i.e., ≤ 50 clusters).
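
    The response side of the cumulative logit model discussed above maps a linear predictor and an ordered set of thresholds to category probabilities, as differences of adjacent logistic CDFs. A minimal sketch with invented thresholds (estimation, whether ML or PQL, is not shown):

    ```python
    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    def category_probs(eta, thresholds):
        """P(Y = k) for ordinal categories 1..K given linear predictor eta.

        thresholds must be strictly increasing; K = len(thresholds) + 1.
        """
        cdf = [logistic(t - eta) for t in thresholds]   # P(Y <= k), k < K
        probs = [cdf[0]]
        probs += [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]
        probs.append(1.0 - cdf[-1])
        return probs

    # Four ordinal categories; eta would include fixed and random effects
    # in the multilevel case.  All values here are illustrative.
    probs = category_probs(eta=0.3, thresholds=[-1.0, 0.0, 1.5])
    ```

    Treating these ordinal categories as if they were interval-scaled, i.e. fitting a multilevel linear model to the raw category codes, discards exactly this threshold structure, which is why the paper finds that approach inferior in virtually all circumstances.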

  4. Impact of Missing Data on Person-Model Fit and Person Trait Estimation

    ERIC Educational Resources Information Center

    Zhang, Bo; Walker, Cindy M.

    2008-01-01

    The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…

  5. A constructive model potential method for atomic interactions

    NASA Technical Reports Server (NTRS)

    Bottcher, C.; Dalgarno, A.

    1974-01-01

    A model potential method is presented that can be applied to many electron single centre and two centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.

  6. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  7. Modeling of Turbulence Effect on Liquid Jet Atomization

    NASA Technical Reports Server (NTRS)

    Trinh, H. P.

    2007-01-01

    Recent studies indicate that turbulence behaviors within a liquid jet have considerable effect on the atomization process. Such turbulent flow phenomena are encountered in most practical applications of common liquid spray devices. This research aims to model the effects of turbulence occurring inside a cylindrical liquid jet on its atomization process. The two widely used atomization models, the Kelvin-Helmholtz (KH) instability model of Reitz and the Taylor analogy breakup (TAB) model of O'Rourke and Amsden, portraying primary liquid jet disintegration and secondary droplet breakup, respectively, are examined. Additional terms are formulated and appropriately implemented into these two models to account for the turbulence effect. Results for the flow conditions examined in this study indicate that the turbulence terms are significant in comparison with other terms in the models. In the primary breakup regime, the turbulent liquid jet tends to break up into large drops while its intact core is slightly shorter than that obtained without turbulence. In contrast, the secondary droplet breakup with the internal liquid turbulence considered produces smaller drops. Computational results indicate that the proposed models provide predictions that agree reasonably well with available measured data.
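
    The baseline TAB model treats droplet distortion y(t) as a forced, damped oscillator and declares breakup when y exceeds 1. The sketch below integrates that oscillator using the commonly quoted TAB constants (Ck = 8, Cd = 5, CF = 1/3, Cb = 0.5) and invented flow conditions (a 100-micron water drop in a 50 m/s gas stream); the turbulence source terms the paper adds are not modelled here.

    ```python
    # y'' = (CF/CB)*rho_g*u^2/(rho_l*r^2) - (CK*sigma/(rho_l*r^3))*y
    #       - (CD*mu_l/(rho_l*r^2))*y'        ; breakup when y > 1.
    CK, CD, CF, CB = 8.0, 5.0, 1.0 / 3.0, 0.5
    RHO_L, RHO_G = 1000.0, 1.2     # liquid / gas densities, kg/m^3
    SIGMA, MU_L = 0.0728, 1.0e-3   # surface tension N/m, viscosity Pa*s
    R_DROP, U_REL = 100e-6, 50.0   # drop radius m, relative velocity m/s

    forcing = (CF / CB) * RHO_G * U_REL**2 / (RHO_L * R_DROP**2)
    omega2 = CK * SIGMA / (RHO_L * R_DROP**3)   # restoring (surface tension)
    damping = CD * MU_L / (RHO_L * R_DROP**2)   # viscous damping

    # Semi-implicit Euler integration of the distortion y(t).
    y, v, dt, max_y = 0.0, 0.0, 1.0e-7, 0.0
    for _ in range(10000):         # ~1 ms of physical time
        v += dt * (forcing - omega2 * y - damping * v)
        y += dt * v
        max_y = max(max_y, y)
    ```

    At these mild conditions the distortion oscillates below the breakup threshold y = 1; raising U_REL drives y past 1 and triggers breakup, which is the mechanism the paper's added turbulence terms modify.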

  8. Atomic Data and the Modeling of Supernova Spectra

    NASA Astrophysics Data System (ADS)

    Fontes, Christopher

    2012-06-01

    The modeling of supernovae (SNe) incorporates a variety of disciplines, including hydrodynamics, radiation transport, nuclear physics and atomic physics. These efforts require numerical simulation of the final stages of a star's life, the supernova explosion phase, and the radiation that is subsequently emitted by the supernova remnant, which can occur over a time span of tens of thousands of years. While there are several different types of SNe, they all emit radiation in some form. The measurement and interpretation of these spectra provide important information about the structure of the exploding star and the supernova engine. In this talk, the role of atomic data is highlighted as it pertains to the modeling of supernova spectra. Recent applications [1,2] involve the Los Alamos OPLIB opacity database, which has been used to provide atomic opacities for modeling supernova plasmas under local thermodynamic equilibrium (LTE) conditions. Ongoing work includes the application of atomic data generated by the Los Alamos suite of atomic physics codes under more complicated, non-LTE conditions [3]. As a specific, recent example, a portion of the x-ray spectrum produced by Tycho's supernova remnant (SN 1572) will be discussed [4]. [1] C.L. Fryer et al., Astrophys. J. 707, 193 (2009). [2] C.L. Fryer et al., Astrophys. J. 725, 296 (2009). [3] C.J. Fontes et al., Conference Proceedings for ICPEAC XXVII (Belfast, Northern Ireland), in press (2011). [4] K.A. Eriksen et al., presentation at the 2012 AAS Meeting (Austin, TX).

  10. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    PubMed

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their pairwise combinations against Microtox® and XenoScreen YES/YAS® bioassays. As the quantitative assessment of ecotoxicity of drug mixtures is a complex and sophisticated topic, in the present study we used two major approaches to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each of the experiments carried out. The second approach was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function, with calculation of a weight (regression coefficient or other model parameter) for each of the participants in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure for the impact of either drug A on drug B or, vice versa, of B on A. Results of the studies justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action, and quite a few of the mixtures show synergistic or dependent action. PMID:27479466
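
    The concentration-addition (CA) prediction and its model deviation ratio (MDR) are simple to state: the CA-predicted mixture EC50 is the harmonic combination of the components' EC50s weighted by their mixture fractions, and the MDR compares predicted to observed effect concentrations. A sketch with illustrative values (not the study's data) under one common sign convention:

```python
def ca_predicted_ec50(fractions, ec50s):
    """Concentration-addition prediction for a mixture:
    1 / EC50_mix = sum_i (fraction_i / EC50_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(f / e for f, e in zip(fractions, ec50s))

def model_deviation_ratio(predicted, observed):
    """MDR = predicted EC50 / observed EC50; values near 1 indicate
    additivity (conventions for synergy/antagonism cutoffs vary)."""
    return predicted / observed

# Illustrative 50:50 binary mixture of drugs A (EC50 = 2.0) and B (EC50 = 4.0).
pred = ca_predicted_ec50([0.5, 0.5], [2.0, 4.0])   # 1 / (0.25 + 0.125)
mdr = model_deviation_ratio(pred, observed=2.0)
```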

  11. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM. PMID:26737125
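
    The local fit described above can be sketched as a grid search over physiological frequencies, solving a small linear least-squares problem for the amplitude and phase of an exponentially decaying cosine at each candidate frequency. This is a schematic reconstruction, not the authors' code; the decay rate and grid are assumptions.

```python
import numpy as np

# Fit s(t) ~ exp(-gamma*t) * (a*cos(2*pi*f*t) + b*sin(2*pi*f*t)) over a short
# window, scanning f across the physiological band; the residual minimum
# marks the local pulse frequency.
def local_frequency_fit(t, s, gamma=1.0, f_grid=np.arange(0.5, 3.0, 0.01)):
    best_f, best_res = None, np.inf
    for f in f_grid:
        env = np.exp(-gamma * t)
        X = np.column_stack([env * np.cos(2 * np.pi * f * t),
                             env * np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, s, rcond=None)
        res = np.sum((s - X @ coef) ** 2)
        if res < best_res:
            best_f, best_res = f, res
    return best_f

# Synthetic one-second window at 100 Hz with a 1.2 Hz (72 BPM) component.
t = np.arange(0.0, 1.0, 0.01)
s = np.exp(-1.0 * t) * np.cos(2 * np.pi * 1.2 * t + 0.3)
f_hat = local_frequency_fit(t, s)
```

    Because the phase enters linearly through the cosine/sine pair, only the frequency needs a nonlinear search.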

  12. Phylogenetic Tree Reconstruction Accuracy and Model Fit when Proportions of Variable Sites Change across the Tree

    PubMed Central

    Grievink, Liat Shavit; Penny, David; Hendy, Michael D.; Holland, Barbara R.

    2010-01-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction. PMID:20525636

  13. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounds and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen of the cervical vertebrae, the circle model is extended along the z-axis to a cylinder model that incorporates additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. Experiments show that the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebrae.

  14. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  15. Active Contours Using Additive Local and Global Intensity Fitting Models for Intensity Inhomogeneous Image Segmentation

    PubMed Central

    Soomro, Shafiullah; Kim, Jeong Heon; Soomro, Toufique Ahmed

    2016-01-01

    This paper introduces an improved region-based active contour method with a level set formulation. The proposed energy functional integrates both local and global intensity fitting terms in an additive formulation. The local intensity fitting term provides a local force that pulls the contour and confines it to object boundaries. In turn, the global intensity fitting term drives the movement of the contour at a distance from the object boundaries. The global intensity term is based on the global division algorithm, which can capture the intensity information of an image better than the Chan-Vese (CV) model. The local and global terms are combined to construct an energy function, based on a level set formulation, to segment images with intensity inhomogeneity. Experimental results show that the proposed method performs better both qualitatively and quantitatively compared to other state-of-the-art methods. PMID:27800011

  16. Modelling of the toe trajectory during normal gait using circle-fit approximation.

    PubMed

    Fang, Juan; Hunt, Kenneth J; Xie, Le; Yang, Guo-Yuan

    2016-10-01

    This work aimed to validate the approach of using a circle to fit the toe trajectory relative to the hip and to investigate linear regression models for describing such toe trajectories in normal gait. Twenty-four subjects walked at seven speeds. Best-fit circle algorithms were developed to approximate the relative toe trajectory with a circle. The mean approximation error between the toe trajectory and its best-fit circle was less than 4%. Regarding the best-fit circles for the toe trajectories of all subjects, the normalised radius was constant, while the normalised centre offset decreased as the walking cadence increased; the curve range generally had a positive linear relationship with walking cadence. The regression functions of the circle radius, the centre offset and the curve range with respect to leg length and walking cadence were explicitly defined. This study demonstrated that circle-fit approximation of the relative toe trajectories is generally applicable in normal gait. The functions provide a quantitative description of the relative toe trajectories. These results have potential application in the design of gait rehabilitation technologies.
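
    A standard way to compute such a best-fit circle is the algebraic (Kasa) fit, which turns the problem into linear least squares; a minimal sketch follows (the paper's exact algorithm may differ).

```python
import numpy as np

# Algebraic (Kasa) circle fit: write x^2 + y^2 = a*x + b*y + c, solve the
# overdetermined linear system for (a, b, c), then read off
# center = (a/2, b/2) and radius = sqrt(c + a^2/4 + b^2/4).
def fit_circle(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Noise-free arc of a circle centered at (1, 2) with radius 3.
theta = np.linspace(0.2, 2.5, 50)
cx, cy, r = fit_circle(1 + 3 * np.cos(theta), 2 + 3 * np.sin(theta))
```

    The algebraic fit is exact for noise-free data and serves well as an initializer for geometric (orthogonal-distance) refinements.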

  17. Fitting the distribution of dry and wet spells with alternative probability models

    NASA Astrophysics Data System (ADS)

    Deni, Sayang Mohd; Jemain, Abdul Aziz

    2009-06-01

    The development of the rainfall occurrence model is greatly important not only for data-generation purposes, but also in providing informative resources for future advancements in water-related sectors, such as water resource management and the hydrological and agricultural sectors. Various probability models have been fitted to sequences of dry (wet) days by previous researchers in the field. Building on these, the present study aims to propose three types of mixture distributions, namely the mixture of two log series distributions (LSD), the mixture of the log series and Poisson distributions (MLPD), and the mixture of the log series and geometric distributions (MLGD), as alternative probability models to describe the distribution of dry (wet) spells in daily rainfall events. To compare the performance of the proposed models against nine existing probability models, 54 data sets published by several authors were reanalyzed in this study, along with new data sets of daily observations from six selected rainfall stations in Peninsular Malaysia for the period 1975-2004. In determining the best-fitting distribution for the observed dry (wet) spells, a chi-square goodness-of-fit test was used. The results revealed that the newly proposed MLGD and MLPD models showed a better fit, successfully fitting the distribution of dry and wet spells for more than half of the data sets. However, existing models, such as the truncated negative binomial and the modified LSD, were also among the successful probability models for representing the sequence of dry (wet) days in daily rainfall occurrence.
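
    As an illustration of the goodness-of-fit step, the simplest candidate, a geometric distribution of spell lengths, can be fitted by maximum likelihood (p = 1/mean) and checked with a chi-square statistic. This is a generic sketch on synthetic data, not the study's code or rainfall records.

```python
import numpy as np

# Fit a geometric distribution P(k) = (1-p)^(k-1) * p, k = 1, 2, ..., to a
# sample of spell lengths (MLE: p = 1/mean) and compute a chi-square
# goodness-of-fit statistic over binned counts with a pooled tail bin.
def fit_geometric_chi2(spells, k_max=10):
    spells = np.asarray(spells)
    n = spells.size
    p_hat = 1.0 / spells.mean()
    observed = np.array([np.sum(spells == k) for k in range(1, k_max)]
                        + [np.sum(spells >= k_max)])
    probs = np.array([(1 - p_hat) ** (k - 1) * p_hat for k in range(1, k_max)]
                     + [(1 - p_hat) ** (k_max - 1)])   # tail: P(K >= k_max)
    expected = n * probs
    chi2 = np.sum((observed - expected) ** 2 / expected)
    return p_hat, chi2

rng = np.random.default_rng(0)
spells = rng.geometric(0.3, size=10000)   # synthetic "dry spell" lengths
p_hat, chi2 = fit_geometric_chi2(spells)
```

    The statistic would then be compared against a chi-square critical value with bins minus estimated parameters minus one degrees of freedom.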

  18. The Fitness Landscape of HIV-1 Gag: Advanced Modeling Approaches and Validation of Model Predictions by In Vitro Testing

    PubMed Central

    Omarjee, Saleha; Walker, Bruce D.; Chakraborty, Arup; Ndung'u, Thumbi

    2014-01-01

    Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = −0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = −0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = −0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.
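
    The Ising-type landscape described above assigns each mutant sequence an energy built from field and coupling terms, with higher energy corresponding to lower predicted fitness. A schematic with made-up parameters (the h and J values below are hypothetical, not the inferred HIV-1 Gag parameters):

```python
import numpy as np

# Schematic Ising fitness landscape: a sequence is a binary vector s
# (0 = wild type, 1 = mutant at that site) with energy
#   E(s) = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j ;
# higher E means lower predicted fitness.  h and J here are illustrative.
def ising_energy(s, h, J):
    s = np.asarray(s, dtype=float)
    return h @ s + 0.5 * s @ J @ s   # J symmetric with zero diagonal

h = np.array([1.0, 0.2, 0.5])
J = np.array([[0.0, -0.8, 0.0],
              [-0.8, 0.0, 0.3],
              [0.0, 0.3, 0.0]])

e_wt = ising_energy([0, 0, 0], h, J)      # wild type: E = 0
e_single = ising_energy([1, 0, 0], h, J)  # single mutant: E = h[0]
e_double = ising_energy([1, 1, 0], h, J)  # pair with compensatory coupling
```

    A negative coupling (as between sites 0 and 1 here) makes the double mutant cheaper than the sum of the singles, the kind of compensatory interaction such models are designed to capture.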

  20. Fitting complex population models by combining particle filters with Markov chain Monte Carlo.

    PubMed

    Knape, Jonas; de Valpine, Perry

    2012-02-01

    We show how a recent framework combining Markov chain Monte Carlo (MCMC) with particle filters (PFMCMC) may be used to estimate population state-space models. With the purpose of utilizing the strengths of each method, PFMCMC explores hidden states by particle filters, while process and observation parameters are estimated using an MCMC algorithm. PFMCMC is exemplified by analyzing time series data on a red kangaroo (Macropus rufus) population in New South Wales, Australia, using MCMC over model parameters based on an adaptive Metropolis-Hastings algorithm. We fit three population models to these data; a density-dependent logistic diffusion model with environmental variance, an unregulated stochastic exponential growth model, and a random-walk model. Bayes factors and posterior model probabilities show that there is little support for density dependence and that the random-walk model is the most parsimonious model. The particle filter Metropolis-Hastings algorithm is a brute-force method that may be used to fit a range of complex population models. Implementation is straightforward and less involved than standard MCMC for many models, and marginal densities for model selection can be obtained with little additional effort. The cost is mainly computational, resulting in long running times that may be improved by parallelizing the algorithm. PMID:22624307
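
    For the simplest of the three models above, a Gaussian random-walk state-space model, the bootstrap particle filter's log-likelihood estimate can be checked against the exact Kalman-filter value. This is a generic sketch of the particle-filter building block that a PFMCMC scheme would embed, not the authors' code.

```python
import numpy as np

# Bootstrap particle filter for x_t = x_{t-1} + w_t, y_t = x_t + v_t with
# w ~ N(0, q), v ~ N(0, r).  Returns the log-likelihood estimate that a
# PFMCMC sampler would plug into its Metropolis-Hastings acceptance ratio.
def pf_loglik(y, q, r, n_particles=2000, rng=None):
    rng = rng or np.random.default_rng(1)
    x = np.zeros(n_particles)
    loglik = 0.0
    for obs in y:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)    # propagate
        logw = -0.5 * np.log(2 * np.pi * r) - (obs - x) ** 2 / (2 * r)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                      # incremental lik
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # resample
    return loglik

def kalman_loglik(y, q, r):
    """Exact log-likelihood for the same linear-Gaussian model."""
    mean, var, loglik = 0.0, 0.0, 0.0
    for obs in y:
        var += q                              # predict
        s = var + r                           # innovation variance
        loglik += -0.5 * (np.log(2 * np.pi * s) + (obs - mean) ** 2 / s)
        k = var / s                           # Kalman gain and update
        mean += k * (obs - mean)
        var *= (1 - k)
    return loglik

rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0, 0.3, 40))
y = x_true + rng.normal(0, 0.3, 40)
ll_pf = pf_loglik(y, q=0.09, r=0.09)
ll_kf = kalman_loglik(y, q=0.09, r=0.09)
```

    With a few thousand particles the estimate is close to the exact value; PFMCMC exploits the fact that such an unbiased likelihood estimate inside MCMC still targets the correct posterior.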

  1. Fitting parametric models of diffusion MRI in regions of partial volume

    NASA Astrophysics Data System (ADS)

    Eaton-Rosen, Zach; Cardoso, M. J.; Melbourne, Andrew; Orasanu, Eliza; Bainbridge, Alan; Kendall, Giles S.; Robertson, Nicola J.; Marlow, Neil; Ourselin, Sebastien

    2016-03-01

    Regional analysis is normally done by fitting models per voxel and then averaging over a region, accounting for partial volume (PV) only to some degree. In thin, folded regions such as the cerebral cortex, such methods do not work well, as the partial volume confounds parameter estimation. Instead, we propose to fit the models per region directly with explicit PV modeling. In this work we robustly estimate region-wise parameters whilst explicitly accounting for partial volume effects. We use a high-resolution segmentation from a T1 scan to assign each voxel in the diffusion image a probabilistic membership to each of k tissue classes. We rotate the DW signal at each voxel so that it aligns with the z-axis, then model the signal at each voxel as a linear superposition of a representative signal from each of the k tissue types. Fitting involves optimising these representative signals to best match the data, given the known probabilities of belonging to each tissue type that we obtained from the segmentation. We demonstrate this method improves parameter estimation in digital phantoms for the diffusion tensor (DT) and `Neurite Orientation Dispersion and Density Imaging' (NODDI) models. The method provides accurate parameter estimates even in regions where the normal approach fails completely, for example where partial volume is present in every voxel. Finally, we apply this model to brain data from preterm infants, where the thin, convoluted, maturing cortex necessitates such an approach.
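
    For fixed tissue memberships, the per-region fit described here reduces to a linear least-squares problem: stack the voxel signals and solve for the k representative tissue signals. A minimal sketch of that core step with illustrative shapes and synthetic data, not the authors' pipeline:

```python
import numpy as np

# Each voxel signal is modeled as a probability-weighted sum of k
# representative tissue signals: S (V x M) ~= P (V x k) @ T (k x M), where P
# holds the per-voxel tissue memberships from the segmentation.  Solving for
# the representative signals T is ordinary least squares.
rng = np.random.default_rng(0)
n_vox, n_meas, k = 200, 30, 3

T_true = rng.normal(size=(k, n_meas))            # representative signals
P = rng.dirichlet(np.ones(k), size=n_vox)        # memberships sum to 1
S = P @ T_true + rng.normal(0, 0.01, (n_vox, n_meas))  # noisy voxel data

T_hat, *_ = np.linalg.lstsq(P, S, rcond=None)
rmse = np.sqrt(np.mean((T_hat - T_true) ** 2))
```

    In the paper the representative signals are themselves parametric (DT, NODDI), so this linear solve is replaced by a nonlinear fit, but the partial-volume mixing structure is the same.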

  2. Semirelativistic model for ionization of atomic hydrogen by electron impact

    SciTech Connect

    Attaourti, Y.; Taj, S.; Manaut, B.

    2005-06-15

    We present a semirelativistic model for the description of the ionization process of atomic hydrogen by electron impact in the first Born approximation by using the Darwin wave function to describe the bound state of atomic hydrogen and the Sommerfeld-Maue wave function to describe the ejected electron. This model, accurate to first order in Z/c in the relativistic correction, shows that, even at low kinetic energies of the incident electron, spin effects are small but not negligible. These effects become noticeable with increasing incident electron energies. All analytical calculations are exact and our semirelativistic results are compared with the results obtained in the nonrelativistic Coulomb Born approximation both for the coplanar asymmetric and the binary coplanar geometries.

  3. Empirical model of atomic nitrogen in the upper thermosphere

    NASA Technical Reports Server (NTRS)

    Engebretson, M. J.; Mauersberger, K.; Kayser, D. C.; Potter, W. E.; Nier, A. O.

    1977-01-01

    Atomic nitrogen number densities in the upper thermosphere measured by the open source neutral mass spectrometer (OSS) on Atmosphere Explorer-C during 1974 and part of 1975 have been used to construct a global empirical model at an altitude of 375 km based on a spherical harmonic expansion. The most evident features of the model are large diurnal and seasonal variations of atomic nitrogen and only a moderate and latitude-dependent density increase during periods of geomagnetic activity. Maximum and minimum N number densities at 375 km for periods of low solar activity are 3.6 × 10⁶ cm⁻³ at 1500 LST (local solar time) and low latitude in the summer hemisphere and 1.5 × 10⁵ cm⁻³ at 0200 LST at mid-latitudes in the winter hemisphere.
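
    Fitting such an empirical model, a spherical harmonic expansion of density over the globe, amounts to linear least squares in the harmonic coefficients. A toy degree-1 sketch with an unnormalized real-harmonic basis and synthetic data (not the AE-C measurements):

```python
import numpy as np

# Degree-1 real spherical-harmonic basis (up to normalization):
# [1, sin(lat), cos(lat)*cos(lon), cos(lat)*sin(lon)].  Fitting samples on
# the sphere is then an ordinary linear least-squares problem.
def sph_basis(lat, lon):
    return np.column_stack([np.ones_like(lat), np.sin(lat),
                            np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon)])

rng = np.random.default_rng(0)
lat = rng.uniform(-np.pi / 2, np.pi / 2, 500)
lon = rng.uniform(0, 2 * np.pi, 500)

coef_true = np.array([14.0, -0.8, 0.5, 0.3])   # arbitrary illustrative values
y = sph_basis(lat, lon) @ coef_true + rng.normal(0, 0.05, 500)

coef_hat, *_ = np.linalg.lstsq(sph_basis(lat, lon), y, rcond=None)
```

    A real thermospheric model would use higher degrees, local-time terms, and geophysical indices, but the normal-equations machinery is identical.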

  4. Extended Bose-Hubbard models with ultracold magnetic atoms.

    PubMed

    Baier, S; Mark, M J; Petter, D; Aikawa, K; Chomaz, L; Cai, Z; Baranov, M; Zoller, P; Ferlaino, F

    2016-04-01

    The Hubbard model underlies our understanding of strongly correlated materials. Whereas its standard form only comprises interactions between particles at the same lattice site, extending it to encompass long-range interactions is predicted to profoundly alter the quantum behavior of the system. We realize the extended Bose-Hubbard model for an ultracold gas of strongly magnetic erbium atoms in a three-dimensional optical lattice. Controlling the orientation of the atomic dipoles, we reveal the anisotropic character of the onsite interaction and hopping dynamics and their influence on the superfluid-to-Mott insulator quantum phase transition. Moreover, we observe nearest-neighbor interactions, a genuine consequence of the long-range nature of dipolar interactions. Our results lay the groundwork for future studies of exotic many-body quantum phases. PMID:27124454

  5. High precision measurements of atom column positions using model-based exit wave reconstruction.

    PubMed

    De Backer, A; Van Aert, S; Van Dyck, D

    2011-01-01

    In this paper, it has been investigated how to measure atom column positions as accurately and precisely as possible using a focal series of images. In theory, it is expected that the precision would considerably improve using a maximum likelihood estimator based on the full series of focal images. As such, the theoretical lower bound on the variances of the unknown atom column positions can be attained. However, this approach is numerically demanding. Therefore, maximum likelihood estimation has been compared with the results obtained by fitting a model to a reconstructed exit wave rather than to the full series of focal images. Hence, a real space model-based exit wave reconstruction technique based on the channelling theory is introduced. Simulations show that the reconstructed complex exit wave contains the same amount of information concerning the atom column positions as the full series of focal images. Only for thin samples, which act as weak phase objects, this information can be retrieved from the phase of the reconstructed complex exit wave.

  6. Improved cosmological model fitting of Planck data with a dark energy spike

    NASA Astrophysics Data System (ADS)

    Park, Chan-Gyung

    2015-06-01

    The Λ cold dark matter (ΛCDM) model is currently known as the simplest cosmological model that best describes observations with a minimal number of parameters. Here we introduce a cosmology model that is preferred over the conventional ΛCDM one by constructing dark energy as the sum of the cosmological constant Λ and an additional fluid that is designed to have an extremely short transient spike in energy density during the radiation-matter equality era and an early scaling behavior with the radiation and matter densities. The density parameter of the additional fluid is defined as a Gaussian function plus a constant in logarithmic scale-factor space. Searching for the best-fit cosmological parameters in the presence of such a dark energy spike reduces the chi-square value by about five times the number of additional parameters introduced, and yields narrower constraints on the matter density and Hubble constant compared with the best-fit ΛCDM model. The significant improvement in the chi-square mainly comes from the better fitting of the Planck temperature power spectrum around the third (ℓ ≈ 800) and sixth (ℓ ≈ 1800) acoustic peaks. The likelihood ratio test and the Akaike information criterion suggest that the model with a dark energy spike is strongly favored by current cosmological observations over the conventional ΛCDM model. However, based on the Bayesian information criterion, which penalizes models with more parameters, the strong evidence supporting the presence of a dark energy spike disappears. Our result emphasizes that alternative cosmological parameter estimation with even better fitting of the same observational data is allowed in Einstein's gravity.
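
    The information criteria used in this comparison are simple functions of the best-fit chi-square and the parameter count: AIC = χ² + 2k and BIC = χ² + k ln N. The numbers below are hypothetical, not the paper's fit values; they merely show how a large-N BIC penalty can overturn an AIC preference.

```python
import numpy as np

# AIC = chi2 + 2k, BIC = chi2 + k*ln(N): the same chi-square improvement can
# win under AIC yet lose under BIC when the number of data points N is large.
def aic(chi2, k):
    return chi2 + 2 * k

def bic(chi2, k, n):
    return chi2 + k * np.log(n)

n_data = 2500                          # hypothetical number of data points
base = dict(chi2=10000.0, k=6)         # baseline fit (hypothetical)
spike = dict(chi2=9985.0, k=9)         # 3 extra parameters, delta-chi2 = -15

d_aic = aic(spike["chi2"], spike["k"]) - aic(base["chi2"], base["k"])
d_bic = (bic(spike["chi2"], spike["k"], n_data)
         - bic(base["chi2"], base["k"], n_data))
# d_aic = -15 + 6 = -9          -> extended model favored by AIC
# d_bic = -15 + 3*ln(2500) > 0  -> baseline favored by BIC
```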

  7. Charged Neutrinos and Atoms in the Standard Model

    NASA Astrophysics Data System (ADS)

    Takasugi, E.; Tanaka, M.

    1992-03-01

    The possibility of charge quantization in the standard model is examined in the absence of the "generation as copies" rule. It is shown that neutrinos and atoms can have mini-charges, while the neutron remains neutral. If a triplet Higgs boson is introduced, neutrinos acquire masses. Two neutrinos form a Konopinski-Mahmoud Dirac particle and the other becomes a Majorana particle due to the hidden local anomaly-free U(1) symmetry.

  8. A flexible, interactive software tool for fitting the parameters of neuronal models

    PubMed Central

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I.; Freund, Tamás F.; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  10. How Should We Assess the Fit of Rasch-Type Models? Approximating the Power of Goodness-of-Fit Statistics in Categorical Data Analysis

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Montano, Rosa

    2013-01-01

    We investigate the performance of three statistics, R₁, R₂ (Glas in "Psychometrika" 53:525-546, 1988), and M₂ (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005, "Psychometrika" 71:713-732, 2006), to assess the overall fit of a one-parameter logistic model (1PL) estimated by (marginal) maximum…

  11. Atomic Data and Spectral Model for Fe II

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel A.; Fivet, Vanessa; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R.

    2015-08-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe II. The data sets involve 52 levels from the 3d^7, 3d^6 4s, and 3d^5 4s^2 configurations. Computations of A-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree–Fock, Thomas–Fermi–Dirac potential, and Dirac–Fock methods, while the R-matrix plus intermediate coupling frame transformation, Breit–Pauli R-matrix, and Dirac R-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe II] emission in the IR and optical regions as well as in the UV Fe II absorption spectra. Finally, we provide diagnostic line ratios and line emissivities for emission spectroscopy as well as column densities for absorption spectroscopy. All atomic data and models are available online and through the AtomPy atomic data curation environment.

  12. The Blazar 3C 66A in 2003-2004: hadronic versus leptonic model fits

    SciTech Connect

    Reimer, A.

    2008-12-24

    The low-frequency peaked BL Lac object 3C 66A was the subject of an extensive multi-wavelength campaign from July 2003 till April 2004, which included quasi-simultaneous observations at optical, X-rays and very high energy gamma-rays. Here we apply the hadronic Synchrotron-Proton Blazar (SPB) model to the observed spectral energy distribution time-averaged over a flaring state, and compare the resulting model fits to those obtained from the application of the leptonic Synchrotron-Self-Compton (SSC) model. The results are used to identify diagnostic key predictions of the two blazar models for future multi-wavelength observations.

  13. Modeling of Turbulence Effects on Liquid Jet Atomization and Breakup

    NASA Technical Reports Server (NTRS)

    Trinh, Huu P.; Chen, C. P.

    2005-01-01

    Recent experimental investigations and physical modeling studies have indicated that turbulence behaviors within a liquid jet have considerable effects on the atomization process. This study aims to model the turbulence effect in the atomization process of a cylindrical liquid jet. Two widely used models, the Kelvin-Helmholtz (KH) instability of Reitz (blob model) and the Taylor-Analogy-Breakup (TAB) secondary droplet breakup by O'Rourke et al., are further extended to include turbulence effects. In the primary breakup model, the level of the turbulence effect on the liquid breakup depends on the characteristic scales and the initial flow conditions. For the secondary breakup, an additional turbulence force acting on parent drops is modeled and integrated into the TAB governing equation. The drop size formed from this breakup regime is estimated based on the energy balance before and after the breakup occurrence. This paper describes the theoretical development of the current models, called "T-blob" and "T-TAB", for primary and secondary breakup, respectively. Several assessment studies are also presented in this paper.

  14. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  15. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes.

    PubMed

    Shekhar, Karthik; Ruberman, Claire F; Ferguson, Andrew L; Barton, John P; Kardar, Mehran; Chakraborty, Arup K

    2013-12-01

    Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses. PMID:24483484
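
    The spin-model idea can be sketched with a toy Ising-like energy. The fields, couplings, and sequence length below are made-up illustrative values, not the paper's inferred HIV landscape:

```python
import numpy as np

# Toy sketch: map a binary sequence (1 = mutant, 0 = wild type) to +/-1
# spins and take "fitness" as the negative of an Ising-like energy with
# site-wise fields h and pairwise couplings J, so lower energy = fitter.
rng = np.random.default_rng(0)
L = 8                                   # sequence length (sites)
h = rng.normal(size=L)                  # site-wise fields (hypothetical)
J = rng.normal(size=(L, L)) * 0.1
J = (J + J.T) / 2.0                     # symmetric couplings
np.fill_diagonal(J, 0.0)

def fitness(seq):
    s = np.where(seq == 1, 1.0, -1.0)
    energy = -h @ s - 0.5 * s @ J @ s
    return -energy                      # fitness = -E

wild_type = np.zeros(L, dtype=int)
mutant = wild_type.copy()
mutant[3] = 1                           # single-site mutant strain
print(fitness(wild_type), fitness(mutant))
```

Inference in the paper goes the other way (fitting h and J to observed sequence statistics); this sketch only shows how an inferred model ranks strains by fitness.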

  16. unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance

    USGS Publications Warehouse

    Fiske, Ian J.; Chandler, Richard B.

    2011-01-01

    Ecological research uses data collection techniques that are prone to substantial and unique types of measurement error to address scientific questions about species abundance and distribution. These data collection schemes include a number of survey methods in which unmarked individuals are counted, or determined to be present, at spatially referenced sites. Examples include site occupancy sampling, repeated counts, distance sampling, removal sampling, and double observer sampling. To appropriately analyze these data, hierarchical models have been developed to separately model explanatory variables of both a latent abundance or occurrence process and a conditional detection process. Because these models have a straightforward interpretation paralleling mechanisms under which the data arose, they have recently gained immense popularity. The common hierarchical structure of these models is well-suited for a unified modeling interface. The R package unmarked provides such a unified modeling framework, including tools for data exploration, model fitting, model criticism, post-hoc analysis, and model comparison.
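
    The hierarchical occupancy-plus-detection structure can be sketched with the single-season occupancy likelihood. This is an illustrative Python stand-in with a tiny made-up detection matrix (unmarked itself is an R package with a much richer interface):

```python
import numpy as np
from scipy.optimize import minimize

# Single-season occupancy model: psi = P(site occupied), p = P(detection |
# occupied). Rows = sites, columns = repeat visits; 1 = species detected.
Y = np.array([[1, 0, 1],
              [0, 0, 0],
              [1, 1, 0],
              [0, 0, 0],
              [0, 1, 1]])

def neg_log_lik(params):
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(params)))  # logit -> probability
    J = Y.shape[1]
    d = Y.sum(axis=1)                       # detections per site
    lik = np.where(
        d > 0,
        psi * p**d * (1 - p)**(J - d),      # occupied and detected at least once
        psi * (1 - p)**J + (1 - psi),       # occupied-but-missed, or truly absent
    )
    return -np.log(lik).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0])  # optimize on the logit scale
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-res.x))
print(psi_hat, p_hat)
```

The key feature is the second branch of the likelihood: an all-zero history can arise either from an unoccupied site or from an occupied site where every visit missed the species.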

  17. The Chocolate Shop and Atomic Orbitals: A New Atomic Model Created by High School Students to Teach Elementary Students

    ERIC Educational Resources Information Center

    Liguori, Lucia

    2014-01-01

    Atomic orbital theory is a difficult subject for many high school and beginning undergraduate students, as it includes mathematical concepts not yet covered in the school curriculum. Moreover, it requires a certain ability for abstraction and imagination. A new atomic orbital model, "the chocolate shop," created by students…

  18. What is the "best" atomic charge model to describe through-space charge-transfer excitations?

    PubMed

    Jacquemin, Denis; Le Bahers, Tangui; Adamo, Carlo; Ciofini, Ilaria

    2012-04-28

    We investigate the efficiency of several partial atomic charge models (Mulliken, Hirshfeld, Bader, Natural, Merz-Kollman and ChelpG) for describing the through-space charge transfer in push-pull organic compounds with Time-Dependent Density Functional Theory approaches. The results of these models are compared to benchmark values obtained by determining the difference of total densities between the ground and excited states. Both model push-pull oligomers and two classes of "real-life" organic dyes (indoline and diketopyrrolopyrrole) used as sensitisers in solar cell applications have been considered. Though the difference of dipole moments between the ground and excited states is reproduced by most approaches, no atomic charge model is fully satisfactory for reproducing the distance and amount of charge transferred that are provided by the density picture. Overall, the partitioning schemes fitting the electrostatic potential (e.g. Merz-Kollman) stand as the most consistent compromises in the framework of simulating through-space charge transfer, whereas the other models tend to yield qualitatively inconsistent values.
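
    The charge-transfer descriptors discussed above can be illustrated from partial charges alone. The three-atom geometry and the ground/excited-state charges below are entirely hypothetical:

```python
import numpy as np

# Given partial atomic charges for the ground (GS) and excited (ES) states,
# compute the dipole-moment change, the amount of charge transferred, and an
# effective charge-transfer distance (toy analogue of density-based descriptors).
coords = np.array([[0.0, 0.0, 0.0],     # donor end
                   [1.4, 0.0, 0.0],
                   [2.8, 0.0, 0.0]])    # acceptor end (angstrom, hypothetical)
q_gs = np.array([0.20, 0.00, -0.20])    # made-up GS charges (e)
q_es = np.array([0.55, 0.00, -0.55])    # more charge separation on excitation

dq = q_es - q_gs
dmu = (dq[:, None] * coords).sum(axis=0)   # dipole change, e*angstrom
q_ct = dq[dq > 0].sum()                    # amount of charge transferred
d_ct = np.linalg.norm(dmu) / q_ct          # effective CT distance
print(q_ct, d_ct)
```

Here the transferred charge is 0.35 e over an effective distance of 2.8 angstrom; different charge models applied to the same excitation can disagree substantially on both numbers, which is the point of the benchmark.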

  19. A goodness-of-fit test for occupancy models with correlated within-season revisits.

    PubMed

    Wright, Wilson J; Irvine, Kathryn M; Rodhouse, Thomas J

    2016-08-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie-Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie-Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
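
    A join-count-style summary of serial correlation can be sketched very simply. This is a simplified stand-in for the paper's test, using a made-up detection matrix and the most basic independence expectation:

```python
import numpy as np

# Count adjacent 1-1 detection pairs within each site's history and compare
# with the count expected if visits were independent Bernoulli trials with a
# pooled detection rate. An excess of observed joins suggests serial correlation.
Y = np.array([[1, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 0, 0, 0]])            # rows = sites, columns = serial visits

observed_joins = int(np.sum(Y[:, :-1] * Y[:, 1:]))  # adjacent 1-1 pairs
p_hat = Y.mean()                                    # pooled detection rate
n_pairs = Y.shape[0] * (Y.shape[1] - 1)             # adjacent pairs in total
expected_joins = n_pairs * p_hat**2                 # independence expectation
print(observed_joins, expected_joins)
```

The actual test conditions on the fitted occupancy model and uses a formal reference distribution; this sketch only conveys the join-count intuition.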

  20. Optimal circumference reduction of finger models for good prosthetic fit of a thimble-type prosthesis for distal finger amputations.

    PubMed

    Leow, M E; Prosthetist, C; Pho, R W

    2001-01-01

    The prosthetic fit of a thimble-type esthetic silicone prosthesis was retrospectively reviewed in 29 patients who were fitted following distal finger amputations. The aim was to correlate prosthetic fit with the magnitudes of circumference reduction in the finger models used to produce the prostheses and to identify the optimum reduction for the best outcome. A good fit is achieved primarily by making the prosthesis circumferentially smaller than the segment of the residual finger (residuum) over which it "cups". The percentage reduction in circumference of the finger model against the residuum model was calculated by dividing the difference in circumference between the residuum model and the finger model by the residuum model circumference and multiplying the result by 100. The computed percentage circumference reduction in the finger models ranged from small (1-3), moderate (5-7), to large (8-9). Twelve of 15 patients whose finger models had between one to three circumference reductions had a loose prosthetic fit. Only two of 14 patients who had a larger model circumference reduction of between five to nine had loose-fitting prostheses. Two of five patients who had eight to nine model circumference reduction had an uncomfortably tight prosthetic fit. A 5-7% circumference reduction in the finger model was shown in this study to best translate into good fit of a thimble-type prosthesis for distal finger amputations.
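
    The percentage-reduction calculation described above works out as follows (the circumference measurements are hypothetical, in millimetres):

```python
# Percentage circumference reduction of the finger model relative to the
# residuum model, as defined in the abstract.
residuum_circumference = 52.0        # circumference of the residuum model (mm)
finger_model_circumference = 49.0    # circumference of the reduced finger model (mm)

reduction_pct = ((residuum_circumference - finger_model_circumference)
                 / residuum_circumference * 100)
print(round(reduction_pct, 1))       # 5.8, inside the recommended 5-7% band
```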

  1. Chemical domain of QSAR models from atom-centered fragments.

    PubMed

    Kühne, Ralph; Ebert, Ralf-Uwe; Schüürmann, Gerrit

    2009-12-01

    A methodology to characterize the chemical domain of qualitative and quantitative structure-activity relationship (QSAR) models based on the atom-centered fragment (ACF) approach is introduced. ACFs decompose the molecule into structural pieces, with each non-hydrogen atom of the molecule acting as an ACF center. ACFs vary with respect to their size in terms of the path length covered in each bonding direction starting from a given central atom and how comprehensively the neighbor atoms (including hydrogen) are described in terms of element type and bonding environment. In addition to these different levels of ACF definitions, the ACF match mode as degree of strictness of the ACF comparison between a test compound and a given ACF pool (such as from a training set) has to be specified. Analyses of the prediction statistics of three QSAR models with their training sets as well as with external test sets and associated subsets demonstrate a clear relationship between the prediction performance and the levels of ACF definition and match mode. The findings suggest that second-order ACFs combined with a borderline match mode may serve as a generic and at the same time a mechanistically sound tool to define and evaluate the chemical domain of QSAR models. Moreover, four standard categories of the ACF-based membership to a given chemical domain (outside, borderline outside, borderline inside, inside) are introduced that provide more specific information about the expected QSAR prediction performance. As such, the ACF-based characterization of the chemical domain appears to be particularly useful for QSAR applications in the context of REACH and other regulatory schemes addressing the safety evaluation of chemical compounds.
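
    The ACF idea can be illustrated on a hand-coded molecular graph. The molecule (ethanol heavy atoms, C-C-O), the adjacency encoding, and the fragment string format are toy simplifications; real ACF schemes also encode hydrogens, element environments, and bond types:

```python
# Toy atom-centered fragments: for each non-hydrogen atom, record its element
# plus the sorted elements of successive neighbor shells up to a given depth.
atoms = {0: "C", 1: "C", 2: "O"}        # ethanol heavy atoms
bonds = {0: [1], 1: [0, 2], 2: [1]}     # adjacency list

def acf(center, depth):
    """Element of the center plus neighbor shells up to `depth` bonds away."""
    frag, frontier, seen = [atoms[center]], [center], {center}
    for _ in range(depth):
        nxt = sorted(n for a in frontier for n in bonds[a] if n not in seen)
        frag.append("".join(atoms[n] for n in nxt))
        seen.update(nxt)
        frontier = nxt
    return "-".join(frag)

print([acf(a, 2) for a in atoms])       # one second-order ACF per heavy atom
```

Domain membership of a test compound is then judged by whether its ACFs all occur in the pool of ACFs collected from the training set, under the chosen match mode.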

  2. Agricultural case studies of classification accuracy, spectral resolution, and model over-fitting.

    PubMed

    Nansen, Christian; Geremias, Leandro Delalibera; Xue, Yingen; Huang, Fangneng; Parra, Jose Roberto

    2013-11-01

    This paper describes the relationship between spectral resolution and classification accuracy in analyses of hyperspectral imaging data acquired from crop leaves. The main scope is to discuss and reduce the risk of model over-fitting. Over-fitting of a classification model occurs when too many and/or irrelevant model terms are included (i.e., a large number of spectral bands), and it may lead to low robustness/repeatability when the classification model is applied to independent validation data. We outline a simple way to quantify the level of model over-fitting by comparing the observed classification accuracies with those obtained from explanatory random data. Hyperspectral imaging data were acquired from two crop-insect pest systems: (1) potato psyllid (Bactericera cockerelli) infestations of individual bell pepper plants (Capsicum annuum) with the acquisition of hyperspectral imaging data under controlled-light conditions (data set 1), and (2) sugarcane borer (Diatraea saccharalis) infestations of individual maize plants (Zea mays) with the acquisition of hyperspectral imaging data from the same plants under two markedly different image-acquisition conditions (data sets 2a and b). For each data set, reflectance data were analyzed based on seven spectral resolutions by dividing 160 spectral bands from 405 to 907 nm into 4, 16, 32, 40, 53, 80, or 160 bands. In the two data sets, similar classification results were obtained with spectral resolutions ranging from 3.1 to 12.6 nm. Thus, the size of the initial input data could be reduced fourfold with only a negligible loss of classification accuracy. In the analysis of data set 1, several validation approaches all demonstrated consistently that insect-induced stress could be accurately detected and that therefore there was little indication of model over-fitting. In the analyses of data set 2, inconsistent validation results were obtained and the observed classification accuracy (81.06%) was only a few percentage
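
    The band-averaging used to coarsen spectral resolution can be sketched as follows; the reflectance data here are synthetic random numbers standing in for the leaf hyperspectral images:

```python
import numpy as np

# Coarsen 160 spectral bands (405-907 nm in the study) by averaging groups
# of adjacent bands, e.g. 160 -> 40 bands (group size 4).
rng = np.random.default_rng(1)
n_pixels, n_bands = 100, 160
reflectance = rng.random((n_pixels, n_bands))   # synthetic spectra

def rebin(spectra, n_out):
    group = spectra.shape[1] // n_out           # adjacent bands per output bin
    trimmed = spectra[:, :group * n_out]        # drop any remainder bands
    return trimmed.reshape(spectra.shape[0], n_out, group).mean(axis=2)

coarse = rebin(reflectance, 40)                 # 160 -> 40 bands
print(coarse.shape)
```

Reducing the number of input bands this way shrinks the classification model and, as the abstract notes, lowers the risk of over-fitting with little loss of accuracy.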

  3. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment

    PubMed Central

    Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F.

    2009-01-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The “simultaneous” algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The “project-out” algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the “simultaneous” AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the “exhaustive local search” (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database. PMID:20046797

  4. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    PubMed

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database. PMID:20046797

  5. Modeling exact exchange potential in spherically confined atoms.

    PubMed

    Vyboishchikov, Sergei F

    2015-10-15

    In this work, local exchange potentials corresponding to the Hartree-Fock (HF) electron density have been obtained using the Zhao-Morrison-Parr method for a number of closed-shell confined atoms and ions. The exchange potentials obtained and the resulting density were compared with those given by the Becke-Johnson (BJ) model potential. It is demonstrated that introducing a scaling factor to the BJ potential allows improving the quality of the resulting density. The optimum scaling factor increases with decreasing confinement radius. The performance of Karasiev and Ludeña's SCα-LDA method as well as of the Becke-88 exchange potential for reproducing the HF electron densities in confined atoms has also been examined.

  6. Aeroelastic modeling for the FIT (Functional Integration Technology) team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    As part of Langley Research Center's commitment to developing multidisciplinary integration methods to improve aerospace systems, the Functional Integration Technology (FIT) team was established to perform dynamics integration research using an existing aircraft configuration, the F/A-18. An essential part of this effort has been the development of a comprehensive simulation modeling capability that includes structural, control, and propulsion dynamics as well as steady and unsteady aerodynamics. The structural and unsteady aerodynamics contributions come from an aeroelastic model. Some details of the aeroelastic modeling done for the FIT team research are presented. Particular attention is given to work done in the area of correction factors to unsteady aerodynamics data.

  7. Advanced material modelling in numerical simulation of primary acetabular press-fit cup stability.

    PubMed

    Souffrant, R; Zietz, C; Fritsche, A; Kluess, D; Mittelmeier, W; Bader, R

    2012-01-01

    Primary stability of artificial acetabular cups, used for total hip arthroplasty, is required for the subsequent osteointegration and good long-term clinical results of the implant. Although closed-cell polymer foams represent an adequate bone substitute in experimental studies investigating primary stability, correct numerical modelling of this material depends on the parameter selection. Material parameters necessary for crushable foam plasticity behaviour were derived from numerical simulations matched with experimental tests of the polymethacrylimide raw material. Experimental primary stability tests of acetabular press-fit cups, consisting of static shell assembly with consecutive pull-out and lever-out testing, were subsequently simulated using finite element analysis. Identified and optimised parameters allowed the accurate numerical reproduction of the raw material tests. Correlation between experimental tests and the numerical simulation of primary implant stability depended on the value of interference fit. However, the validated material model provides the opportunity for subsequent parametric numerical studies.

  8. Validation of a Best-Fit Pharmacokinetic Model for Scopolamine Disposition after Intranasal Administration

    NASA Technical Reports Server (NTRS)

    Wu, L.; Chow, D. S-L.; Tam, V.; Putcha, L.

    2015-01-01

    An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of motion sickness. Bioavailability and pharmacokinetics (PK) were determined per Investigational New Drug (IND) evaluation guidance by the Food and Drug Administration. Earlier, we reported the development of a PK model that can predict the relationship between plasma, saliva and urinary scopolamine (SCOP) concentrations using data collected from an IND clinical trial with INSCOP. This data analysis project is designed to validate the reported best-fit PK model for SCOP by comparing observed and model-predicted SCOP concentration-time profiles after administration of INSCOP.

  9. Effects of new mutations on fitness: insights from models and data.

    PubMed

    Bataillon, Thomas; Bailey, Susan F

    2014-07-01

    The rates and properties of new mutations affecting fitness have implications for a number of outstanding questions in evolutionary biology. Obtaining estimates of mutation rates and effects has historically been challenging, and little theory has been available for predicting the distribution of fitness effects (DFE); however, there have been recent advances on both fronts. Extreme-value theory predicts the DFE of beneficial mutations in well-adapted populations, while phenotypic fitness landscape models make predictions for the DFE of all mutations as a function of the initial level of adaptation and the strength of stabilizing selection on traits underlying fitness. Direct experimental evidence confirms predictions on the DFE of beneficial mutations and favors distributions that are roughly exponential but bounded on the right. A growing number of studies infer the DFE using genomic patterns of polymorphism and divergence, recovering a wide range of DFE. Future work should be aimed at identifying factors driving the observed variation in the DFE. We emphasize the need for further theory explicitly incorporating the effects of partial pleiotropy and heterogeneity in the environment on the expected DFE.

  10. Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.

    PubMed

    Ding, Cherng G; Jane, Ten-Der

    2012-09-01

    In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capabilities of modeling the change over time for latent constructs, measured by multiple indicators; embedding LGM into a larger latent variable model; incorporating measurement models for latent predictors; and better assessing model fit and the flexibility in specifying error covariance structures. The strength of PROC CALIS is accompanied by technical coding work, which needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with the data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.

  11. Model Fit to Experimental Data for Foam-Assisted Deep Vadose Zone Remediation

    SciTech Connect

    Roostapour, A.; Lee, G.; Zhong, Lirong; Kam, Seung I.

    2014-01-15

    Foam has been regarded as a promising means of remedial amendment delivery to overcome subsurface heterogeneity in subsurface remediation processes. This study investigates how a foam model, developed by the Method of Characteristics and fractional flow analysis in the companion paper of Roostapour and Kam (2012), can be applied to fit a set of existing laboratory flow experiments (Zhong et al., 2009) in an application relevant to deep vadose zone remediation. This study reveals a few important insights regarding foam-assisted deep vadose zone remediation: (i) the mathematical framework established for foam modeling can fit typical flow experiments, matching wave velocities, saturation history, and pressure responses; (ii) the set of input parameters may not be unique for the fit, and therefore conducting experiments to measure basic model parameters related to relative permeability, initial and residual saturations, surfactant adsorption and so on should not be overlooked; and (iii) gas compressibility plays an important role in data analysis and thus should be handled carefully in laboratory flow experiments. Foam kinetics, causing foam texture to reach its steady-state value slowly, may impose additional complications.

  12. Design and verifications of an eye model fitted with contact lenses for wavefront measurement systems

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Chen, Jia-Hong; Chang, Rong-Jie; Wang, Chung-Yen; Hsu, Wei-Yao; Wang, Pei-Jen

    2015-09-01

    Contact lenses are typically measured by the wet-box method because of the high optical power resulting from the anterior central curvature of the cornea, even though the back vertex power of the lenses is small. In this study, an optical measurement system based on the Shack-Hartmann wavefront principle was established to investigate the aberrations of soft contact lenses. Fitting conditions were mimicked to study the optical design of an eye model with various topographical shapes in the anterior cornea. Initially, the contact lenses were measured by the wet-box method, and then by fitting the various topographical shapes of cornea to the eye model. In addition, an optics simulation program was employed to determine the sources of errors and assess the accuracy of the system. Finally, samples of soft contact lenses with various diopters were measured, and both simulations and experimental results were compared for resolving the controversies of fitting contact lenses to an eye model for optical measurements. More importantly, the results show that the proposed system can be employed for the study of primary aberrations in contact lenses.

  13. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  14. Bohr model and dimensional scaling analysis of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Urtekin, Kerim

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to many-electron systems, such as molecules, and nonhydrogenic atoms. It is the central theme of this dissertation to display with examples and applications the implementation of a simple and successful extension of Bohr's planetary model of the hydrogenic atom, which has recently been developed by an atomic and molecular theory group from Texas A&M University. This "extended" Bohr model, which can be derived from quantum mechanics using the well-known dimensional scaling technique, is used to yield potential energy curves of H2 and several more complicated molecules, such as LiH, Li2, BeH, He2 and H3, with accuracies strikingly comparable to those obtained from the more lengthy and rigorous "ab initio" computations, and the added advantage that it provides a rather insightful and pictorial description of how electrons behave to form chemical bonds, a theme not central to "ab initio" quantum chemistry. Further investigation directed to CH, and the four-atom system H4 (with both linear and square configurations), via the interpolated Bohr model, and the constrained Bohr model (with an effective potential), respectively, is reported. The extended model is also used to calculate correlation energies. The model is readily applicable to the study of molecular species in the presence of strong magnetic fields, as is the case in the vicinities of white dwarfs and neutron stars. We find that the magnetic field increases the binding energy and decreases the bond length. Finally, an elaborative review of doubly coupled quantum dots for a derivation of the electron exchange energy, a straightforward application of the Heitler-London method of quantum molecular chemistry, concludes the dissertation. The highlights of the research are (1) a bridging together of the pre- and post quantum mechanical descriptions of the chemical bond (Bohr-Sommerfeld vs. Heisenberg-Schrodinger), and

  15. Uncertainty Estimation in Fitting Parameterized Models to Solar Flare Hard X-ray Spectra

    NASA Astrophysics Data System (ADS)

    Ireland, Jack; Tolbert, A. K.; Holman, G. D.; Dennis, B. R.; Schwartz, R. A.

    2012-05-01

    We compare four different methods of estimating the uncertainty in fit parameters when fitting models to Ramaty High Energy Solar Spectroscopic Imager (RHESSI) spectral data. Two flare spectra are studied: one from the GOES (Geostationary Operational Environmental Satellite) X1.3 class flare of 19-January-2005, and the other from the X4.8 flare of 23-July-2002. Three of our methods rely on assumptions about the shape of the hyper-surface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, evaluated around the minimum value of the hyper-surface, to generate uncertainty estimates. The fourth method is based on Bayesian data analysis techniques. The four methods give approximately equal uncertainty estimates for the 19-January-2005 model parameters, but give very different uncertainty estimates for the 23-July-2002 model parameters. This is because the assumptions required for the first three methods hold approximately for the 19-January-2005 analysis, but do not hold for the 23-July-2002 analysis. The Bayesian-based method does not require these assumptions, and so can give reliable uncertainty estimates regardless of the shape of the hyper-surface formed by the model fit to the data. We show that for the 23-July-2002 spectrum, there is a 95% probability that the low energy cutoff to the model distribution of emitting flare electrons lies below approximately 40 keV, and a 68% probability that it lies in the estimated range 7-36 keV. The most probable flare electron energy flux is approximately 10^28.1 erg s^-1, with a 68% credible interval estimated at 10^28.1-10^29.1 erg s^-1, and a 95% credible interval estimated at 10^28.0-10^30.3 erg s^-1. For the 19-January-2005 spectrum, these quantities are more tightly constrained to 105±4 keV and 10^27.66±0.01 erg s^-1 (68% uncertainties). The reasons for these disparate results are discussed. 
This work is funded by the NASA Solar and Heliospheric
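
The curvature-based uncertainty estimates discussed above can be illustrated with a toy fit. A minimal sketch (synthetic straight-line data, not the RHESSI spectral models of the abstract): at the minimum of the chi-square hypersurface, the 1-sigma uncertainty of a single parameter follows from the second derivative via the delta-chi-square = 1 criterion.

```python
import numpy as np

def chi2(a, x, y, err):
    """Weighted sum of squared residuals for the model y = a*x."""
    return np.sum(((y - a * x) / err) ** 2)

# Synthetic data (illustrative only; not the flare spectra from the abstract).
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
err = np.full_like(x, 0.5)
y = 2.0 * x + rng.normal(0.0, 0.5, x.size)

# Best-fit slope for a straight line through the origin (weighted least squares).
a_hat = np.sum(x * y / err**2) / np.sum(x**2 / err**2)

# Curvature-based 1-sigma uncertainty: for a parabolic chi-square surface the
# delta-chi2 = 1 contour gives sigma = sqrt(2 / d2chi2/da2), with the second
# derivative estimated here by central finite differences.
h = 1e-4
curv = (chi2(a_hat + h, x, y, err) - 2.0 * chi2(a_hat, x, y, err)
        + chi2(a_hat - h, x, y, err)) / h**2
sigma_a = np.sqrt(2.0 / curv)

# Analytic result for this linear model, for comparison.
sigma_exact = 1.0 / np.sqrt(np.sum(x**2 / err**2))
```

For a model that is linear in its parameter, chi-square is exactly parabolic and the two estimates coincide; the Bayesian approach of the abstract is needed precisely when the surface is not parabolic.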

  16. Role Modeling Attitudes, Physical Activity and Fitness Promoting Behaviors of Prospective Physical Education Specialists and Non-Specialists.

    ERIC Educational Resources Information Center

    Cardinal, Bradley J.; Cardinal, Marita K.

    2002-01-01

    Compared the role modeling attitudes and physical activity and fitness promoting behaviors of undergraduate students majoring in physical education and in elementary education. Student teacher surveys indicated that physical education majors had more positive attitudes toward role modeling physical activity and fitness promoting behaviors and…

  17. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    PubMed

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G^2 statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P approximately 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony-informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report. PMID:19851702
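
The general likelihood-ratio statistic referred to above, G = 2 Σ O ln(O/E), compares observed pattern counts with model-expected counts and is referred to a chi-square distribution. A minimal sketch with made-up counts (not the placental mammal data of the abstract):

```python
import numpy as np
from scipy.stats import chi2

def g_statistic(observed, expected):
    """Likelihood-ratio G statistic, G = 2 * sum(O * ln(O/E)).
    Categories with zero observed count contribute nothing to the sum."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    mask = o > 0
    return 2.0 * np.sum(o[mask] * np.log(o[mask] / e[mask]))

# Toy site-pattern counts versus model-expected counts (illustrative numbers).
obs = np.array([90, 30, 15, 10, 5])
exp = np.array([85, 35, 12, 12, 6])

G = g_statistic(obs, exp)
p = chi2.sf(G, df=len(obs) - 1)   # degrees of freedom for this simple case
```

Here a small G (large p) means the multinomial counts are consistent with the model; the marginalization described in the abstract pools sparse patterns so that the chi-square approximation, and hence the test's power, is recovered.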

  18. Masses of atomic nuclei in the infinite nuclear matter model

    SciTech Connect

    Satpathy, L.; Nayak, R.C.

    1988-07-01

    We present mass excesses of 3481 nuclei in the range 18 ≤ A ≤ 267 using the infinite nuclear matter model based on the Hugenholtz-Van Hove theorem. In this model the ground-state energy of a nucleus of asymmetry β is considered equivalent to the energy of a perfect sphere made up of the infinite nuclear matter of the same asymmetry plus the residual energy due to shell effects, deformation, etc., called the local energy η. In this model there are two kinds of parameters: global and local. The five global parameters characterizing the properties of the above sphere are determined by fitting the masses of all nuclei (756) in the recent mass table of Wapstra et al. having error bars less than 30 keV. The local parameters are determined for 25 regions, each spanning 8 or 10 A values. The total number of parameters, including the five global ones, is 238. The root-mean-square deviation of the calculated masses from experiment is 397 keV for the 1572 nuclei used in the least-squares fit. copyright 1988 Academic Press, Inc.

  19. Assessment of Some Atomization Models Used in Spray Calculations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Bulzin, Dan

    2011-01-01

    The paper presents the results from a validation study undertaken as a part of NASA's fundamental aeronautics initiative on high altitude emissions in order to assess the accuracy of several atomization models used in both non-superheat and superheat spray calculations. As a part of this investigation we have undertaken the validation based on four different cases to investigate the spray characteristics of (1) a flashing jet generated by the sudden release of pressurized R134A from a cylindrical nozzle, (2) a liquid jet atomizing in a subsonic cross flow, (3) a Parker-Hannifin pressure swirl atomizer, and (4) a single-element Lean Direct Injector (LDI) combustor experiment. These cases were chosen because of their importance in some aerospace applications. The validation is based on some 3D and axisymmetric calculations involving both reacting and non-reacting sprays. In general, the predicted results provide reasonable agreement for both mean droplet sizes (D32) and average droplet velocities, but mostly underestimate droplet sizes in the inner radial region of a cylindrical jet.

  20. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    PubMed Central

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  1. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  2. Fitting a Two-Component Scattering Model to Polarimetric SAR Data from Forests

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony

    2007-01-01

    Two simple scattering mechanisms are fitted to polarimetric synthetic aperture radar (SAR) observations of forests. The mechanisms are canopy scatter from a reciprocal medium with azimuthal symmetry and a ground scatter term that can represent double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants or Bragg scatter from a moderately rough surface, which is seen through a layer of vertically oriented scatterers. The model is shown to represent the behavior of polarimetric backscatter from a tropical forest and two temperate forest sites by applying it to data from the National Aeronautics and Space Administration/Jet Propulsion Laboratory's Airborne SAR (AIRSAR) system. Scattering contributions from the two basic scattering mechanisms are estimated for clusters of pixels in polarimetric SAR images. The solution involves the estimation of four parameters from four separate equations. This model-fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem. The model is used to develop an understanding of the ground-trunk double-bounce scattering that is present in the data, which is seen to vary considerably as a function of incidence angle. Two parameters in the model fit appear to exhibit sensitivity to vegetation canopy structure, which is worth further exploration. Results from the model fit for the ground scattering term are compared with estimates from a forward model and shown to be in good agreement. The behavior of the scattering from the ground-trunk interaction is consistent with the presence of a pseudo-Brewster angle effect for the air-trunk scattering interface. If the Brewster angle is known, it is possible to directly estimate the real part of the dielectric constant of the trunks, a key variable in forward modeling of backscatter from forests. It is also shown how, with a priori knowledge of the forest height, an estimate for the

  3. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients worldwide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry as to what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall, further development is needed to improve and validate animal models for the diverse areas in epilepsy research where suitable fit-for-purpose models are urgently needed in the search for more effective treatments.

  4. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients worldwide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry as to what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall, further development is needed to improve and validate animal models for the diverse areas in epilepsy research where suitable fit-for-purpose models are urgently needed in the search for more effective treatments. PMID:27505294

  5. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the Maximum-Likelihood Estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data-modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions present similar issues. My program uses a robust method to preset a parameter to overcome the non-convergence computational issue. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also given with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has potential to be hosted online. Java is used for the software's core computations, and an optional interface to the statistical package R is provided.
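
The conditional intensity at the heart of ETAS fitting can be sketched in its purely temporal form: a constant background rate plus a sum of Omori-law contributions from past events, each scaled by the productivity of its magnitude. The parameter values and toy catalog below are arbitrary illustrations, not fitted MLEs:

```python
import numpy as np

def etas_intensity(t, history_t, history_m, mu, K, alpha, c, p):
    """Temporal ETAS conditional intensity (cf. Ogata 1998):
    lambda(t) = mu + sum_{t_i < t} K * exp(alpha * m_i) / (t - t_i + c)^p.
    mu is the background rate; K, alpha, c, p govern triggering."""
    past = history_t < t
    dt = t - history_t[past]
    return mu + np.sum(K * np.exp(alpha * history_m[past]) / (dt + c) ** p)

# Toy catalog: event times (days) and magnitudes above a catalog threshold.
times = np.array([0.0, 1.2, 3.5])
mags = np.array([1.0, 0.5, 2.0])

# Intensity shortly after the last (largest) event; hypothetical parameters.
lam = etas_intensity(4.0, times, mags, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1)
```

In an EM fit of the kind cited above, each event is probabilistically attributed either to the background term or to triggering by an earlier event, and the parameters are re-estimated from those attributions until convergence.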

  6. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    NASA Astrophysics Data System (ADS)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  7. T Dwarfs Model Fits for Spectral Standards at Low Spectral Resolution

    NASA Astrophysics Data System (ADS)

    Giorla, Paige; Rice, Emily L.; Douglas, Stephanie T.; Mace, Gregory N.; McLean, Ian S.; Martin, Emily C.; Logsdon, Sarah E.

    2015-01-01

    We present model fits to the T dwarf spectral standards which cover spectral types from T0 to T8. For a complete spectral range analysis, we have included a T9 object which is not considered a spectral standard. We have low-resolution (R~120) SpeX Prism spectra and a variety of higher resolution (R~1,000-25,000) spectra for all nine of these objects. The synthetic spectra are from the BT-SETTL 2013 models. We compare the best fit parameters from low resolution spectra to results from the higher resolution fits of prominent spectral type dependent features, where possible. Using the T dwarf standards to calibrate the effective temperature and gravity parameters for each spectral type, we will expand our analysis to a larger, more varied sample, which includes over one hundred field T dwarfs, for which we have a variety of low, medium, and high resolution spectra from the SpeX Prism Library and the NIRSPEC Brown Dwarf Spectroscopic Survey. This sample includes a handful of peculiar and red T dwarfs, for which we explore the causes of their non-normalcy.

  8. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
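
The hypergeometric-form tests described above reduce to a series of contingency-table chi-square tests. A minimal sketch with a hypothetical 2x2 release-recapture table (illustrative numbers, not the meadow vole data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical table: rows = release occasion, columns = recaptured later
# vs. never seen again.  Under the Jolly-Seber assumptions, recapture odds
# should not depend on release occasion.
table = np.array([[30, 70],
                  [45, 55]])

# chi2_contingency applies Yates continuity correction for 2x2 tables.
chi2_stat, p_value, dof, expected = chi2_contingency(table)
```

A small p-value here flags a departure from the model assumptions (e.g., capture heterogeneity); in the paper's procedure a series of such independent tables is tested and the component chi-squares can be summed.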

  9. Calculating the parameters of full lightning impulses using model-based curve fitting

    SciTech Connect

    McComb, T.R.; Lagnese, J.E.

    1991-10-01

    In this paper a brief review is presented of the techniques used for the evaluation of the parameters of high voltage impulses and the problems encountered. The determination of the best smooth curve through oscillations on a high voltage impulse is the major problem limiting the automatic processing of digital records of impulses. Non-linear regression, based on simple models, is applied to the analysis of simulated and experimental data of full lightning impulses. Results of model fitting to four different groups of impulses are presented and compared with some other methods. Plans for the extension of this work are outlined.

  10. Extended-Drude model to fit infrared conductivity cuprate laser-ablated films

    SciTech Connect

    Pessaud, S.; Sousa, D. de (Centre de Recherche sur la Physique des Hautes Temperatures); Lobo, R.; Gervais, F. (Lab. d'Electrodynamique des Materiaux Avances)

    1998-12-20

    An extended-Drude model, implying a simple form for the self-energy function of the mobile charge-carrier response, has been applied to fitting the infrared and visible reflectivity spectra of simple cuprates. Excellent fits are obtained in a wide spectral range, from 4 meV to 4 eV, with a very restricted number of adjustable parameters. The optical conductivity obtained with this procedure is highly different from the Kramers-Kronig transformation of reflectivity spectra. The same procedure has been applied to characterize the infrared conductivity of multi-target laser-ablated films built via intergrowth of YBa2Cu3O7 and MCuO2 (M = Ca, Sr).
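
For context, the simple Drude term that extended-Drude analyses generalize can be sketched directly; this is a minimal illustration in arbitrary units, not the authors' self-energy parameterization:

```python
import numpy as np

def drude_sigma(omega, sigma_dc, tau):
    """Simple Drude optical conductivity: sigma(w) = sigma_dc / (1 - i*w*tau).
    The extended-Drude approach promotes 1/tau (and the mass) to
    frequency-dependent quantities via a self-energy."""
    return sigma_dc / (1.0 - 1j * omega * tau)

# Frequencies in units of 1/tau (illustrative grid).
omega = np.linspace(0.0, 5.0, 6)
sigma = drude_sigma(omega, sigma_dc=1.0, tau=1.0)

# Re(sigma) is a Lorentzian of half-width 1/tau: at omega*tau = 1
# the real part has fallen to half its dc value.
```

Fitting such a parametric form to reflectivity, as in the abstract, constrains the conductivity differently from a model-free Kramers-Kronig inversion, which is why the two can disagree.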

  11. Computer simulation of liquid cesium using embedded atom model

    NASA Astrophysics Data System (ADS)

    Belashchenko, D. K.; Nikitin, N. Yu

    2008-02-01

    A new method is presented for constructing an embedded-atom model (EAM) potential for liquid metals. The method directly uses the pair correlation function (PCF) of the liquid metal near the melting temperature. Because of the specific analytic form of this EAM potential, the pair term of the potential can be calculated from the pair correlation function using, for example, the Schommers algorithm. The other parameters of the EAM potential may be found using the potential energy, bulk modulus, and pressure at selected conditions: near the melting temperature, at very high temperature, or in a strongly compressed state. We used a simple exponential formula for the effective EAM electronic density and a polynomial series for the embedding energy. The molecular dynamics method was applied with the Verlet algorithm. A series of models with 1968 atoms in the basic cube was constructed over the temperature interval 323-1923 K. The thermodynamic properties, structure data, and self-diffusion coefficients of liquid cesium are calculated. In general, agreement between the model data and known experimental values is reasonable. An estimate is given for the critical temperature of cesium models with the EAM potential.

  12. Bounds on collapse models from cold-atom experiments

    NASA Astrophysics Data System (ADS)

    Bilardello, Marco; Donadi, Sandro; Vinante, Andrea; Bassi, Angelo

    2016-11-01

    The spontaneous localization mechanism of collapse models induces a Brownian motion in all physical systems. This effect is very weak, but experimental progress in creating ultracold atomic systems can be used to detect it. In this paper, we consider a recent experiment (Kovachy et al., 2015), where an atomic ensemble was cooled down to picokelvins. Any Brownian motion induces an extra increase in the position variance of the gas. We study this effect by solving the dynamical equations for the Continuous Spontaneous Localization (CSL) model, as well as for its non-Markovian and dissipative extensions. The resulting bounds, at the 95% confidence level, are beaten only by measurements of spontaneous X-ray emission and by cantilever experiments (in the latter case, only for r_C ≥ 10^-7 m, where r_C is one of the two collapse parameters of the CSL model). We show that, contrary to the bounds given by X-ray measurements, non-Markovian effects do not change the bounds for any reasonable choice of a frequency cutoff in the spectrum of the collapse noise; the bounds considered here are therefore more robust. We also show that dissipative effects are unimportant over a large range of noise temperatures, while for low temperatures the excluded region in parameter space shrinks as the temperature decreases.

  13. The effects of floral mimics and models on each others' fitness

    PubMed Central

    Anderson, Bruce; Johnson, Steven D

    2006-01-01

    Plants that lack floral rewards may nevertheless attract pollinators by mimicking the flowers of rewarding plants. It has been suggested that both mimics and models should suffer reduced fitness when mimics are abundant relative to their models. By manipulating the relative densities of an orchid mimic Disa nivea and its rewarding model Zaluzianskya microsiphon in small experimental patches within a larger population we demonstrated that the mimic does indeed suffer reduced pollination success when locally common relative to its model. Behavioural experiments suggest that this phenomenon results from the tendency of the long-proboscid fly pollinator to avoid visits to neighbouring plants when encountering the mimic. No negative effect of the mimic on the pollination success of the model was detected. We propose that changes in pollinator flight behaviour, rather than pollinator conditioning, are likely to account for negative frequency-dependent reproductive success in deceptive orchids. PMID:16627282

  14. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    SciTech Connect

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-15

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  15. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    NASA Astrophysics Data System (ADS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
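
The Gaussian-fitting idea described above can be illustrated in one dimension with an artificial, slightly non-Gaussian correlator (a stand-in for the hydrodynamic emission functions of the abstract, not the authors' models). Since ln(C - 1) is linear in q^2 for a Gaussian source, the fitted HBT radius follows from a single least-squares slope:

```python
import numpy as np

# One-dimensional two-particle correlator C(q) for a source that deviates
# mildly from Gaussian form (the 0.05*(q/0.1)^4 factor is an arbitrary,
# illustrative distortion).
hbarc = 0.1973                      # GeV*fm
R_true = 5.0                        # source radius, fm
q = np.linspace(0.01, 0.12, 40)     # relative momentum, GeV/c
corr = 1.0 + np.exp(-(q * R_true / hbarc) ** 2) * (1.0 + 0.05 * (q / 0.1) ** 4)

# "Analytic" Gaussian fit: ln(C - 1) = -(R/hbarc)^2 * q^2 + const is linear
# in q^2, so a weighted/unweighted linear fit yields the squared radius.
slope = np.polyfit(q**2, np.log(corr - 1.0), 1)[0]
R_fit = hbarc * np.sqrt(-slope)
```

For a truly Gaussian correlator the fit recovers R exactly; the small mismatch here mimics, in miniature, the difference between fitted HBT radii and space-time variances that the paper studies.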

  16. Model fitting of kink waves in the solar atmosphere: Gaussian damping and time-dependence

    NASA Astrophysics Data System (ADS)

    Morton, R. J.; Mooroogen, K.

    2016-09-01

    Aims: Observations of the solar atmosphere have shown that magnetohydrodynamic waves are ubiquitous throughout it. Improvements in instrumentation and in the techniques used for measurement of the waves now enable subtleties of competing theoretical models to be compared with the observed wave behaviour. Some studies have already begun this process. However, the techniques employed for model comparison have generally been unsuitable and can lead to erroneous conclusions about the best model. The aim here is to introduce some robust statistical techniques for model comparison to the solar waves community, drawing on experience from other areas of astrophysics. In the process, we also aim to investigate the physics of coronal loop oscillations. Methods: The methodology exploits least-squares fitting to compare models to observational data. We demonstrate that the residuals between the model and observations contain significant information about the ability of the model to describe the observations, and show how they can be assessed using various statistical tests. In particular we discuss the Kolmogorov-Smirnov one- and two-sample tests, as well as the runs test. We also highlight the importance of including any observational trend line in the model-fitting process. Results: To demonstrate the methodology, an observation of an oscillating coronal loop undergoing standing kink motion is used. The model comparison techniques provide evidence that a Gaussian damping profile describes the observed wave attenuation better than the often-used exponential profile. This supports previous analysis from Pascoe et al. (2016, A&A, 585, L6). Further, we use the model comparison to provide evidence of time-dependent wave properties of a kink oscillation, attributing the behaviour to the thermodynamic evolution of the local plasma.
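
The runs test mentioned above is easy to sketch: a mis-specified damping profile leaves slowly varying structure in the residuals, producing far fewer sign runs than random noise would. A minimal implementation (illustrative, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of the residuals.
    Returns (z, two-sided p-value); systematic structure left by a bad
    model gives too few runs, hence a strongly negative z."""
    signs = np.sign(residuals)
    signs = signs[signs != 0]
    n_pos = np.sum(signs > 0)
    n_neg = np.sum(signs < 0)
    runs = 1 + np.sum(signs[1:] != signs[:-1])
    mean = 2.0 * n_pos * n_neg / (n_pos + n_neg) + 1.0
    var = (mean - 1.0) * (mean - 2.0) / (n_pos + n_neg - 1.0)
    z = (runs - mean) / np.sqrt(var)
    return z, 2.0 * norm.sf(abs(z))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)

# Residuals of a well-specified model: pure noise.  Residuals of a
# mis-specified model: a leftover low-frequency trend plus small noise.
good = rng.normal(0.0, 1.0, t.size)
bad = np.sin(2.0 * np.pi * t / 10.0) + rng.normal(0.0, 0.1, t.size)

z_good, p_good = runs_test(good)
z_bad, p_bad = runs_test(bad)
```

The structured residuals are rejected decisively, which is the sense in which residual diagnostics discriminate between, say, Gaussian and exponential damping profiles fit to the same oscillation.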

  17. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J2) statistics can be applied directly. In a simulation study, TG, HL, and J2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J2. PMID:26584470
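
The grouping idea behind a Hosmer-Lemeshow-type statistic can be sketched in a few lines: sort observations by model probability, split them into g groups, and compare observed with expected event counts in each group. This toy version uses invented logistic data and, for brevity, plugs the true probabilities in where fitted MLE values would normally go; the reference distribution is taken as chi-square with g - 2 degrees of freedom, as is conventional.

```python
import math, random

random.seed(1)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
p = [1 / (1 + math.exp(0.5 - 1.2 * xi)) for xi in x]   # logistic model
y = [1 if random.random() < pi else 0 for pi in p]

# Hosmer-Lemeshow: group by deciles of model probability and compare
# observed event counts with expected ones in each group
pairs = sorted(zip(p, y))
g = 10
C = 0.0
for k in range(g):
    chunk = pairs[k * n // g:(k + 1) * n // g]
    obs = sum(yi for _, yi in chunk)
    exp_ = sum(pi for pi, _ in chunk)
    nk, pbar = len(chunk), exp_ / len(chunk)
    C += (obs - exp_) ** 2 / (nk * pbar * (1 - pbar))

def chi2_sf(xv, df):
    # chi-square survival function, valid for even df
    h = df // 2
    return math.exp(-xv / 2) * sum((xv / 2) ** i / math.factorial(i)
                                   for i in range(h))

pval = chi2_sf(C, g - 2)
print("HL statistic:", round(C, 2), "p-value:", round(pval, 3))
```

Since the probabilities here come from the correct model, the statistic should behave like a chi-square draw and the p-value should usually be unremarkable.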

  18. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    NASA Technical Reports Server (NTRS)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Shoulder injury is one of the most severe risks that have the potential to impair crewmembers' performance and health in long duration space flight. Overall, 64% of crewmembers experience shoulder pain after extra-vehicular training in a space suit, and 14% of symptomatic crewmembers require surgical repair (Williams & Johnson, 2003). Suboptimal suit fit, in particular at the shoulder region, has been identified as one of the predominant risk factors. However, traditional suit fit assessments and laser scans represent only a single person's data, and thus may not be generalized across wide variations of body shapes and poses. The aim of this work is to develop a software tool based on a statistical analysis of a large dataset of crewmember body shapes. This tool can accurately predict the skin deformation and shape variations for any body size and shoulder pose for a target population, from which the geometry can be exported and evaluated against suit models in commercial CAD software. A preliminary software tool was developed by statistically analyzing 150 body shapes matched with body dimension ranges specified in the Human-Systems Integration Requirements of NASA ("baseline model"). Further, the baseline model was incorporated with shoulder joint articulation ("articulation model"), using additional subjects scanned in a variety of shoulder poses across a pre-specified range of motion. Scan data was cleaned and aligned using body landmarks. The skin deformation patterns were dimensionally reduced and the co-variation with shoulder angles was analyzed. A software tool is currently in development and will be presented in the final proceeding. This tool would allow suit engineers to parametrically generate body shapes in strategically targeted anthropometry dimensions and shoulder poses. This would also enable virtual fit assessments, with which the contact volume and clearance between the suit and body surface can be predictively quantified at reduced time and

  20. Empirical evaluation reveals best fit of a logistic mutation model for human Y-chromosomal microsatellites.

    PubMed

    Jochens, Arne; Caliebe, Amke; Rösler, Uwe; Krawczak, Michael

    2011-12-01

    The rate of microsatellite mutation is dependent upon both the allele length and the repeat motif, but the exact nature of this relationship is still unknown. We analyzed data on the inheritance of human Y-chromosomal microsatellites in father-son duos, taken from 24 published reports and comprising 15,285 directly observable meioses. At the six microsatellites analyzed (DYS19, DYS389I, DYS390, DYS391, DYS392, and DYS393), a total of 162 mutations were observed. For each locus, we employed a maximum-likelihood approach to evaluate one of several single-step mutation models on the basis of the data. For five of the six loci considered, a novel logistic mutation model was found to provide the best fit according to Akaike's information criterion. This implies that the mutation probability at the loci increases (nonlinearly) with allele length at a rate that differs between upward and downward mutations. For DYS392, the best fit was provided by a linear model in which upward and downward mutation probabilities increase equally with allele length. This is the first study to empirically compare different microsatellite mutation models in a locus-specific fashion. PMID:21968190

  2. MAGNETICALLY AND BARYONICALLY DOMINATED PHOTOSPHERIC GAMMA-RAY BURST MODEL FITS TO FERMI-LAT OBSERVATIONS

    SciTech Connect

    Veres, Peter; Meszaros, Peter; Zhang, Bin-Bin

    2013-02-10

    We consider gamma-ray burst models where the radiation is dominated by a photospheric region providing the MeV Band spectrum, and an external shock region responsible for the GeV radiation via inverse Compton scattering. We parameterize the initial dynamics through an acceleration law Γ ∝ r^μ, with μ between 1/3 and 1 to represent the range between an extreme magnetically dominated and a baryonically dominated regime, depending also on the magnetic field configuration. We compare these models to several bright Fermi-LAT bursts, and show that both the time-integrated and the time-resolved spectra, where available, can be well described by these models. We discuss the parameters which result from these fits, and the relative merits and shortcomings of the two models.

  3. Sublattice model of atomic scale pairing inhomogeneity in a superconductor

    NASA Astrophysics Data System (ADS)

    Mishra, Vivek; Hirschfeld, P. J.; Barash, Yu. S.

    2008-10-01

    We study a toy model for a superconductor on a bipartite lattice where intrinsic pairing inhomogeneity is produced by two different coupling constants on the sublattices. The simplicity of the model allows for analytical solutions and tests of the consequences of atomic scale variations in pairing interactions, which have been considered recently in the cuprates. We present results for the transition temperature, density of states, and thermodynamics of the system over a phase diagram in the plane of two pairing coupling constants. For coupling constants of alternating sign, a gapless superconducting state is stable. Inhomogeneity is generally found to enhance the critical temperature, and at the same time the superfluid density is remarkably robust; at T=0, it is suppressed only in the gapless phase.

  4. Beyond Modeling: All-Atom Olfactory Receptor Model Simulations

    PubMed Central

    Lai, Peter C.; Crasto, Chiquito J.

    2012-01-01

    Olfactory receptors (ORs) are a type of GTP-binding protein-coupled receptor (GPCR). These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions mark the first step in the process that leads to olfaction. Computational studies on model OR structures can generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level, beyond inferences drawn merely from static docking. Here we show the specific advantages of simulating the dynamic environment associated with OR-odorant interactions. We present a rigorous protocol which ranges from the creation of a computationally derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that the methodology developed here for ORs will serve as a model for the computational structural biology of all GPCRs. PMID:22563330

  5. Atomic model of the type III secretion system needle.

    PubMed

    Loquet, Antoine; Sgourakis, Nikolaos G; Gupta, Rashmi; Giller, Karin; Riedel, Dietmar; Goosmann, Christian; Griesinger, Christian; Kolbe, Michael; Baker, David; Becker, Stefan; Lange, Adam

    2012-05-20

    Pathogenic bacteria using a type III secretion system (T3SS) to manipulate host cells cause many different infections including Shigella dysentery, typhoid fever, enterohaemorrhagic colitis and bubonic plague. An essential part of the T3SS is a hollow needle-like protein filament through which effector proteins are injected into eukaryotic host cells. Currently, the three-dimensional structure of the needle is unknown because it is not amenable to X-ray crystallography and solution NMR, as a result of its inherent non-crystallinity and insolubility. Cryo-electron microscopy combined with crystal or solution NMR subunit structures has recently provided a powerful hybrid approach for studying supramolecular assemblies, resulting in low-resolution and medium-resolution models. However, such approaches cannot deliver atomic details, especially of the crucial subunit-subunit interfaces, because of the limited cryo-electron microscopic resolution obtained in these studies. Here we report an alternative approach combining recombinant wild-type needle production, solid-state NMR, electron microscopy and Rosetta modelling to reveal the supramolecular interfaces and ultimately the complete atomic structure of the Salmonella typhimurium T3SS needle. We show that the 80-residue subunits form a right-handed helical assembly with roughly 11 subunits per two turns, similar to that of the flagellar filament of S. typhimurium. In contrast to established models of the needle in which the amino terminus of the protein subunit was assumed to be α-helical and positioned inside the needle, our model reveals an extended amino-terminal domain that is positioned on the surface of the needle, while the highly conserved carboxy terminus points towards the lumen.

  6. Model of spacecraft atomic oxygen and solar exposure microenvironments

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Pippin, H. G.

    1993-01-01

    Computer models of environmental conditions in Earth orbit are needed for the following reasons: (1) derivation of material performance parameters from orbital test data, (2) evaluation of spacecraft hardware designs, (3) prediction of material service life, and (4) scheduling of spacecraft maintenance. To meet these needs, Boeing has developed programs for modeling atomic oxygen (AO) and solar radiation exposures. The model allows determination of AO and solar ultraviolet (UV) radiation exposures for spacecraft surfaces (1) in arbitrary orientations with respect to the direction of spacecraft motion, (2) over all ranges of solar conditions, and (3) for any mission duration. The models have been successfully applied to prediction of experiment environments on the Long Duration Exposure Facility (LDEF) and to analysis of selected hardware designs for deployment on other spacecraft. The work on these models has been reported at previous LDEF conferences. Since publication of those reports, a revision has been made to the AO calculation for LDEF, and further work has been done on the microenvironments model for solar exposure.

  7. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved, and should thus improve the reliability of predicting in-space durability of materials based on ground laboratory testing. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo model predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  8. Mind the Gap! Implications of a Person-Environment Fit Model of Intellectual Disability for Students, Educators, and Schools

    ERIC Educational Resources Information Center

    Thompson, James R.; Wehmeyer, Michael L.; Hughes, Carolyn

    2010-01-01

    A person-environment fit conceptualization of intellectual disability (ID) requires educators to focus on the gap between a student's competencies and the demands of activities and settings in schools. In this article the implications of the person-environment fit conceptual model are considered in regard to instructional benefits, special…

  9. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
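
The paper's gamma variate family is more involved, but the AIC-based comparison it relies on can be illustrated with a simpler nested pair: a plain exponential emptying curve versus a stretched exponential. All data and parameters below are invented, a grid search replaces a proper nonlinear optimiser, and the least-squares form of AIC, n ln(SSE/n) + 2k, is used.

```python
import math, random

random.seed(4)
t = [0.25 * k for k in range(1, 61)]
# synthetic fraction-remaining curve: stretched exponential with
# hypothetical parameters a = 4, beta = 1.6, plus measurement noise
y = [math.exp(-(ti / 4.0) ** 1.6) + random.gauss(0, 0.01) for ti in t]
n = len(t)

def sse(a, beta):
    return sum((yi - math.exp(-(ti / a) ** beta)) ** 2
               for ti, yi in zip(t, y))

def aic(sse_val, k):
    # least-squares AIC: n * ln(SSE / n) + 2 * (number of parameters)
    return n * math.log(sse_val / n) + 2 * k

grid_a = [2 + 0.05 * i for i in range(101)]
sse_exp = min(sse(a, 1.0) for a in grid_a)                 # beta fixed at 1
sse_str = min(sse(a, 1 + 0.05 * j) for a in grid_a for j in range(41))
aic_exp, aic_str = aic(sse_exp, 1), aic(sse_str, 2)
print("stretched exponential preferred:", aic_str < aic_exp)
```

The extra parameter costs 2 AIC units, so the stretched model wins only when it reduces the misfit enough, which is the trade-off the authors exploit when choosing between emptying models.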

  10. Improving ranking of models for protein complexes with side chain modeling and atomic potentials.

    PubMed

    Viswanath, Shruthi; Ravikant, D V S; Elber, Ron

    2013-04-01

    An atomically detailed potential for docking pairs of proteins is derived using mathematical programming. A refinement algorithm that builds atomically detailed models of the complex and combines coarse-grained and atomic scoring is introduced. The refinement step consists of remodeling the interface side chains of the top-scoring decoys from rigid docking, followed by a short energy minimization. The refined models are then re-ranked using a combination of coarse-grained and atomic potentials. The docking algorithm, including the refinement and re-ranking, compares favorably to other leading docking packages such as ZDOCK, Cluspro, and PATCHDOCK on the ZLAB 3.0 Benchmark and a test set of 30 novel complexes. A detailed analysis shows that coarse-grained potentials perform better than atomic potentials for realistic unbound docking (where the exact structures of the individual bound proteins are unknown), probably because atomic potentials are more sensitive to local errors. Nevertheless, the atomic potential captures a different signal from the residue potential, and as a result a combination of the two scores provides a significantly better prediction than either approach alone.

  11. SSC Model Fits to Simultaneous Fermi and CAO Observations of BL Lacs

    NASA Astrophysics Data System (ADS)

    Gordon, Tyler; Macomb, Daryl J.; Hand, Jared; Norris, Jay P.; Long, Min

    2016-01-01

    The Challis Astronomical Observatory (CAO) has been surveying a sample of blazar-type AGN since 2010. The CAO blazar sample includes 43 sources - comprising 30 FSRQs, 15 BL Lacs, one radio galaxy and four unclassified sources - covering a redshift range 0.02 < z < 2. Observations are carried out in BVRI filters. Here we describe photometric results on a small sample emphasizing BL Lacs. We combine the CAO data with Fermi/LAT data and explore the suitability of fits to the data using the uniform conical jet model of Potter and Cotter (MNRAS, 2012, 423, 756-765).

  12. Fitting optimum order of Markov chain models for daily rainfall occurrences in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Deni, Sayang Mohd; Jemain, Abdul Aziz; Ibrahim, Kamarulzaman

    2009-06-01

    The analysis of the daily rainfall occurrence behavior is becoming more important, particularly in water-related sectors. Many studies have identified a more comprehensive pattern of the daily rainfall behavior based on Markov chain models. One of the aims in fitting Markov chain models of various orders to the daily rainfall occurrence is to determine the optimum order. In this study, the optimum order of the Markov chain models for a 5-day sequence is examined at each of 18 rainfall stations in Peninsular Malaysia, selected on the basis of data availability, using the Akaike (AIC) and Bayesian (BIC) information criteria. The most appropriate order for describing the distribution of wet (dry) spells at each rainfall station is identified using the Kolmogorov-Smirnov goodness-of-fit test. It is found that the optimum order varies according to the threshold level used (either 0.1 or 10.0 mm), the location of the region and the type of monsoon season. At most stations, Markov chain models of a higher order are found to be optimum for rainfall occurrence during the northeast monsoon season for both threshold levels. However, regardless of the monsoon season, the first-order model is generally found to be optimum for the northwestern and eastern regions of the peninsula when the threshold level of 10.0 mm is considered. The analysis indicates that the first-order Markov chain model is most appropriate for describing the distribution of wet spells, whereas higher-order models are adequate for the dry spells at most of the rainfall stations for both threshold levels and monsoon seasons.
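
A compact sketch of order selection for a two-state (dry/wet) Markov chain: fit chains of order 0-3 by maximum likelihood and penalise with AIC and BIC. The simulated sequence below has hypothetical second-order dynamics (rain is much more likely after two consecutive wet days), so the criteria should point at order 2.

```python
import math, random

random.seed(2)
# simulate a dry=0 / wet=1 sequence with invented second-order rates
seq = [0, 1]
for _ in range(3000):
    p_wet = 0.8 if seq[-1] == 1 and seq[-2] == 1 else 0.2
    seq.append(1 if random.random() < p_wet else 0)

def fit_order(seq, r):
    # maximised log-likelihood of an order-r chain and its 2**r parameters
    counts = {}
    for i in range(r, len(seq)):
        c = counts.setdefault(tuple(seq[i - r:i]), [0, 0])
        c[seq[i]] += 1
    ll = sum(c * math.log(c / (c0 + c1))
             for c0, c1 in counts.values() for c in (c0, c1) if c)
    return ll, 2 ** r

n = len(seq)
aic, bic = {}, {}
for r in (0, 1, 2, 3):
    ll, k = fit_order(seq, r)
    aic[r] = 2 * k - 2 * ll
    bic[r] = k * math.log(n) - 2 * ll
best = min(bic, key=bic.get)
print("BIC-optimal order:", best)
```

With real rainfall data, a Kolmogorov-Smirnov comparison of the observed and model-implied wet/dry spell-length distributions would follow as a separate check, as in the study above.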

  13. Limited-information Goodness-of-fit Testing of Hierarchical Item Factor Models

    PubMed Central

    Cai, Li; Hansen, Mark

    2013-01-01

    In applications of item response theory, assessment of model fit is a critical issue. Recently, limited-information goodness-of-fit testing has received increased attention in the psychometrics literature. In contrast to full-information test statistics such as Pearson’s X2 or the likelihood ratio G2, these limited-information tests utilise lower order marginal tables rather than the full contingency table. A notable example is Maydeu-Olivares and colleagues’ M2 family of statistics based on univariate and bivariate margins. When the contingency table is sparse, tests based on M2 retain better Type I error rate control than the full-information tests and can be more powerful. While in principle the M2 statistic can be extended to test hierarchical multidimensional item factor models (e.g., bifactor and testlet models), the computation is non-trivial. To obtain M2, a researcher often has to obtain (many thousands of) marginal probabilities, derivatives, and weights. Each of these must be approximated with high-dimensional numerical integration. We propose a dimension reduction method that can take advantage of the hierarchical factor structure so that the integrals can be approximated far more efficiently. We also propose a new test statistic that can be substantially better calibrated and more powerful than the original M2 statistic when the test is long and the items are polytomous. We use simulations to demonstrate the performance of our new methods and illustrate their effectiveness with applications to real data. PMID:22642552

  14. A Pearson-type goodness-of-fit test for stationary and time-continuous Markov regression models.

    PubMed

    Aguirre-Hernández, R; Farewell, V T

    2002-07-15

    Markov regression models describe the way in which a categorical response variable changes over time for subjects with different explanatory variables. Frequently it is difficult to measure the response variable on equally spaced discrete time intervals. Here we propose a Pearson-type goodness-of-fit test for stationary Markov regression models fitted to panel data. A parametric bootstrap algorithm is used to study the distribution of the test statistic. The proposed technique is applied to examine the fit of a Markov regression model used to identify markers for disease progression in psoriatic arthritis.
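
The parametric bootstrap recipe generalises well beyond Markov regression, so here is the bare mechanism on a deliberately simple stand-in: a Pearson statistic for invented three-category counts against a uniform null. Simulate datasets from the null model, recompute the statistic each time, and report the exceedance proportion as the p-value.

```python
import random

random.seed(3)
# observed counts of a 3-category response in 100 subjects (hypothetical)
obs = [48, 27, 25]
n = sum(obs)
null = [1 / 3] * 3

def pearson(counts):
    # Pearson chi-square statistic against the null probabilities
    return sum((c - n * p) ** 2 / (n * p) for c, p in zip(counts, null))

t_obs = pearson(obs)

# parametric bootstrap: simulate from the null model, recompute the
# statistic, and estimate the p-value as the exceedance proportion
B = 2000
exceed = 0
for _ in range(B):
    sim = [0, 0, 0]
    for _ in range(n):
        u = random.random()
        sim[0 if u < 1 / 3 else 1 if u < 2 / 3 else 2] += 1
    exceed += pearson(sim) >= t_obs
pval = exceed / B
print("statistic:", round(t_obs, 2), "bootstrap p-value:", pval)
```

In the paper's setting, the simulation step would instead draw panel trajectories from the fitted Markov regression model, but the exceedance logic is identical.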

  15. A fungal growth model fitted to carbon-limited dynamics of Rhizoctonia solani.

    PubMed

    Jeger, M J; Lamour, A; Gilligan, C A; Otten, W

    2008-01-01

    Here, a quasi-steady-state approximation was used to simplify a mathematical model for fungal growth in carbon-limiting systems, and this was fitted to growth dynamics of the soil-borne plant pathogen and saprotroph Rhizoctonia solani. The model identified a criterion for invasion into carbon-limited environments with two characteristics driving fungal growth, namely the carbon decomposition rate and a measure of carbon use efficiency. The dynamics of fungal spread through a population of sites with either low (0.0074 mg) or high (0.016 mg) carbon content were well described by the simplified model with faster colonization for the carbon-rich environment. Rhizoctonia solani responded to a lower carbon availability by increasing the carbon use efficiency and the carbon decomposition rate following colonization. The results are discussed in relation to fungal invasion thresholds in terms of carbon nutrition. PMID:18312538

  16. A goodness-of-fit test for capture-recapture model M(t) under closure

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.

  17. GRace: a MATLAB-based application for fitting the discrimination-association model.

    PubMed

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  20. Fitting response models of benthic community structure to abiotic variables in a polluted estuarine system

    NASA Astrophysics Data System (ADS)

    González-Oreja, José Antonio; Saiz-Salinas, José Ignacio

    1999-07-01

    Models of the macrozoobenthic community responses to abiotic variables measured in the polluted Bilbao estuary were obtained by multiple linear regression analyses. Total, Oligochaeta and Nematoda abundance and biomass were considered as dependent variables. Intertidal level, dissolved oxygen at the bottom of the water column (DOXB) and the organic content of the sediment were selected by the analyses as the three principal explanatory variables. Goodness-of-fit of the models was high (mean = 71.3%). Total abundance and biomass increased as a linear function of DOXB. The principal outcome of the vast sewage scheme currently in progress in the study area is an increase in DOXB levels. The models presented in this paper will serve as a tool to evaluate the expected changes in the near future.

  1. Fitting mathematical models to describe the rheological behaviour of chocolate pastes

    NASA Astrophysics Data System (ADS)

    Barbosa, Carla; Diogo, Filipa; Alves, M. Rui

    2016-06-01

    Flow behaviour is of utmost importance for the chocolate industry. The objective of this work was to study two mathematical models, the Casson and Windhab models, that can be used to fit chocolate rheological data, and to evaluate which better describes the rheological behaviour of different chocolate pastes. Rheological properties (viscosity, shear stress and shear rate) were obtained with a rotational viscometer equipped with a concentric cylinder. The chocolate samples were white chocolate and chocolates with varying percentages of cacao (55%, 70% and 83%). The results showed that the Windhab model was the best at describing the flow behaviour of all the studied samples, with higher determination coefficients (r² > 0.9).
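    The Casson model, √τ = √τ₀ + √(η·γ̇), is linear in √γ̇, so it can be fitted by a straight-line regression in square-root coordinates. A sketch on a synthetic flow curve (parameter values assumed, not the paper's); the Windhab model adds an exponential transition term and would need nonlinear least squares instead:

```python
import numpy as np

# Synthetic flow curve from a Casson fluid (parameters assumed for illustration):
tau0, eta = 12.0, 2.5                 # yield stress [Pa], Casson viscosity [Pa.s]
gamma_dot = np.linspace(1, 60, 40)    # shear rate [1/s]
tau = (np.sqrt(tau0) + np.sqrt(eta * gamma_dot)) ** 2   # Casson model

# sqrt(tau) = sqrt(tau0) + sqrt(eta)*sqrt(gamma_dot) is a straight line,
# so an ordinary linear fit recovers both parameters.
slope, intercept = np.polyfit(np.sqrt(gamma_dot), np.sqrt(tau), 1)
tau0_fit, eta_fit = intercept ** 2, slope ** 2

# r^2 of the fitted flow curve, the criterion quoted in the abstract.
pred = (intercept + slope * np.sqrt(gamma_dot)) ** 2
r2 = 1 - np.sum((tau - pred) ** 2) / np.sum((tau - tau.mean()) ** 2)
```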

  2. Goodness-of-Fit Tests and Model Diagnostics for Negative Binomial Regression of RNA Sequencing Data

    PubMed Central

    Mi, Gu; Di, Yanming; Schafer, Daniel W.

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models. PMID:25787144
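    A simulation-based goodness-of-fit test of the general kind described here can be sketched as a parametric bootstrap: fit the NB model, simulate many datasets from the fit, and compare an observed statistic to its simulated distribution. The method-of-moments fit and the zero-fraction statistic below are illustrative choices, not the authors' procedures:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_nb_moments(y):
    """Method-of-moments fit of an NB2 model: var = mu + mu^2 / r."""
    mu, var = y.mean(), y.var(ddof=1)
    r = mu**2 / max(var - mu, 1e-9)   # size parameter (inverse overdispersion)
    return mu, r

def gof_pvalue(y, stat, n_sim=500):
    """Parametric-bootstrap goodness-of-fit p-value for the NB fit."""
    mu, r = fit_nb_moments(y)
    p = r / (r + mu)                  # numpy's (n, p) parameterisation
    obs = stat(y)
    sims = [stat(rng.negative_binomial(r, p, size=len(y))) for _ in range(n_sim)]
    return float(np.mean([s >= obs for s in sims]))

zero_frac = lambda y: np.mean(y == 0)            # test statistic: fraction of zeros

y = rng.negative_binomial(2.0, 0.4, size=200)    # data that truly are NB
pval = gof_pvalue(y, zero_frac)
```

    A small p-value would flag a statistic the fitted NB model cannot reproduce, e.g. an excess of zeros.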

  3. A diffusion process to model generalized von Bertalanffy growth patterns: fitting to real data.

    PubMed

    Román-Román, Patricia; Romero, Desirée; Torres-Ruiz, Francisco

    2010-03-01

    The von Bertalanffy growth curve has been commonly used for modeling animal growth (particularly fish). Both deterministic and stochastic models exist in association with this curve, the latter allowing for the inclusion of fluctuations or disturbances that might exist in the system under consideration which are not always quantifiable or may even be unknown. This curve is mainly used for modeling the length variable, whereas a generalized version, including a new parameter b ≥ 1, allows for modeling both length and weight for some animal species in both isometric (b = 3) and allometric (b ≠ 3) situations. In this paper a stochastic model related to the generalized von Bertalanffy growth curve is proposed. This model allows one to investigate the time evolution of growth variables associated both with individual behaviors and with mean population behavior. Also, with the purpose of fitting the above-mentioned model to real data and so being able to forecast and analyze particular characteristics, we study the maximum likelihood estimation of the parameters of the model. In addition, and regarding the numerical problems posed by solving the likelihood equations, a strategy is developed for obtaining initial solutions for the usual numerical procedures. This strategy is validated by means of simulated examples. Finally, an application to real data on the mean weight of swordfish is presented.
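    The deterministic skeleton of the generalized curve is x(t) = x∞·(1 − c·e^(−kt))^b, with b = 1 giving the classic length curve and b = 3 the isometric weight curve; the paper's stochastic model adds a diffusion term around this mean behaviour. A sketch with assumed parameter values:

```python
import numpy as np

def gvb(t, x_inf, c, k, b):
    """Generalized von Bertalanffy curve (deterministic skeleton).
    b = 1 gives the classic length curve; b = 3 the isometric weight curve."""
    return x_inf * (1.0 - c * np.exp(-k * t)) ** b

# Illustrative (assumed) parameter values, not estimates from the paper.
t = np.linspace(0, 20, 200)
length = gvb(t, x_inf=120.0, c=0.9, k=0.25, b=1)   # length-type growth (b = 1)
weight = gvb(t, x_inf=35.0,  c=0.9, k=0.25, b=3)   # isometric weight (b = 3)
```

    Both curves rise monotonically toward the asymptote x∞, which is the qualitative behaviour the stochastic model perturbs.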

  4. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    SciTech Connect

    Howarth, Richard J.

    2001-12-15

    The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates (trend surface analysis) became popular during the 1950-60s. 
The inception of geostatistics in France at this time by Matheron had

  5. Fitting multilevel models in complex survey data with design weights: Recommendations

    PubMed Central

    2009-01-01

    Background Multilevel models (MLM) offer complex survey data analysts a unique approach to understanding individual and contextual determinants of public health. However, little summarized guidance exists with regard to fitting MLM in complex survey data with design weights. Simulation work suggests that analysts should scale design weights using two methods and fit the MLM using unweighted and scaled-weighted data. This article examines the performance of scaled-weighted and unweighted analyses across a variety of MLM and software programs. Methods Using data from the 2005–2006 National Survey of Children with Special Health Care Needs (NS-CSHCN: n = 40,723) that collected data from children clustered within states, I examine the performance of scaling methods across outcome type (categorical vs. continuous), model type (level-1, level-2, or combined), and software (Mplus, MLwiN, and GLLAMM). Results Scaled weighted estimates and standard errors differed slightly from unweighted analyses, agreeing more with each other than with unweighted analyses. However, observed differences were minimal and did not lead to different inferential conclusions. Likewise, results demonstrated minimal differences across software programs, increasing confidence in results and inferential conclusions independent of software choice. Conclusion If including design weights in MLM, analysts should scale the weights and use software that properly includes the scaled weights in the estimation. PMID:19602263
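    The two weight-scaling methods commonly described in this literature can be sketched directly: one rescales the level-1 design weights within each cluster to sum to the cluster sample size, the other to the effective cluster sample size. A minimal sketch (function names are my own):

```python
import numpy as np

def scale_cluster_size(w):
    """Method A: scale level-1 weights so they sum to the cluster sample size n_j."""
    w = np.asarray(w, float)
    return w * len(w) / w.sum()

def scale_effective(w):
    """Method B: scale weights so they sum to the effective cluster sample size,
    (sum w)^2 / (sum w^2)."""
    w = np.asarray(w, float)
    return w * w.sum() / np.sum(w ** 2)

# Toy within-cluster design weights (assumed for illustration).
w = [1.0, 2.0, 3.0, 4.0]
wa = scale_cluster_size(w)   # sums to 4, the cluster size
wb = scale_effective(w)      # sums to 100/30, the effective size
```

    Fitting the MLM with both scalings, as the article recommends, then shows whether the inferential conclusions are sensitive to the choice.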

  6. Modeling the Time Evolution of QSH Equilibria in MST Plasmas Using V3FIT

    NASA Astrophysics Data System (ADS)

    Boguski, J.; Nornberg, M.; Munaretto, S.; Chapman, B. E.; Cianciosa, M.; Terry, P. W.; Hanson, J.

    2015-11-01

    High-current, low-density RFP plasmas tend towards a 3D configuration, called Quasi-Single Helicity (QSH), characterized by a dominant core helical mode. V3FIT utilizes multiple internal and edge diagnostics to reconstruct the non-axisymmetric magnetic equilibrium of the QSH state. Performing multiple reconstructions at different stages of the QSH cycle allows the time dynamics of the QSH state to be studied. Recent work on modeling a shear-suppression mechanism for QSH formation has produced a predator-prey model of the time dynamics that reproduces the observed behavior, in particular the increased persistence of the QSH state with increased plasma current. Either magnetic or flow shear can facilitate QSH formation. The magnetic shear dependence of QSH is analyzed using V3FIT reconstructions of the magnetic equilibrium constrained by internal measurements of density and temperature as well as soft x-ray emission. Fluctuations in the flux surface structure are compared against the measured temperature and density fluctuations, and the reconstructed temperature and density profiles are examined for evidence of barriers to particle and heat transport. This material is based upon work supported by the U.S. DOE.

  7. Travelling wave expansion: a model fitting approach to the inverse problem of elasticity reconstruction.

    PubMed

    Baghani, Ali; Salcudean, Septimiu; Honarvar, Mohammad; Sahebjavaher, Ramin S; Rohling, Robert; Sinkus, Ralph

    2011-08-01

    In this paper, a novel approach to the problem of elasticity reconstruction is introduced. In this approach, the solution of the wave equation is expanded as a sum of waves travelling in different directions sharing a common wave number. In particular, the solutions for the scalar and vector potentials which are related to the dilatational and shear components of the displacement respectively are expanded as sums of travelling waves. This solution is then used as a model and fitted to the measured displacements. The value of the shear wave number which yields the best fit is then used to find the elasticity at each spatial point. The main advantage of this method over direct inversion methods is that, instead of taking the derivatives of noisy measurement data, the derivatives are taken on the analytical model. This improves the results of the inversion. The dilatational and shear components of the displacement can also be computed as a byproduct of the method, without taking any derivatives. Experimental results show the effectiveness of this technique in magnetic resonance elastography. Comparisons are made with other state-of-the-art techniques. PMID:21813354
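    The core idea — choose the wave number whose travelling-wave model best matches the measured displacements, rather than differentiating noisy data — can be illustrated in one dimension. All parameter values below are assumptions for illustration; the paper's method works with full 3-D scalar and vector potentials:

```python
import numpy as np

rho = 1000.0              # density [kg/m^3] (assumed, soft-tissue-like)
f = 100.0                 # excitation frequency [Hz] (assumed)
mu_true = 4000.0          # shear modulus [Pa] (assumed ground truth)
k_true = 2 * np.pi * f / np.sqrt(mu_true / rho)   # from shear speed c = sqrt(mu/rho)

# Synthetic 1-D displacement snapshot with a little measurement noise.
x = np.linspace(0.0, 0.2, 1024)
u = 1e-4 * np.sin(k_true * x) + 1e-6 * np.random.default_rng(2).normal(size=x.size)

# Model fitting instead of differentiation: scan trial wave numbers and keep
# the one whose sinusoid projects most strongly onto the measured field.
ks = np.linspace(0.5 * k_true, 1.5 * k_true, 2001)
amps = [np.hypot(np.dot(u, np.sin(k * x)), np.dot(u, np.cos(k * x))) for k in ks]
k_fit = ks[int(np.argmax(amps))]
mu_fit = rho * (2 * np.pi * f / k_fit) ** 2       # elasticity from best-fit k
```

    No derivative of the noisy field u is ever taken; the wave number, and hence the modulus, comes entirely from the fitted model.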

  8. Total Force Fitness in units part 1: military demand-resource model.

    PubMed

    Bates, Mark J; Fallesen, Jon J; Huey, Wesley S; Packard, Gary A; Ryan, Diane M; Burke, C Shawn; Smith, David G; Watola, Daniel J; Pinder, Evette D; Yosick, Todd M; Estrada, Armando X; Crepeau, Loring; Bowles, Stephen V

    2013-11-01

    The military unit is a critical center of gravity in the military's efforts to enhance resilience and the health of the force. The purpose of this article is to augment the military's Total Force Fitness (TFF) guidance with a framework of TFF in units. The framework is based on a Military Demand-Resource model that highlights the dynamic interactions across demands, resources, and outcomes. A joint team of subject-matter experts identified key variables representing unit fitness demands, resources, and outcomes. The resulting framework informs and supports leaders, support agencies, and enterprise efforts to strengthen TFF in units by (1) identifying TFF unit variables aligned with current evidence and operational practices, (2) standardizing communication about TFF in units across the Department of Defense enterprise in a variety of military organizational contexts, (3) improving current resources, including evidence-based actions for leaders, (4) identifying and addressing gaps, and (5) directing future research for enhancing TFF in units. These goals are intended to inform and enhance Service efforts to develop Service-specific TFF models, as well as provide the conceptual foundation for a follow-on article about TFF metrics for units.

  10. Lévy Flights and Self-Similar Exploratory Behaviour of Termite Workers: Beyond Model Fitting

    PubMed Central

    Miramontes, Octavio; DeSouza, Og; Paiva, Leticia Ribeiro; Marins, Alessandra; Orozco, Sirio

    2014-01-01

    Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties, including Lévy flights, in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale. PMID:25353958
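    The scaling exponent of a power-law step-length distribution is usually estimated by maximum likelihood rather than by fitting a line to a log-log histogram. A sketch on synthetic data (the exponent and sample size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw step lengths from a pure power law p(x) ~ x^(-alpha) for x >= x_min,
# via inverse-transform sampling.
alpha_true, x_min, n = 2.0, 1.0, 5000
x = x_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))

# Continuous maximum-likelihood estimator of the scaling exponent:
#   alpha_hat = 1 + n / sum(ln(x_i / x_min))
alpha_hat = 1 + n / np.sum(np.log(x / x_min))
```

    The MLE's standard error shrinks as (alpha − 1)/√n, so with a few thousand displacements the exponent is pinned down to a few hundredths.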

  11. Estimation of high-resolution dust column density maps. Empirical model fits

    NASA Astrophysics Data System (ADS)

    Juvela, M.; Montillaud, J.

    2013-09-01

    Context. Sub-millimetre dust emission is an important tracer of the column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including a method (in the following, method B) that constructs the N estimate in stages, where the smallest spatial scales are derived using only the shortest wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method is based on model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare the accuracy of the new method with that of method B and with the results that would be obtained using high-resolution observations without noise. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method E is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.

  12. Lifting a veil on diversity: a Bayesian approach to fitting relative-abundance models.

    PubMed

    Golicher, Duncan J; O'Hara, Robert B; Ruíz-Montoya, Lorena; Cayuela, Luis

    2006-02-01

    Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation. PMID:16705973
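    A classical building block of log-series fitting is Fisher's α, defined implicitly by S = α·ln(1 + N/α) for S observed species and N individuals. This deterministic calculation (not the authors' hierarchical MCMC machinery) can be sketched as:

```python
import math

def fishers_alpha(S, N, lo=1e-6, hi=1e6):
    """Solve S = alpha * ln(1 + N/alpha) for Fisher's alpha by bisection.
    The left-hand side is increasing in alpha, so the root is bracketed."""
    f = lambda a: a * math.log(1.0 + N / a) - S
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def logseries_expected(alpha, N, n):
    """Expected number of species with exactly n individuals under the log series."""
    x = N / (N + alpha)
    return alpha * x ** n / n

# Toy community: 50 species among 2000 individuals (assumed numbers).
alpha = fishers_alpha(S=50, N=2000)
```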

  13. Independent-particle models for light negative atomic ions

    NASA Technical Reports Server (NTRS)

    Ganas, P. S.; Talman, J. D.; Green, A. E. S.

    1980-01-01

    For the purposes of astrophysical, aeronomical, and laboratory application, a precise independent-particle model for electrons in negative atomic ions of the second and third periods is discussed. The optimum-potential model (OPM) of Talman et al. (1979) is first used to generate numerical potentials for eight of these ions. Results for total energies and electron affinities are found to be very close to Hartree-Fock solutions. However, the OPM and HF electron affinities both depart significantly from experimental affinities. For this reason, two analytic potentials are developed whose inner energy levels are very close to the OPM and HF levels but whose last-electron eigenvalues are adjusted to agree precisely with the magnitudes of the experimental affinities. These models are: (1) a four-parameter analytic characterization of the OPM potential and (2) a two-parameter potential model of the Green, Sellin, Zachor type. The system O(-), or e-O, which is important in upper atmospheric physics, is examined in some detail.
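    The two-parameter Green, Sellin, Zachor potential mentioned here is commonly quoted, in Rydberg units, as V(r) = −(2/r)·[(Z−1)·Ω(r) + 1] with Ω(r) = [H·(e^(r/d) − 1) + 1]^(−1), where H and d are the adjustable screening parameters. A sketch of that form with assumed parameter values (this is the neutral-atom variant; for negative ions the asymptotic charge term is modified):

```python
import math

def gsz_potential(r, Z, H, d):
    """Two-parameter GSZ-type independent-particle potential (Rydberg units):
        V(r) = -(2/r) * [(Z - 1) * Omega(r) + 1],
        Omega(r) = 1 / (H * (exp(r/d) - 1) + 1).
    Near the nucleus V ~ -2Z/r (bare charge); at large r the screening
    function Omega vanishes and the electron sees a net charge of one."""
    omega = 1.0 / (H * math.expm1(r / d) + 1.0)
    return -(2.0 / r) * ((Z - 1) * omega + 1.0)

# Illustrative (assumed) screening parameters for oxygen, Z = 8.
V_near = gsz_potential(1e-6, Z=8, H=1.0, d=1.0)   # ~ -2*8/r: unscreened nucleus
V_far = gsz_potential(50.0, Z=8, H=1.0, d=1.0)    # ~ -2/r: fully screened core
```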

  14. Atomic scale modelling of hexagonal structured metallic fission product alloys.

    PubMed

    Middleburgh, S C; King, D M; Lumpkin, G R

    2015-04-01

    Noble metal particles in the Mo-Pd-Rh-Ru-Tc system have been simulated on the atomic scale using density functional theory techniques for the first time. The composition and behaviour of the epsilon phases are consistent with high-entropy alloys (or multi-principal component alloys)-making the epsilon phase the only hexagonally close packed high-entropy alloy currently described. Configurational entropy effects were considered to predict the stability of the alloys with increasing temperatures. The variation of Mo content was modelled to understand the change in alloy structure and behaviour with fuel burnup (Mo molar content decreases in these alloys as burnup increases). The predicted structures compare extremely well with experimentally ascertained values. Vacancy formation energies and the behaviour of extrinsic defects (including iodine and xenon) in the epsilon phase were also investigated to further understand the impact that the metallic precipitates have on fuel performance.

  15. Simulating and Modeling Transport Through Atomically Thin Membranes

    NASA Astrophysics Data System (ADS)

    Ostrowski, Joseph; Eaves, Joel

    2014-03-01

    The world is running out of clean potable water. The efficacy of water desalination technologies using porous materials is a balance between membrane selectivity and solute throughput. These properties are just starting to be understood on the nanoscale, but in the limit of atomically thin membranes it is unclear whether one can apply typical continuous-time random walk models. Depending on the size of the pore and the thickness of the membrane, mass transport can range from single stochastic passage events to continuous flow describable by the usual hydrodynamic equations. We present a study of mass transport through membranes of various pore geometries using reverse nonequilibrium simulations, and analyze transport rates using stochastic master equations.

  16. SLIMP: Strong laser interaction model package for atoms and molecules

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Zhao, Zengxiu

    2015-07-01

    We present the SLIMP package, which provides an efficient way to calculate strong-field ionization rates and high-order harmonic spectra based on the single-active-electron approximation. The initial states are taken as single-particle orbitals directly from output files of the general-purpose quantum chemistry programs GAMESS, Firefly and Gaussian. For ionization, the molecular Ammosov-Delone-Krainov theory, and both the length-gauge and velocity-gauge Keldysh-Faisal-Reiss theories are implemented, while the Lewenstein model is used for harmonic spectra. Furthermore, the package provides efficient evaluation of orbital coordinate-space wavefunctions, momentum wavefunctions and orbital dipole moments, as well as calculation of orbital integrals. It can be applied to quite large basis sets and complex molecules with many atoms, and is implemented to allow easy extensions for additional capabilities.

  18. Atomic-level models of the bacterial carboxysome shell

    SciTech Connect

    Tanaka, S.; Kerfeld, C.A.; Sawaya, M.R.; Cai, F.; Heinhorst, S.; Cannon, G.C.; Yeates, T.O.

    2008-06-03

    The carboxysome is a bacterial microcompartment that functions as a simple organelle by sequestering enzymes involved in carbon fixation. The carboxysome shell is roughly 800 to 1400 angstroms in diameter and is assembled from several thousand protein subunits. Previous studies have revealed the three-dimensional structures of hexameric carboxysome shell proteins, which self-assemble into molecular layers that most likely constitute the facets of the polyhedral shell. Here, we report the three-dimensional structures of two proteins of previously unknown function, CcmL and OrfA (or CsoS4A), from the two known classes of carboxysomes, at resolutions of 2.4 and 2.15 angstroms. Both proteins assemble to form pentameric structures whose size and shape are compatible with formation of vertices in an icosahedral shell. Combining these pentamers with the hexamers previously elucidated gives two plausible, preliminary atomic models for the carboxysome shell.

  19. A healthy fear of the unknown: perspectives on the interpretation of parameter fits from computational models in neuroscience.

    PubMed

    Nassar, Matthew R; Gold, Joshua I

    2013-04-01

    Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis to support claims that a particular model can provide a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
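    The workflow under discussion — fitting a reinforcement-learning model to choice data by maximum likelihood — can be illustrated with a standard delta-rule (Rescorla-Wagner) learner on a two-armed bandit. Everything below is a generic illustration with assumed parameters, not the authors' tasks or models; note that holding the softmax temperature fixed during fitting is exactly the kind of model simplification the article warns can bias the remaining parameter estimates:

```python
import math
import random

random.seed(4)

def simulate(alpha, beta, p_reward=(0.7, 0.3), n_trials=500):
    """Simulate a two-armed bandit learner: delta-rule values + softmax choice."""
    q = [0.0, 0.0]
    choices, rewards = [], []
    for _ in range(n_trials):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))   # softmax for 2 arms
        c = 1 if random.random() < p1 else 0
        r = 1.0 if random.random() < p_reward[c] else 0.0
        q[c] += alpha * (r - q[c])                           # prediction-error update
        choices.append(c)
        rewards.append(r)
    return choices, rewards

def neg_log_lik(alpha, beta, choices, rewards):
    """Negative log-likelihood of the observed choices under the model."""
    q, nll = [0.0, 0.0], 0.0
    for c, r in zip(choices, rewards):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        p_c = p1 if c == 1 else 1.0 - p1
        nll -= math.log(max(p_c, 1e-12))
        q[c] += alpha * (r - q[c])
    return nll

choices, rewards = simulate(alpha=0.2, beta=3.0)
# Grid-search MLE for the learning rate, with beta clamped to its true value.
grid = [i / 100 for i in range(1, 100)]
nlls = [neg_log_lik(a, 3.0, choices, rewards) for a in grid]
alpha_hat = grid[nlls.index(min(nlls))]
```

    Even when such a fit passes goodness-of-fit checks, the article's point is that it says little about whether the model captured the true generative process.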

  20. The Challenges of Fitting an Item Response Theory Model to the Social Anhedonia Scale

    PubMed Central

    Reise, Steven P.; Horan, William P.; Blanchard, Jack J.

    2011-01-01

    This study explored the application of latent variable measurement models to the Social Anhedonia Scale (SAS; Eckblad, Chapman, Chapman, & Mishlove, 1982), a widely used and influential measure in schizophrenia-related research. Specifically, we applied unidimensional and bifactor item response theory (IRT) models to data from a community sample of young adults (n = 2,227). Ordinal factor analyses revealed that identifying a coherent latent structure in the 40-item SAS data was challenging due to: a) the presence of multiple small content clusters (e.g., doublets), b) modest relations between those clusters which, in turn, implies a general factor of only modest strength, c) items that shared little variance with the majority of items, and d) cross-loadings in bifactor solutions. Consequently, we conclude that SAS responses cannot be modeled accurately by either unidimensional or bifactor IRT models. Although the application of a bifactor model to a reduced 17-item set met with better success, significant psychometric and substantive problems remained. Results highlight the challenges of applying latent variable models to scales that were not originally designed to fit these models. PMID:21516580

  1. Observations from using models to fit the gas production of varying volume test cells and landfills.

    PubMed

    Lamborn, Julia

    2012-12-01

    Landfill operators are looking for more accurate models to predict waste degradation and landfill gas production. The simple microbial growth and decay models, whilst being easy to use, have been shown to be inaccurate. Many of the newer and more complex (component) models are highly parameter hungry, and many of the required parameters have not been collected or measured at full-scale landfills. This paper compares the results of using different models (LANDGEM, HBM, and two Monod models developed by the author) to fit the gas production of laboratory-scale, field test cell and full-scale landfills, and discusses some observations that can be made regarding the scalability of gas generation rates. The comparison of these results shows that the fast degradation rate that occurs at laboratory scale is not replicated in field test cells and full-scale landfills. At small scale, all the models predict a slower rate of gas generation than actually occurs. At field test cell and full scale, a number of models predict a faster gas generation than actually occurs. Areas for future work have been identified, which include investigations into the capture efficiency of gas extraction systems and into the parameter sensitivity and identification of the critical parameters for field test cell and full-scale landfill prediction.
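    First-order-decay models of the LANDGEM family treat each year's deposited waste as an exponentially decaying gas source. A simplified sketch of that structure (not the actual LANDGEM implementation; the rate constant, gas potential, and fill history are assumed for illustration):

```python
import math

def gas_rate(t, deposits, k=0.05, L0=100.0):
    """Simplified first-order-decay landfill gas model (LANDGEM-style sketch).
    Each mass increment M [Mg] placed in year ti contributes
    k * L0 * M * exp(-k * (t - ti)) to the generation rate at time t;
    k [1/yr] is the decay constant, L0 [m^3 CH4 / Mg] the gas potential."""
    return sum(k * L0 * M * math.exp(-k * (t - ti))
               for ti, M in deposits if t >= ti)

# Hypothetical fill history: 10 years of 50,000 Mg/yr, then closure.
deposits = [(year, 50_000.0) for year in range(10)]
q_at_closure = gas_rate(10, deposits)   # generation rate at closure [m^3/yr]
q_later = gas_rate(30, deposits)        # rate 20 years after closure
```

    After closure every term decays at the same rate, so the generation curve falls off as exp(−k·Δt); integrating each term over all time returns exactly L0·M, the assigned gas potential.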

  3. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    PubMed

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers.

  4. Optimal Experiment Design for Monoexponential Model Fitting: Application to Apparent Diffusion Coefficient Imaging

    PubMed Central

    Alipoor, Mohammad; Maier, Stephan E.; Gu, Irene Yu-Hua; Mehnert, Andrew; Kahl, Fredrik

    2015-01-01

    The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its steady performance for the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters. PMID:26839880
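
    As a rough illustration of the D-optimality criterion (not the authors' algorithm), one can grid-search the pair of b-values that maximizes the determinant of the Fisher information for S(b) = S0*exp(-b*D); the S0, D, and b-range below are illustrative ADC-like values.

```python
import numpy as np
from itertools import combinations

def fisher_det(b_values, S0=1.0, D=1e-3):
    # Jacobian of S(b) = S0*exp(-b*D) w.r.t. (S0, D), assuming unit noise;
    # the D-optimal design maximizes det(J^T J).
    b = np.asarray(b_values, dtype=float)
    s = np.exp(-b * D)
    J = np.column_stack([s, -S0 * b * s])
    return np.linalg.det(J.T @ J)

# Crude grid search over pairs of b-values (two parameters -> two points).
candidates = np.linspace(0.0, 2000.0, 201)   # s/mm^2
best = max(combinations(candidates, 2), key=fisher_det)
```

    For two measurement points the determinant reduces to s1^2 * s2^2 * (b1 - b2)^2, so under these assumptions the search lands on b = 0 and b = 1/D.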

  6. Atomic-scale simulations of atomic and molecular mobility in models of interstellar ice

    NASA Astrophysics Data System (ADS)

    Andersson, Stefan

    The mobility of atoms and molecular radicals at ice-covered dust particles controls the surprisingly rich chemistry of circumstellar and interstellar environments, where a large number of different organic molecules have been observed. Both thermal and non-thermal processes, for instance caused by UV radiation, have been inferred to play important roles in this chemistry. A growing number of experimental studies support previously suggested mechanisms and add to the understanding of possible astrochemical processes. Simulations, of both experiments and astrophysical environments, aid in interpreting experiments and suggesting important mechanisms. Still, the exact mechanisms behind the mobility of species in interstellar ice are far from fully understood. We have performed calculations at the molecular level on the mobility of H atoms and OH radicals at water ice surfaces of varying morphology. Calculations of binding energies and diffusion barriers of H atoms at crystalline and amorphous ice surfaces show that the experimentally observed slower diffusion at amorphous ice is due to considerably stronger binding energies and higher diffusion barriers than at crystalline ice. These results are in excellent agreement with recent experiments. It was also found that quantum tunneling is important for H atom mobility below 10 K. The binding energies and diffusion barriers of OH radicals at crystalline ice have been studied using the ONIOM(QM:AMOEBA) approach. Results indicate that OH diffusion over crystalline ice, contrary to the case of H atoms, might be slower at crystalline ice than at amorphous ice, due to a higher surface density of stronger binding sites at crystalline ice. We have also performed molecular dynamics simulations of the photoexcitation of vapor-deposited water at a range of surface temperatures. These results support that the experimentally observed desorption of H atoms following UV excitation is best explained by release of H atoms from

  7. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES can also calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to the reductions found with site-specific optimisations. Finally, we show that calculation of the second derivative of JULES allows us to produce posterior probability density functions of the parameters, showing how knowledge of parameter values is constrained by observations.

  8. Mutation-selection models of coding sequence evolution with site-heterogeneous amino acid fitness profiles.

    PubMed

    Rodrigue, Nicolas; Philippe, Hervé; Lartillot, Nicolas

    2010-03-01

    Modeling the interplay between mutation and selection at the molecular level is key to evolutionary studies. To this end, codon-based evolutionary models have been proposed as pertinent means of studying long-range evolutionary patterns and are widely used. However, these approaches have not yet consolidated results from amino acid level phylogenetic studies showing that selection acting on proteins displays strong site-specific effects, which translate into heterogeneous amino acid propensities across the columns of alignments; related codon-level studies have instead focused on either modeling a single selective context for all codon columns, or a separate selective context for each codon column, with the former strategy deemed too simplistic and the latter deemed overparameterized. Here, we integrate recent developments in nonparametric statistical approaches to propose a probabilistic model that accounts for the heterogeneity of amino acid fitness profiles across the coding positions of a gene. We apply the model to a dozen real protein-coding gene alignments and find it to produce biologically plausible inferences, for instance, as pertaining to site-specific amino acid constraints, as well as distributions of scaled selection coefficients. In their account of mutational features as well as the heterogeneous regimes of selection at the amino acid level, the modeling approaches studied here can form a backdrop for several extensions, accounting for other selective features, for variable population size, or for subtleties of mutational features, all with parameterizations couched within population-genetic theory. PMID:20176949

  9. Atomic Models of Strong Solids Interfaces Viewed as Composite Structures

    NASA Astrophysics Data System (ADS)

    Staffell, I.; Shang, J. L.; Kendall, K.

    2014-02-01

    This paper looks back through the 1960s to the invention of carbon fibres and the theories of Strong Solids. In particular it focuses on the fracture mechanics paradox of strong composites containing weak interfaces. From Griffith theory, it is clear that three parameters must be considered in producing a high-strength composite: minimising defects; maximising the elastic modulus; and raising the fracture energy along the crack path. The interface then introduces two further factors: elastic modulus mismatch causing crack stopping; and debonding along a brittle interface due to low interface fracture energy. Consequently, an understanding of the fracture energy of a composite interface is needed. Using an interface model based on atomic interaction forces, it is shown that a single layer of contaminant atoms between the matrix and the reinforcement can reduce the interface fracture energy by an order of magnitude, giving a large delamination effect. The paper also looks to a future in which cars will be made largely from composite materials. Radical improvements in automobile design are necessary because the number of cars worldwide is predicted to double. This paper predicts gains in fuel economy by suggesting a new theory of automobile fuel consumption using an adaptation of Coulomb's friction law. It is demonstrated both by experiment and by theoretical argument that the energy dissipated in standard vehicle tests depends only on weight. Consequently, moving from metal to fibre construction can give a factor 2 improved fuel economy performance, roughly the same as moving from a petrol combustion drive to hydrogen fuel cell propulsion. Using both options together can give a factor 4 improvement, as demonstrated by testing a composite car using the ECE15 protocol.

  10. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  11. Spectral observations of Ellerman bombs and fitting with a two-cloud model

    SciTech Connect

    Hong, Jie; Ding, M. D.; Li, Ying; Fang, Cheng; Cao, Wenda

    2014-09-01

    We study the Hα and Ca II 8542 Å line spectra of four typical Ellerman bombs (EBs) in the active region NOAA 11765 on 2013 June 6, observed with the Fast Imaging Solar Spectrograph installed at the 1.6 m New Solar Telescope at Big Bear Solar Observatory. Considering that EBs may occur in a restricted region in the lower atmosphere, and that their spectral lines show particular features, we propose a two-cloud model to fit the observed line profiles. The lower cloud can account for the wing emission, and the upper cloud is mainly responsible for the absorption at line center. After carefully choosing the free parameters, we obtain satisfactory fitting results. As expected, the lower cloud shows an increase of the source function, corresponding to a temperature increase of 400-1000 K in EBs relative to the quiet Sun. This is consistent with previous results deduced from semi-empirical models and confirms that local heating occurs in the lower atmosphere during the appearance of EBs. We also find that the optical depths can increase to some extent in both the lower and upper clouds, which may result from either direct heating in the lower cloud, or illumination by an enhanced radiation on the upper cloud. The velocities derived from this method, however, are different from those obtained using the traditional bisector method, implying that one should be cautious when interpreting this parameter. The two-cloud model can thus be used as an efficient method to deduce the basic physical parameters of EBs.

  12. Quantitative modeling of virus evolutionary dynamics and adaptation in serial passages using empirically inferred fitness landscapes.

    PubMed

    Woo, Hyung Jun; Reifman, Jaques

    2014-01-01

    We describe a stochastic virus evolution model representing genomic diversification and within-host selection during experimental serial passages under cell culture or live-host conditions. The model incorporates realistic descriptions of the virus genotypes in nucleotide and amino acid sequence spaces, as well as their diversification from error-prone replications. It quantitatively considers factors such as target cell number, bottleneck size, passage period, infection and cell death rates, and the replication rate of different genotypes, allowing for systematic examinations of how their changes affect the evolutionary dynamics of viruses during passages. The relative probability for a viral population to achieve adaptation under a new host environment, quantified by the rate with which a target sequence frequency rises above 50%, was found to be most sensitive to factors related to sequence structure (distance from the wild type to the target) and selection strength (host cell number and bottleneck size). For parameter values representative of RNA viruses, the likelihood of observing adaptations during passages became negligible as the required number of mutations rose above two amino acid sites. We modeled the specific adaptation process of influenza A H5N1 viruses in mammalian hosts by simulating the evolutionary dynamics of H5 strains under the fitness landscape inferred from multiple sequence alignments of H3 proteins. In light of comparisons with experimental findings, we observed that the evolutionary dynamics of adaptation is strongly affected not only by the tendency toward increasing fitness values but also by the accessibility of pathways between genotypes constrained by the genetic code.

  13. ANALYTICAL LIGHT CURVE MODELS OF SUPERLUMINOUS SUPERNOVAE: χ²-MINIMIZATION OF PARAMETER FITS

    SciTech Connect

    Chatzopoulos, E.; Wheeler, J. Craig; Vinko, J.; Horvath, Z. L.; Nagy, A.

    2013-08-10

    We present fits of generalized semi-analytic supernova (SN) light curve (LC) models for a variety of power inputs including ⁵⁶Ni and ⁵⁶Co radioactive decay, magnetar spin-down, and forward and reverse shock heating due to supernova ejecta-circumstellar matter (CSM) interaction. We apply our models to the observed LCs of the H-rich superluminous supernovae (SLSN-II) SN 2006gy, SN 2006tf, SN 2008am, SN 2008es, CSS100217, the H-poor SLSN-I SN 2005ap, SCP06F6, SN 2007bi, SN 2010gx, and SN 2010kd, as well as to the interacting SN 2008iy and PTF 09uj. Our goal is to determine the dominant mechanism that powers the LCs of these extraordinary events and the physical conditions involved in each case. We also present a comparison of our semi-analytical results with recent results from numerical radiation hydrodynamics calculations in the particular case of SN 2006gy in order to explore the strengths and weaknesses of our models. We find that CS shock heating produced by ejecta-CSM interaction provides a better fit to the LCs of most of the events we examine. We discuss the possibility that collision of supernova ejecta with hydrogen-deficient CSM accounts for some of the hydrogen-deficient SLSNe (SLSN-I) and may be a plausible explanation for the explosion mechanism of SN 2007bi, the pair-instability supernova candidate. We characterize and discuss issues of parameter degeneracy.
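
    To make the χ²-minimization step concrete: for a light curve model that is linear in its amplitude at a fixed decay timescale, the χ²-minimizing amplitude has a closed form. The sketch below uses made-up numbers and a single-exponential toy model, not any of the supernovae or model fits above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy luminosity model, linear in the amplitude A at a fixed decay
# timescale tau: L(t) = A * exp(-t / tau). Numbers are illustrative.
tau = 111.3                      # days, roughly the 56Co decay timescale
t = np.linspace(20.0, 300.0, 40)
A_true, sigma = 1.0e43, 5.0e41   # erg/s
L_obs = A_true * np.exp(-t / tau) + rng.normal(0, sigma, t.size)

# chi^2(A) = sum((L_obs - A*f)^2) / sigma^2 with f = exp(-t/tau) is
# quadratic in A, so the minimum is the weighted least-squares solution:
f = np.exp(-t / tau)
A_hat = np.sum(L_obs * f) / np.sum(f * f)
chi2_min = np.sum((L_obs - A_hat * f) ** 2) / sigma**2
```

    Nonlinear parameters such as the timescale itself, or the shock-heating terms in the paper, do not admit this shortcut and require iterative χ² minimization over a parameter grid.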

  14. Statistics of dark matter substructure - I. Model and universal fitting functions

    NASA Astrophysics Data System (ADS)

    Jiang, Fangzhou; van den Bosch, Frank C.

    2016-05-01

    We present a new, semi-analytical model describing the evolution of dark matter subhaloes. The model uses merger trees constructed using the method of Parkinson et al. to describe the masses and redshifts of subhaloes at accretion, which are subsequently evolved using a simple model for the orbit-averaged mass-loss rates. The model is extremely fast, treats subhaloes of all orders, accounts for scatter in orbital properties and halo concentrations, uses a simple recipe to convert subhalo mass to maximum circular velocity, and considers subhalo disruption. The model is calibrated to accurately reproduce the average subhalo mass and velocity functions in numerical simulations. We demonstrate that, on average, the mass fraction in subhaloes is tightly correlated with the `dynamical age' of the host halo, defined as the number of halo dynamical times that have elapsed since its formation. Using this relation, we present universal fitting functions for the evolved and unevolved subhalo mass and velocity functions that are valid for a broad range in host halo mass, redshift and Λ cold dark matter cosmology.

  15. Optimized aerodynamic design process for subsonic transport wing fitted with winglets. [wind tunnel model

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.

    1979-01-01

    The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.

  16. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis that assumes a linear effect of each covariate on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect was approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software, with the significance level set at 0.05. Results: The penalized spline method, with AIC used to select the smoothing parameter, had consistently lower mean square error than the other methods. The same result was obtained with real data. Conclusion: The penalized spline smoothing method, with AIC for smoothing parameter selection, was more accurate than the other methods in evaluating the relation between a covariate and the log hazard function. PMID:27041809

  17. Ploidy frequencies in plants with ploidy heterogeneity: fitting a general gametic model to empirical population data

    PubMed Central

    Suda, Jan; Herben, Tomáš

    2013-01-01

    Genome duplication (polyploidy) is a recurrent evolutionary process in plants, often conferring instant reproductive isolation and thus potentially leading to speciation. Outcome of the process is often seen in the field as different cytotypes co-occur in many plant populations. Failure of meiotic reduction during gametogenesis is widely acknowledged to be the main mode of polyploid formation. To get insight into its role in the dynamics of polyploidy generation under natural conditions, and coexistence of several ploidy levels, we developed a general gametic model for diploid–polyploid systems. This model predicts equilibrium ploidy frequencies as functions of several parameters, namely the unreduced gamete proportions and fertilities of higher ploidy plants. We used data on field ploidy frequencies for 39 presumably autopolyploid plant species/populations to infer numerical values of the model parameters (either analytically or using an optimization procedure). With the exception of a few species, the model fit was very high. The estimated proportions of unreduced gametes (median of 0.0089) matched published estimates well. Our results imply that conditions for cytotype coexistence in natural populations are likely to be less restrictive than previously assumed. In addition, rather simple models show sufficiently rich behaviour to explain the prevalence of polyploids among flowering plants. PMID:23193129

  18. Electrically detected magnetic resonance modeling and fitting: An equivalent circuit approach

    SciTech Connect

    Leite, D. M. G.; Batagin-Neto, A.; Nunes-Neto, O.; Gómez, J. A.; Graeff, C. F. O.

    2014-01-21

    The physics of electrically detected magnetic resonance (EDMR) quadrature spectra is investigated. An equivalent circuit model is proposed in order to retrieve crucial information in a variety of different situations. This model allows the discrimination and determination of spectroscopic parameters associated to distinct resonant spin lines responsible for the total signal. The model considers not just the electrical response of the sample but also features of the measuring circuit and their influence on the resulting spectral lines. As a consequence, from our model, it is possible to separate different regimes, which depend basically on the modulation frequency and the RC constant of the circuit. In what is called the high frequency regime, it is shown that the sign of the signal can be determined. Recent EDMR spectra from Alq₃ based organic light emitting diodes, as well as from a-Si:H reported in the literature, were successfully fitted by the model. Accurate values of g-factor and linewidth of the resonant lines were obtained.

  19. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
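
    The filtering step can be illustrated with a toy Kalman filter that tracks the in-phase and quadrature amplitudes of a single wave of known frequency from noisy scalar observations. The two-dimensional state, wave parameters, and noise levels below are made up and far simpler than the basin-scale wave model in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
omega = 2 * np.pi / 60.0                     # known wave frequency
a_true, b_true, r = 3.0, -1.0, 0.5           # amplitudes and obs. noise std

t = np.arange(200.0)
y = a_true * np.cos(omega * t) + b_true * np.sin(omega * t) \
    + rng.normal(0, r, t.size)

x = np.zeros(2)                 # state: in-phase/quadrature amplitudes
P = np.eye(2) * 10.0            # state error covariance
Q = np.eye(2) * 1e-6            # small process noise (slowly varying wave)
for ti, yi in zip(t, y):
    H = np.array([np.cos(omega * ti), np.sin(omega * ti)])  # obs. operator
    P = P + Q                                 # forecast step (static state)
    S = H @ P @ H + r**2                      # innovation variance
    K = P @ H / S                             # Kalman gain
    x = x + K * (yi - H @ x)                  # combine forecast and obs.
    P = P - np.outer(K, H @ P)                # update error covariance
```

    As in the paper, each update weights the model forecast against the new observation according to their respective error covariances.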

  20. MEMLET: An Easy-to-Use Tool for Data Fitting and Model Comparison Using Maximum-Likelihood Estimation.

    PubMed

    Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael

    2016-07-26

    We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. PMID:27463130
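
    Feature 2 (dead-time handling) can be illustrated for a single-exponential dwell-time distribution, where truncation at the dead time renormalizes the PDF and the MLE has a closed form. This is a numpy sketch with made-up rate and dead time, not MEMLET's MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(0)
k_true, t_min = 5.0, 0.05   # true rate (1/s) and instrument dead time (s)

# Simulate dwell times and discard events shorter than the dead time.
t = rng.exponential(1.0 / k_true, 20000)
t = t[t >= t_min]

# For an exponential truncated at t_min, the PDF renormalizes to
# f(t) = k * exp(-k * (t - t_min)); maximizing the log-likelihood gives:
k_mle = 1.0 / (t.mean() - t_min)

# Ignoring the dead time when fitting biases the rate estimate low:
k_naive = 1.0 / t.mean()
```

    For multiexponential fits the truncated likelihood no longer has a closed-form maximum, which is where a numerical MLE tool of this kind earns its keep.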

  1. A new fit-for-purpose model testing framework: Decision Crash Tests

    NASA Astrophysics Data System (ADS)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have noted that a good standard framework for model testing, the Klemeš Crash Tests (KCTs), the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) renamed as KCTs, has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing whether the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests, or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions, and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model-building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building

  2. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters.

    PubMed

    Hui, Kerwin; Chai, Jeng-Da

    2016-01-28

    By incorporating the nonempirical strongly constrained and appropriately normed (SCAN) semilocal density functional [J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems and noncovalent interactions. In particular, SCAN0-2, which includes about 79% of Hartree-Fock exchange and 50% of second-order Møller-Plesset correlation, is shown to be reliably accurate for a very diverse range of applications, such as thermochemistry, kinetics, noncovalent interactions, and self-interaction problems. PMID:26827209

  3. Strain estimation in 3D by fitting linear and planar data to the March model

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Talbot, Christopher J.

    2016-08-01

    The probability density function associated with the March model is derived and used in a maximum likelihood method to estimate the best-fit distribution and 3D strain parameters for a given set of linear or planar data. Typically it is assumed that in the initial (pre-strain) state linear or planar data are uniformly distributed on the sphere, which means the number of strain parameters to be estimated needs to be reduced so that the numerical technique succeeds. Essentially this requires that the data are rotated into a suitable reference frame prior to analysis. The method has been applied to a suitable example from the Dalradian of SW Scotland, and the results obtained are consistent with those from an independent method of strain analysis. Despite March theory having been incorporated deep into the fabric of geological strain analysis, its full potential as a simple, direct 3D strain analytical tool has not been realized. The method developed here may help remedy this situation.

  4. A 3D boundary-fitted barotropic hydrodynamic model for the New York Harbor region

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, S.

    2005-11-01

    A three-dimensional barotropic hydrodynamic model application to the New York Harbor Region is performed using the Boundary-Fitted HYDROdynamic model (BFHYDRO). The model forcing functions consist of surface elevations along the open boundaries, hourly winds, and fresh water flows from the rivers and sewage flows. A comprehensive skill assessment of the model predictions is done using observed surface elevations and three-dimensional currents. The model-predicted surface elevations compare well with the observed surface elevations at four stations. Mean errors in the model-predicted surface elevations are less than 4% and correlation coefficients exceed 0.985. Model-predicted three-dimensional currents at Verrazano Narrows show excellent comparison with the observations, with mean errors less than 11% and correlation coefficients exceeding 0.960. Model-predicted three-dimensional currents at Bergen Point compare well with the observations, with mean errors less than 15% and correlation coefficients exceeding 0.897. The surface elevation amplitudes and phases of the principal tidal constituents at nine tidal stations, obtained from a harmonic analysis of a 60-day simulation compare well with the observed data. The predicted amplitude and phase of the M2 tidal constituent at these stations are, respectively, within 5 cm and 6° of the observed data. The model-predicted tidal ellipse parameters for the major tidal constituents compare well with the observations at Verrazano Narrows and Bergen Point. The model-predicted along channel sub-tidal currents also compare well with the observations. The semi-diurnal tidal ranges and spring and neap tidal cycles of the surface elevations and currents are well reproduced in the model at all stations. The observed currents at Bergen Point were shown to be flood dominant through tidal distortion analysis. The model-predicted currents also showed Newark Bay and Arthur Kill to be flood dominant systems. 
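The skill statistics quoted in the abstract above (mean errors as a percentage and correlation coefficients) reduce to a few lines of NumPy. The error normalization used in the paper is not stated here, so the observed range is assumed; the signals below are synthetic stand-ins for tidal records:

```python
import numpy as np

def skill(observed, predicted):
    """Relative mean error (% of the observed range) and Pearson r."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rme = (np.mean(np.abs(predicted - observed))
           / (observed.max() - observed.min()) * 100.0)
    r = np.corrcoef(observed, predicted)[0, 1]
    return rme, r

t = np.linspace(0.0, 4.0 * np.pi, 200)
obs = np.sin(t)              # synthetic "observed" tidal signal
pred = 0.95 * np.sin(t)      # synthetic "predicted" signal
rme, r = skill(obs, pred)
```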

  5. Secondary Students' Mental Models of Atoms and Molecules: Implications for Teaching Chemistry.

    ERIC Educational Resources Information Center

    Harrison, Allan G.; Treagust, David F.

    1996-01-01

    Examines the reasoning behind views of atoms and molecules held by students (n=48) and investigates how mental models may assist or hamper further instruction in chemistry. Reports that students prefer models of atoms and molecules that depict them as discrete, concrete structures. Recommends that teachers develop student modeling skills and…

  6. Molecular mechanisms of protein aggregation from global fitting of kinetic models.

    PubMed

    Meisl, Georg; Kirkegaard, Julius B; Arosio, Paolo; Michaels, Thomas C T; Vendruscolo, Michele; Dobson, Christopher M; Linse, Sara; Knowles, Tuomas P J

    2016-02-01

    The elucidation of the molecular mechanisms by which soluble proteins convert into their amyloid forms is a fundamental prerequisite for understanding and controlling disorders that are linked to protein aggregation, such as Alzheimer's and Parkinson's diseases. However, because of the complexity associated with aggregation reaction networks, the analysis of kinetic data of protein aggregation to obtain the underlying mechanisms represents a complex task. Here we describe a framework, using quantitative kinetic assays and global fitting, to determine and to verify a molecular mechanism for aggregation reactions that is compatible with experimental kinetic data. We implement this approach in a web-based software, AmyloFit. Our procedure starts from the results of kinetic experiments that measure the concentration of aggregate mass as a function of time. We illustrate the approach with results from the aggregation of the β-amyloid (Aβ) peptides measured using thioflavin T, but the method is suitable for data from any similar kinetic experiment measuring the accumulation of aggregate mass as a function of time; the input data are in the form of a tab-separated text file. We also outline general experimental strategies and practical considerations for obtaining kinetic data of sufficient quality to draw detailed mechanistic conclusions, and the procedure starts with instructions for extensive data quality control. For the core part of the analysis, we provide an online platform (http://www.amylofit.ch.cam.ac.uk) that enables robust global analysis of kinetic data without the need for extensive programming or detailed mathematical knowledge. The software automates repetitive tasks and guides users through the key steps of kinetic analysis: determination of constraints to be placed on the aggregation mechanism based on the concentration dependence of the aggregation reaction, choosing from several fundamental models describing assembly into linear aggregates and
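The global-fitting idea above — fitting all kinetic traces simultaneously with shared mechanistic parameters — can be sketched as follows. The logistic curve is a deliberately crude stand-in for AmyloFit's actual nucleation-polymerization models, and all parameter values are invented:

```python
import numpy as np
from scipy.optimize import least_squares

def mass(t, k, t_half):
    """Hypothetical aggregate-mass curve (logistic stand-in model)."""
    return 1.0 / (1.0 + np.exp(-k * (t - t_half)))

def residuals(params, datasets):
    # One shared rate k across all traces, one t_half per trace,
    # mimicking a global fit over monomer concentrations.
    k, t_halves = params[0], params[1:]
    return np.concatenate([mass(t, k, th) - m
                           for (t, m), th in zip(datasets, t_halves)])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
datasets = [(t, mass(t, 1.5, th) + 0.01 * rng.standard_normal(t.size))
            for th in (3.0, 5.0)]
fit = least_squares(residuals, x0=[1.0, 2.0, 6.0], args=(datasets,))
k_fit = fit.x[0]
```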

  7. A Cautionary Note on Using G[squared](dif) to Assess Relative Model Fit in Categorical Data Analysis

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert; Cai, Li

    2006-01-01

    The likelihood ratio test statistic G[squared](dif) is widely used for comparing the fit of nested models in categorical data analysis. In large samples, this statistic is distributed as a chi-square with degrees of freedom equal to the difference in degrees of freedom between the tested models, but only if the least restrictive model is correctly…
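The comparison itself is mechanical once the two fit statistics are in hand; the numbers below are invented for illustration:

```python
from scipy.stats import chi2

def g2_dif_test(g2_restricted, g2_general, df_restricted, df_general):
    """Likelihood-ratio comparison of nested models: G2(dif) is
    asymptotically chi-square with df equal to the df difference."""
    g2_dif = g2_restricted - g2_general
    df_dif = df_restricted - df_general
    p = chi2.sf(g2_dif, df_dif)
    return g2_dif, df_dif, p

# Hypothetical fit statistics for a restricted vs. a general model.
g2_dif, df_dif, p = g2_dif_test(30.2, 21.7, 12, 9)
```

As the abstract notes, the chi-square reference distribution is only valid when the least restrictive model is correctly specified.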

  8. Applying the Bollen-Stine Bootstrap for Goodness-of-Fit Measures to Structural Equation Models with Missing Data.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2002-01-01

Proposed a method for extending the Bollen-Stine bootstrap model (K. Bollen and R. Stine, 1992) fit to structural equation models with missing data. Developed a Statistical Analysis System macro program to implement this procedure, and assessed its usefulness in a simulation. The new method yielded model rejection rates close to the nominal 5%…
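The underlying Bollen-Stine idea — rescaling the data so that the sample covariance equals the model-implied covariance, making the null hypothesis hold exactly before resampling — can be sketched for complete data as follows (the model-implied matrix here is invented):

```python
import numpy as np

def _sqrtm_sym(a):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def bollen_stine_transform(y, sigma_model):
    """Rescale centered data so its covariance equals the model-implied
    covariance; bootstrap samples are then drawn from the result."""
    yc = y - y.mean(axis=0)
    s = np.cov(yc, rowvar=False)
    s_inv_half = np.linalg.inv(_sqrtm_sym(s))
    return yc @ s_inv_half @ _sqrtm_sym(sigma_model)

rng = np.random.default_rng(1)
y = rng.standard_normal((200, 3))
sigma0 = np.array([[1.0, 0.3, 0.0],
                   [0.3, 1.0, 0.3],
                   [0.0, 0.3, 1.0]])   # hypothetical model-implied covariance
z = bollen_stine_transform(y, sigma0)
```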

  9. On the Model-Based Bootstrap with Missing Data: Obtaining a "P"-Value for a Test of Exact Fit

    ERIC Educational Resources Information Center

    Savalei, Victoria; Yuan, Ke-Hai

    2009-01-01

    Evaluating the fit of a structural equation model via bootstrap requires a transformation of the data so that the null hypothesis holds exactly in the sample. For complete data, such a transformation was proposed by Beran and Srivastava (1985) for general covariance structure models and applied to structural equation modeling by Bollen and Stine…

  10. A Technique for Estimating Distinctive Asperity Source Models by Waveform Fitting

    NASA Astrophysics Data System (ADS)

    Matsushima, S.; Kawase, H.; Sato, T.; Graves, R. W.

    2001-12-01

For predicting near-fault strong motion, it is important to adequately evaluate the heterogeneity of the slip distribution of the source rupture process as well as the effects of the complex subsurface geology. Since the characteristics of pulse waves derived from forward rupture directivity effects are significantly affected by the size and the slip velocity function of the asperities, it is necessary to evaluate these parameters accurately (Matsushima and Kawase, 1999). In this study, we developed a technique for estimating the rupture process assuming distinctive asperities by waveform fitting. In order to take into account the 3-D subsurface geology in the Green's functions, we used 3-D reciprocal Green's functions (RGFs) calculated using the methodology of Graves and Wald (2001). We assumed that the fault geometry and the hypocenter were given, and that the asperity to be estimated was rectangular and on the fault plane. We also assumed that the slip is concentrated only on the asperity. The idea of this technique was as follows. First we calculated strong motions at observation sites using the RGFs for a given range of parameters. Then we searched for the best-fitting case using a grid-search technique (Sato et al., 1998). There were eight parameters: the location of the asperity on the fault plane (X0, Y0), the size of the asperity (L, W), the amplitude (Vd), duration (td), and decay shape parameter (α) of the slip velocity function, and the rake angle (λ). We assumed that the rise time of the slip velocity function was 0.06 seconds and that it decays proportionally to exp(-αt). The initiation point of the asperity was the closest point to the hypocenter. Numerical experiments showed that we can resolve the asperity model fairly well with good stability. We are planning to extend this technique to multiple asperities and to estimate asperity models for actual earthquakes.
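The grid-search step can be sketched generically: synthesize waveforms over a parameter grid and keep the combination with the smallest misfit. The pulse shape and the two searched parameters below are hypothetical stand-ins for the eight-parameter search described above:

```python
import numpy as np
from itertools import product

t = np.linspace(0.0, 5.0, 500)

def synthetic(vd, td):
    """Hypothetical synthetic waveform: a decaying slip-velocity pulse."""
    return vd * t * np.exp(-t / td)

observed = synthetic(2.0, 0.8)      # pretend "observed" record

# Brute-force grid search over (Vd, td), minimizing L2 waveform misfit.
vd_grid = np.linspace(0.5, 3.0, 26)
td_grid = np.linspace(0.2, 1.5, 27)
best = min(product(vd_grid, td_grid),
           key=lambda p: np.sum((synthetic(*p) - observed) ** 2))
```

In the real problem the synthetics would come from the precomputed reciprocal Green's functions rather than a closed-form pulse.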

  11. The Kunming CalFit study: modeling dietary behavioral patterns using smartphone data.

    PubMed

    Seto, Edmund; Hua, Jenna; Wu, Lemuel; Bestick, Aaron; Shia, Victor; Eom, Sue; Han, Jay; Wang, May; Li, Yan

    2014-01-01

Human behavioral interventions aimed at improving health can benefit from objective wearable sensor data and mathematical models. Smartphone-based sensing is particularly practical for monitoring behavioral patterns because smartphones are fairly common, are carried by individuals throughout their daily lives, offer a variety of sensing modalities, and can facilitate various forms of user feedback for intervention studies. We describe our findings from a smartphone-based study, in which an Android-based application we developed called CalFit was used to collect information related to young adults' dietary behaviors. In addition to monitoring dietary patterns, we were interested in understanding contextual factors related to when and where an individual eats, as well as how their dietary intake relates to physical activity (which creates energy demand) and psychosocial stress. Twelve participants were asked to use CalFit to record videos of their meals over two 1-week periods, which were translated into nutrient intake by trained dietitians. During this same period, triaxial accelerometry was used to assess each subject's energy expenditure, and GPS was used to record time-location patterns. Ecological momentary assessment was also used to prompt subjects to respond to questions on their phone about their psychological state. The GPS data were processed through a web service we developed called Foodscoremap that is based on the Google Places API to characterize food environments that subjects were exposed to, which may explain and influence dietary patterns. Furthermore, we describe a modeling framework that incorporates all of this information to dynamically infer behavioral patterns that may be used for future intervention studies. PMID:25571578
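Stub anchor — see combined edit below.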

  12. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    NASA Astrophysics Data System (ADS)

    Cantó, J.; Curiel, S.; Martínez-Gómez, E.

    2009-07-01

Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, in twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method or more sophisticated methods based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets from a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm to the optimization of some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
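A minimal sketch of such an asexual scheme — keep the fittest individuals and produce offspring by mutation alone, with no encoding and no crossover. The population size, elite count, and mutation-cooling factor are illustrative choices, not the authors':

```python
import numpy as np

def aga_maximize(f, bounds, pop_size=40, n_best=5, generations=60, seed=0):
    """Asexual genetic algorithm: new generations are mutated copies
    (offspring) of the current best individuals."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    scale = (hi - lo) / 4.0
    for _ in range(generations):
        best = pop[np.argsort([f(x) for x in pop])[-n_best:]]
        parents = best[rng.integers(n_best, size=pop_size - n_best)]
        offspring = np.clip(parents + rng.normal(0.0, scale, parents.shape),
                            lo, hi)
        pop = np.vstack([best, offspring])   # elitism: best survive intact
        scale *= 0.9                         # tighten the search each round
    return max(pop, key=f)

# Maximize a smooth 2-D test function with optimum at (1, -2).
x_opt = aga_maximize(lambda x: -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2),
                     bounds=[(-5, 5), (-5, 5)])
```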

  13. Ignoring imperfect detection in biological surveys is dangerous: a response to 'fitting and interpreting occupancy models'.

    PubMed

    Guillera-Arroita, Gurutzeta; Lahoz-Monfort, José J; MacKenzie, Darryl I; Wintle, Brendan A; McCarthy, Michael A

    2014-01-01

In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating in detail their arguments, using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting to the naïve occupancy estimator in those instances where hierarchical occupancy models do not perform well does not provide a satisfactory solution. The aim should instead be to achieve better estimation, by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, considering model extensions where appropriate.
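The confounding of occupancy and detection is easy to demonstrate by simulation: with K visits, the naïve estimator recovers roughly ψ(1 − (1 − p)^K) rather than ψ, while the basic single-season occupancy likelihood separates the two. A sketch with made-up parameter values:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
psi_true, p_true, n_sites, n_visits = 0.6, 0.3, 500, 5

occupied = rng.random(n_sites) < psi_true
detections = rng.binomial(n_visits, p_true * occupied)

naive = np.mean(detections > 0)   # confounds occupancy with detection

def nll(params):
    """Negative log-likelihood of the single-season occupancy model."""
    psi, p = params
    lik = np.where(detections > 0,
                   psi * p ** detections * (1 - p) ** (n_visits - detections),
                   psi * (1 - p) ** n_visits + (1 - psi))
    return -np.sum(np.log(lik))

psi_hat, p_hat = minimize(nll, x0=[0.5, 0.5],
                          bounds=[(1e-6, 1 - 1e-6)] * 2).x
```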

  14. Development of a phenomenological model for coal slurry atomization

    SciTech Connect

    Dooher, J.P.

    1995-11-01

Highly concentrated suspensions of coal particles in water or alternate fluids appear to have a wide range of applications for energy production. For enhanced implementation of coal slurry fuel technology, an understanding of coal slurry atomization as a function of coal and slurry properties for specific mechanical configurations of nozzle atomizers should be developed.

  15. Model inversion by parameter fit using NN emulating the forward model: evaluation of indirect measurements.

    PubMed

    Schiller, Helmut

    2007-05-01

The use of inverse models to derive parameters of interest from measurements is widespread in science and technology. The operational use of many inverse models became feasible only through emulation of the inverse model via a neural net (NN). This paper shows how NNs can be used to improve inversion accuracy by minimizing the sum of squared errors. The procedure is very fast, as it takes advantage of the Jacobian, which is a byproduct of the NN calculation. An example from remote sensing is shown. It is also possible to take into account a non-diagonal covariance matrix of the measurement to derive the covariance matrix of the retrieved parameters.
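The Jacobian-accelerated least-squares refinement amounts to Gauss-Newton iteration. In the sketch below an analytic toy model stands in for the trained NN emulator (whose Jacobian would come essentially for free from backpropagation):

```python
import numpy as np

# Stand-in forward model y = F(x) and its Jacobian, playing the role
# of an NN emulator of the forward model.
def forward(x):
    return np.array([x[0] * np.exp(-x[1]),
                     x[0] * np.exp(-2.0 * x[1]),
                     x[0]])

def jacobian(x):
    e1, e2 = np.exp(-x[1]), np.exp(-2.0 * x[1])
    return np.array([[e1, -x[0] * e1],
                     [e2, -2.0 * x[0] * e2],
                     [1.0, 0.0]])

def gauss_newton(y_obs, x0, n_iter=20):
    """Minimize the sum of squared residuals using the model Jacobian."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        r = forward(x) - y_obs
        j = jacobian(x)
        x = x - np.linalg.solve(j.T @ j, j.T @ r)
    return x

x_true = np.array([2.0, 0.5])
x_hat = gauss_newton(forward(x_true), x0=[1.0, 1.0])
```

A non-diagonal measurement covariance would enter by weighting the residuals and Jacobian with its inverse square root.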

  16. ATOMIC AND MOLECULAR PHYSICS: Model Potential Calculations of Oscillator Strength Spectra of Rydberg Li Atoms in External Fields

    NASA Astrophysics Data System (ADS)

    Meng, Hui-Yan; Shi, Ting-Yun

    2009-08-01

    By combining the B-spline basis set with model potential (B-spline + MP), we present oscillator strength spectra of Rydberg Li atoms in external fields. The photoabsorption spectra are analyzed. Over the narrow energy ranges considered in this paper, the structure of the spectra can be independent of the initial state chosen for a given atom. Our results are in good agreement with previous high-precision experimental data and theoretical calculations, where the R-matrix approach together with multichannel quantum defect theory (R-matrix+MQDT) was used. It is suggested that the present methods can be applied to deal with the oscillator strength spectra of Rydberg atoms in crossed electric and magnetic fields.

  17. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    PubMed

    Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
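Error statistics such as "<5 km from the corresponding interpolated GPS position" reduce to great-circle distances between paired coordinates; a sketch with invented positions:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between coordinate arrays (degrees)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

# Hypothetical SSM-estimated vs. interpolated-GPS positions (lat, lon).
est = np.array([[57.10, -6.20], [57.15, -6.30], [57.30, -6.10]])
gps = np.array([[57.12, -6.21], [57.14, -6.28], [57.25, -6.05]])
err = haversine_km(est[:, 0], est[:, 1], gps[:, 0], gps[:, 1])
frac_within_5km = np.mean(err < 5.0)
```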

  18. Assessing Performance of Bayesian State-Space Models Fit to Argos Satellite Telemetry Locations Processed with Kalman Filtering

    PubMed Central

    Silva, Mónica A.; Jonsen, Ian; Russell, Deborah J. F.; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F.

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to “true” GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6±5.6 km) was nearly half that of LS estimates (11.6±8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales’ behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates. PMID:24651252

  19. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    PubMed

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking.
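The ABC component can be illustrated in its simplest rejection form: draw a rate from the prior, simulate trait data, and accept draws whose summary statistic matches the observed one. The star-phylogeny setting below (unit branches, so tip values are i.i.d. normal with variance equal to the Brownian rate) is a deliberate simplification of MECCA's tree-based ABC-MCMC:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_tips(sigma2, n_tips):
    """Brownian trait evolution on a star phylogeny with unit branches."""
    return rng.normal(0.0, np.sqrt(sigma2), n_tips)

n_tips, sigma2_true = 200, 2.0
observed = simulate_tips(sigma2_true, n_tips)
s_obs = observed.var()                       # summary statistic

accepted = []
for _ in range(20000):
    sigma2 = rng.uniform(0.1, 10.0)          # draw rate from the prior
    s_sim = simulate_tips(sigma2, n_tips).var()
    if abs(s_sim - s_obs) < 0.1:             # accept if summaries match
        accepted.append(sigma2)

posterior_mean = np.mean(accepted)
```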

  20. Modeling and fitting protein-protein complexes to predict change of binding energy

    PubMed Central

    Dourado, Daniel F.A.R.; Flores, Samuel Coulbourn

    2016-01-01

It is possible to accurately and economically predict change in protein-protein interaction energy upon mutation (ΔΔG), when a high-resolution structure of the complex is available. This is of growing usefulness for design of high-affinity or otherwise modified binding proteins for therapeutic, diagnostic, industrial, and basic science applications. Recently the field has begun to pursue ΔΔG prediction for homology modeled complexes, but so far this has worked mostly for cases of high sequence identity. If the interacting proteins have been crystallized in free (uncomplexed) form, in a majority of cases it is possible to find a structurally similar complex which can be used as the basis for template-based modeling. We describe how to use MMB to create such models, and then use them to predict ΔΔG, using a dataset consisting of free target structures, co-crystallized template complexes with sequence identity with respect to the targets as low as 44%, and experimental ΔΔG measurements. We obtain similar results by fitting to a low-resolution Cryo-EM density map. Results suggest that other structural constraints may lead to a similar outcome, making the method even more broadly applicable. PMID:27173910

  1. A simple periodic-forced model for dengue fitted to incidence data in Singapore.

    PubMed

    Andraud, Mathieu; Hens, Niel; Beutels, Philippe

    2013-07-01

    Dengue is the world's major arbovirosis and therefore an important public health concern in endemic areas. The availability of weekly reports of dengue cases in Singapore offers the opportunity to analyze the transmission dynamics and the impact of vector control strategies. Based on a previous model studying the impact of vector control strategies in Singapore during the 2005 outbreak, a simple vector-host model accounting for seasonal fluctuation in vector density was developed to estimate the parameters governing the vector population dynamics using dengue fever incidence data from August 2003 to December 2007. The impact of vector control, which consisted principally of a systematic removal of actual and potential breeding sites during a six-week period in 2005, was also investigated. Although our approach does not account for the complex life cycle of the vector, the good fit between data and model outputs showed that the impact of seasonality on the transmission dynamics is highly important. Moreover, the periodic fluctuations of the vector population were found in phase with temperature variations, suggesting a strong climate effect on the vector density and, in turn, on the transmission dynamics.
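The seasonal forcing can be sketched with a minimal SIR-type model in which transmission is modulated sinusoidally as a crude proxy for the vector-density fluctuation; the parameter values are illustrative, not the fitted Singapore values:

```python
import numpy as np
from scipy.integrate import odeint

def seasonal_sir(y, t, beta0, delta, gamma, period=365.0):
    """SIR host model with sinusoidally forced transmission, standing in
    for seasonal fluctuation in vector density."""
    s, i, r = y
    beta = beta0 * (1.0 + delta * np.cos(2.0 * np.pi * t / period))
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return [ds, di, dr]

t = np.linspace(0.0, 2.0 * 365.0, 1000)          # two years, daily scale
sol = odeint(seasonal_sir, y0=[0.99, 0.01, 0.0], t=t,
             args=(0.3, 0.4, 1.0 / 7.0))          # beta0, delta, gamma
s_final = sol[-1, 0]
```

A fitted vector-host version would add explicit susceptible/infectious vector compartments with the periodic term in the vector recruitment rate.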

  2. Multipole correction of atomic monopole models of molecular charge distribution. I. Peptides

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Keller, D. A.; Ornstein, R. L.; Rein, R.

    1993-01-01

The defects in atomic monopole models of molecular charge distribution have been analyzed for several model-blocked peptides and compared with accurate quantum chemical values. The results indicate that the angular characteristics of the molecular electrostatic potential around functional groups capable of forming hydrogen bonds can be considerably distorted within various models relying upon isotropic atomic charges only. It is shown that these defects can be corrected by augmenting the atomic point charge models by cumulative atomic multipole moments (CAMMs). Alternatively, sets of off-center atomic point charges could be automatically derived from respective multipoles, providing approximately equivalent corrections. For the first time, correlated atomic multipoles have been calculated for N-acetyl, N'-methylamide-blocked derivatives of glycine, alanine, cysteine, threonine, leucine, lysine, and serine using the MP2 method. The role of correlation effects in the peptide molecular charge distribution is discussed.

  3. Operation of the computer model for direct atomic oxygen exposure of Earth satellites

    NASA Astrophysics Data System (ADS)

    Bourassa, R. J.; Gruenbaum, P. E.; Gillis, J. R.; Hargraves, C. R.

    1995-08-01

One of the primary causes of material degradation in low Earth orbit (LEO) is exposure to atomic oxygen. When atomic oxygen atoms collide with an orbiting spacecraft, the relative velocity is 7 to 8 km/sec and the collision energy is 4 to 5 eV per atom. Under these conditions, atomic oxygen may initiate a number of chemical and physical reactions with exposed materials. These reactions contribute to material degradation, surface erosion, and contamination. Interpretation of these effects on materials and the design of space hardware to withstand on-orbit conditions requires quantitative knowledge of the atomic oxygen exposure environment. Atomic oxygen flux is a function of orbit altitude, the orientation of the orbit plane to the Sun, solar and geomagnetic activity, and the angle between exposed surfaces and the spacecraft heading. We have developed a computer model to predict the atomic oxygen exposure of spacecraft in low Earth orbit. The application of this computer model is discussed.

  4. Operation of the computer model for direct atomic oxygen exposure of Earth satellites

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Gruenbaum, P. E.; Gillis, J. R.; Hargraves, C. R.

    1995-01-01

One of the primary causes of material degradation in low Earth orbit (LEO) is exposure to atomic oxygen. When atomic oxygen atoms collide with an orbiting spacecraft, the relative velocity is 7 to 8 km/sec and the collision energy is 4 to 5 eV per atom. Under these conditions, atomic oxygen may initiate a number of chemical and physical reactions with exposed materials. These reactions contribute to material degradation, surface erosion, and contamination. Interpretation of these effects on materials and the design of space hardware to withstand on-orbit conditions requires quantitative knowledge of the atomic oxygen exposure environment. Atomic oxygen flux is a function of orbit altitude, the orientation of the orbit plane to the Sun, solar and geomagnetic activity, and the angle between exposed surfaces and the spacecraft heading. We have developed a computer model to predict the atomic oxygen exposure of spacecraft in low Earth orbit. The application of this computer model is discussed.

  5. Catalytic Efficiency Is a Function of How Rhodium(I) (5 + 2) Catalysts Accommodate a Conserved Substrate Transition State Geometry: Induced Fit Model for Explaining Transition Metal Catalysis

    PubMed Central

    Mustard, Thomas J. L.; Wender, Paul A.; Cheong, Paul Ha-Yeon

    2015-01-01

    The origins of differential catalytic reactivities of four Rh(I) catalysts and their derivatives in the (5 + 2) cycloaddition reaction were elucidated using density functional theory. Computed free energy spans are in excellent agreement with known experimental rates. For every catalyst, the substrate geometries in the transition state remained constant (<0.1 Å RMSD for atoms involved in bond-making and -breaking processes). Catalytic efficiency is shown to be a function of how well the catalyst accommodates the substrate transition state geometry and electronics. This shows that the induced fit model for explaining biological catalysis may be relevant to transition metal catalysis. This could serve as a general model for understanding the origins of efficiencies of catalytic reactions. PMID:26146588
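The link between a computed free-energy span and an observed turnover rate is the Eyring relation; a small sketch with hypothetical spans (note that a ~1.4 kcal/mol difference at 298 K corresponds to roughly a tenfold rate difference):

```python
import math

KB_OVER_H = 2.083661912e10   # Boltzmann/Planck constant ratio, s^-1 K^-1
R_KCAL = 1.987204e-3         # gas constant, kcal mol^-1 K^-1

def eyring_rate(dg_span_kcal, temp_k=298.15):
    """Turnover rate implied by a free-energy span via the Eyring equation."""
    return KB_OVER_H * temp_k * math.exp(-dg_span_kcal / (R_KCAL * temp_k))

# Hypothetical spans (kcal/mol) for two catalysts, not the paper's values.
ratio = eyring_rate(20.0) / eyring_rate(21.4)
```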

  6. Fitting host-parasitoid models with CV2 > 1 using hierarchical generalized linear models.

    PubMed Central

    Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K

    2000-01-01

    The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907

  7. Atomic Level Anisotropy in the Electrostatic Modeling of Lone Pairs for a Polarizable Force Field Based on the Classical Drude Oscillator.

    PubMed

    Harder, Edward; Anisimov, Victor M; Vorobyov, Igor V; Lopes, Pedro E M; Noskov, Sergei Y; MacKerell, Alexander D; Roux, Benoît

    2006-11-01

    … Furthermore, addition of anisotropic atomic polarizabilities to the virtual site model allows for precise fitting to the local perturbed QM ESP.

  8. Improving the Ni I atomic model for solar and stellar atmospheric models

    SciTech Connect

    Vieytes, M. C.; Fontenla, J. M. E-mail: johnf@digidyna.com

    2013-06-01

    Neutral nickel (Ni I) is abundant in the solar atmosphere and is one of the important elements that contribute to the emission and absorption of radiation in the spectral range between 1900 and 3900 Å. Previously, the Solar Radiation Physical Modeling (SRPM) models of the solar atmosphere only considered a few levels of this species. Here, we improve the Ni I atomic model by taking into account 61 levels and 490 spectral lines. We compute the populations of these levels in full NLTE using the SRPM code and compare the resulting emerging spectrum with observations. The present atomic model significantly improves the calculation of the solar spectral irradiance at near-UV wavelengths, which is important for Earth atmospheric studies, and particularly for ozone chemistry.

  9. Disentangling effects of induced plant defenses and food quantity on herbivores by fitting nonlinear models.

    PubMed

    Morris, W F

    1997-09-01

    Plants can respond to herbivore damage through both broad-scale (systemic) and localized induced responses. While many studies have quantified the impact of systemic responses on herbivores, measuring the impact of localized changes is difficult because plant tissues that have suffered direct damage may represent both a lower quality and a lower quantity of food. This article uses nonlinear models to disentangle the confounding effects of prior herbivory on food quantity and quality. The first (null) model assumes that herbivore performance is determined only by the quantity of food available to an average herbivore. Modified models allow two distinct effects of damage-induced defenses: an increase in the amount of food each herbivore is required to consume in order to achieve maximum performance and a reduction in the maximum performance even when herbivores are fed ad lib. Maximum likelihood methods were used to fit the models to data from field experiments in which Colorado potato beetle (Leptinotarsa decemlineata) larvae were reared on three varieties of potatoes that had been damaged to varying degrees by adult beetles. Prior damage reduced the mean mass of beetles at pupation, and this effect was due to both a decrease in food quantity and induced changes in food quality. In contrast, beetle survival was affected in some cases by reduced food quantity but showed no responses that could be attributed to induced defenses. I discuss this result in the context of previous studies of induced (mostly systemic) responses in the potato-potato beetle system, and I suggest that detailed studies of particular chemical responses and the proposed method of combining bioassays with quantitative models should be used as complementary approaches in future studies of herbivore-induced defenses in plants.
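As a sketch of the model-comparison strategy (Python with SciPy; the saturating performance curve, parameter values, and data are all invented and are not the article's parameterization): a null, quantity-only model is compared against a model that adds an induced reduction in maximum performance, using maximum likelihood and a likelihood-ratio test.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# invented data: pupal mass vs food quantity Q and prior damage level d
rng = np.random.default_rng(1)
Q = rng.uniform(1, 20, 120)                # food quantity per larva
d = rng.uniform(0, 1, 120)                 # prior damage level
mass = 100.0 * (1 - 0.3 * d) * Q / (4.0 + Q) + rng.normal(0, 3, 120)

def nll(theta, induced):
    """Negative Gaussian log-likelihood for a saturating performance curve."""
    pmax, K = theta[0], abs(theta[1])
    b = theta[2] if induced else 0.0       # induced-defense effect on quality
    sigma = abs(theta[-1])
    mu = pmax * (1 - b * d) * Q / (K + Q)
    return mass.size * np.log(sigma) + 0.5 * np.sum(((mass - mu) / sigma) ** 2)

null = minimize(nll, x0=[80.0, 5.0, 5.0], args=(False,), method="Nelder-Mead")
full = minimize(nll, x0=[80.0, 5.0, 0.1, 5.0], args=(True,), method="Nelder-Mead")
lr = 2.0 * (null.fun - full.fun)           # likelihood-ratio statistic, 1 df
p_value = chi2.sf(lr, df=1)
```

A small p-value indicates that damage degrades food quality beyond what reduced quantity alone explains, which is the kind of disentangling the article describes.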

  10. Nonradial p-modes in the G9.5 giant ɛ Ophiuchi? Pulsation model fits to MOST photometry

    NASA Astrophysics Data System (ADS)

    Kallinger, T.; Guenther, D. B.; Matthews, J. M.; Weiss, W. W.; Huber, D.; Kuschnig, R.; Moffat, A. F. J.; Rucinski, S. M.; Sasselov, D.

    2008-02-01

    The G9.5 giant ɛ Oph shows evidence of radial p-mode pulsations in both radial velocity and luminosity. We re-examine the observed frequencies in the photometry and radial velocities and find a best model fit to 18 of the 21 most significant photometric frequencies. The observed frequencies are matched to both radial and nonradial modes in the best model fit. The small scatter of the frequencies about the model-predicted frequencies indicates that the average lifetimes of the modes could be as long as 10-20 d. The best-fit model itself, constrained only by the observed frequencies, lies within ±1σ of ɛ Oph's position in the HR diagram and the interferometrically determined radius. Based on data from the MOST satellite, a Canadian Space Agency mission jointly operated by Dynacon, Inc., the University of Toronto Institute of Aerospace Studies, and the University of British Columbia, with assistance from the University of Vienna, Austria.

  11. Multiple linear regression models to fit magnitude using rupture length, rupture width, rupture area, and surface displacement

    NASA Astrophysics Data System (ADS)

    Chu, A.; Zhuang, J.

    2015-12-01

    Wells and Coppersmith (1994) used fault data to fit simple linear regression (SLR) models relating moment magnitude to logarithms of fault measurements such as rupture length, rupture width, rupture area, and surface displacement. Our work extends their analyses to multiple linear regression (MLR) models by considering two or more predictors with updated data. Treating the quantitative variables (rupture length, rupture width, rupture area, and surface displacement) as predictors in linear regression models of magnitude, we find that the two-predictor model using rupture area and maximum displacement fits best. The next best alternative predictors are surface length and rupture area. Neither slip type nor slip direction is a significant predictor, as shown by fitting analysis of variance (ANOVA) and analysis of covariance (ANCOVA) models. The corrected Akaike information criterion (Burnham and Anderson, 2002) is used as a model assessment criterion. Comparisons between the simple linear regression models of Wells and Coppersmith (1994) and our multiple linear regression models are presented. Our work uses fault data from Wells and Coppersmith (1994) and new data from Ellsworth (2000), Hanks and Bakun (2002, 2008), Shaw (2013), and the Finite-Source Rupture Model Database (http://equake-rc.info/SRCMOD/, 2015).
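A minimal sketch of predictor selection by corrected AIC (Python/NumPy; the simulated fault data, predictor names, and coefficients are invented for illustration):

```python
import numpy as np

def aicc(y, X):
    """Corrected AIC for an ordinary-least-squares fit with Gaussian errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    p = k + 1                                   # coefficients + error variance
    aic = n * np.log(rss / n) + 2 * p
    return aic + 2 * p * (p + 1) / (n - p - 1)  # small-sample correction

# invented data: magnitude driven by log rupture area and log displacement
rng = np.random.default_rng(2)
n = 80
log_area = rng.normal(3, 1, n)
log_disp = rng.normal(0, 1, n)
log_len = 0.5 * log_area + rng.normal(0, 0.5, n)   # correlated predictor
mag = 4.0 + 0.9 * log_area + 0.3 * log_disp + rng.normal(0, 0.2, n)

ones = np.ones(n)
models = {
    "area":      np.column_stack([ones, log_area]),
    "length":    np.column_stack([ones, log_len]),
    "area+disp": np.column_stack([ones, log_area, log_disp]),
}
scores = {name: aicc(mag, X) for name, X in models.items()}
best = min(scores, key=scores.get)              # lowest AICc wins
```

The small-sample correction term matters here because fault catalogs are short relative to the number of candidate predictors.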

  12. On Eigen's Quasispecies Model, Two-Valued Fitness Landscapes, and Isometry Groups Acting on Finite Metric Spaces.

    PubMed

    Semenov, Yuri S; Novozhilov, Artem S

    2016-05-01

    A two-valued fitness landscape is introduced for the classical Eigen's quasispecies model. This fitness landscape can be considered as a direct generalization of the so-called single- or sharply peaked landscape. A general, non-permutation invariant quasispecies model is studied, and therefore the dimension of the problem is [Formula: see text], where N is the sequence length. It is shown that if the fitness function is equal to [Formula: see text] on a G-orbit A and is equal to w elsewhere, then the mean population fitness can be found as the largest root of an algebraic equation of degree at most [Formula: see text]. Here G is an arbitrary isometry group acting on the metric space of sequences of zeroes and ones of the length N with the Hamming distance. An explicit form of this exact algebraic equation is given in terms of the spherical growth function of the G-orbit A. Motivated by the analysis of the two-valued fitness landscapes, an abstract generalization of Eigen's model is introduced such that the sequences are identified with the points of a finite metric space X together with a group of isometries acting transitively on X. In particular, a simplicial analog of the original quasispecies model is discussed, which can be considered as a mathematical model of the switching of the antigenic variants for some bacteria. PMID:27230609

  13. Coupling of an average-atom model with a collisional-radiative equilibrium model

    SciTech Connect

    Faussurier, G. Blancard, C.; Cossé, P.

    2014-11-15

    We present a method that combines a collisional-radiative equilibrium model with an average-atom model to calculate bound and free electron wavefunctions in hot dense plasmas, taking screening into account. This approach allows us to calculate electrical resistivity and thermal conductivity, as well as pressure, in non-local-thermodynamic-equilibrium plasmas. Illustrations of the method are presented for a dilute titanium plasma.

  14. Evapotranspiration measurement and modeling without fitting parameters in high-altitude grasslands

    NASA Astrophysics Data System (ADS)

    Ferraris, Stefano; Previati, Maurizio; Canone, Davide; Dematteis, Niccolò; Boetti, Marco; Balocco, Jacopo; Bechis, Stefano

    2016-04-01

    Mountain grasslands are important, in part because one sixth of the world population lives in watersheds dominated by snowmelt, and because grasslands provide food to both domestic and wild animals. Global warming will probably accelerate the hydrological cycle and increase drought risk. The combination of measurements, modeling, and remote sensing can provide knowledge of such remote areas (e.g., Brocca et al., 2013). Better knowledge of the water balance can also allow irrigation to be optimized (e.g., Canone et al., 2015). This work builds a model of the water balance in mountain grasslands between 1500 and 2300 m a.s.l. The main input is the digital terrain model, which is more reliable in grasslands than in either woods or the built environment; it drives the spatial variability of shortwave solar radiation. The other atmospheric forcings, namely air temperature, wind, and longwave radiation, are more problematic to estimate. Ad hoc routines have been written to interpolate the hourly meteorological variability in space. The soil hydraulic properties are less variable than in the plains, but estimating soil depth remains an open issue. The soil vertical variability has been modeled taking into account the main processes: soil evaporation, root uptake, and fractured-bedrock percolation. The simulated time series of latent heat flux and soil moisture have been compared with data measured at an eddy covariance station. The results are very good, given that the model has no fitting parameters. The spatial results have been compared with those of a model based on Landsat 7 and 8 data, applied over an area of about 200 square kilometers; the spatial patterns of the two models are in good agreement. Brocca et al. (2013). "Soil moisture estimation in alpine catchments through modelling and satellite observations". Vadose Zone Journal, 12(3), 10 pp. Canone et al. (2015). "Field

  15. Model Order Selection for Short Data: An Exponential Fitting Test (EFT)

    NASA Astrophysics Data System (ADS)

    Quinlan, Angela; Barbot, Jean-Pierre; Larzabal, Pascal; Haardt, Martin

    2006-12-01

    High-resolution methods for estimating signal processing parameters, such as bearing angles in array processing or frequencies in spectral analysis, may be hampered by a poorly selected model order. As classical model order selection methods fail when the number of snapshots available is small, this paper proposes a method for noncoherent sources that continues to work under such conditions while maintaining low computational complexity. For white Gaussian noise and short data, we show that the profile of the ordered noise eigenvalues approximately follows an exponential law. This fact is used to build a recursive algorithm that detects a mismatch between the observed eigenvalue profile and the theoretical noise-only eigenvalue profile, since such a mismatch indicates the presence of a source. Moreover, the proposed method allows the probability of false alarm to be controlled and predefined, which is a crucial point for systems such as radars. Simulation results are provided to show the capabilities of the algorithm.
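A toy illustration of the idea (Python/NumPy; the fixed detection threshold is an invented stand-in for the paper's recursive test with controlled false-alarm probability): fit an exponential law to the smallest sample eigenvalues, assumed to be noise-only, and flag eigenvalues far above the extrapolated profile.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 10, 40                                  # sensors, snapshots (short data)
A = rng.normal(size=(m, 2))                    # mixing for two strong sources
X = A @ (5.0 * rng.normal(size=(2, n))) + rng.normal(size=(m, n))
eig = np.sort(np.linalg.eigvalsh(X @ X.T / n))[::-1]   # descending eigenvalues

# fit a log-linear (i.e., exponential) profile to the q smallest eigenvalues
# and extrapolate it over the whole profile
q = 5
idx = np.arange(m - q, m)
slope, intercept = np.polyfit(idx, np.log(eig[idx]), 1)
predicted = np.exp(intercept + slope * np.arange(m))

# flag eigenvalues far above the extrapolated noise profile as sources;
# the fixed factor 4 is a crude stand-in for the EFT's threshold
n_sources = int(np.sum(eig > 4.0 * predicted))
```

With two strong simulated sources the two dominant eigenvalues stand well above the extrapolated exponential noise profile, which is the mismatch the EFT detects.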

  16. A differential equation for the asymptotic fitness distribution in the Bak-Sneppen model with five species.

    PubMed

    Schlemm, Eckhard

    2015-09-01

    The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3. PMID:26144945
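The steady-state distribution derived analytically above can also be approximated by direct simulation (a Python/NumPy sketch of the standard five-species Bak-Sneppen update on a ring; the step count and burn-in length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5
fitness = rng.random(N)                  # initial uniform fitness values
samples = []
for step in range(50000):
    i = int(np.argmin(fitness))          # least-fit species mutates...
    for j in (i - 1, i, (i + 1) % N):    # ...together with its ring neighbors
        fitness[j] = rng.random()        # (negative index wraps around)
    if step > 1000:                      # discard burn-in
        samples.append(fitness.copy())

dist = np.concatenate(samples)           # empirical steady-state fitness values
mean_fit = dist.mean()
```

Because three of the five values are freshly drawn uniforms at every step, the steady-state mean sits only moderately above 1/2 for N = 5; the sharp threshold behavior quoted above emerges only in the large-N limit.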

  17. Using Item Mean Squares To Evaluate Fit to the Rasch Model.

    ERIC Educational Resources Information Center

    Smith, Richard M.; And Others

    In the mid to late 1970s, considerable research was conducted on the properties of Rasch fit mean squares, resulting in transformations to convert the mean squares into approximate t-statistics. In the late 1980s and the early 1990s, the trend seems to have reversed, with numerous researchers using the untransformed fit mean squares as a means of…

  18. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    PubMed

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. PMID:25038234

  19. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    SciTech Connect

    Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory CT datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. 
Conclusions: The proposed

  20. 'Bubble chamber model' of fast atom bombardment induced processes.

    PubMed

    Kosevich, Marina V; Shelkovsky, Vadim S; Boryak, Oleg A; Orlov, Vadim V

    2003-01-01

    A hypothesis concerning FAB mechanisms, referred to as a 'bubble chamber FAB model', is proposed. This model can provide an answer to the long-standing question as to how fragile biomolecules and weakly bound clusters can survive under high-energy particle impact on liquids. The basis of this model is a simple estimation of saturated vapour pressure over the surface of liquids, which shows that all liquids ever tested by fast atom bombardment (FAB) and liquid secondary ion mass spectrometry (SIMS) were in the superheated state under the experimental conditions applied. The result of the interaction of the energetic particles with superheated liquids is known to be qualitatively different from that with equilibrium liquids. It consists of initiation of local boiling, i.e., in formation of vapour bubbles along the track of the energetic particle. This phenomenon has been extensively studied in the framework of nuclear physics and provides the basis for construction of the well-known bubble chamber detectors. The possibility of occurrence of similar processes under FAB of superheated liquids substantiates a conceptual model of emission of secondary ions suggested by Vestal in 1983, which assumes formation of bubbles beneath the liquid surface, followed by their bursting accompanied by release of microdroplets and clusters as a necessary intermediate step for the creation of molecular ions. The main distinctive feature of the bubble chamber FAB model, proposed here, is that the bubbles are formed not in the space and time-restricted impact-excited zone, but in the nearby liquid as a 'normal' boiling event, which implies that the temperature both within the bubble and in the droplets emerging on its burst is practically the same as that of the bulk liquid sample. This concept can resolve the paradox of survival of intact biomolecules under FAB, since the part of the sample participating in the liquid-gas transition via the bubble mechanism has an ambient temperature

  1. Do We Need Multiple Models of Auditory Verbal Hallucinations? Examining the Phenomenological Fit of Cognitive and Neurological Models

    PubMed Central

    Jones, Simon R.

    2010-01-01

    The causes of auditory verbal hallucinations (AVHs) are still unclear. The evidence for 2 prominent cognitive models of AVHs, one based on inner speech, the other on intrusions from memory, is briefly reviewed. The fit of these models, as well as neurological models, to the phenomenology of AVHs is then critically examined. It is argued that only a minority of AVHs, such as those with content clearly relating to verbalizations experienced surrounding previous trauma, are consistent with cognitive AVHs-as-memories models. Similarly, it is argued that current neurological models are only phenomenologically consistent with a limited subset of AVHs. In contrast, the phenomenology of the majority of AVHs, which involve voices attempting to regulate the ongoing actions of the voice hearer, are argued to be more consistent with inner speech–based models. It is concluded that subcategorizations of AVHs may be necessary, with each underpinned by different neurocognitive mechanisms. The need to study what is termed the dynamic developmental progression of AVHs is also highlighted. Future empirical research is suggested in this area. PMID:18820262

  2. Cost-Sensitive Boosting: Fitting an Additive Asymmetric Logistic Regression Model

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Jie; Mao, Yao-Bin; Wang, Zhi-Quan; Xiang, Wen-Bo

    Conventional machine learning algorithms such as boosting treat all misclassification errors equally, which is inadequate for cost-sensitive classification problems such as object detection. Although many cost-sensitive extensions of boosting that directly modify the weighting strategy of the original algorithms have been proposed, they are heuristic in nature: their effectiveness is supported only by empirical results and lacks sound theoretical analysis. This paper develops a framework, from a statistical insight, that can embody almost all existing cost-sensitive boosting algorithms: fitting an additive asymmetric logistic regression model by stage-wise optimization of certain criteria. Four cost-sensitive versions of boosting algorithms are derived, namely CSDA, CSRA, CSGA, and CSLB, which correspond respectively to Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost, and LogitBoost. Experimental results on face detection show the effectiveness of the proposed learning framework in reducing the cumulative misclassification cost.

  3. Modeling, Simulation and Data Fitting of the Charge Injected Diodes (CID) for SLHC Tracking Applications

    SciTech Connect

    Li, Z.; Eremin, V.; Harkonen, J.; Luukka, P.; Tuominen, E.; Tuovinen, E.; Verbitskaya, E.

    2009-10-27

    Modeling and simulations have been performed for charge injected diodes (CID) for application in the SLHC. MIP-induced current and charges have been calculated for segmented detectors at various radiation fluences, up to the highest SLHC fluence of 1 × 10^16 n_eq/cm^2. Although the main advantage of CID detectors is their virtually full depletion at any radiation fluence at a modest bias voltage (<600 V), simulation of the CID and fitting to the existing data have shown that the CID operation mode also reduces free carrier trapping, resulting in a much higher charge collection at the SLHC fluence than in a standard Si detector. The reduction in free carrier trapping by almost one order of magnitude is due to the fact that the CID mode also pre-fills the traps, making them neutral and inactive in trapping. It has been found that electron traps can be pre-filled by injection of electrons from the n+ contact, and hole traps can be pre-filled by injection of holes from the p+ contact. The CID mode of detector operation can be achieved at a modestly low temperature of around -40 °C, achievable with the proposed CO2 cooling for detector upgrades in the SLHC. High charge collection, comparable to that of 3D-electrode Si detectors, makes the CID Si detector a valuable alternative for SLHC detectors given its much simpler fabrication process.

  4. Regulation of Neutrophil Degranulation and Cytokine Secretion: A Novel Model Approach Based on Linear Fitting

    PubMed Central

    Naegelen, Isabelle; Beaume, Nicolas; Plançon, Sébastien; Schenten, Véronique; Tschirhart, Eric J.; Bréchard, Sabrina

    2015-01-01

    Neutrophils participate in the maintenance of host integrity by releasing various cytotoxic proteins during degranulation. Due to recent advances, a major role has been attributed to neutrophil-derived cytokine secretion in the initiation, exacerbation, and resolution of inflammatory responses. Because the release of neutrophil-derived products orchestrates the action of other immune cells at the infection site and, thus, can contribute to the development of chronic inflammatory diseases, we aimed to investigate in more detail the spatiotemporal regulation of neutrophil-mediated release mechanisms of proinflammatory mediators. Purified human neutrophils were stimulated for different time points with lipopolysaccharide. Cells and supernatants were analyzed by flow cytometry techniques and used to establish secretion profiles of granules and cytokines. To analyze the link between cytokine release and degranulation time series, we propose an original strategy based on linear fitting, which may be used as a guideline, to (i) define the relationship of granule proteins and cytokines secreted to the inflammatory site and (ii) investigate the spatial regulation of neutrophil cytokine release. The model approach presented here aims to predict the correlation between neutrophil-derived cytokine secretion and degranulation and may easily be extrapolated to investigate the relationship between other types of time series of functional processes. PMID:26579547
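A hedged sketch of the linear-fitting idea (Python/NumPy; the time series are invented, not the authors' measurements): regress a cytokine time series on a cumulative degranulation series and use the slope and R² to quantify their coupling.

```python
import numpy as np

# invented time series (hours after LPS stimulation): a cumulative marker of
# degranulation and a cytokine concentration coupled to it
t = np.arange(0.0, 24.0, 2.0)
granule = 1.0 - np.exp(-t / 6.0)                           # degranulation (a.u.)
rng = np.random.default_rng(6)
cytokine = 40.0 * granule + rng.normal(0.0, 1.0, t.size)   # cytokine (pg/ml)

# linear fit of cytokine release against degranulation
slope, intercept = np.polyfit(granule, cytokine, 1)
pred = slope * granule + intercept
r2 = 1.0 - np.sum((cytokine - pred) ** 2) / np.sum((cytokine - cytokine.mean()) ** 2)
```

A high R² suggests the cytokine is released in proportion to degranulation, whereas a poor fit would point to an independent secretion route, which is the distinction the proposed strategy is meant to draw.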

  5. Facultative control of matrix production optimizes competitive fitness in Pseudomonas aeruginosa PA14 biofilm models.

    PubMed

    Madsen, Jonas S; Lin, Yu-Cheng; Squyres, Georgia R; Price-Whelan, Alexa; de Santiago Torio, Ana; Song, Angela; Cornell, William C; Sørensen, Søren J; Xavier, Joao B; Dietrich, Lars E P

    2015-12-01

    As biofilms grow, resident cells inevitably face the challenge of resource limitation. In the opportunistic pathogen Pseudomonas aeruginosa PA14, electron acceptor availability affects matrix production and, as a result, biofilm morphogenesis. The secreted matrix polysaccharide Pel is required for pellicle formation and for colony wrinkling, two activities that promote access to O2. We examined the exploitability and evolvability of Pel production at the air-liquid interface (during pellicle formation) and on solid surfaces (during colony formation). Although Pel contributes to the developmental response to electron acceptor limitation in both biofilm formation regimes, we found variation in the exploitability of its production and necessity for competitive fitness between the two systems. The wild type showed a competitive advantage against a non-Pel-producing mutant in pellicles but no advantage in colonies. Adaptation to the pellicle environment selected for mutants with a competitive advantage against the wild type in pellicles but also caused a severe disadvantage in colonies, even in wrinkled colony centers. Evolution in the colony center produced divergent phenotypes, while adaptation to the colony edge produced mutants with clear competitive advantages against the wild type in this O2-replete niche. In general, the structurally heterogeneous colony environment promoted more diversification than the more homogeneous pellicle. These results suggest that the role of Pel in community structure formation in response to electron acceptor limitation is unique to specific biofilm models and that the facultative control of Pel production is required for PA14 to maintain optimum benefit in different types of communities.

  6. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
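As a toy illustration of the propagation idea (Python/NumPy; a two-level system with invented rates, far simpler than the coupled multilevel equations treated above), a linearized uncertainty estimate for a level population ratio can be checked against Monte Carlo:

```python
import numpy as np

# toy two-level system: population ratio r = n2/n1 = ne*q12 / (A21 + ne*q21)
# (all rate values are invented for illustration)
ne, q12, q21, A21 = 1e4, 1e-8, 2e-8, 1e-2
r = ne * q12 / (A21 + ne * q21)

# linear propagation of 1-sigma fractional uncertainties in A21 and q12
sA, sq = 0.10, 0.05
drdA = -ne * q12 / (A21 + ne * q21) ** 2        # analytic partial derivatives
drdq = ne / (A21 + ne * q21)
sigma_lin = np.hypot(drdA * sA * A21, drdq * sq * q12)

# Monte Carlo check of the linearized estimate
rng = np.random.default_rng(4)
As = A21 * (1 + sA * rng.normal(size=200_000))
qs = q12 * (1 + sq * rng.normal(size=200_000))
sigma_mc = np.std(ne * qs / (As + ne * q21))
```

For small fractional uncertainties the linear and Monte Carlo estimates agree closely, which is why the linear algebraic formulation described above is adequate and fast.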

  7. Model selection and validation of extreme distribution by goodness-of-fit test based on conditional position

    NASA Astrophysics Data System (ADS)

    Abidin, Nahdiya Zainal; Adam, Mohd Bakri

    2014-09-01

    In extreme value theory, an important aspect of model extrapolation is modeling the extreme behavior well, because the choice of extreme value distribution affects the predictions made. Thus model validation, via goodness-of-fit (GoF) tests, is necessary. In this study, GoF tests were used to assess the fit of the Generalized Extreme Value (GEV) Type-II model to simulated observed values. The parameters μ, σ, and ξ were estimated by maximum likelihood. Critical values based on conditional points were developed by Monte Carlo simulation, and the power of the tests was identified by a power study. Data distributed according to the GEV Type-II distribution were used to test whether the developed critical values can confirm the fit between the GEV Type-II model and the data. To confirm the fit, the test statistic of the GoF test should be smaller than the critical value.
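A hedged sketch of this kind of Monte Carlo critical-value construction (Python with SciPy; sample sizes, parameter values, and the KS statistic are illustrative choices, and SciPy's genextreme uses shape c = −ξ, so a Type-II tail corresponds to c < 0):

```python
import numpy as np
from scipy.stats import genextreme, kstest

rng = np.random.default_rng(5)
c_true = -0.25         # c < 0 gives a Frechet-type (Type-II) tail in SciPy
data = genextreme.rvs(c_true, loc=10, scale=2, size=100, random_state=rng)

# fit by maximum likelihood, then KS statistic against the fitted GEV
c_hat, loc_hat, scale_hat = genextreme.fit(data)
stat = kstest(data, genextreme(c_hat, loc_hat, scale_hat).cdf).statistic

# Monte Carlo critical value: refit each simulated sample so the null
# distribution of the statistic accounts for parameter estimation
sims = []
for _ in range(100):
    x = genextreme.rvs(c_hat, loc_hat, scale_hat, size=100, random_state=rng)
    p = genextreme.fit(x)
    sims.append(kstest(x, genextreme(*p).cdf).statistic)
crit = np.quantile(sims, 0.95)
fits = stat < crit     # fail to reject: GEV Type-II is consistent with the data
```

Refitting inside the simulation loop is essential: comparing against tabulated KS critical values would be too conservative once parameters have been estimated from the data.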

  8. An atomic finite element model for biodegradable polymers. Part 2. A model for change in Young's modulus due to polymer chain scission.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Kruft, Marc-Anton

    2015-11-01

    Atomic simulations were undertaken to analyse the effect of polymer chain scission on amorphous poly(lactide) during degradation. Many experimental studies have analysed the degradation of mechanical properties, but relatively few computational studies have been conducted. Such studies are valuable for supporting the design of bioresorbable medical devices. Hence, in this paper, an Effective Cavity Theory for the degradation of Young's modulus was developed. Atomic simulations indicated that a volume of reduced-stiffness polymer may exist around chain scissions. In the Effective Cavity Theory, each chain scission is considered to instantiate an effective cavity. Finite element analysis simulations were conducted to model the effect of the cavities on Young's modulus. Since polymer crystallinity affects mechanical properties, the effect of increases in crystallinity during degradation on Young's modulus is also considered. To demonstrate the ability of the Effective Cavity Theory, it was fitted to several sets of experimental data for Young's modulus in the literature.

  9. Modeling three-dimensional network formation with an atomic lattice model: application to silicic acid polymerization.

    PubMed

    Jin, Lin; Auerbach, Scott M; Monson, Peter A

    2011-04-01

    We present an atomic lattice model for studying the polymerization of silicic acid in sol-gel and related processes for synthesizing silica materials. Our model is based on Si and O atoms occupying the sites of a body-centered-cubic lattice, with all atoms arranged in SiO(4) tetrahedra. This is the simplest model that allows for variation in the Si-O-Si angle, which is largely responsible for the versatility in silica polymorphs. The model describes the assembly of polymerized silica structures starting from a solution of silicic acid in water at a given concentration and pH. This model can simulate related materials-chalcogenides and clays-by assigning energy penalties to particular ring geometries in the polymerized structures. The simplicity of this approach makes it possible to study the polymerization process to higher degrees of polymerization and larger system sizes than has been possible with previous atomistic models. We have performed Monte Carlo simulations of the model at two concentrations: a low density state similar to that used in the clear solution synthesis of silicalite-1, and a high density state relevant to experiments on silica gel synthesis. For the high concentration system where there are NMR data on the temporal evolution of the Q(n) distribution, we find that the model gives good agreement with the experimental data. The model captures the basic mechanism of silica polymerization and provides quantitative structural predictions on ring-size distributions in good agreement with x-ray and neutron diffraction data.

  10. Fitting hidden Markov models of protein domains to a target species: application to Plasmodium falciparum

    PubMed Central

    2012-01-01

    Background Hidden Markov Models (HMMs) are a powerful tool for protein domain identification. The Pfam database notably provides a large collection of HMMs which are widely used for the annotation of proteins in newly sequenced organisms. In Pfam, each domain family is represented by a curated multiple sequence alignment from which a profile HMM is built. In spite of their high specificity, HMMs may lack sensitivity when searching for domains in divergent organisms. This is particularly the case for species with a biased amino-acid composition, such as P. falciparum, the main causal agent of human malaria. In this context, fitting HMMs to the specificities of the target proteome can help identify additional domains. Results Using P. falciparum as an example, we compare approaches that have been proposed for this problem, and present two alternative methods. Because previous attempts strongly rely on known domain occurrences in the target species or its close relatives, they mainly improve the detection of domains which belong to already identified families. Our methods learn global correction rules that adjust amino-acid distributions associated with the match states of HMMs. These rules are applied to all match states of the whole HMM library, thus enabling the detection of domains from previously absent families. Additionally, we propose a procedure to estimate the proportion of false positives among the newly discovered domains. Starting with the Pfam standard library, we build several new libraries with the different HMM-fitting approaches. These libraries are first used to detect new domain occurrences with low E-values. Second, by applying the Co-Occurrence Domain Discovery (CODD) procedure we have recently proposed, the libraries are further used to identify likely occurrences among potential domains with higher E-values. Conclusion We show that the new approaches allow identification of several domain families previously absent in the P. falciparum proteome.
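One simple form a global correction rule could take (an illustrative assumption on our part, not the rule learned in the paper) is to reweight each match-state emission distribution by the ratio of target-proteome to background amino-acid frequencies, then renormalise:

```python
def adjust_match_state(emissions, background, target, alpha=1.0):
    """Reweight a match-state amino-acid distribution toward a target
    proteome composition. All arguments map amino acid -> probability;
    alpha in [0, 1] controls the strength of the correction.
    Illustrative rule only -- not the method of the paper above."""
    raw = {aa: p * (target[aa] / background[aa]) ** alpha
           for aa, p in emissions.items()}
    z = sum(raw.values())           # renormalise to a proper distribution
    return {aa: v / z for aa, v in raw.items()}

# Toy 3-letter alphabet: the target proteome is strongly enriched in 'N',
# much as P. falciparum is in asparagine.
em = {"N": 0.2, "K": 0.4, "L": 0.4}
bg = {"N": 0.25, "K": 0.375, "L": 0.375}
tg = {"N": 0.50, "K": 0.25, "L": 0.25}
adj = adjust_match_state(em, bg, tg)
```

Applied to every match state of a library, a rule of this kind shifts scores uniformly, which is why it can also benefit families with no known occurrences in the target species.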

  11. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    NASA Technical Reports Server (NTRS)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume, and clearance between the suit and body surface at reduced time and cost.

  12. Producing High-Accuracy Lattice Models from Protein Atomic Coordinates Including Side Chains

    PubMed Central

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M.

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models. PMID:22934109
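The two fitting criteria mentioned above differ in what they compare: coordinate RMSD measures deviations between superposed atom positions, while distance RMSD measures deviations between all intramolecular pairwise distances and is therefore invariant under rotation and translation. A minimal illustration (not LatFit code; the cRMSD here assumes the structures are already superposed, whereas a real implementation would first apply an optimal alignment such as Kabsch's):

```python
import math
from itertools import combinations

def c_rmsd(a, b):
    """Coordinate RMSD between two equal-length 3D point lists (pre-superposed)."""
    n = len(a)
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz) in zip(a, b)) / n)

def d_rmsd(a, b):
    """Distance RMSD: RMS deviation over all intramolecular pair distances."""
    def dists(pts):
        return [math.dist(p, q) for p, q in combinations(pts, 2)]
    da, db = dists(a), dists(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(da, db)) / len(da))

ref = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]
# The same shape translated by 10 A along x: dRMSD is 0, cRMSD is not.
moved = [(x + 10.0, y, z) for x, y, z in ref]
```

For `ref` versus `moved`, `d_rmsd` is 0 while `c_rmsd` is 10, which is why distance-RMSD optimisation can be done without committing to a particular placement of the lattice model in space.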

  13. Producing high-accuracy lattice models from protein atomic coordinates including side chains.

    PubMed

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models. PMID:22934109

  15. The fitting of general force-of-infection models to wildlife disease prevalence data

    USGS Publications Warehouse

    Heisey, D.M.; Joly, D.O.; Messier, F.

    2006-01-01

    Researchers and wildlife managers increasingly find themselves in situations where they must deal with infectious wildlife diseases such as chronic wasting disease, brucellosis, tuberculosis, and West Nile virus. Managers are often charged with designing and implementing control strategies, and researchers often seek to determine factors that influence and control the disease process. All of these activities require the ability to measure some indication of a disease's foothold in a population and evaluate factors affecting that foothold. The most common type of data available to managers and researchers is apparent prevalence data. Apparent disease prevalence, the proportion of animals in a sample that are positive for the disease, might seem like a natural measure of disease's foothold, but several properties, in particular, its dependency on age structure and the biasing effects of disease-associated mortality, make it less than ideal. In quantitative epidemiology, the "force of infection," or infection hazard, is generally the preferred parameter for measuring a disease's foothold, and it can be viewed as the most appropriate way to "adjust" apparent prevalence for age structure. The typical ecology curriculum includes little exposure to quantitative epidemiological concepts such as cumulative incidence, apparent prevalence, and the force of infection. The goal of this paper is to present these basic epidemiological concepts and resulting models in an ecological context and to illustrate how they can be applied to understand and address basic epidemiological questions. We demonstrate a practical approach to solving the heretofore intractable problem of fitting general force-of-infection models to wildlife prevalence data using a generalized regression approach. 
We apply the procedures to Mycobacterium bovis (bovine tuberculosis) prevalence in bison (Bison bison) in Wood Buffalo National Park, Canada, and demonstrate strong age dependency in the force of

  16. Flowering genes in Metrosideros fit a broad herbaceous model encompassing Arabidopsis and Antirrhinum.

    PubMed

    Sreekantan, Lekha; Clemens, John; McKenzie, Marian J.; Lenton, John R.; Croker, Steve J.; Jameson, Paula E.

    2004-05-01

    Molecular studies were conducted on Metrosideros excelsa to determine if the current genetic models for flowering with regard to inflorescence and floral meristem identity genes in annual plants were applicable to a woody perennial. MEL, MESAP1 and METFL1, the fragments of LEAFY (LFY), APETALA1 (AP1) and TERMINAL FLOWER1 (TFL1) equivalents, respectively, were isolated from M. excelsa. Temporal expression patterns showed that MEL and MESAP1 exhibited a bimodal pattern of expression. Expression exhibited during early floral initiation in autumn was followed by down-regulation during winter, and up-regulation in spring as floral organogenesis occurred. Spatial expression patterns of MEL showed that it had greater similarity to FLORICAULA (FLO) than to LFY, whereas MESAP1 was more similar to AP1 than SQUAMOSA. The interaction between MEL and METFL1 was more similar to the interaction between FLO and CENTRORADIALIS than that between LFY and TFL1. Consequently, the three genes from M. excelsa fit a broader herbaceous model encompassing Antirrhinum as well as Arabidopsis, but with differences, such as the bimodal pattern of expression seen with MEL and MESAP1. In mid-winter, at the time when both MEL and MESAP1 were down-regulated, GA(1) was below the level of detection in M. excelsa buds. Even though application of gibberellin inhibits flowering in members of the Myrtaceae, MEL was responsive to gibberellin with expression in juvenile plants up-regulated by GA(3). However, MESAP1 was not up-regulated indicating that meristem competence was also probably required to promote flowering in M. excelsa. PMID:15086830

  17. The fitting of general force-of-infection models to wildlife disease prevalence data.

    PubMed

    Heisey, Dennis M; Joly, Damien O; Messier, François

    2006-09-01

    Researchers and wildlife managers increasingly find themselves in situations where they must deal with infectious wildlife diseases such as chronic wasting disease, brucellosis, tuberculosis, and West Nile virus. Managers are often charged with designing and implementing control strategies, and researchers often seek to determine factors that influence and control the disease process. All of these activities require the ability to measure some indication of a disease's foothold in a population and evaluate factors affecting that foothold. The most common type of data available to managers and researchers is apparent prevalence data. Apparent disease prevalence, the proportion of animals in a sample that are positive for the disease, might seem like a natural measure of disease's foothold, but several properties, in particular, its dependency on age structure and the biasing effects of disease-associated mortality, make it less than ideal. In quantitative epidemiology, the "force of infection," or infection hazard, is generally the preferred parameter for measuring a disease's foothold, and it can be viewed as the most appropriate way to "adjust" apparent prevalence for age structure. The typical ecology curriculum includes little exposure to quantitative epidemiological concepts such as cumulative incidence, apparent prevalence, and the force of infection. The goal of this paper is to present these basic epidemiological concepts and resulting models in an ecological context and to illustrate how they can be applied to understand and address basic epidemiological questions. We demonstrate a practical approach to solving the heretofore intractable problem of fitting general force-of-infection models to wildlife prevalence data using a generalized regression approach. We apply the procedures to Mycobacterium bovis (bovine tuberculosis) prevalence in bison (Bison bison) in Wood Buffalo National Park, Canada, and demonstrate strong age dependency in the force of
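In the simplest special case of an age-independent force of infection λ, and assuming no disease-associated mortality or recovery, true prevalence at age a is P(a) = 1 - exp(-λa). The sketch below fits λ to age/status data by maximum likelihood using a bisection on the score function; it is a toy illustration with made-up data, not the generalized regression approach of the paper:

```python
import math

def fit_constant_foi(ages, status, lo=1e-6, hi=5.0, tol=1e-10):
    """MLE of lambda in P(infected by age a) = 1 - exp(-lambda * a),
    found by bisection on the derivative of the binomial log-likelihood."""
    def score(lam):
        s = 0.0
        for a, y in zip(ages, status):
            if y:   # infected: d/dlam log(1 - e^{-lam a}) = a e^{-lam a}/(1 - e^{-lam a})
                s += a * math.exp(-lam * a) / (1.0 - math.exp(-lam * a))
            else:   # uninfected: d/dlam (-lam a) = -a
                s -= a
        return s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:   # log-likelihood still increasing
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical sample: ten animals all aged 5, four of them test-positive.
lam = fit_constant_foi([5.0] * 10, [1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
```

With a single age class the MLE has the closed form λ = -ln(1 - p̂)/a, so the fit can be checked directly; age-dependent forms of λ(a) are where the paper's generalized regression machinery becomes necessary.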

  18. Near-atomic resolution structural model of the yeast 26S proteasome

    PubMed Central

    Beck, Florian; Unverdorben, Pia; Bohn, Stefan; Schweitzer, Andreas; Pfeifer, Günter; Sakata, Eri; Nickell, Stephan; Plitzko, Jürgen M.; Villa, Elizabeth; Baumeister, Wolfgang; Förster, Friedrich

    2012-01-01

    The 26S proteasome operates at the executive end of the ubiquitin-proteasome pathway. Here, we present a cryo-EM structure of the Saccharomyces cerevisiae 26S proteasome at a resolution of 7.4 Å or 6.7 Å (Fourier-Shell Correlation of 0.5 or 0.3, respectively). We used this map in conjunction with molecular dynamics-based flexible fitting to build a near-atomic resolution model of the holocomplex. The quality of the map allowed us to assign α-helices, the predominant secondary structure element of the regulatory particle subunits, throughout the entire map. We were able to determine the architecture of the Rpn8/Rpn11 heterodimer, which had hitherto remained elusive. The MPN domain of Rpn11 is positioned directly above the AAA-ATPase N-ring suggesting that Rpn11 deubiquitylates substrates immediately following commitment and prior to their unfolding by the AAA-ATPase module. The MPN domain of Rpn11 dimerizes with that of Rpn8 and the C-termini of both subunits form long helices, which are integral parts of a coiled-coil module. Together with the C-terminal helices of the six PCI-domain subunits they form a very large coiled-coil bundle, which appears to serve as a flexible anchoring device for all the lid subunits. PMID:22927375

  19. Near-atomic resolution structural model of the yeast 26S proteasome.

    PubMed

    Beck, Florian; Unverdorben, Pia; Bohn, Stefan; Schweitzer, Andreas; Pfeifer, Günter; Sakata, Eri; Nickell, Stephan; Plitzko, Jürgen M; Villa, Elizabeth; Baumeister, Wolfgang; Förster, Friedrich

    2012-09-11

    The 26S proteasome operates at the executive end of the ubiquitin-proteasome pathway. Here, we present a cryo-EM structure of the Saccharomyces cerevisiae 26S proteasome at a resolution of 7.4 Å or 6.7 Å (Fourier-Shell Correlation of 0.5 or 0.3, respectively). We used this map in conjunction with molecular dynamics-based flexible fitting to build a near-atomic resolution model of the holocomplex. The quality of the map allowed us to assign α-helices, the predominant secondary structure element of the regulatory particle subunits, throughout the entire map. We were able to determine the architecture of the Rpn8/Rpn11 heterodimer, which had hitherto remained elusive. The MPN domain of Rpn11 is positioned directly above the AAA-ATPase N-ring suggesting that Rpn11 deubiquitylates substrates immediately following commitment and prior to their unfolding by the AAA-ATPase module. The MPN domain of Rpn11 dimerizes with that of Rpn8 and the C-termini of both subunits form long helices, which are integral parts of a coiled-coil module. Together with the C-terminal helices of the six PCI-domain subunits they form a very large coiled-coil bundle, which appears to serve as a flexible anchoring device for all the lid subunits.

  20. Development of a Stellar Model-Fitting Pipeline for Asteroseismic Data from the TESS Mission

    NASA Astrophysics Data System (ADS)

    Metcalfe, Travis

    The launch of NASA's Kepler space telescope in 2009 revolutionized the quality and quantity of observational data available for asteroseismic analysis. Prior to the Kepler mission, solar-like oscillations were extremely difficult to observe, and data only existed for a handful of the brightest stars in the sky. With the necessity of studying one star at a time, the traditional approach to extracting the physical properties of the star from the observations was an uncomfortably subjective process. A variety of experts could use similar tools but come up with significantly different answers. Not only did this subjectivity have the potential to undermine the credibility of the technique, it also hindered the compilation of a uniform sample that could be used to draw broader physical conclusions from the ensemble of results. During a previous award from NASA, we addressed these issues by developing an automated and objective stellar model-fitting pipeline for Kepler data, and making it available through the Asteroseismic Modeling Portal (AMP). This community modeling tool has allowed us to derive reliable asteroseismic radii, masses and ages for large samples of stars (Metcalfe et al. 2014), but the most recent observations are so precise that we are now limited by systematic uncertainties associated with our stellar models. With a huge archive of Kepler data available for model validation, and the next planet-hunting satellite already approved for an expected launch in 2017, now is the time to incorporate what we have learned into the next generation of AMP. We propose to improve the reliability of our estimates of stellar properties over the next 4 years by collaborating with two open-source development projects that will augment and ultimately replace the stellar evolution and pulsation models that we now use in AMP. 
Our current treatment of the oscillations does not include the effects of radiative or convective heat-exchange, nor does it account for the influence

  1. Solid lipid microparticles produced by spray congealing: influence of the atomizer on microparticle characteristics and mathematical modeling of the drug release.

    PubMed

    Passerini, Nadia; Qi, Sheng; Albertini, Beatrice; Grassi, Mario; Rodriguez, Lorenzo; Craig, Duncan Q M

    2010-02-01

    The first aim of the work was to evaluate the effect of atomizer design on the properties of solid lipid microparticles produced by spray congealing. Two different air atomizers have been employed: a conventional air pressure nozzle (APN) and a recently developed atomizer (wide pneumatic nozzle, WPN). Milled theophylline and Compritol 888ATO were used to produce microparticles at drug-to-carrier ratios of 10:90, 20:80, and 30:70 using the two atomizers. The results showed that the application of different nozzles had significant impacts on the morphology, encapsulation efficiency, and drug release behavior of the microparticles. In contrast, the characteristics of the atomizer did not influence the physicochemical properties of the microparticles, as differential scanning calorimetry, hot-stage microscopy, X-ray powder diffraction, and Fourier transform infrared spectroscopy analysis demonstrated. The drug and the lipid carrier were present in their original crystalline forms in both WPN and APN systems. A second objective of this study was to develop a novel mathematical model for describing the dynamic process of drug release from the solid lipid microparticles. For WPN microparticles the model predicted the changes of the drug release behavior with particle size and drug loading, while for APN microparticles the model fitting was not as good as for the WPN systems, confirming the influence of the atomizer on the drug release behavior.

  2. Project Physics Reader 5, Models of the Atom.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    As a supplement to Project Physics Unit 5, a collection of articles is presented in this reader for student browsing. Nine excerpts are given under the following headings: failure and success, Einstein, Mr. Tompkins and simultaneity, parable of the surveyors, outside and inside the elevator, the teacher and the Bohr theory of atom, Dirac and Born,…

  3. The effects of a peer modeling intervention on cardiorespiratory fitness parameters and self-efficacy in obese adolescents.

    PubMed

    De Jesus, Stefanie; Prapavessis, Harry

    2013-01-01

    Inconsistencies exist in the assessment and interpretation of peak VO2 in the pediatric obese population, as cardiorespiratory fitness assessments are effort-dependent and psychological variables prevalent in this population must be addressed. This study examined the effect of a peer modeling intervention on cardiorespiratory fitness performance and task self-efficacy in obese youth completing a maximal treadmill test. Forty-nine obese (BMI ≥ 95th percentile for age and sex) youth were randomized to an experimental (received an intervention) or to a control group. The outcome variables were mean and variability cardiorespiratory fitness (peak VO2, heart rate, duration, respiratory exchange ratio), rating of perceived exertion, and task self-efficacy scores. Irrespective of whether a mean or variability score was used, receiving the intervention was associated with non-significant trends in fitness parameters and task self-efficacy over time, favoring the experimental group. Cardiorespiratory fitness and task self-efficacy were moderately correlated at both time points. To elucidate the aforementioned findings, psychosocial factors affecting obese youth and opportunities to modify the peer modeling intervention should be considered. Addressing these factors has the potential to improve standard of care in a clinical setting regarding pretest patient education.

  4. Resolution-Adapted All-Atomic and Coarse-Grained Model for Biomolecular Simulations.

    PubMed

    Shen, Lin; Hu, Hao

    2014-06-10

    We develop here an adaptive multiresolution method for the simulation of complex heterogeneous systems such as the protein molecules. The target molecular system is described with the atomistic structure while maintaining concurrently a mapping to the coarse-grained models. The theoretical model, or force field, used to describe the interactions between two sites is automatically adjusted in the simulation processes according to the interaction distance/strength. Therefore, all-atomic, coarse-grained, or mixed all-atomic and coarse-grained models would be used together to describe the interactions between a group of atoms and its surroundings. Because the choice of theory is made on the force field level while the sampling is always carried out in the atomic space, the new adaptive method preserves naturally the atomic structure and thermodynamic properties of the entire system throughout the simulation processes. The new method will be very useful in many biomolecular simulations where atomistic details are critically needed.
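The distance-dependent switching described here can be pictured as a smooth mixing weight between the all-atom and coarse-grained pair potentials. The cutoffs and smoothstep form below are illustrative assumptions, not the paper's actual scheme:

```python
def mixing_weight(r, r_inner=8.0, r_outer=12.0):
    """Fraction of the all-atom potential used for a pair at distance r
    (angstroms); switches smoothly to the coarse-grained model beyond
    r_outer. Cutoff values are illustrative, not the paper's parameters."""
    if r <= r_inner:
        return 1.0
    if r >= r_outer:
        return 0.0
    x = (r_outer - r) / (r_outer - r_inner)
    return x * x * (3.0 - 2.0 * x)   # C1-continuous smoothstep

def pair_energy(r, e_atomistic, e_cg):
    """Blend the two descriptions so the total energy varies continuously
    as a pair drifts between resolution regimes."""
    w = mixing_weight(r)
    return w * e_atomistic + (1.0 - w) * e_cg
```

Because the blending happens at the force-field level while sampling stays in atomic coordinates, a scheme of this shape never has to reconstruct atomistic detail from coarse-grained sites, consistent with the property the abstract emphasises.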

  5. Symmetric eikonal model for projectile-electron excitation and loss in relativistic ion-atom collisions

    SciTech Connect

    Voitkiv, A. B.; Najjari, B.; Shevelko, V. P.

    2010-08-15

    At impact energies of ~1 GeV/u and above, the projectile-electron excitation and loss occurring in collisions between highly charged ions and neutral atoms are already strongly influenced by the presence of atomic electrons. To treat these processes in collisions with heavy atoms we generalize the symmetric eikonal model, used earlier for considerations of electron transitions in ion-atom collisions within the scope of a three-body Coulomb problem. We show that at asymptotically high collision energies this model leads to an exact transition amplitude and is very well suited to describe the projectile-electron excitation and loss at energies above a few GeV/u. In particular, by considering a number of examples we demonstrate advantages of this model over the first Born approximation at impact energies of ~1-30 GeV/u, which are of special interest for atomic physics experiments at the future GSI facilities.

  6. Identifying Atomic Structure as a Threshold Concept: Student Mental Models and Troublesomeness

    ERIC Educational Resources Information Center

    Park, Eun Jung; Light, Gregory

    2009-01-01

    Atomic theory or the nature of matter is a principal concept in science and science education. This has, however, been complicated by the difficulty students have in learning the concept and the subsequent construction of many alternative models. To understand better the conceptual barriers to learning atomic structure, this study explores the…

  7. Atomic charges for modeling metal–organic frameworks: Why and how

    SciTech Connect

    Hamad, Said; Balestra, Salvador R.G.; Bueno-Perez, Rocio; Calero, Sofia; Ruiz-Salvador, A. Rabdel

    2015-03-15

    Atomic partial charges are parameters of key importance in the simulation of Metal–Organic Frameworks (MOFs), since Coulombic interactions decrease with distance more slowly than van der Waals interactions. But despite its relevance, there is no method to unambiguously assign charges to each atom, since atomic charges are not quantum observables. There are several methods that allow the calculation of atomic charges, most of them starting from the electronic wavefunction or the electronic density of the system, as obtained with quantum mechanics calculations. In this work, we describe the most common methods employed to calculate atomic charges in MOFs. In order to show the influence that even small variations of structure have on atomic charges, we present the results that we obtained for DMOF-1. We also discuss the effect that small variations of atomic charges have on the predicted structural properties of IRMOF-1. - Graphical abstract: We review the different methods with which to calculate atomic partial charges that can be used in force field-based calculations. We also present two examples that illustrate the influence of the geometry on the calculated charges and the influence of the charges on structural properties. - Highlights: • The choice of atomic charges is crucial in modeling adsorption and diffusion in MOFs. • Methods for calculating atomic charges in MOFs are reviewed. • We discuss the influence of the framework geometry on the calculated charges. • We discuss the influence of the framework charges on the structural properties.

  8. The Effect of Fitting a Unidimensional IRT Model to Multidimensional Data in Content-Balanced Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Song, Tian

    2010-01-01

    This study investigates the effect of fitting a unidimensional IRT model to multidimensional data in content-balanced computerized adaptive testing (CAT). Unconstrained CAT with the maximum information item selection method is chosen as the baseline, and the performances of three content balancing procedures, the constrained CAT (CCAT), the…

  9. Limited-Information Goodness-of-Fit Testing of Diagnostic Classification Item Response Theory Models. CRESST Report 840

    ERIC Educational Resources Information Center

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2014-01-01

    It is a well-known problem in testing the fit of models to multinomial data that the full underlying contingency table will inevitably be sparse for tests of reasonable length and for realistic sample sizes. Under such conditions, full-information test statistics such as Pearson's X[superscript 2] and the likelihood ratio statistic…

  10. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  11. Promoting Fitness and Safety in Elementary Students: A Randomized Control Study of the Michigan Model for Health

    ERIC Educational Resources Information Center

    O'Neill, James M.; Clark, Jeffrey K.; Jones, James A.

    2016-01-01

    Background: In elementary grades, comprehensive health education curricula have demonstrated effectiveness in addressing singular health issues. The Michigan Model for Health (MMH) was implemented and evaluated to determine its impact on nutrition, physical fitness, and safety knowledge and skills. Methods: Schools (N = 52) were randomly assigned…

  12. A Person-Centered Approach to P-E Fit Questions Using a Multiple-Trait Model.

    ERIC Educational Resources Information Center

    De Fruyt, Filip

    2002-01-01

    Employed college students (n=401) completed the Self-Directed Search and NEO Personality Inventory-Revised. Person-environment fit across Holland's six personality types predicted job satisfaction and skill development. Five-Factor Model traits significantly predicted intrinsic career outcomes. Use of the five-factor, person-centered approach to…

  13. The Use of the L[subscript z] and L[subscript z]* Person-Fit Statistics and Problems Derived from Model Misspecification

    ERIC Educational Resources Information Center

    Meijer, Rob R.; Tendeiro, Jorge N.

    2012-01-01

    We extend a recent didactic by Magis, Raiche, and Beland on the use of the l[subscript z] and l[subscript z]* person-fit statistics. We discuss a number of possibly confusing details and show that it is important to first investigate item response theory model fit before assessing person fit. Furthermore, it is argued that appropriate…

  14. Facultative Control of Matrix Production Optimizes Competitive Fitness in Pseudomonas aeruginosa PA14 Biofilm Models

    PubMed Central

    Madsen, Jonas S.; Lin, Yu-Cheng; Squyres, Georgia R.; Price-Whelan, Alexa; de Santiago Torio, Ana; Song, Angela; Cornell, William C.; Sørensen, Søren J.

    2015-01-01

    As biofilms grow, resident cells inevitably face the challenge of resource limitation. In the opportunistic pathogen Pseudomonas aeruginosa PA14, electron acceptor availability affects matrix production and, as a result, biofilm morphogenesis. The secreted matrix polysaccharide Pel is required for pellicle formation and for colony wrinkling, two activities that promote access to O2. We examined the exploitability and evolvability of Pel production at the air-liquid interface (during pellicle formation) and on solid surfaces (during colony formation). Although Pel contributes to the developmental response to electron acceptor limitation in both biofilm formation regimes, we found variation in the exploitability of its production and necessity for competitive fitness between the two systems. The wild type showed a competitive advantage against a non-Pel-producing mutant in pellicles but no advantage in colonies. Adaptation to the pellicle environment selected for mutants with a competitive advantage against the wild type in pellicles but also caused a severe disadvantage in colonies, even in wrinkled colony centers. Evolution in the colony center produced divergent phenotypes, while adaptation to the colony edge produced mutants with clear competitive advantages against the wild type in this O2-replete niche. In general, the structurally heterogeneous colony environment promoted more diversification than the more homogeneous pellicle. These results suggest that the role of Pel in community structure formation in response to electron acceptor limitation is unique to specific biofilm models and that the facultative control of Pel production is required for PA14 to maintain optimum benefit in different types of communities. PMID:26431965

  15. Mg I as a probe of the solar chromosphere - The atomic model

    NASA Technical Reports Server (NTRS)

    Mauas, Pablo J.; Avrett, Eugene H.; Loeser, Rudolf

    1988-01-01

    This paper presents a complete atomic model for Mg I line synthesis, where all the atomic parameters are based on recent experimental and theoretical data. It is shown how the computed profiles at 4571 Å and 5173 Å are influenced by the choice of these parameters and the number of levels included in the model atom. In addition, observed profiles of the 5173 Å b2 line and theoretical profiles for comparison (based on a recent atmospheric model for the average quiet Sun) are presented.

  16. Linking the Fits, Fitting the Links: Connecting Different Types of PO Fit to Attitudinal Outcomes

    ERIC Educational Resources Information Center

    Leung, Aegean; Chaturvedi, Sankalp

    2011-01-01

    In this paper we explore the linkages among various types of person-organization (PO) fit and their effects on employee attitudinal outcomes. We propose and test a conceptual model which links various types of fits--objective fit, perceived fit and subjective fit--in a hierarchical order of cognitive information processing and relate them to…

  17. A hierarchy of local electron correlation models based on atomic truncations

    NASA Astrophysics Data System (ADS)

    Head-Gordon, Martin; Lee, Michael S.; Maslen, Paul E.

    1999-11-01

    While wavefunction-based treatments of electron correlation have been very successful for the study of small molecules, they cannot be readily applied to large molecules because their computational cost rises too steeply with molecular size. For example, second-order Møller-Plesset perturbation theory (MP2), the simplest such method, involves computational costs that asymptotically increase with the 5th power of molecular size. In this article we discuss the development of new local electron correlation models that ameliorate this problem by truncating the number of substituted determinants included in the correlation treatment. Using atom-centered functions to span the occupied and virtual subspaces permits the truncations to be made by an atomic criterion that satisfies all of the requirements of a well-defined theoretical model chemistry. The double substitutions that arise in MP2 theory generally involve promoting electrons from occupied orbitals on two atoms to unoccupied (virtual) orbitals on two other atoms, i.e., a "tetra-atomics in molecules" treatment. The simplest restriction is to require one occupied and one virtual orbital to be on a common atom, leading to a triatomics in molecules (TRIM) model. A stronger approximation is to model double substitutions by the direct product of two such atomic excitations, which is a diatomics in molecules (DIM) model of electron correlation. The still more drastic approximation of forcing all double substitutions to be centered on single atoms cannot describe dispersion interactions and is not considered here. The theory of the DIM and TRIM models is outlined, and methods for obtaining the atom-centered functions spanning the occupied and virtual subspaces are discussed. Some numerical results are provided to compare the performance of the DIM and TRIM models against untruncated MP2 theory. Finally, the outlook for the application of these methods to large molecules is discussed.
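    The atomic truncation rules can be illustrated by counting which double-substitution amplitudes survive in a toy system. The orbital-to-atom assignment and counts below are a purely hypothetical sketch of the DIM/TRIM selection criteria, not the paper's implementation:

```python
from itertools import product

# Toy system: each occupied and virtual orbital is assigned to a host atom.
# (Hypothetical, evenly spread assignment for illustration only.)
n_atoms = 6
occ_atoms = [i % n_atoms for i in range(12)]   # atom hosting each occupied orbital
virt_atoms = [a % n_atoms for a in range(24)]  # atom hosting each virtual orbital

def count_amplitudes(keep):
    """Count double substitutions (i,j -> a,b) retained by a given rule."""
    count = 0
    for i, j, a, b in product(range(len(occ_atoms)), range(len(occ_atoms)),
                              range(len(virt_atoms)), range(len(virt_atoms))):
        if keep(occ_atoms[i], occ_atoms[j], virt_atoms[a], virt_atoms[b]):
            count += 1
    return count

# Untruncated MP2: every (i,j,a,b) combination is kept.
full = count_amplitudes(lambda Ai, Aj, Aa, Ab: True)
# TRIM: at least one occupied-virtual pair shares an atom (<= 3 atoms involved).
trim = count_amplitudes(lambda Ai, Aj, Aa, Ab:
                        Ai == Aa or Ai == Ab or Aj == Aa or Aj == Ab)
# DIM: a direct product of two atomic excitations (i,a co-located; j,b co-located).
dim = count_amplitudes(lambda Ai, Aj, Aa, Ab: Ai == Aa and Aj == Ab)
```

    For this toy assignment the counts fall sharply with each stronger restriction (full 82944, TRIM 42624, DIM 2304), which is the source of the reduced asymptotic cost.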

  18. Modelization of nanospace interaction involving a ferromagnetic atom: a spin polarization effect study by thermogravimetric analysis.

    PubMed

    Santhanam, K S V; Chen, Xu; Gupta, S

    2014-04-01

    Ab initio studies of a ferromagnetic atom interacting with carbon nanotubes have been reported in the literature; they predict that when the interaction is strong, a higher degree of hybridization with a confinement effect will result in spin polarization of the ferromagnetic atom. Because these studies predict that the 4s electrons of the atom are polarized, the effect of spin polarization on thermal oxidation (the formation of the oxide) is modeled here for the ferromagnetic atom and its alloy. The four models developed here provide a pathway for distinguishing the type of interaction that exists in the real system. The extent of spin polarization in the ferromagnetic atom has been examined by varying the amount of carbon nanotubes in the composites in thermogravimetric experiments. In this study we report experimental results on the CoNi alloy, which appears to show selective spin polarization. The products of the thermal oxidation have been analyzed by Fourier transform infrared spectroscopy. PMID:24734699

  19. Monte Carlo Computational Modeling of the Energy Dependence of Atomic Oxygen Undercutting of Protected Polymers

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Stueber, Thomas J.; Norris, Mary Jo

    1998-01-01

    A Monte Carlo computational model has been developed which simulates atomic oxygen attack of protected polymers at defect sites in the protective coatings. The parameters defining how atomic oxygen interacts with polymers and protective coatings, as well as the scattering processes which occur, have been optimized to replicate experimental results observed for protected polyimide Kapton on the Long Duration Exposure Facility (LDEF) mission. Computational prediction of atomic oxygen undercutting at defect sites in protective coatings was investigated for various arrival energies. The predicted energy dependence of atomic oxygen undercutting enables one to predict the mass loss that would occur in low Earth orbit from results obtained with lower-energy ground-laboratory atomic oxygen beam systems. Results of the model's predictions of undercut cavity size as a function of energy and defect size are presented to provide insight into the expected in-space mass loss of protected polymers with protective coating defects, based on lower-energy ground-laboratory testing.
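    A toy lattice version of such a simulation conveys the mechanism. Everything below (geometry, reaction probability, scattering rule) is a hypothetical minimal sketch of Monte Carlo undercutting, not the parameters or physics of the LDEF-calibrated model:

```python
import random

random.seed(2)

W, H = 41, 20                 # polymer lattice: W columns, H rows deep
DEFECT = set(range(19, 22))   # columns where the protective coating has a defect
P_REACT = 0.3                 # reaction probability per encounter (energy dependent)

polymer = [[True] * W for _ in range(H)]   # True = intact polymer cell
for x in DEFECT:
    polymer[0][x] = False                  # the defect exposes these surface cells

def run_atom(max_steps=400):
    """Random-walk one oxygen atom; return the number of cells it erodes."""
    x, y = random.choice(sorted(DEFECT)), 0
    eroded = 0
    for _ in range(max_steps):
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        nx, ny = x + dx, y + dy
        if ny < 0:                       # heading up, back toward the coating
            if nx in DEFECT:
                break                    # escapes out through the defect
            continue                     # reflected by the intact coating
        if not (0 <= nx < W and ny < H):
            continue                     # treat the box edge as reflecting
        if polymer[ny][nx]:
            if random.random() < P_REACT:
                polymer[ny][nx] = False  # reaction erodes the polymer cell
                eroded += 1
                x, y = nx, ny
            # else: non-reactive scatter, atom stays put
        else:
            x, y = nx, ny                # drift through the already-eroded cavity
    return eroded

total = sum(run_atom() for _ in range(4000))
# Undercutting: surface-row cells eroded *beneath* the intact coating.
undercut = sum(not polymer[0][x] for x in range(W) if x not in DEFECT)
```

    Because non-reacting atoms scatter instead of disappearing, the cavity spreads sideways under the coating; lowering P_REACT (a crude stand-in for lower arrival energy) increases scattering relative to reaction and changes the cavity shape.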

  20. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to spline fitting is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward-elimination programs, using the B-spline basis, were developed. The knot elimination program is compared in detail with two other spline-fitting methods and with several statistical software packages. An example is also given for the two-variable case using a tensor-product basis, with a theoretical discussion of the difficulties of using such bases.
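    Backward elimination of knots can be sketched with SciPy's least-squares B-spline fitter. The data, initial knot grid, and stopping threshold below are illustrative assumptions, not the paper's FORTRAN procedure or its statistical test:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic data: a smooth signal plus noise (hypothetical example).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

def sse(interior_knots):
    """Residual sum of squares of a cubic B-spline least-squares fit."""
    spl = LSQUnivariateSpline(x, y, interior_knots, k=3)
    return float(np.sum((spl(x) - y) ** 2))

knots = list(np.linspace(1, 9, 9))   # generous initial set of interior knots
while len(knots) > 1:
    current = sse(knots)
    # Try deleting each knot; keep the deletion that degrades the fit least.
    best_sse, i = min((sse(knots[:j] + knots[j + 1:]), j)
                      for j in range(len(knots)))
    if best_sse > 1.5 * current:     # stop once removal clearly hurts the fit
        break
    knots.pop(i)
```

    The 1.5x threshold stands in for the F-test-style criterion a statistical variable selection procedure would use to decide whether a basis term (knot) is dispensable.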