Science.gov

Sample records for fitting atomic models

  1. Note: curve fit models for atomic force microscopy cantilever calibration in water.

    PubMed

    Kennedy, Scott J; Cole, Daniel G; Clark, Robert L

    2011-11-01

    Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can vary by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%. © 2011 American Institute of Physics
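
    As a concrete illustration of the kind of fit the note discusses, the sketch below fits a simple-harmonic-oscillator-plus-white-noise model to a synthetic thermal-noise spectrum with scipy. The PSD form, the low-Q parameter values, the multiplicative noise model, and the equipartition step are illustrative assumptions, not the authors' specific fitting strategies.

```python
import numpy as np
from scipy.optimize import curve_fit

def sho_psd(f, a, f0, q, white):
    """Thermally driven SHO displacement PSD (arbitrary units) plus a white-noise floor."""
    return a * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / q)**2) + white

rng = np.random.default_rng(0)
f = np.linspace(1e3, 60e3, 2000)                             # frequency grid, Hz
true = sho_psd(f, a=1.0, f0=25e3, q=2.5, white=0.2)          # hypothetical low-Q cantilever in water
data = true * rng.gamma(shape=4.0, scale=0.25, size=f.size)  # multiplicative spectral noise

popt, _ = curve_fit(sho_psd, f, data, p0=[0.5, 24e3, 2.0, 0.1], maxfev=20000)
a_fit, f0_fit, q_fit, white_fit = popt

# Equipartition route to stiffness: k = kB*T / <x^2>, where <x^2> is the area under
# the fitted SHO term only (real units for the PSD would be needed in practice).
area = np.trapz(sho_psd(f, a_fit, f0_fit, q_fit, 0.0), f)
print(f"f0 = {f0_fit / 1e3:.1f} kHz, Q = {q_fit:.2f}, SHO area = {area:.3g} (arb. units)")
```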

  2. Putting structure into context: fitting of atomic models into electron microscopic and electron tomographic reconstructions.

    PubMed

    Volkmann, Niels

    2012-02-01

    A complete understanding of complex dynamic cellular processes such as cell migration or cell adhesion requires the integration of atomic level structural information into the larger cellular context. While direct atomic-level information at the cellular level remains inaccessible, electron microscopy, electron tomography and their associated computational image processing approaches have now matured to a point where sub-cellular structures can be imaged in three dimensions at the nanometer scale. Atomic-resolution information obtained by other means can be combined with this data to obtain three-dimensional models of large macromolecular assemblies in their cellular context. This article summarizes some recent advances in this field. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  4. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  5. AquaSAXS: a web server for computation and fitting of SAXS profiles with non-uniformally hydrated atomic models

    PubMed Central

    Poitevin, Frédéric; Orland, Henri; Doniach, Sebastian; Koehl, Patrice; Delarue, Marc

    2011-01-01

    Small Angle X-ray Scattering (SAXS) techniques are becoming more and more useful for structural biologists and biochemists, thanks to better access to dedicated synchrotron beamlines, better detectors and the relative ease of sample preparation. The ability to compute the theoretical SAXS profile of a given structural model, and to compare this profile with the measured scattering intensity, yields crucial structural information about the macromolecule under study and/or its complexes in solution. An important contribution to the profile, besides the macromolecule itself and its solvent-excluded volume, is the excess density due to the hydration layer. AquaSAXS takes advantage of recently developed methods, such as AquaSol, that give the equilibrium solvent density map around macromolecules, to compute an accurate SAXS/WAXS profile of a given structure and to compare it to the experimental one. Here, we describe the interface architecture and capabilities of the AquaSAXS web server (http://lorentz.dynstr.pasteur.fr/aquasaxs.php). PMID:21665925
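
    For orientation, the snippet below computes a theoretical scattering profile for a toy atomic model with the plain Debye formula, I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij), in vacuum. The dummy coordinates and constant form factors are assumptions for illustration; the excluded-volume and hydration-layer terms that AquaSAXS actually handles are omitted.

```python
import numpy as np

def debye_intensity(coords, f, q):
    """I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij), the plain Debye formula."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)   # pair distances
    qr = q[:, None, None] * d[None, :, :]
    with np.errstate(invalid="ignore", divide="ignore"):
        s = np.where(qr == 0.0, 1.0, np.sin(qr) / qr)
    return np.einsum("i,j,qij->q", f, f, s)

# toy "atomic model": dummy scatterer positions with constant form factors
rng = np.random.default_rng(1)
coords = rng.normal(scale=10.0, size=(50, 3))   # coordinates in angstroms (hypothetical)
f = np.full(50, 6.0)                            # crude constant form factors
q = np.linspace(0.01, 0.5, 100)                 # momentum transfer, 1/angstrom
iq = debye_intensity(coords, f, q)
print(np.round(iq[:5] / iq[0], 4))              # normalized low-q profile
```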

  6. Computer Modeling Of Atomization

    NASA Technical Reports Server (NTRS)

    Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.

    1994-01-01

    Improved mathematical models based on fundamental principles of conservation of mass, energy, and momentum were developed for use in computer simulation of atomization of jets of liquid fuel in rocket engines. The models are also used to study atomization in terrestrial applications and prove especially useful in designing improved industrial sprays - humidifier water sprays, chemical process sprays, and sprays of molten metal. Because the present improved mathematical models are based on first principles, they are minimally dependent on empirical correlations and better able to represent the hot-flow conditions that prevail in rocket engines, which are too severe to be accessible for detailed experimentation.

  7. Stationary Electron Atomic Model

    NASA Astrophysics Data System (ADS)

    Pressler, David E.

    1998-04-01

    I will present a novel theory concerning the position and nature of the electron inside the atom. This new concept is consistent with present experimental evidence and adheres strictly to the valence-shell electron-pair repulsion (VSEPR) model presently used in chemistry for predicting the shapes of molecules and ions. In addition, I will discuss the atomic model as a true harmonic oscillator: periodic motion at a resonant frequency, which produces radiation at discrete frequencies (line spectra), is possible because the electron is under the action of two restoring forces, electrostatic attraction and superconducting repulsion of the electron's magnetic field by the nucleus.

  8. Measured, modeled, and causal conceptions of fitness

    PubMed Central

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  9. Total force fitness: the military family fitness model.

    PubMed

    Bowles, Stephen V; Pollock, Liz Davenport; Moore, Monique; Wadsworth, Shelley MacDermid; Cato, Colanda; Dekle, Judith Ward; Meyer, Sonia Wei; Shriver, Amber; Mueller, Bill; Stephens, Mark; Seidler, Dustin A; Sheldon, Joseph; Picano, James; Finch, Wanda; Morales, Ricardo; Blochberger, Sean; Kleiman, Matthew E; Thompson, Daniel; Bates, Mark J

    2015-03-01

    The military lifestyle can create formidable challenges for military families. This article describes the Military Family Fitness Model (MFFM), a comprehensive model aimed at enhancing family fitness and resilience across the life span. This model is intended for use by Service members, their families, leaders, and health care providers but also has broader applications for all families. The MFFM has three core components: (1) family demands, (2) resources (including individual resources, family resources, and external resources), and (3) family outcomes (including related metrics). The MFFM proposes that resources from the individual, family, and external areas promote fitness, bolster resilience, and foster well-being for the family. The MFFM highlights each resource level for the purpose of improving family fitness and resilience over time. The MFFM both builds on existing family strengths and encourages the development of new family strengths through resource-acquiring behaviors. The purpose of this article is to (1) expand the military's Total Force Fitness (TFF) intent as it relates to families and (2) offer a family fitness model. This article will summarize relevant evidence, provide supportive theory, describe the model, and proffer metrics that support the dimensions of this model.

  10. A model for airblast atomization

    SciTech Connect

    Rizk, N.K.; Mongia, H.C.

    1989-01-01

    The objective of fuel injection modeling activities is generally to give support to the atomizer design effort to achieve improved spray quality. In gas turbine combustors, enhanced atomization is essential for satisfactory performance, since droplet sizes can have direct impact on almost all key aspects of combustion. A model that includes the integration of the submodels of air flow, fuel injection and atomization, and droplet turbulent dispersion has been formulated. The model was applied to an airblast atomizer that incorporated a short prefilming device. The predictions were validated against two-component phase Doppler interferometry data of that atomizer. The results of the present investigation demonstrate the capability of the developed model to predict satisfactorily the air flow field and spray characteristics. They indicate the need for detailed measurements in the near field of the atomizer in order to quantitatively verify the modeling of the initial atomization processes in this region. 15 refs.

  11. Coaches as Fitness Role Models

    ERIC Educational Resources Information Center

    Nichols, Randall; Zillifro, Traci D.; Nichols, Ronald; Hull, Ethan E.

    2012-01-01

    The lack of physical activity, low fitness levels, and elevated obesity rates as high as 32% of today's youth are well documented. Many strategies and grants have been developed at the national, regional, and local levels to help counteract these current trends. Strategies have been developed and implemented for schools, households (parents), and…

  13. Sensitivity of Fit Indices to Model Misspecification and Model Types

    ERIC Educational Resources Information Center

    Fan, Xitao; Sivo, Stephen A.

    2007-01-01

    The search for cut-off criteria of fit indices for model fit evaluation (e.g., Hu & Bentler, 1999) assumes that these fit indices are sensitive to model misspecification, but not to different types of models. If fit indices were sensitive to different types of models that are misspecified to the same degree, it would be very difficult to establish…

  14. Atom Interferometer Modeling Tool

    DTIC Science & Technology

    2011-08-08

    definition is to import conductor geometry from an outside CAD tool such as AutoCAD. This allows users to specify the more complex layouts using a...fully-featured tool of their choice, while significantly reducing the complexity of LiveAtom. Furthermore, most groups have already been using a 2D ...specifying conductor geometry LiveAtom offers the user a 3D visualization of their experiment. Once the experiment is fully specified, computing the

  15. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  17. Biomedical model fitting and error analysis.

    PubMed

    Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri

    2011-09-20

    This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
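
    The lecture's example programs are MATLAB; below is a hedged Python analogue of the same inverse-modeling workflow, using a hypothetical exponential-decay model and synthetic data in place of the BrdU-labeling example. The model choice, noise level, and parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# step (i): choose a model -- here a hypothetical exponential decay of a labeled population
def model(t, n0, k):
    return n0 * np.exp(-k * t)

# synthetic "measurements" (units, noise level, and true parameters are invented)
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 15)
y = model(t, 100.0, 0.35) + rng.normal(scale=4.0, size=t.size)

# steps (ii)-(iii): figure of merit = sum of squared residuals, minimized by curve_fit
popt, pcov = curve_fit(model, t, y, p0=[80.0, 0.2])
resid = y - model(t, *popt)

# steps (iv) and (vi): goodness of fit and parameter uncertainties from the covariance matrix
sse = float(np.sum(resid**2))
stderr = np.sqrt(np.diag(pcov))
print(f"n0 = {popt[0]:.1f} +/- {stderr[0]:.1f}, k = {popt[1]:.3f} +/- {stderr[1]:.3f}, SSE = {sse:.1f}")
```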

  18. Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.; Taylor, Aaron B.

    2009-01-01

    Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…

  19. Are Physical Education Majors Models for Fitness?

    ERIC Educational Resources Information Center

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  20. Semi-exact concentric atomic density fitting: Reduced cost and increased accuracy compared to standard density fitting

    SciTech Connect

    Hollman, David S.; Schaefer, Henry F.; Valeev, Edward F.

    2014-02-14

    A local density fitting scheme is considered in which atomic orbital (AO) products are approximated using only auxiliary AOs located on one of the nuclei in that product. The possibility of variational collapse to an unphysical “attractive electron” state that can affect such density fitting [P. Merlot, T. Kjærgaard, T. Helgaker, R. Lindh, F. Aquilante, S. Reine, and T. B. Pedersen, J. Comput. Chem. 34, 1486 (2013)] is alleviated by including atom-wise semidiagonal integrals exactly. Our approach leads to a significant decrease in the computational cost of density fitting for Hartree–Fock theory while still producing results with errors 2–5 times smaller than standard, nonlocal density fitting. Our method allows for large Hartree–Fock and density functional theory computations with exact exchange to be carried out efficiently on large molecules, which we demonstrate by benchmarking our method on 200 of the most widely used prescription drug molecules. Our new fitting scheme leads to smooth and artifact-free potential energy surfaces and the possibility of relatively simple analytic gradients.

  1. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
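
    The toolbox described runs on top of Brian; the library-free sketch below illustrates the underlying idea with a leaky integrate-and-fire neuron whose parameters are scored against a target spike train by a simple coincidence count. The neuron equations, the grid of candidate parameters, and the coincidence window are illustrative assumptions, not the paper's fitting algorithm.

```python
import numpy as np

def lif_spikes(i_inj, dt, tau, r, v_th, v_reset=0.0):
    """Euler-integrated leaky integrate-and-fire neuron; returns spike times in seconds."""
    v, spikes = 0.0, []
    for n, i in enumerate(i_inj):
        v += dt * (-v + r * i) / tau
        if v >= v_th:
            spikes.append(n * dt)
            v = v_reset
    return np.array(spikes)

def coincidences(pred, target, window=2e-3):
    """Count predicted spikes falling within +/-window of any target spike."""
    if pred.size == 0 or target.size == 0:
        return 0
    return int(np.sum(np.min(np.abs(pred[:, None] - target[None, :]), axis=1) <= window))

dt, t_total = 1e-4, 2.0
rng = np.random.default_rng(3)
i_inj = 1.2 + 0.5 * rng.standard_normal(int(t_total / dt))    # noisy injected current (arbitrary units)
target = lif_spikes(i_inj, dt, tau=20e-3, r=1.0, v_th=1.0)    # stands in for a recorded spike train

# crude grid search over two parameters, scored by the coincidence count
best = max((coincidences(lif_spikes(i_inj, dt, tau, 1.0, v_th), target), tau, v_th)
           for tau in (10e-3, 20e-3, 30e-3) for v_th in (0.8, 1.0, 1.2))
print(f"best coincidence count = {best[0]} at tau = {best[1] * 1e3:.0f} ms, v_th = {best[2]}")
```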

  2. Contrast Gain Control Model Fits Masking Data

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.

  3. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  4. Polycrystal models to fit experiments

    SciTech Connect

    Kocks, U.F.; Necker, C.T.

    1994-07-01

    Two problems in the modeling of polycrystal plasticity are addressed in which some parameter can best be determined by matching with experiment, although the principles of the underlying mechanisms are presumed known. One of these problems is the transition from "full constraints" (FC) to "relaxed constraints" (RC) with increasing flatness of the grains. Observed qualitative transitions in texture with strain, such as a transient orthotropic symmetry in torsion textures, can help identify the rate at which the FC-to-RC transition takes place. The second problem is that of the material dependence of deformation textures among the FCC metals which, it is argued, can only be due to a change in deformation modes, i.e., in the shape of the single-crystal yield surface. A heuristic assumption of an increasing importance of (111)<211>-slip as the stacking-fault energy decreases explains the qualitative trend. The quantitative parameter needed has been determined for copper from a match of prediction and experiment over a range of strains.

  5. Stochastic models for atomic clocks

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.

    1983-01-01

    For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements contribute no significant improvement to the model even at the expense of the added model complexity.
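
    A minimal numerical illustration of the stated noise model: synthetic fractional-frequency data built from white FM, random-walk FM, and a linear frequency drift, summarized with a non-overlapping Allan deviation. The noise magnitudes and the drift rate are invented for illustration and do not correspond to any NBS clock.

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau0 = 100_000, 1.0                               # fractional-frequency samples, 1 s spacing
y = (2e-13 * rng.standard_normal(n)                  # white FM
     + np.cumsum(1e-16 * rng.standard_normal(n))     # random-walk FM
     + 1e-18 * np.arange(n))                         # linear frequency drift

def allan_deviation(y, m, tau0=1.0):
    """Non-overlapping Allan deviation at averaging time m*tau0."""
    ybar = y[: (len(y) // m) * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

for m in (1, 10, 100, 1000, 10000):
    print(f"tau = {m * tau0:7.0f} s   sigma_y = {allan_deviation(y, m):.2e}")
```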

  6. Recent advances in atomic modeling

    SciTech Connect

    Goldstein, W.H.

    1988-10-12

    Precision spectroscopy of solar plasmas has historically been the goad for advances in calculating the atomic physics and dynamics of highly ionized atoms. Recent efforts to understand the laboratory plasmas associated with magnetic and inertial confinement fusion, and with X-ray laser research, have played a similar role. Developments spurred by laboratory plasma research are applicable to the modeling of high-resolution spectra from both solar and cosmic X-ray sources, such as the photoionized plasmas associated with accretion disks. Three of these developments in large scale atomic modeling are reviewed: a new method for calculating large arrays of collisional excitation rates, a sum rule based method for extending collisional-radiative models and modeling the effects of autoionizing resonances, and a detailed level accounting calculation of resonant excitation rates in FeXVII. 21 refs., 5 figs., 2 tabs.

  7. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    NASA Astrophysics Data System (ADS)

    Scott, Erin; Serpetti, Natalia; Steenbeek, Jeroen; Heymans, Johanna Jacomina

    The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting > 1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates the manual procedure, thereby producing accurate results and letting the modeller concentrate on investigating the 'best fit' model for ecological accuracy.

  8. A Quantum Model of Atoms (the Energy Levels of Atoms).

    ERIC Educational Resources Information Center

    Rafie, Francois

    2001-01-01

    Discusses the model for all atoms which was developed on the same basis as Bohr's model for the hydrogen atom. Calculates the radii and the energies of the orbits. Demonstrates how the model obeys de Broglie's hypothesis that the moving electron exhibits both wave and particle properties. (Author/ASK)

  10. A predictive fitness model for influenza

    NASA Astrophysics Data System (ADS)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
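
    The core bookkeeping of such a model can be illustrated in a few lines: each strain's frequency is propagated one season forward in proportion to exp(fitness), with fitness split into an adaptive epitope term and a deleterious non-epitope term. The mutation counts, frequencies, and fitness coefficients below are invented, and the real model's inference of those coefficients from historical sequence data is not shown.

```python
import numpy as np

# hypothetical strain data: epitope and non-epitope mutation counts, current frequencies
epitope_muts     = np.array([3, 1, 0, 2])
non_epitope_muts = np.array([1, 2, 0, 4])
x_now            = np.array([0.40, 0.30, 0.20, 0.10])

s_epi, s_del = 0.5, -0.3          # illustrative fitness coefficients per mutation
fitness = s_epi * epitope_muts + s_del * non_epitope_muts

# predicted next-season frequencies: x_i(t+1) proportional to x_i(t) * exp(f_i)
x_next = x_now * np.exp(fitness)
x_next /= x_next.sum()
print(np.round(x_next, 3))
```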

  11. A predictive fitness model for influenza.

    PubMed

    Luksza, Marta; Lässig, Michael

    2014-03-06

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.

  12. Fitting models to correlated data (large samples)

    NASA Astrophysics Data System (ADS)

    Féménias, Jean-Louis

    2004-03-01

    The study of the ordered series of residuals of a fit proved to be useful in evaluating separately the pure experimental error and the model bias leading to a possible improvement of the modeling [J. Mol. Spectrosc. 217 (2003) 32]. In the present work this procedure is extended to homogeneous correlated data. This new method allows a separate estimation of pure experimental error, model bias, and data correlation; furthermore, it brings a new insight into the difference between goodness of fit and model relevance. It can be considered either as a study of 'random systematic errors' or as an extended approach of the Durbin-Watson problem [Biometrika 37 (1950) 409] taking into account the model error. In the present work an empirical approach is proposed for large samples (n ⩾ 500) where numerical tests are done showing the accuracy and the limits of the method.
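
    Not the author's procedure, but a small reminder of the quantity it extends: the Durbin-Watson statistic computed on the residuals of a deliberately biased fit to serially correlated data. The AR(1) noise, the polynomial model, and the sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = np.linspace(0, 1, n)
# correlated noise via an AR(1) process (illustrative of "homogeneous correlated data")
eps = np.empty(n)
eps[0] = rng.standard_normal()
for k in range(1, n):
    eps[k] = 0.6 * eps[k - 1] + rng.standard_normal()
y = np.sin(2 * np.pi * x) + 0.05 * eps

# deliberately biased model: low-order polynomial fit
resid = y - np.polyval(np.polyfit(x, y, 3), x)

dw = np.sum(np.diff(resid) ** 2) / np.sum(resid**2)   # Durbin-Watson statistic
print(f"Durbin-Watson = {dw:.2f}  (~2 for uncorrelated residuals, <2 for positive correlation)")
```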

  13. SPEX (Plasma Code Spectral Fitting Tool). Collisional ionization for atoms and ions of H to Zn.

    NASA Astrophysics Data System (ADS)

    Urdampilleta, I.; Kaastra, J. S.

    2017-03-01

    Every observation of astrophysical objects involving a spectrum requires atomic data for the interpretation of line fluxes, ratios, and the ionization state of the emitting plasma. One of the processes that determine it is collisional ionization. In this study an update of the direct ionization (DI) and excitation-autoionization (EA) processes is discussed for the H to Zn-like isoelectronic sequences. The previous assessments were performed by Dere (2007, A&A 466, 771) for the H to Zn isoelectronic sequences, Arnaud & Raymond (1992, ApJ. 398, 394) for Fe and Arnaud & Rothenflug (1985, A&AS, 60, 425). However, in recent years new laboratory measurements and theoretical calculations of ionization cross sections have become accessible. We provide a review, extension and update of this previous work and fit the cross sections of all individual shells of all ions from H to Zn. These data are described using an extension of Younger's formula, suitable for integration over a Maxwellian velocity distribution to derive the subshell ionization rate coefficients. These ionization rate coefficients are included together with the radiative recombination rate data (Mao et al. 2016, A&AS, 27568) and a charge-exchange model (Gu et al. 2016, A&A 588, A52, 11) into the high-resolution plasma code and spectral fitting tool SPEX V3.0 (Kaastra et al. 1996, UV and X-ray Spectroscopy of Astrophysical and Laboratory Plasmas).

  14. Modeling and Fitting Exoplanet Transit Light Curves

    NASA Astrophysics Data System (ADS)

    Millholland, Sarah; Ruch, G. T.

    2013-01-01

    We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model differs in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the
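
    As a much-reduced illustration of the geometry involved, the sketch below generates a transit light curve for a uniform (non-limb-darkened) stellar disk from the exact circle-overlap area, with the planet moving on a straight chord. The radius ratio, impact parameter, and sky velocity are made up, and none of the paper's Keplerian dynamics, limb darkening, or Monte Carlo fitting is reproduced.

```python
import numpy as np

def overlap_area(p, z):
    """Overlap area between a planet disk of radius p and a unit-radius stellar disk."""
    if z >= 1.0 + p:
        return 0.0                      # out of transit
    if z <= 1.0 - p:
        return np.pi * p**2             # planet entirely on the stellar disk
    k0 = np.arccos((z**2 + p**2 - 1.0) / (2.0 * z * p))
    k1 = np.arccos((z**2 + 1.0 - p**2) / (2.0 * z))
    root = np.sqrt((1.0 + p - z) * (z + 1.0 - p) * (z - 1.0 + p) * (z + 1.0 + p))
    return p**2 * k0 + k1 - 0.5 * root  # lens-shaped partial overlap

p, b, v = 0.1, 0.3, 8.0                 # radius ratio, impact parameter, sky speed (stellar radii/day)
t = np.linspace(-0.2, 0.2, 400)         # days from mid-transit
z = np.sqrt(b**2 + (v * t) ** 2)        # straight-chord projected separation
flux = 1.0 - np.array([overlap_area(p, zi) for zi in z]) / np.pi
print(f"transit depth ~ {1.0 - flux.min():.4f} (expected ~ p^2 = {p**2:.4f})")
```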

  15. "Electronium": A Quantum Atomic Teaching Model.

    ERIC Educational Resources Information Center

    Budde, Marion; Niedderer, Hans; Scott, Philip; Leach, John

    2002-01-01

    Outlines an alternative atomic model to the probability model, the descriptive quantum atomic model Electronium. Discusses the way in which it is intended to support students in learning quantum-mechanical concepts. (Author/MM)

  16. "Electronium": A Quantum Atomic Teaching Model.

    ERIC Educational Resources Information Center

    Budde, Marion; Niedderer, Hans; Scott, Philip; Leach, John

    2002-01-01

    Outlines an alternative atomic model to the probability model, the descriptive quantum atomic model Electronium. Discusses the way in which it is intended to support students in learning quantum-mechanical concepts. (Author/MM)

  17. Geometry-dependent atomic multipole models for the water molecule

    NASA Astrophysics Data System (ADS)

    Loboda, O.; Millot, C.

    2017-10-01

    Models of atomic electric multipoles for the water molecule have been optimized in order to reproduce the electric potential around the molecule computed by ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations in an augmented triple-zeta quality basis set. Different models of increasing complexity, from atomic charges up to models containing atomic charges, dipoles, and quadrupoles, have been obtained. The geometry dependence of these atomic multipole models has been investigated by changing bond lengths and HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For several models, the atomic multipole components have been fitted as a function of the geometry by a Taylor series of fourth order in monomer coordinate displacements.
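
    A stripped-down version of the final fitting step might look like the least-squares sketch below, where a single atomic charge is expanded in three geometry displacements. The synthetic charge values, the displacement ranges, and the truncation at second order (the paper uses fourth order) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
# hypothetical displacements (dr1, dr2, dtheta) for 75 geometries, and a synthetic atomic charge
disp = rng.uniform(-0.1, 0.1, size=(75, 3))
q = (-0.68 + 0.40 * disp[:, 0] + 0.40 * disp[:, 1] - 0.20 * disp[:, 2]
     + 1.5 * disp[:, 0] * disp[:, 1] + rng.normal(scale=1e-3, size=75))

# second-order Taylor expansion in the displacements: constant, linear, quadratic and cross terms
d1, d2, d3 = disp.T
design = np.column_stack([np.ones(75), d1, d2, d3,
                          d1**2, d2**2, d3**2, d1 * d2, d1 * d3, d2 * d3])
coeffs, *_ = np.linalg.lstsq(design, q, rcond=None)
print(np.round(coeffs, 3))
```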

  18. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model for which the effective sample size is much smaller. Also when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches are used to assess the model fit in multilevel structural equation model. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  19. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  1. A liquid drop model for embedded atom method cluster energies

    NASA Technical Reports Server (NTRS)

    Finley, C. W.; Abel, P. B.; Ferrante, J.

    1996-01-01

    Minimum energy configurations for homonuclear clusters containing from two to twenty-two atoms of six metals, Ag, Au, Cu, Ni, Pd, and Pt have been calculated using the Embedded Atom Method (EAM). The average energy per atom as a function of cluster size has been fit to a liquid drop model, giving estimates of the surface and curvature energies. The liquid drop model gives a good representation of the relationship between average energy and cluster size. As a test the resulting surface energies are compared to EAM surface energy calculations for various low-index crystal faces with reasonable agreement.
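
    The liquid-drop fit itself is a linear least-squares problem, as the sketch below shows for synthetic energies; E/N = a_v + a_s N^(-1/3) + a_c N^(-2/3) is a quadratic in x = N^(-1/3). The energies and coefficients are invented stand-ins, not EAM results for any of the six metals.

```python
import numpy as np

# hypothetical average energy per atom (eV) for cluster sizes N, standing in for EAM results
n = np.arange(2, 23)
rng = np.random.default_rng(6)
e_per_atom = -3.5 + 2.0 * n**(-1.0 / 3.0) + 0.8 * n**(-2.0 / 3.0) + rng.normal(scale=0.01, size=n.size)

# liquid-drop form E/N = a_v + a_s*N^(-1/3) + a_c*N^(-2/3) is linear in x = N^(-1/3)
x = n**(-1.0 / 3.0)
a_c, a_s, a_v = np.polyfit(x, e_per_atom, 2)   # coefficients of x^2, x^1, x^0
print(f"volume term a_v = {a_v:.3f} eV, surface term a_s = {a_s:.3f} eV, curvature term a_c = {a_c:.3f} eV")
```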

  2. Seeing Perfectly Fitting Factor Models That Are Causally Misspecified: Understanding That Close-Fitting Models Can Be Worse

    ERIC Educational Resources Information Center

    Hayduk, Leslie

    2014-01-01

    Researchers using factor analysis tend to dismiss the significant ill fit of factor models by presuming that if their factor model is close-to-fitting, it is probably close to being properly causally specified. Close fit may indeed result from a model being close to properly causally specified, but close-fitting factor models can also be seriously…

  4. Can atom-surface potential measurements test atomic structure models?

    PubMed

    Lonij, Vincent P A; Klauss, Catherine E; Holmgren, William F; Cronin, Alexander D

    2011-06-30

    van der Waals (vdW) atom-surface potentials can be excellent benchmarks for atomic structure calculations. This is especially true if measurements are made with two different types of atoms interacting with the same surface sample. Here we show theoretically how ratios of vdW potential strengths (e.g., C₃(K)/C₃(Na)) depend sensitively on the properties of each atom, yet these ratios are relatively insensitive to properties of the surface. We discuss how C₃ ratios depend on atomic core electrons by using a two-oscillator model to represent the contribution from atomic valence electrons and core electrons separately. We explain why certain pairs of atoms are preferable to study for future experimental tests of atomic structure calculations. A well chosen pair of atoms (e.g., K and Na) will have a C₃ ratio that is insensitive to the permittivity of the surface, whereas a poorly chosen pair (e.g., K and He) will have a ratio of C₃ values that depends more strongly on the permittivity of the surface.

  5. Subshell fitting of relativistic atomic core electron densities for use in QTAIM analyses of ECP-based wave functions.

    PubMed

    Keith, Todd A; Frisch, Michael J

    2011-11-17

    Scalar-relativistic, all-electron density functional theory (DFT) calculations were done for free, neutral atoms of all elements of the periodic table using the universal Gaussian basis set. Each core, closed-subshell contribution to a total atomic electron density distribution was separately fitted to a spherical electron density function: a linear combination of s-type Gaussian functions. The resulting core subshell electron densities are useful for systematically and compactly approximating total core electron densities of atoms in molecules, for any atomic core defined in terms of closed subshells. When used to augment the electron density from a wave function based on a calculation using effective core potentials (ECPs) in the Hamiltonian, the atomic core electron densities are sufficient to restore the otherwise-absent electron density maxima at the nuclear positions and eliminate spurious critical points in the neighborhood of the atom, thus enabling quantum theory of atoms in molecules (QTAIM) analyses to be done in the neighborhoods of atoms for which ECPs were used. Comparison of results from QTAIM analyses with all-electron, relativistic and nonrelativistic molecular wave functions validates the use of the atomic core electron densities for augmenting electron densities from ECP-based wave functions. For an atom in a molecule for which a small-core or medium-core ECP is used, simply representing the core using a simplistic, tightly localized electron density function is actually sufficient to obtain a correct electron density topology and perform QTAIM analyses to obtain at least semiquantitatively meaningful results, but this is often not true when a large-core ECP is used. Comparison of QTAIM results from augmenting ECP-based molecular wave functions with the realistic atomic core electron densities presented here versus augmenting with the limiting case of tight core densities may be useful for diagnosing the reliability of large-core ECP models in
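
    The central numerical task, representing a spherical core density as a linear combination of s-type Gaussians, reduces to linear least squares once the exponents are fixed. The hydrogen-like target density, the even-tempered exponents, and the radial weighting below are illustrative assumptions, not the paper's all-electron DFT densities or fitting protocol.

```python
import numpy as np

# crude stand-in for a core subshell density: a hydrogen-like 1s density with Z = 6
z = 6.0
r = np.linspace(1e-4, 3.0, 400)
rho = (z**3 / np.pi) * np.exp(-2.0 * z * r)

# fixed even-tempered exponents; fit only the linear coefficients of the s-type Gaussians
alphas = 2.0 * z**2 * 2.5 ** np.arange(-3, 4)            # 7 exponents (hypothetical spacing)
basis = np.exp(-np.outer(r**2, alphas))                   # shape (n_r, n_gauss)
w = r                                                     # radial weight, de-emphasizes r -> 0
coeffs, *_ = np.linalg.lstsq(basis * w[:, None], rho * w, rcond=None)

fit = basis @ coeffs
rel_err = np.max(np.abs(fit - rho)) / rho.max()
print(f"coefficients: {np.round(coeffs, 3)}, max relative error = {rel_err:.3e}")
```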

  6. Evaluation of model fit in nonlinear multilevel structural equation modeling

    PubMed Central

    Schermelleh-Engel, Karin; Kerwer, Martin; Klein, Andreas G.

    2013-01-01

    Evaluating model fit in nonlinear multilevel structural equation models (MSEM) presents a challenge as no adequate test statistic is available. Nevertheless, using a product indicator approach a likelihood ratio test for linear models is provided which may also be useful for nonlinear MSEM. The main problem with nonlinear models is that product variables are non-normally distributed. Although robust test statistics have been developed for linear SEM to ensure valid results under the condition of non-normality, they have not yet been investigated for nonlinear MSEM. In a Monte Carlo study, the performance of the robust likelihood ratio test was investigated for models with single-level latent interaction effects using the unconstrained product indicator approach. As overall model fit evaluation has a potential limitation in detecting the lack of fit at a single level even for linear models, level-specific model fit evaluation was also investigated using partially saturated models. Four population models were considered: a model with interaction effects at both levels, an interaction effect at the within-group level, an interaction effect at the between-group level, and a model with no interaction effects at both levels. For these models the number of groups, predictor correlation, and model misspecification was varied. The results indicate that the robust test statistic performed sufficiently well. Advantages of level-specific model fit evaluation for the detection of model misfit are demonstrated. PMID:24624110

  7. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  8. An Investigation of Goodness of Model Data Fit

    ERIC Educational Resources Information Center

    Onder, Ismail

    2007-01-01

    IRT models' advantages can only be realized when the model fits the data set of interest. Therefore, this study aimed to investigate which IRT model will provide the best fit to the data obtained from OZDEBYR OSS 2004 D-II Exam Science Test. In goodness-of-fit analysis, first the model assumptions and then the expected model features were checked.…

  10. The best-fit universe. [cosmological models]

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1991-01-01

    Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and most well-motivated model, the current data point to a best-fit model with the following parameters: Omega(sub B) approximately equal to 0.03, Omega(sub CDM) approximately equal to 0.17, Omega(sub Lambda) approximately equal to 0.8, and H(sub 0) approximately equal to 70 km/(sec x Mpc), which significantly improves the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule out such a value.

  11. A Model Fit Statistic for Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  12. Goodness-of-Fit Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  13. A general algorithm for fitting efficiently triple differential cross sections of atomic double photoionization

    NASA Astrophysics Data System (ADS)

    Argenti, Luca; Colle, Renato

    2008-12-01

    We propose an effective procedure to fit triple differential cross sections of atomic double photoionization processes, which is based on a general expression of the transition amplitude between arbitrary states of the target atom and the parent ion, with the transition operator expressed at any order of its multipolar expansion. The major advantage of our expression, which in the dipole approximation is equivalent to those of Manakov (1996 J. Phys. B: At. Mol. Opt. Phys. 29 2711) and Malegat (1997 J. Phys. B: At. Mol. Opt. Phys. 30 251), is that it is expressed only in terms of elementary angular functions (Clebsch-Gordan coefficients, spherical harmonics and 6-j factors). Therefore our expression can be easily implemented in a general code for any kinematic condition and any order of the multipolar expansion of the transition operator. Our fitting procedure also takes into account the finite instrumental resolution in measuring energies and angles. Test calculations on helium and argon show that this further capability is often essential to remove important discrepancies between simulated and measured angular distributions.

  14. New model-fitting and model-completion programs for automated iterative nucleic acid refinement.

    PubMed

    Yamashita, Keitaro; Zhou, Yong; Tanaka, Isao; Yao, Min

    2013-06-01

    In the past decade many structures of nucleic acids have been determined, which have contributed to our understanding of their biological functions. However, crystals containing nucleic acids often diffract X-rays poorly. This makes electron-density interpretation difficult and requires a great deal of expertise in crystallography and knowledge of nucleic acid structure. Here, new programs called NAFIT and NABUILD for fitting and extending nucleic acid models are presented. These programs can be used as modules in the automated refinement system LAFIRE, as well as acting as independent programs. NAFIT performs sequential grouped fitting with empirical torsion-angle restraints and antibumping restraints including H atoms. NABUILD extends the model using a skeletonized map in a coarse-grained manner. It has been shown that NAFIT greatly improves electron-density fit and geometric quality and that iterative refinement with NABUILD significantly reduces the Rfree factor.

  15. Effectiveness of the Sport Education Fitness Model on Fitness Levels, Knowledge, and Physical Activity

    ERIC Educational Resources Information Center

    Pritchard, Tony; Hansen, Andrew; Scarboro, Shot; Melnic, Irina

    2015-01-01

    The purpose of this study was to investigate changes in fitness levels, content knowledge, physical activity levels, and participants' perceptions following the implementation of the sport education fitness model (SEFM) at a high school. Thirty-two high school students participated in 20 lessons using the SEFM. Aerobic capacity, muscular…

  17. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher's Geometric Model?

    PubMed

    Blanquart, François; Bataillon, Thomas

    2016-06-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher's model was able to explain several statistical properties of the landscapes-including the mean and SD of selection and epistasis coefficients-it was often unable to explain the full structure of fitness landscapes.

  18. Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Obreschkow, D.

    2015-09-01

    Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (http://github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (http://hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper, we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases, the hyper-fit solutions are in good agreement with published values, but uncover more information regarding the fitted model.
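
    A two-dimensional analogue of the likelihood being maximized can be written in a few lines: a straight line fitted to data with covariant Gaussian errors plus intrinsic scatter, here added along the vertical direction for simplicity rather than orthogonally to the plane as hyper-fit does. The simulated data, the shared per-point covariance, and the optimizer choice are assumptions of this sketch, not the package's implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, m_true, b_true, scat = 200, 1.5, 0.3, 0.2
x_true = rng.uniform(0, 5, n)
y_true = m_true * x_true + b_true + rng.normal(scale=scat, size=n)
# heteroscedastic fits would use per-point matrices; here one covariant error matrix for all points
sx, sy, rho = 0.10, 0.15, 0.4
cov = np.array([[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]])
xy = np.stack([x_true, y_true], axis=1) + rng.multivariate_normal([0, 0], cov, size=n)
x, y = xy[:, 0], xy[:, 1]

def neg_log_like(theta):
    m, b, log_s = theta
    # variance of the vertical residual: projected measurement covariance + intrinsic scatter
    s2 = m**2 * cov[0, 0] + cov[1, 1] - 2 * m * cov[0, 1] + np.exp(2 * log_s)
    d = y - (m * x + b)
    return 0.5 * np.sum(d**2 / s2 + np.log(2 * np.pi * s2))

res = minimize(neg_log_like, x0=[1.0, 0.0, np.log(0.1)], method="Nelder-Mead")
m_fit, b_fit, log_s_fit = res.x
print(f"slope = {m_fit:.2f}, intercept = {b_fit:.2f}, intrinsic scatter = {np.exp(log_s_fit):.2f}")
```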

  19. Nagaoka's atomic model and hyperfine interactions.

    PubMed

    Inamura, Takashi T

    2016-01-01

    The prevailing view of Nagaoka's "Saturnian" atom is so misleading that today many people have an erroneous picture of Nagaoka's vision. They believe it to be a system involving a 'giant core' with electrons circulating just outside. Actually, in view of the Coulomb potential associated with the atomic nucleus, Nagaoka's model is exactly the same as Rutherford's. This is true of the Bohr atom, too. To give proper credit, Nagaoka should be remembered together with Rutherford and Bohr in the history of the atomic model. It is also pointed out that Nagaoka was a pioneer in the use of hyperfine interactions to study nuclear structure.

  20. Deviance statistics in model fit and selection in ROC studies

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    A general non-linear regression model-based Bayesian inference approach is used in our ROC (Receiver Operating Characteristics) study. In sampling the posterior distribution, two prior models, continuous Gaussian and discrete categorical, are used for the scale parameter. Deviance statistics and the deviance information criterion (DIC) are adopted to judge the goodness-of-fit (GOF) of each model and to compare the two models. Model fit and model selection focus on the adequacy of models. Judging model adequacy is essentially measuring the agreement between model and observations. Deviance statistics and DIC provide overall measures of model fit and selection. To investigate model fit at each category of observations, we find that the cumulative, exponential contributions from individual observations to the deviance statistics are good estimates of the FPF (false positive fraction) and TPF (true positive fraction) on which the ROC curve is based. This finding further leads to a new measure of model fit, called the FPF-TPF distance, which is a Euclidean distance defined on FPF-TPF space. It combines both local and global fitting. Deviance statistics and the FPF-TPF distance are shown to be consistent and in good agreement. Theoretical derivation and numerical simulations for this new method of model fit and model selection in ROC data analysis are included. Keywords: General non-linear regression model, Bayesian inference, Markov chain Monte Carlo (MCMC) method, Goodness-of-Fit (GOF), Model selection, Deviance statistics, Deviance information criterion (DIC), Continuous conjugate prior, Discrete categorical prior.
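
    The deviance and DIC quantities referred to above are generic Bayesian model-checking tools. The following sketch computes them for a toy Bernoulli model from posterior samples; it illustrates the definitions only and is not the authors' ROC-specific regression model.

    ```python
    # Deviance and DIC from posterior samples, using a toy Bernoulli model with
    # a conjugate Beta posterior for the success probability p.
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.binomial(1, 0.7, size=50)                   # observed 0/1 data
    # Posterior under a uniform prior: Beta(1 + sum(y), 1 + n - sum(y)).
    p_samples = rng.beta(1 + y.sum(), 1 + len(y) - y.sum(), size=5000)

    def deviance(p):
        # D(p) = -2 * log-likelihood of the observations under parameter p
        return -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    D_samples = np.array([deviance(p) for p in p_samples])
    D_bar = D_samples.mean()                 # posterior mean deviance
    D_at_mean = deviance(p_samples.mean())   # deviance at the posterior mean
    p_D = D_bar - D_at_mean                  # effective number of parameters
    DIC = D_bar + p_D
    print(f"D_bar={D_bar:.2f}  p_D={p_D:.2f}  DIC={DIC:.2f}")
    ```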

  1. Curve fitting methods for solar radiation data modeling

    SciTech Connect

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The errors are measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are then used as a starting point for constructing a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
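
    As a hedged illustration of the workflow described (a two-term Gaussian fit judged by RMSE and R²), the Python sketch below fits synthetic daily irradiance data; the data and parameter values are invented and do not represent the UTP measurements.

    ```python
    # Two-term Gaussian curve fit with RMSE and R^2 as goodness-of-fit measures.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2(t, a1, b1, c1, a2, b2, c2):
        return a1 * np.exp(-((t - b1) / c1) ** 2) + a2 * np.exp(-((t - b2) / c2) ** 2)

    t = np.linspace(6, 19, 60)                           # hour of day
    rng = np.random.default_rng(2)
    data = gauss2(t, 800, 12.5, 3.0, 150, 9.5, 1.5) + rng.normal(0, 25, t.size)

    p0 = [700, 12, 3, 100, 9, 2]                         # rough initial guess
    popt, _ = curve_fit(gauss2, t, data, p0=p0)

    pred = gauss2(t, *popt)
    rmse = np.sqrt(np.mean((data - pred) ** 2))
    r2 = 1 - np.sum((data - pred) ** 2) / np.sum((data - data.mean()) ** 2)
    print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.3f}")
    ```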

  2. The Hydrogen Atom: The Rutherford Model

    NASA Astrophysics Data System (ADS)

    Tilton, Homer Benjamin

    1996-06-01

    Early this century Ernest Rutherford established the nuclear model of the hydrogen atom, presently taught as representing the best visual model after modification by Niels Bohr and Arnold Sommerfeld. It replaced the so-called "plum pudding" model of J. J. Thomson which held sway previously. While the Rutherford model represented a large step forward in our understanding of the hydrogen atom, questions remained, and still do.

  3. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  4. Goodness of Model-Data Fit and Invariant Measurement

    ERIC Educational Resources Information Center

    Engelhard, George, Jr.; Perkins, Aminah

    2013-01-01

    In this commentary, Engelhard and Perkins remark that Maydeu-Olivares has presented a framework for evaluating the goodness of model-data fit for item response theory (IRT) models and correctly points out that overall goodness-of-fit evaluations of IRT models and data are not generally explored within most applications in educational and…

  6. Attempts to link Quanta & Atoms before the Bohr Atom model

    NASA Astrophysics Data System (ADS)

    Venkatesan, A.; Lieber, M.

    2005-03-01

    Attempts to quantize atomic phenomena before Bohr are hardly ever mentioned in elementary textbooks. This presentation will elucidate the contributions of A. Haas around 1910. Haas tried to quantize the Thomson atom model as an optical resonator made of positive and negative charges. The inherent ambiguity of the charge distribution in the model made him choose a positive spherical distribution around which the electrons were distributed. He obtained expressions for the Rydberg constant and what is known today as the Bohr radius by balancing centrifugal energy with Coulomb energy and quantizing it with Planck's relation E=hν. We point out that Haas would have arrived at better estimates of these constants had he used the virial theorem, apart from the fact that the fundamental constants were not well known. The crux of Haas's physical picture was to derive Planck's constant h from the charge quantum e, the electron mass m and the atomic radius. Haas faced severe criticism for applying thermodynamic concepts such as the Planck distribution to microscopic phenomena. We will try to give a flavor of how quantum phenomena were viewed at that time. It is of interest to note that the driving force behind Haas's work was to present a paper that would secure him a position as a Privatdozent in the History of Physics. We end with comments by Bohr and Sommerfeld on Haas's work and with some brief biographical remarks.

  7. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  8. Resampling methods for model fitting and model selection.

    PubMed

    Babu, G Jogesh

    2011-11-01

    Resampling procedures for fitting models and model selection are considered in this article. Nonparametric goodness-of-fit statistics are generally based on the empirical distribution function. The distribution-free property of these statistics does not hold in the multivariate case or when some of the parameters are estimated. Bootstrap methods to estimate the underlying distributions are discussed in such cases. The results hold not only in the case of one-dimensional parameter space, but also for the vector parameters. Bootstrap methods for inference, when the data is from an unknown distribution that may or may not belong to a specified family of distributions, are also considered. Most of the information criteria-based model selection procedures such as the Akaike information criterion, Bayesian information criterion, and minimum description length use estimation of bias. The bias, which is inevitable in model selection problems, arises mainly from estimating the distance between the "true" model and an estimated model. A jackknife type procedure for model selection is discussed, which instead of bias estimation is based on bias reduction.
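
    A common concrete instance of the point about estimated parameters is the parametric bootstrap for a Kolmogorov-Smirnov test: once the model parameters are fitted from the data, the standard KS critical values no longer apply, so the null distribution of the statistic is bootstrapped instead. A minimal sketch with invented data:

    ```python
    # Parametric bootstrap of a KS goodness-of-fit test with estimated parameters.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.gamma(shape=2.0, scale=1.5, size=200)        # "observed" data

    # Fit a normal model (deliberately misspecified here) and compute the KS statistic.
    mu, sd = x.mean(), x.std(ddof=1)
    d_obs = stats.kstest(x, "norm", args=(mu, sd)).statistic

    # Bootstrap the null distribution of the KS statistic under the fitted model,
    # re-estimating the parameters in every resample.
    B, d_boot = 1000, np.empty(1000)
    for b in range(B):
        xb = rng.normal(mu, sd, size=x.size)
        d_boot[b] = stats.kstest(xb, "norm", args=(xb.mean(), xb.std(ddof=1))).statistic

    p_value = np.mean(d_boot >= d_obs)
    print(f"KS statistic = {d_obs:.3f}, bootstrap p-value = {p_value:.3f}")
    ```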

  9. Modeling of atomic systems for atomic clocks and quantum information

    NASA Astrophysics Data System (ADS)

    Arora, Bindiya

    This dissertation reports the modeling of atomic systems for atomic clocks and quantum information. This work is motivated by the prospects of optical frequency standards with trapped ions and the quantum computation proposals with neutral atoms in optical lattices. Extensive calculations of the electric-dipole matrix elements in monovalent atoms are conducted using the relativistic all-order method. This approach is a linearized version of the coupled-cluster method, which sums infinite sets of many-body perturbation theory terms. All allowed transitions between the lowest ns, np1/2, np 3/2 states and a large number of excited states of alkali-metal atoms are evaluated using the all-order method. For Ca+ ion, additional allowed transitions between nd5/2, np 3/2, nf5/2, nf 7/2 states and a large number of excited states are evaluated. We combine D1 lines measurements by Miller et al. [18] with our all-order calculations to determine the values of the electric-dipole matrix elements for the 4pj - 3d j' transitions in K and for the 5pj - 4dj' transitions in Rb to high precision. The resulting electric-dipole matrix elements are used for the high-precision calculation of frequency-dependent polarizabilities of ground state of alkali atoms. Our values of static polarizabilities are found to be in excellent agreement with available experiments. Calculations were done for the wavelength in the range 300--1600 nm, with particular attention to wavelengths of common infrared lasers. We parameterize our results so that they can be extended accurately to arbitrary wavelengths above 800 nm. Our data can be used to predict the oscillation frequencies of optically-trapped atoms, and particularly the ratios of frequencies of different species held in the same trap. We identify wavelengths at which two different alkali atoms have the same oscillation frequency. We present results of all-order calculations of static and frequency-dependent polarizabilities of excited np1/2 and np3

  10. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    PubMed

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Atomic modeling of cryo-electron microscopy reconstructions – Joint refinement of model and imaging parameters

    PubMed Central

    Chapman, Michael S.; Trzynka, Andrew; Chapman, Brynmor K.

    2013-01-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5–2.5 Å at resolutions of 4.5–6 Å. PMID:23376441

  12. HDFITS: Porting the FITS data model to HDF5

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100 times faster than FITS in some use cases) and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.
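
    The core idea, mapping each FITS HDU to an HDF5 dataset and its header cards to attributes, can be sketched in a few lines with astropy and h5py. This toy example is not the fits2hdf tool itself; it ignores table HDUs, commentary cards and exact round-trip fidelity.

    ```python
    # Copy image HDU data into HDF5 datasets and header cards into attributes.
    import numpy as np
    from astropy.io import fits
    import h5py

    # Make a small example FITS file so the sketch is self-contained.
    hdu = fits.PrimaryHDU(data=np.arange(12.0).reshape(3, 4))
    hdu.header["OBSERVER"] = "example"
    hdu.writeto("example.fits", overwrite=True)

    with fits.open("example.fits") as hdul, h5py.File("example.h5", "w") as h5:
        for i, h in enumerate(hdul):
            if h.data is None:
                continue
            dset = h5.create_dataset(f"HDU{i}", data=h.data, compression="gzip")
            for key, value in h.header.items():
                if key:                          # skip blank cards
                    dset.attrs[str(key)] = str(value)   # stored as strings for simplicity
    ```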

  13. Modeling Atom Probe Tomography: A review.

    PubMed

    Vurpillot, F; Oberdorfer, C

    2015-12-01

    Improving both the precision and the accuracy of Atom Probe Tomography reconstruction requires a correct understanding of the imaging process. To this end, numerical modeling approaches have been developed over the past 15 years. The ingredients of these modeling tools are drawn from the basic physics of the field evaporation mechanism. The interplay between the nature and structure of the analyzed sample and the artefacts in the reconstructed image has pushed these models to become gradually more sophisticated. This paper reviews the evolution of modeling approaches in Atom Probe Tomography and presents some potential future directions for improving the method.

  14. Uncertainties in forces extracted from non-contact atomic force microscopy measurements by fitting of long-range background forces.

    PubMed

    Sweetman, Adam; Stannard, Andrew

    2014-01-01

    In principle, non-contact atomic force microscopy (NC-AFM) now readily allows for the measurement of forces with sub-nanonewton precision on the atomic scale. In practice, however, the extraction of the often desired 'short-range' force from the experimental observable (frequency shift) is often far from trivial. In most cases there is a significant contribution to the total tip-sample force due to non-site-specific van der Waals and electrostatic forces. Typically, the contribution from these forces must be removed before the results of the experiment can be successfully interpreted, often by comparison to density functional theory calculations. In this paper we compare the 'on-minus-off' method for extracting site-specific forces to a commonly used extrapolation method modelling the long-range forces using a simple power law. By examining the behaviour of the fitting method in the case of two radically different interaction potentials we show that significant uncertainties in the final extracted forces may result from use of the extrapolation method.
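
    A minimal sketch of the extrapolation method being examined: fit a power-law background to the long-range tail of a force-distance curve and subtract it to estimate the short-range force. The functional forms and numbers below are invented for illustration only.

    ```python
    # Fit F_long(z) = -C / z**n to the long-range tail and subtract it.
    import numpy as np
    from scipy.optimize import curve_fit

    z = np.linspace(0.3, 3.0, 200)                       # tip-sample distance (nm)
    f_long_true = -2.0 / z**2                            # long-range background (nN)
    f_short_true = -1.5 * np.exp(-(z - 0.35) / 0.1)      # short-range contribution
    rng = np.random.default_rng(4)
    f_total = f_long_true + f_short_true + rng.normal(0, 0.02, z.size)

    def power_law(z, C, n):
        return -C / z**n

    # Fit only the far tail (z > 1 nm), where the short-range force is negligible.
    tail = z > 1.0
    popt, _ = curve_fit(power_law, z[tail], f_total[tail], p0=[1.0, 2.0])

    f_short_est = f_total - power_law(z, *popt)          # background-subtracted force
    print(f"fitted C={popt[0]:.2f}, n={popt[1]:.2f}; "
          f"short-range minimum ~ {f_short_est.min():.2f} nN")
    ```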

  15. Uncertainties in forces extracted from non-contact atomic force microscopy measurements by fitting of long-range background forces

    PubMed Central

    Stannard, Andrew

    2014-01-01

    Summary In principle, non-contact atomic force microscopy (NC-AFM) now readily allows for the measurement of forces with sub-nanonewton precision on the atomic scale. In practice, however, the extraction of the often desired ‘short-range’ force from the experimental observable (frequency shift) is often far from trivial. In most cases there is a significant contribution to the total tip–sample force due to non-site-specific van der Waals and electrostatic forces. Typically, the contribution from these forces must be removed before the results of the experiment can be successfully interpreted, often by comparison to density functional theory calculations. In this paper we compare the ‘on-minus-off’ method for extracting site-specific forces to a commonly used extrapolation method modelling the long-range forces using a simple power law. By examining the behaviour of the fitting method in the case of two radically different interaction potentials we show that significant uncertainties in the final extracted forces may result from use of the extrapolation method. PMID:24778964

  16. Consequences of Fitting Nonidentified Latent Class Models

    ERIC Educational Resources Information Center

    Abar, Beau; Loken, Eric

    2012-01-01

    Latent class models are becoming more popular in behavioral research. When models with a large number of latent classes relative to the number of manifest indicators are estimated, researchers must consider the possibility that the model is not identified. It is not enough to determine that the model has positive degrees of freedom. A well-known…

  17. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    NASA Astrophysics Data System (ADS)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider microscopic models with an advanced version of neutral network fitness landscapes. In this problem setting, we suppose the fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite population models with a nearly neutral network fitness landscape in the large population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.

  18. Atomization data for spray combustion modeling

    NASA Technical Reports Server (NTRS)

    Ferrenberg, A. J.; Varma, M. S.

    1985-01-01

    Computer models that simulate the energy release processes in spray combustion are highly dependent upon the quality of atomization data utilized. This paper presents results of analyses performed with a state-of-the-art rocket combustion code, demonstrating the important effects of initial droplet sizes and size distributions on combustion losses. Also, the questionable aspects and inapplicability of the generally available atomization data are discussed. One important and misunderstood aspect of the atomization process is the difference between spatial (concentration) and flux (temporal) droplet size distributions. These are addressed, and a computer model developed to assess this difference is described and results presented. Finally, experimental results are shown that demonstrate the often neglected effects of the local gas velocity field on the atomization process.

  20. Testing proportionality in the proportional odds model fitted with GEE.

    PubMed

    Stiger, T R; Barnhart, H X; Williamson, J M

    1999-06-15

    Generalized estimating equations (GEE) methodology as proposed by Liang and Zeger has received widespread use in the analysis of correlated binary data. Miller et al. and Lipsitz et al. extended GEE to correlated nominal and ordinal categorical data; in particular, they used GEE for fitting McCullagh's proportional odds model. In this paper, we consider robust (that is, empirically corrected) and model-based versions of both a score test and a Wald test for assessing the assumption of proportional odds in the proportional odds model fitted with GEE. The Wald test is based on fitting separate multiple logistic regression models for each dichotomization of the response variable, whereas the score test requires fitting just the proportional odds model. We evaluate the proposed tests in small to moderate samples by simulating data from a series of simple models. We illustrate the use of the tests on three data sets from medical studies.

  1. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

    This research examines the utility of the S-X² statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  2. Towards solution and refinement of organic crystal structures by fitting to the atomic pair distribution function

    SciTech Connect

    Prill, Dragica; Juhas, Pavol; Billinge, Simon J. L.; Schmidt, Martin U.

    2016-01-01

    In this study, a method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.

  3. Towards solution and refinement of organic crystal structures by fitting to the atomic pair distribution function

    DOE PAGES

    Prill, Dragica; Juhas, Pavol; Billinge, Simon J. L.; ...

    2016-01-01

    In this study, a method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.

  4. Towards solution and refinement of organic crystal structures by fitting to the atomic pair distribution function.

    PubMed

    Prill, Dragica; Juhás, Pavol; Billinge, Simon J L; Schmidt, Martin U

    2016-01-01

    A method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.

  5. Assessing Fit of Unidimensional Graded Response Models Using Bayesian Methods

    ERIC Educational Resources Information Center

    Zhu, Xiaowen; Stone, Clement A.

    2011-01-01

    The posterior predictive model checking method is a flexible Bayesian model-checking tool and has recently been used to assess fit of dichotomous IRT models. This paper extended previous research to polytomous IRT models. A simulation study was conducted to explore the performance of posterior predictive model checking in evaluating different…

  6. Atomization data requirements for rocket combustor modeling

    NASA Technical Reports Server (NTRS)

    Ferrenberg, A. J.; Varma, M. S.

    1984-01-01

    The complex computer codes, which model liquid rocket combustors, require information about the distribution and atomization of these liquid reactants. The available information is, in general, of questionable validity and applicability. Authors and users of combustion codes are often unaware of, or underestimate the importance of, these deficiencies in atomization data. These deficiencies and their importance are examined. Results of analyses performed with a state-of-the-art rocket combustion code are presented which demonstrate the important effects of such atomization information as initial droplet sizes and size distribution on vaporization rate and losses. Also, the questionable aspects and inapplicability of the available atomization data are discussed. One important and often neglected or misunderstood aspect of atomization data is the differences between spatial (concentration) and flux (often called temporal) droplet size distributions. These are described, and a computer model constructed to assess the difference between concentration and flux droplet size distributions is described and results presented. Experimental data are also given to demonstrate this difference. Finally, experimental results are presented that demonstrate the very great, and often neglected effect, of the local gas velocity field on atomization.

  7. Students' Mental Models of Atomic Spectra

    ERIC Educational Resources Information Center

    Körhasan, Nilüfer Didis; Wang, Lu

    2016-01-01

    Mental modeling, which is a theory about knowledge organization, has been recently studied by science educators to examine students' understanding of scientific concepts. This qualitative study investigates undergraduate students' mental models of atomic spectra. Nine second-year physics students, who have already taken the basic chemistry and…

  9. Planning: Can One Model Fit Two Institutions?

    ERIC Educational Resources Information Center

    Mims, R.S.; And Others

    1983-01-01

    Rational college planning approaches presuppose certain organizational characteristics such as a predictable environment, stable programs, and some consensus on priorities. Institutions with these characteristics will use rational planning models more easily. The experiences of two different universities using a common, rational model are…

  10. Derivation of Distributed Models of Atomic Polarizability for Molecular Simulations.

    PubMed

    Soteras, Ignacio; Curutchet, Carles; Bidon-Chanal, Axel; Dehez, François; Ángyán, János G; Orozco, Modesto; Chipot, Christophe; Luque, F Javier

    2007-11-01

    The main thrust of this investigation is the development of models of distributed atomic polarizabilities for the treatment of induction effects in molecular mechanics simulations. The models are obtained within the framework of the induced dipole theory by fitting the induction energies computed via a fast but accurate MP2/Sadlej-adjusted perturbational approach in a grid of points surrounding the molecule. Particular care is paid in the examination of the atomic quantities obtained from models of implicitly and explicitly interacting polarizabilities. Appropriateness and accuracy of the distributed models are assessed by comparing the molecular polarizabilities recovered from the models and those obtained experimentally and from MP2/Sadlej calculations. The behavior of the models is further explored by computing the polarization energy for aromatic compounds in the context of cation-π interactions and for selected neutral compounds in a TIP3P aqueous environment. The present results suggest that the computational strategy described here constitutes a very effective tool for the development of distributed models of atomic polarizabilities and can be used in the generation of new polarizable force fields.

  11. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  12. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
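
    The regression models referred to above can be sketched as ordinary least squares on binary genotype indicators plus their pairwise products. In the toy example below, a random additive-plus-pairwise landscape stands in for the quasi-empirical RNA landscape used in the paper.

    ```python
    # Regression approximation of a fitness landscape with single and pairwise terms.
    import numpy as np
    from itertools import combinations
    from numpy.linalg import lstsq

    rng = np.random.default_rng(5)
    L, n_sample = 8, 120                                  # loci, sampled genotypes
    genotypes = rng.integers(0, 2, size=(n_sample, L))

    # Toy "true" landscape: additive effects plus pairwise (epistatic) interactions.
    a = rng.normal(0, 1, L)
    pairs = list(combinations(range(L), 2))
    b = rng.normal(0, 0.3, len(pairs))
    def fitness(g):
        return g @ a + sum(b[k] * g[i] * g[j] for k, (i, j) in enumerate(pairs))
    w = np.array([fitness(g) for g in genotypes]) + rng.normal(0, 0.1, n_sample)

    # Design matrix: intercept, single-locus terms, pairwise products.
    X = np.column_stack([np.ones(n_sample), genotypes] +
                        [genotypes[:, i] * genotypes[:, j] for i, j in pairs])
    coef, *_ = lstsq(X, w, rcond=None)
    r2 = 1 - np.sum((w - X @ coef) ** 2) / np.sum((w - w.mean()) ** 2)
    print(f"{X.shape[1]} regression terms, in-sample R^2 = {r2:.3f}")
    ```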

  13. MAPCLUS: A Mathematical Programming Approach to Fitting the ADCLUS Model.

    ERIC Educational Resources Information Center

    Arabie, Phipps

    1980-01-01

    A new computing algorithm, MAPCLUS (Mathematical Programming Clustering), for fitting the Shephard-Arabie ADCLUS (Additive Clustering) model is presented. Details and benefits of the algorithm are discussed. (Author/JKS)

  15. Predictive models for population performance on real biological fitness landscapes.

    PubMed

    Rowe, William; Wedge, David C; Platt, Mark; Kell, Douglas B; Knowles, Joshua

    2010-09-01

    Directed evolution, in addition to its principal application of obtaining novel biomolecules, offers significant potential as a vehicle for obtaining useful information about the topologies of biomolecular fitness landscapes. In this article, we make use of a special type of model of fitness landscapes-based on finite state machines-which can be inferred from directed evolution experiments. Importantly, the model is constructed only from the fitness data and phylogeny, not sequence or structural information, which is often absent. The model, called a landscape state machine (LSM), has already been used successfully in the evolutionary computation literature to model the landscapes of artificial optimization problems. Here, we use the method for the first time to simulate a biological fitness landscape based on experimental evaluation. We demonstrate in this study that LSMs are capable not only of representing the structure of model fitness landscapes such as NK-landscapes, but also the fitness landscape of real DNA oligomers binding to a protein (allophycocyanin), data we derived from experimental evaluations on microarrays. The LSMs prove adept at modelling the progress of evolution as a function of various controlling parameters, as validated by evaluations on the real landscapes. Specifically, the ability of the model to 'predict' optimal mutation rates and other parameters of the evolution is demonstrated. A modification to the standard LSM also proves accurate at predicting the effects of recombination on the evolution.

  16. Fitting ARMA Time Series by Structural Equation Models.

    ERIC Educational Resources Information Center

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  17. Relative and Absolute Fit Evaluation in Cognitive Diagnosis Modeling

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy; Zhang, Zao

    2013-01-01

    As with any psychometric models, the validity of inferences from cognitive diagnosis models (CDMs) determines the extent to which these models can be useful. For inferences from CDMs to be valid, it is crucial that the fit of the model to the data is ascertained. Based on a simulation study, this study investigated the sensitivity of various fit…

  19. Fitting population models from field data

    USGS Publications Warehouse

    Emlen, J.M.; Freeman, D.C.; Kirchhoff, M.D.; Alados, C.L.; Escos, J.; Duda, J.J.

    2003-01-01

    The application of population and community ecology to solving real-world problems requires population and community dynamics models that reflect the myriad patterns of interaction among organisms and between the biotic and physical environments. Appropriate models are not hard to construct, but the experimental manipulations needed to evaluate their defining coefficients are often both time consuming and costly, and sometimes environmentally destructive, as well. In this paper we present an empirical approach for finding the coefficients of broadly inclusive models without the need for environmental manipulation, demonstrate the approach with both an animal and a plant example, and suggest possible applications. Software has been developed, and is available from the senior author, with a manual describing both field and analytic procedures.

  20. A New Tradition To Fit the Model.

    ERIC Educational Resources Information Center

    Darnell, D. Roe; Rosenthal, Donna McCrohan

    2001-01-01

    Discusses Cerro Coso Community College in Ridgecrest (California), where 80-85 percent of all local jobs are with one employer, the China Lake Naval Air Weapons Station (NAWS). States that massive layoffs at NAWS inspired creative ways of rethinking the community college model at Cerro Coso, such as creating the nation's first computer graphics imagery…

  1. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  2. Akaike information criterion to select well-fit resist models

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Fryer, David; Sturtevant, John

    2015-03-01

    In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k-number of folds in the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
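
    A minimal sketch of AIC-based ranking, using polynomial fits of increasing order as stand-ins for modelforms of increasing complexity (the actual compact resist models are not reproduced here):

    ```python
    # Rank candidate models by AIC = 2k - 2 ln(L_max), one calibration per model.
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.linspace(0, 1, 60)
    y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)   # "calibration" data

    def aic_for_degree(deg):
        coef = np.polyfit(x, y, deg)
        resid = y - np.polyval(coef, x)
        n, k = x.size, deg + 2                 # deg+1 coefficients plus noise variance
        sigma2 = np.mean(resid**2)             # ML estimate of the error variance
        log_like = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
        return 2 * k - 2 * log_like

    for deg in range(1, 7):
        print(f"degree {deg}: AIC = {aic_for_degree(deg):.1f}")
    # The lowest AIC identifies the model that balances fit quality against the
    # number of parameters, without repeated k-fold recalibration.
    ```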

  3. Assessing the goodness of fit of personal risk models.

    PubMed

    Gong, Gail; Quante, Anne S; Terry, Mary Beth; Whittemore, Alice S

    2014-08-15

    We describe a flexible family of tests for evaluating the goodness of fit (calibration) of a pre-specified personal risk model to the outcomes observed in a longitudinal cohort. Such evaluation involves using the risk model to assign each subject an absolute risk of developing the outcome within a given time from cohort entry and comparing subjects' assigned risks with their observed outcomes. This comparison involves several issues. For example, subjects followed only for part of the risk period have unknown outcomes. Moreover, existing tests do not reveal the reasons for poor model fit when it occurs, which can reflect misspecification of the model's hazards for the competing risks of outcome development and death. To address these issues, we extend the model-specified hazards for outcome and death, and use score statistics to test the null hypothesis that the extensions are unnecessary. Simulated cohort data applied to risk models whose outcome and mortality hazards agreed and disagreed with those generating the data show that the tests are sensitive to poor model fit, provide insight into the reasons for poor fit, and accommodate a wide range of model misspecification. We illustrate the methods by examining the calibration of two breast cancer risk models as applied to a cohort of participants in the Breast Cancer Family Registry. The methods can be implemented using the Risk Model Assessment Program, an R package freely available at http://stanford.edu/~ggong/rmap/.

  4. A Comprehensive X-Ray Absorption Model for Atomic Oxygen

    NASA Technical Reports Server (NTRS)

    Gorczyca, T. W.; Bautista, M. A.; Hasoglu, M. F.; Garcia, J.; Gatuzz, E.; Kaastra, J. S.; Kallman, T. R.; Manson, S. T.; Mendoza, C.; Raassen, A. J. J.; de Vries, C. P.; Zatsarinny, O.

    2013-01-01

    An analytical formula is developed to accurately represent the photoabsorption cross section of atomic Oxygen for all energies of interest in X-ray spectral modeling. In the vicinity of the K edge, a Rydberg series expression is used to fit R-matrix results, including important orbital relaxation effects, that accurately predict the absorption oscillator strengths below threshold and merge consistently and continuously to the above-threshold cross section. Further, minor adjustments are made to the threshold energies in order to reliably align the atomic Rydberg resonances after consideration of both experimental and observed line positions. At energies far below or above the K-edge region, the formulation is based on both outer- and inner-shell direct photoionization, including significant shake-up and shake-off processes that result in photoionization-excitation and double-photoionization contributions to the total cross section. The ultimate purpose for developing a definitive model for oxygen absorption is to resolve standing discrepancies between the astronomically observed and laboratory-measured line positions, and between the inferred atomic and molecular oxygen abundances in the interstellar medium from XSTAR and SPEX spectral models.

  5. Eigen model with general fitness functions and degradation rates

    NASA Astrophysics Data System (ADS)

    Hu, Chin-Kun; Saakian, David B.

    2006-03-01

    We present an exact solution of Eigen's quasispecies model with a general degradation rate and fitness functions, including a square root decrease of fitness with increasing Hamming distance from the wild type. The found behavior of the model with a degradation rate is analogous to a viral quasi-species under attack by the immune system of the host. Our exact solutions also revise the known results of neutral networks in quasispecies theory. To explain the existence of mutants with large Hamming distances from the wild type, we propose three different modifications of the Eigen model: mutation landscape, multiple adjacent mutations, and frequency-dependent fitness in which the steady state solution shows a multi-center behavior.

  6. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    ERIC Educational Resources Information Center

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  7. Power spectrum analysis with least-squares fitting: Amplitude bias and its elimination, with application to optical tweezers and atomic force microscope cantilevers

    NASA Astrophysics Data System (ADS)

    Nørrelykke, Simon F.; Flyvbjerg, Henrik

    2010-07-01

    Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.

  8. Power spectrum analysis with least-squares fitting: amplitude bias and its elimination, with application to optical tweezers and atomic force microscope cantilevers.

    PubMed

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2010-07-01

    Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
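
    The following sketch illustrates the setting: a Lorentzian power spectrum fitted by weighted least squares to an n-fold averaged periodogram, with weights taken from the experimental spectrum, one of the bias-prone schemes discussed above. The data are generated directly from the assumed spectrum rather than from a simulated trap, so all numbers are illustrative.

    ```python
    # Weighted least-squares Lorentzian fit to an n-fold averaged power spectrum.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(7)
    D_true, fc_true, n = 0.5, 500.0, 10              # diffusion const., corner freq.
    f = np.linspace(10.0, 5000.0, 400)
    P_theory = D_true / (2 * np.pi**2 * (fc_true**2 + f**2))
    # An n-fold averaged periodogram value is approximately Gamma(n, P/n) distributed.
    P_exp = rng.gamma(n, P_theory / n)

    def lorentzian(f, D, fc):
        return D / (2 * np.pi**2 * (fc**2 + f**2))

    # Weights derived from the *measured* spectrum, a common (bias-prone) choice.
    popt, _ = curve_fit(lorentzian, f, P_exp, p0=[1.0, 300.0], sigma=P_exp / np.sqrt(n))
    print(f"fitted D = {popt[0]:.3f} (true {D_true}), fitted fc = {popt[1]:.1f}")
    # Repeating this over many synthetic data sets shows the fitted D biased by a
    # small factor of order 1/n, while fc is essentially unbiased, as the paper reports.
    ```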

  9. Quantum model of the Thomson helium atom

    NASA Astrophysics Data System (ADS)

    Kazaryan, E. M.; Shakhnazaryan, V. A.; Sarkisyan, H. A.; Gusev, A. A.

    2014-03-01

    A quantum model of the Thomson helium atom is considered within the framework of stationary perturbation theory. It is shown that from a formal point of view this problem is similar to that of two-electron states in a parabolic quantum dot. The ground state energy of the quantum Thomson helium atom is estimated on the basis of Heisenberg's uncertainty principle. The ground state energies obtained in the first order of perturbation theory and qualitative estimate provide, respectively, upper and lower estimates of eigenvalues derived by numerically solving the problem for a quantum model. The conditions under which the Kohn theorem holds in this system, when the values of resonance absorption frequencies are independent of the Coulomb interaction between electrons, are discussed.

  10. Fitting milk production curves through nonlinear mixed models.

    PubMed

    Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica

    2017-05-01

    The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves using two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. A model of how milk production progresses along each lactation is needed not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made using data from a group of high-production, high-reproduction dairy farms, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remainder third lactations (19 382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria favoured MilkBot, the differences in the estimated production indicators did not represent a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect, which is related to the magnitude of production, was included. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
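
    For reference, the Wood curve is y(t) = a·t^b·exp(-c·t), with peak yield at t = b/c days in milk. The sketch below fits it to invented test-day records for a single cow (a plain fixed-effects fit; the paper's cow random effect would require a mixed-model tool such as PROC NLMIXED or an equivalent).

    ```python
    # Fit the Wood lactation curve to toy test-day milk yields.
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        return a * t**b * np.exp(-c * t)

    t = np.array([10, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300], float)  # DIM
    rng = np.random.default_rng(8)
    y = wood(t, 18.0, 0.25, 0.004) + rng.normal(0, 0.8, t.size)   # kg/day, toy data

    popt, _ = curve_fit(wood, t, y, p0=[15.0, 0.2, 0.003])
    a, b, c = popt
    print(f"a={a:.2f}, b={b:.3f}, c={c:.4f}")
    print(f"days in milk to peak = {b / c:.0f}, peak yield = {wood(b / c, *popt):.1f} kg")
    ```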

  11. Time-domain fitting of battery electrochemical impedance models

    NASA Astrophysics Data System (ADS)

    Alavi, S. M. M.; Birkl, C. R.; Howey, D. A.

    2015-08-01

    Electrochemical impedance spectroscopy (EIS) is an effective technique for diagnosing the behaviour of electrochemical devices such as batteries and fuel cells, usually by fitting data to an equivalent circuit model (ECM). The common approach in the laboratory is to measure the impedance spectrum of a cell in the frequency domain using a single sine sweep signal, then fit the ECM parameters in the frequency domain. This paper focuses instead on estimation of the ECM parameters directly from time-domain data. This may be advantageous for parameter estimation in practical applications such as automotive systems including battery-powered vehicles, where the data may be heavily corrupted by noise. The proposed methodology is based on the simplified refined instrumental variable for continuous-time fractional systems method ('srivcf'), provided by the Crone toolbox [1,2], combined with gradient-based optimisation to estimate the order of the fractional term in the ECM. The approach was tested first on synthetic data and then on real data measured from a 26650 lithium-ion iron phosphate cell with low-cost equipment. The resulting Nyquist plots from the time-domain fitted models match the impedance spectrum closely (much more accurately than when a Randles model is assumed), and the fitted parameters match, to within 13%, those determined separately with a laboratory potentiostat using frequency-domain fitting.
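    The fractional-order 'srivcf' estimation used in the paper is not reproduced here; as a much simplified illustration of time-domain ECM fitting, the sketch below fits an integer-order R0 + (R1 || C1) circuit to a synthetic constant-current discharge pulse. All parameter values are assumed for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

I_PULSE = 2.0  # applied discharge current [A], assumed known

def ecm_voltage(t, ocv, r0, r1, c1):
    # Terminal voltage during a constant-current pulse for an R0 + (R1 || C1) circuit.
    return ocv - I_PULSE * r0 - I_PULSE * r1 * (1.0 - np.exp(-t / (r1 * c1)))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 300.0, 600)                        # seconds
v_meas = ecm_voltage(t, 3.30, 0.015, 0.025, 1800.0) + rng.normal(0.0, 1e-3, t.size)

params, _ = curve_fit(ecm_voltage, t, v_meas, p0=(3.3, 0.01, 0.01, 1000.0))
print(dict(zip(["OCV [V]", "R0 [ohm]", "R1 [ohm]", "C1 [F]"], params)))
```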

  12. Evolution in random fitness landscapes: the infinite sites model

    NASA Astrophysics Data System (ADS)

    Park, Su-Chan; Krug, Joachim

    2008-04-01

    We consider the evolution of an asexually reproducing population in an uncorrelated random fitness landscape in the limit of infinite genome size, which implies that each mutation generates a new fitness value drawn from a probability distribution g(w). This is the finite population version of Kingman's house of cards model (Kingman 1978 J. Appl. Probab. 15 1). In contrast to Kingman's work, the focus here is on unbounded distributions g(w) which lead to an indefinite growth of the population fitness. The model is solved analytically in the limit of infinite population size N → ∞ and simulated numerically for finite N. When the genome-wide mutation probability U is small, the long-time behavior of the model reduces to a point process of fixation events, which is referred to as a diluted record process (DRP). The DRP is similar to the standard record process except that a new record candidate (a number that exceeds all previous entries in the sequence) is accepted only with a certain probability that depends on the values of the current record and the candidate. We develop a systematic analytic approximation scheme for the DRP. At finite U the fitness frequency distribution of the population decomposes into a stationary part due to mutations and a traveling wave component due to selection, which is shown to imply a reduction of the mean fitness by a factor of 1-U compared to the U → 0 limit.

  13. Genome-Wide Heterogeneity of Nucleotide Substitution Model Fit

    PubMed Central

    Arbiza, Leonardo; Patricio, Mateus; Dopazo, Hernán; Posada, David

    2011-01-01

    At a genomic scale, the patterns that have shaped molecular evolution are believed to be largely heterogeneous. Consequently, comparative analyses should use appropriate probabilistic substitution models that capture the main features under which different genomic regions have evolved. While efforts have concentrated on the development and understanding of model selection techniques, no descriptions of overall relative substitution model fit at the genome level have been reported. Here, we provide a characterization of best-fit substitution models across three genomic data sets including coding regions from mammals, vertebrates, and Drosophila (24,000 alignments). According to the Akaike Information Criterion (AIC), 82 of 88 models considered were selected as best-fit models on at least one occasion, although with very different frequencies. Most parameter estimates also varied broadly among genes. Patterns found for vertebrates and Drosophila were quite similar and often more complex than those found in mammals. Phylogenetic trees derived from models in the 95% confidence interval set showed much less variance and were significantly closer to the tree estimated under the best-fit model than trees derived from models outside this interval. Although alternative criteria selected simpler models than the AIC, they suggested similar patterns. Altogether, our results show that at a genomic scale, different gene alignments for the same set of taxa are best explained by a large variety of different substitution models and that model choice has implications for different parameter estimates including the inferred phylogenetic trees. After taking into account the differences related to sample size, our results suggest a noticeable diversity in the underlying evolutionary process. Altogether, we conclude that the use of model selection techniques is important to obtain consistent phylogenetic estimates from real data at a genomic scale. PMID:21824869
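    The model-choice criterion referred to above reduces to a small computation; a sketch with placeholder (not study) log-likelihoods and parameter counts:

```python
# model name: (free parameters beyond branch lengths, ln L) -- placeholders,
# not values from the study above.
candidates = {
    "JC69":  (0, -4125.7),
    "HKY85": (4, -4010.2),
    "GTR":   (8, -3991.8),
    "GTR+G": (9, -3950.3),
}

aic = {name: 2 * k - 2 * lnL for name, (k, lnL) in candidates.items()}
best = min(aic, key=aic.get)
for name in sorted(aic, key=aic.get):
    print(f"{name:>6}: AIC = {aic[name]:8.1f}   dAIC = {aic[name] - aic[best]:6.1f}")
```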

  14. Ongoing Processes in a Fitness Network Model under Restricted Resources

    PubMed Central

    Niizato, Takayuki; Gunji, Yukio-Pegio

    2015-01-01

    In real networks, the resources that make up the nodes and edges are finite. This constraint poses a serious problem for network modeling, namely, the compatibility between robustness and efficiency. However, these concepts are generally in conflict with each other. In this study, we propose a new fitness-driven network model for finite resources. In our model, each individual has its own fitness, which it tries to increase. The main assumption in fitness-driven networks is that incomplete estimation of fitness results in a dynamical growing network. By taking into account these internal dynamics, nodes and edges emerge as a result of exchanges between finite resources. We show that our network model exhibits exponential distributions in the in- and out-degree distributions and a power law distribution of edge weights. Furthermore, our network model resolves the trade-off relationship between robustness and efficiency. Our result suggests that growing and anti-growing networks are the result of resolving the trade-off problem itself. PMID:25985301

  15. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
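    The lowest-order ingredient of the CAMM idea can be illustrated in a few lines: the molecular dipole reconstructed from atomic point charges, with the higher cumulative atomic moments (not shown) making up the difference to the true molecular multipoles. Geometry and charges below are illustrative, not values from the paper.

```python
import numpy as np

# HCN on the z axis; coordinates in bohr and charges in e are made up
# for illustration, not fitted values from the paper.
coords = np.array([[0.0, 0.0, -2.0],    # H
                   [0.0, 0.0,  0.0],    # C
                   [0.0, 0.0,  2.2]])   # N
charges = np.array([0.23, 0.05, -0.28])

# Charge-only estimate of the molecular dipole: mu = sum_i q_i r_i.
dipole = (charges[:, None] * coords).sum(axis=0)
print("charge-only molecular dipole (e*bohr):", dipole)
```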

  16. [How to fit and interpret multilevel models using SPSS].

    PubMed

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchic or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (version 11 or later) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.
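    The article works in SPSS; as a rough Python analogue, the first and fourth of the five models (random intercepts, random coefficients) can be fitted with statsmodels' MixedLM. The data file and column names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")   # hypothetical data: score, hours, school

# (1) One-way ANOVA with random effects: random intercept per school.
m1 = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()

# (4) Regression with random coefficients: intercept and slope vary by school.
m4 = smf.mixedlm("score ~ hours", df, groups=df["school"],
                 re_formula="~hours").fit()

print(m1.summary())
print(m4.summary())
```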

  17. A Green's function quantum average atom model

    DOE PAGES

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.

  18. A neutrino model fit to the CMB power spectrum

    NASA Astrophysics Data System (ADS)

    Shanks, T.; Johnson, R. W. F.; Schewtschenko, J. A.; Whitbourn, J. R.

    2014-12-01

    The standard cosmological model, Λ cold dark matter (ΛCDM), provides an excellent fit to cosmic microwave background (CMB) data. However, the model has well-known problems. For example, the cosmological constant, Λ, is fine-tuned to 1 part in 10^100 and the CDM particle is not yet detected in the laboratory. Shanks previously investigated a model which assumed neither exotic particles nor a cosmological constant but instead postulated a low Hubble constant (H0) to allow a baryon density compatible with inflation and zero spatial curvature. However, recent Planck results make it more difficult to reconcile such a model with CMB power spectra. Here, we relax the previous assumptions to assess the effects of assuming three active neutrinos of mass ≈5 eV. If we assume a low H0 ≈ 45 km s^-1 Mpc^-1 then, compared to the previous purely baryonic model, we find a significantly improved fit to the first three peaks of the Planck power spectrum. Nevertheless, the goodness of fit is still significantly worse than for ΛCDM and would require appeal to unknown systematic effects for the fit ever to be considered acceptable. A further serious problem is that the amplitude of fluctuations is low (σ8 ≈ 0.2), making it difficult to form galaxies by the present day. This might then require seeds, perhaps from a primordial magnetic field, to be invoked for galaxy formation. These and other problems demonstrate the difficulties faced by models other than ΛCDM in fitting ever more precise cosmological data.

  19. The conical fit approach to modeling ionospheric total electron content

    NASA Technical Reports Server (NTRS)

    Sparks, L.; Komjathy, A.; Mannucci, A. J.

    2002-01-01

    The Global Positioning System (GPS) can be used to measure the integrated electron density along raypaths between satellites and receivers. Such measurements may, in turn, be used to construct regional and global maps of the ionospheric total electron content (TEC). Maps are generated by fitting measurements to an assumed ionospheric model.

  20. Obtaining Predictions from Models Fit to Multiply Imputed Data

    ERIC Educational Resources Information Center

    Miles, Andrew

    2016-01-01

    Obtaining predictions from regression models fit to multiply imputed data can be challenging because treatments of multiple imputation seldom give clear guidance on how predictions can be calculated, and because available software often does not have built-in routines for performing the necessary calculations. This research note reviews how…

  1. Multidimensional Rasch Model Information-Based Fit Index Accuracy

    ERIC Educational Resources Information Center

    Harrell-Williams, Leigh M.; Wolfe, Edward W.

    2013-01-01

    Most research on confirmatory factor analysis using information-based fit indices (Akaike information criterion [AIC], Bayesian information criteria [BIC], bias-corrected AIC [AICc], and consistent AIC [CAIC]) has used a structural equation modeling framework. Minimal research has been done concerning application of these indices to item response…

  2. An extended aqueous solvation model based on atom-weighted solvent accessible surface areas: SAWSA v2.0 model.

    PubMed

    Hou, Tingjun; Zhang, Wei; Huang, Qin; Xu, Xiaojie

    2005-02-01

    A new method is proposed for calculating aqueous solvation free energy based on atom-weighted solvent accessible surface areas. The method, SAWSA v2.0, gives the aqueous solvation free energy by summing the contributions of component atoms and a correction factor. We applied two different sets of atom typing rules and fitting processes for small organic molecules and proteins, respectively. For small organic molecules, the model classified the atoms into 65 basic types; additionally, we proposed a correction factor of "hydrophobic carbon" to account for the aggregation of hydrocarbons and compounds with long hydrophobic aliphatic chains. The contributions for each atom type and correction factor were derived by multivariate regression analysis of 379 neutral molecules and 39 ions with known experimental aqueous solvation free energies. Based on the new atom typing rules, the correlation coefficient (r) for fitting the whole set of neutral organic molecules is 0.984, and the absolute mean error is 0.40 kcal mol^-1, which is much better than those of the model proposed by Wang et al. and the SAWSA model previously proposed by us. Furthermore, the SAWSA v2.0 model was compared with the simple atom-additive model based on the number of atom types (NA). The calculated results show that for small organic molecules, the predictions from the SAWSA v2.0 model are slightly better than those from the atom-additive model based on NA. However, for macromolecules such as proteins, due to the connection between their molecular conformation and their molecular surface area, the atom-additive model based on the number of atom types has little predictive power. In order to investigate the predictive power of our model, a systematic comparison was performed on seven solvation models including SAWSA v2.0, GB/SA_1, GB/SA_2, PB/SA_1, PB/SA_2, AM1/SM5.2R and SM5.0R. The results showed that for organic molecules the SAWSA v2.0 model is better
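    A schematic of the atom-additive form described above, dG_solv = sum_i sigma(type_i) * SASA_i + correction; the atom types, coefficients and correction term are placeholders, not the fitted SAWSA v2.0 parameters.

```python
# kcal/(mol*A^2) per unit solvent-accessible area; illustrative values only.
SIGMA = {
    "C.aliphatic": 0.010,
    "C.aromatic":  0.005,
    "O.hydroxyl": -0.080,
    "N.amine":    -0.090,
}

def solvation_free_energy(atoms, hydrophobic_correction=0.0):
    """atoms: iterable of (atom_type, sasa_in_A2) pairs."""
    dg = sum(SIGMA[atom_type] * sasa for atom_type, sasa in atoms)
    return dg + hydrophobic_correction

# Toy molecule: two aliphatic carbons and a hydroxyl oxygen.
mol = [("C.aliphatic", 35.0), ("C.aliphatic", 28.0), ("O.hydroxyl", 20.0)]
print(f"dG_solv ~ {solvation_free_energy(mol):.2f} kcal/mol")
```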

  3. Ab initio determination of kinetics for atomic layer deposition modeling

    NASA Astrophysics Data System (ADS)

    Remmers, Elizabeth M.

    A first principles model is developed to describe the kinetics of atomic layer deposition (ALD) systems. This model requires no fitting parameters, as it is based on the reaction pathways, structures, and energetics obtained from quantum-chemical studies. Using transition state theory and partition functions from statistical mechanics, equilibrium constants and reaction rates can be calculated. Several tools were created in Python to aid in the calculation of these quantities, and this procedure was applied to two systems: zinc oxide deposition from diethyl zinc (DEZ) and water, and alumina deposition from trimethyl aluminum (TMA) and water. A Gauss-Jordan factorization is used to decompose the system dynamics, and the resulting systems of equations are solved numerically to obtain the temporal concentration profiles of these two deposition systems.
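    One step of the workflow described above, converting a computed activation free energy into a rate constant via the Eyring equation, can be sketched as follows; the barrier height and temperatures are placeholders, not values from the thesis.

```python
import math

KB = 1.380649e-23      # Boltzmann constant [J/K]
H  = 6.62607015e-34    # Planck constant [J*s]
R  = 8.314462618       # gas constant [J/(mol*K)]

def eyring_rate(dg_act_kj_mol, T):
    """First-order rate constant [1/s] from an activation free energy."""
    return (KB * T / H) * math.exp(-dg_act_kj_mol * 1e3 / (R * T))

# Placeholder barrier of 95 kJ/mol over a range of ALD-like temperatures.
for T in (450.0, 500.0, 550.0):
    print(f"T = {T:.0f} K  k = {eyring_rate(95.0, T):.3e} 1/s")
```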

  4. Big Atoms for Small Children: Building Atomic Models from Common Materials to Better Visualize and Conceptualize Atomic Structure

    ERIC Educational Resources Information Center

    Cipolla, Laura; Ferrari, Lia A.

    2016-01-01

    A hands-on approach to introduce the chemical elements and the atomic structure to elementary/middle school students is described. The proposed classroom activity presents Bohr models of atoms using common and inexpensive materials, such as nested plastic balls, colored modeling clay, and small-sized pasta (or small plastic beads).

  6. Making It Visual: Creating a Model of the Atom

    ERIC Educational Resources Information Center

    Pringle, Rose M.

    2004-01-01

    This article describes a lesson in which students construct Bohr's planetary model of the atom. Niels Bohr's atomic model provides a framework for discussing with middle and high school students the historical development of our understanding of the structure of the atom. The model constructed in this activity will enable students to visualize the…

  8. Broadband distortion modeling in Lyman-α forest BAO fitting

    NASA Astrophysics Data System (ADS)

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; Arinyo-i-Prats, Andreu; Busca, Nicolás G.; Miralda-Escudé, Jordi; Slosar, Anže; Font-Ribera, Andreu; Margala, Daniel; Schneider, Donald P.; Vazquez, Jose A.

    2015-11-01

    In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter bF and the redshift-space distortion parameter βF for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on βF and the combination bF(1+βF) by more than a factor of seven. The measured values at redshift z = 2.3 are βF = 1.39^{+0.11 +0.24 +0.38}_{-0.10 -0.19 -0.28} and bF(1+βF) = -0.374^{+0.007 +0.013 +0.020}_{-0.007 -0.014 -0.022} (1σ, 2σ and 3σ statistical errors). Our fitting software and the input files needed to reproduce our main results are publicly available.

  9. Assessing the fit of site-occupancy models

    USGS Publications Warehouse

    MacKenzie, D.I.; Bailey, L.L.

    2004-01-01

    Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied, when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method where a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, hence estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers toward making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
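    The test described above is generic enough to sketch: compare the observed Pearson chi-square statistic with its parametric-bootstrap distribution and report a p-value and an overdispersion factor. The functions simulate_data, fit_model and expected_counts are stand-ins for the occupancy-model machinery, which is not reproduced here.

```python
import numpy as np

def pearson_chi_square(observed, expected):
    return float(np.sum((observed - expected) ** 2 / expected))

def bootstrap_gof(observed, fitted_params, simulate_data, fit_model,
                  expected_counts, n_boot=1000, rng=None):
    """Parametric-bootstrap p-value and overdispersion factor c-hat."""
    rng = rng if rng is not None else np.random.default_rng()
    t_obs = pearson_chi_square(observed, expected_counts(fitted_params))
    t_boot = []
    for _ in range(n_boot):
        sim = simulate_data(fitted_params, rng)   # data generated under the model
        sim_params = fit_model(sim)               # refit to the simulated data
        t_boot.append(pearson_chi_square(sim, expected_counts(sim_params)))
    t_boot = np.asarray(t_boot)
    p_value = float(np.mean(t_boot >= t_obs))     # is the observed statistic unusual?
    c_hat = t_obs / float(np.mean(t_boot))        # >1 suggests overdispersion
    return p_value, c_hat
```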

  10. Survival model construction guided by fit and predictive strength.

    PubMed

    Chauvel, Cécile; O'Quigley, John

    2016-10-05

    Survival model construction can be guided by goodness-of-fit techniques as well as measures of predictive strength. Here, we aim to bring together these distinct techniques within the context of a single framework. The goal is how to best characterize and code the effects of the variables, in particular time dependencies, when taken either singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication as to the goodness-of-fit but, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques appear to be intuitive. This intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques and, again, formal theorems show that these measures can be used to help identify models closest to the unknown non-proportional hazards mechanism that we can suppose generates the observations. Illustrations from studies in breast cancer show how these tools can be of help in guiding the practical problem of efficient model construction for survival data.

  11. Differential equation modeling of HIV viral fitness experiments: model identification, model selection, and multimodel inference.

    PubMed

    Miao, Hongyu; Dykes, Carrie; Demeter, Lisa M; Wu, Hulin

    2009-03-01

    Many biological processes and systems can be described by a set of differential equation (DE) models. However, literature in statistical inference for DE models is very sparse. We propose statistical estimation, model selection, and multimodel averaging methods for HIV viral fitness experiments in vitro that can be described by a set of nonlinear ordinary differential equations (ODE). The parameter identifiability of the ODE models is also addressed. We apply the proposed methods and techniques to experimental data of viral fitness for HIV-1 mutant 103N. We expect that the proposed modeling and inference approaches for the DE models can be widely used for a variety of biomedical studies.
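    A generic sketch of the workflow in this record, estimating ODE parameters by nonlinear least squares on the numerical solution; the logistic growth model below is a stand-in, not the paper's HIV viral fitness system.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def logistic_rhs(t, y, r, K):
    return r * y * (1.0 - y / K)

def model(params, t, y0):
    r, K = params
    sol = solve_ivp(logistic_rhs, (t[0], t[-1]), [y0], t_eval=t, args=(r, K))
    return sol.y[0]

# Synthetic observations from r = 0.9, K = 100, y0 = 1 (y0 assumed known).
rng = np.random.default_rng(2)
t_obs = np.linspace(0.0, 10.0, 15)
y_obs = model((0.9, 100.0), t_obs, 1.0) + rng.normal(0.0, 2.0, t_obs.size)

fit = least_squares(lambda p: model(p, t_obs, 1.0) - y_obs,
                    x0=(0.5, 50.0), bounds=([1e-3, 1.0], [5.0, 1e4]))
print("estimated (r, K):", fit.x)
```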

  12. Computer Model Of Fragmentation Of Atomic Nuclei

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; KHAN FERDOUS; Badavi, Francis F.

    1995-01-01

    High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.

  14. A comprehensive X-ray absorption model for atomic oxygen

    SciTech Connect

    Gorczyca, T. W.; Bautista, M. A.; Mendoza, C.; Hasoglu, M. F.; García, J.; Gatuzz, E.; Kaastra, J. S.; Raassen, A. J. J.; De Vries, C. P.; Kallman, T. R.; Manson, S. T.; Zatsarinny, O.

    2013-12-10

    An analytical formula is developed to accurately represent the photoabsorption cross section of O I for all energies of interest in X-ray spectral modeling. In the vicinity of the K edge, a Rydberg series expression is used to fit R-matrix results, including important orbital relaxation effects, that accurately predict the absorption oscillator strengths below threshold and merge consistently and continuously to the above-threshold cross section. Further, minor adjustments are made to the threshold energies in order to reliably align the atomic Rydberg resonances after consideration of both experimental and observed line positions. At energies far below or above the K-edge region, the formulation is based on both outer- and inner-shell direct photoionization, including significant shake-up and shake-off processes that result in photoionization-excitation and double-photoionization contributions to the total cross section. The ultimate purpose for developing a definitive model for oxygen absorption is to resolve standing discrepancies between the astronomically observed and laboratory-measured line positions, and between the inferred atomic and molecular oxygen abundances in the interstellar medium from XSTAR and SPEX spectral models.

  15. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    NASA Astrophysics Data System (ADS)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → q χ̃_2^0 (→ l̃^± l^∓ q) → χ̃_1^0 l^+ l^- q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb^-1 of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  16. Limitations of model-fitting methods for lensing shear estimation

    NASA Astrophysics Data System (ADS)

    Voigt, L. M.; Bridle, S. L.

    2010-05-01

    Gravitational lensing shear has the potential to be the most powerful tool for constraining the nature of dark energy. However, accurate measurement of galaxy shear is crucial and has been shown to be non-trivial by the Shear TEsting Programme. Here, we demonstrate a fundamental limit to the accuracy achievable by model-fitting techniques, if oversimplistic models are used. We show that even if galaxies have elliptical isophotes, model-fitting methods which assume elliptical isophotes can have significant biases if they use the wrong profile. We use noise-free simulations to show that on allowing sufficient flexibility in the profile the biases can be made negligible. This is no longer the case if elliptical isophote models are used to fit galaxies made up of a bulge plus a disc, if these two components have different ellipticities. The limiting accuracy is dependent on the galaxy shape, but we find the most significant biases (~1 per cent of the shear) for simple spiral-like galaxies. The implications for a given cosmic shear survey will depend on the actual distribution of galaxy morphologies in the Universe, taking into account the survey selection function and the point spread function. However, our results suggest that the impact on cosmic shear results from current and near future surveys may be negligible. Meanwhile, these results should encourage the development of existing approaches which are less sensitive to morphology, as well as methods which use priors on galaxy shapes learnt from deep surveys.

  17. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975

  18. YUP.SCX: Coaxing Atomic Models into Medium Resolution Electron Density Maps

    PubMed Central

    Tan, Robert K.-Z.; Devkota, Batsal; Harvey, Stephen C.

    2008-01-01

    The structures of large macromolecular complexes in different functional states can be determined by cryo-electron microscopy, which yields electron density maps of low to intermediate resolutions. The maps can be combined with high-resolution atomic structures of components of the complex, to produce a model for the complex that is more accurate than the formal resolution of the map. To this end, methods have been developed to dock atomic models into density maps rigidly or flexibly, and to refine a docked model so as to optimize the fit of the atomic model into the map. We have developed a new refinement method called YUP.SCX. The electron density map is converted into a component of the potential energy function to which terms for stereochemical restraints and volume exclusion are added. The potential energy function is then minimized (using simulated annealing) to yield a stereochemically-restrained atomic structure that fits into the electron density map optimally. We used this procedure to construct an atomic model of the 70S ribosome in the pre-accommodation state. Although some atoms are displaced by as much as 33 Å, they divide themselves into nearly rigid fragments along natural boundaries with smooth transitions between the fragments. PMID:18572416

  19. Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit

    NASA Technical Reports Server (NTRS)

    Kopasakis, George (Inventor)

    2015-01-01

    An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. Fractional-order atmospheric turbulence may be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.

  20. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher’s Geometric Model?

    PubMed Central

    Blanquart, François; Bataillon, Thomas

    2016-01-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher’s model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher’s model was able to explain several statistical properties of the landscapes—including the mean and SD of selection and epistasis coefficients—it was often unable to explain the full structure of fitness landscapes. PMID:27052568
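    A bare-bones sketch of the Approximate Bayesian Computation rejection scheme that underlies the framework described above; simulate_landscape and summarize are placeholders for the phenotypic fitness-landscape machinery.

```python
import numpy as np

def abc_rejection(observed_summary, prior_sampler, simulate_landscape,
                  summarize, n_draws=100_000, tolerance=0.1, rng=None):
    """Keep parameter draws whose simulated summaries fall near the data."""
    rng = rng if rng is not None else np.random.default_rng()
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)                 # e.g. (n_traits, sigma, ...)
        sim = simulate_landscape(theta, rng)       # simulated fitness landscape
        distance = np.linalg.norm(summarize(sim) - observed_summary)
        if distance < tolerance:
            accepted.append(theta)
    return np.asarray(accepted)   # samples from the approximate posterior
```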

  1. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    ERIC Educational Resources Information Center

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  3. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  5. Fitting rainfall interception models to forest ecosystems of Mexico

    NASA Astrophysics Data System (ADS)

    Návar, José

    2017-05-01

    Models that accurately predict forest interception are essential both for water balance studies and for assessing watershed responses to changes in land use and long-term climate variability. This paper compares the performance of four rainfall interception models-the sparse Gash (1995), Rutter et al. (1975), Liu (1997) and two new models (NvMxa and NvMxb)-using data from four spatially extensive, structurally diverse forest ecosystems in Mexico. Ninety-eight case studies measuring interception in tropical dry (25), arid/semi-arid (29), temperate (26), and tropical montane cloud forests (18) were compiled and analyzed. Coefficients derived from raw data or published statistical relationships were used as model input to evaluate multi-storm forest interception at the case study scale. On average, empirical data showed that tropical montane cloud, temperate, arid/semi-arid and tropical dry forests intercepted 14%, 18%, 22% and 26% of total precipitation, respectively. The models performed well in predicting interception, with mean deviations between measured and modeled interception as a function of total precipitation (ME) generally <5.8% and Nash-Sutcliffe efficiency E estimators >0.66. Model fitting precision was dependent on the forest ecosystem. Arid/semi-arid forests exhibited the smallest, while tropical montane cloud forests displayed the largest, ME deviations. Improved agreement between measured and modeled data requires modification of the in-storm evaporation rate in the Liu model; the canopy storage in the sparse Gash model; and the throughfall coefficient in the Rutter and the NvMx models. This research concludes by recommending the wide application of rainfall interception models, with some caution, as they provide mixed results. The extensive forest interception data source, the fitting and testing of four models, the introduction of a new model, and the availability of coefficient values for all four forest ecosystems are an important source of information and

  6. Atomic Models for Motional Stark Effects Diagnostics

    SciTech Connect

    Gu, M F; Holcomb, C; Jayakuma, J; Allen, S; Pablant, N A; Burrell, K

    2007-07-26

    We present detailed atomic physics models for motional Stark effects (MSE) diagnostic on magnetic fusion devices. Excitation and ionization cross sections of the hydrogen or deuterium beam traveling in a magnetic field in collisions with electrons, ions, and neutral gas are calculated in the first Born approximation. The density matrices and polarization states of individual Stark-Zeeman components of the Balmer α line are obtained for both beam into plasma and beam into gas models. A detailed comparison of the model calculations and the MSE polarimetry and spectral intensity measurements obtained at the DIII-D tokamak is carried out. Although our beam into gas models provide a qualitative explanation for the larger π/σ intensity ratios and represent significant improvements over the statistical population models, empirical adjustment factors ranging from 1.0-2.0 must still be applied to individual line intensities to bring the calculations into full agreement with the observations. Nevertheless, we demonstrate that beam into gas measurements can be used successfully as calibration procedures for measuring the magnetic pitch angle through π/σ intensity ratios. The analyses of the filter-scan polarization spectra from the DIII-D MSE polarimetry system indicate unknown channel and time dependent light contaminations in the beam into gas measurements. Such contaminations may be the main reason for the failure of beam into gas calibration on MSE polarimetry systems.

  8. Broadband distortion modeling in Lyman-α forest BAO fitting

    SciTech Connect

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; Arinyo-i-Prats, Andreu; Busca, Nicolás G.; Miralda-Escudé, Jordi; Slosar, Anže; Font-Ribera, Andreu; Margala, Daniel; Schneider, Donald P.; Vazquez, Jose A.

    2015-11-23

    Recently, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. Here, we describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. In implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter bF and the redshift-space distortion parameter βF for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on βF and the combination bF(1+βF) by more than a factor of seven. The measured values at redshift z = 2.3 are βF = 1.39^{+0.11 +0.24 +0.38}_{-0.10 -0.19 -0.28} and bF(1+βF) = -0.374^{+0.007 +0.013 +0.020}_{-0.007 -0.014 -0.022} (1σ, 2σ and 3σ statistical errors). Our fitting software and the input files needed to reproduce our main results are publicly available.

  10. General model of depolarization and transfer of polarization of singly ionized atoms by collisions with hydrogen atoms

    NASA Astrophysics Data System (ADS)

    Derouich, M.

    2017-02-01

    Simulating the generation of atomic polarization is necessary for interpreting the second solar spectrum. For this purpose, it is important to rigorously determine the effects of isotropic collisions with neutral hydrogen on the atomic polarization of neutral atoms, ionized atoms and molecules. Our aim is to treat in generality the problem of depolarizing isotropic collisions between singly ionized atoms and neutral hydrogen in its ground state. Using our numerical code, we computed the collisional depolarization rates of the p-levels of ions for a large number of values of the effective principal quantum number n* and the Unsöld energy Ep. Then, genetic programming has been utilized to fit the available depolarization rates. As a result, strongly non-linear relationships between the collisional depolarization rates, n* and Ep are obtained, and are shown to reproduce the original data with accuracy clearly better than 10%. These relationships allow quick calculations of the depolarizing collisional rates of any simple ion, which is very useful for the solar physics community. In addition, the depolarization rates associated with complex ions and with hyperfine levels can be easily derived from our results. In this work we have shown that, by using a powerful numerical approach and our collisional method, a general model giving the depolarization of the ions can be obtained and exploited for solar applications.

  11. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
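    For concreteness, the dominance 2PL model mentioned above has the item response function P(theta) = 1 / (1 + exp(-a(theta - b))); the sketch below evaluates the likelihood of one response pattern over a grid of theta. Item parameters are illustrative; ideal point models such as the GGUM, whose response function peaks near theta = b, are not shown.

```python
import numpy as np

def p_2pl(theta, a, b):
    # Dominance (2PL) item response function: monotone in theta.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, responses, a, b):
    p = p_2pl(theta, a, b)
    return float(np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p)))

a = np.array([1.2, 0.8, 1.5])    # discriminations (illustrative)
b = np.array([-0.5, 0.0, 1.0])   # difficulties (illustrative)
resp = np.array([1, 1, 0])       # one examinee's dichotomous responses

thetas = np.linspace(-3.0, 3.0, 121)
ll = [log_likelihood(t, resp, a, b) for t in thetas]
print("grid ML estimate of theta:", thetas[int(np.argmax(ll))])
```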

  13. A stochastic carcinogenesis model incorporating genomic instability fitted to colon cancer data.

    PubMed

    Little, M P; Wright, E G

    2003-06-01

    A generalization of the two-mutation stochastic carcinogenesis model of Moolgavkar, Venzon and Knudson and certain models constructed by Little is developed; the model incorporates progressive genomic instability and an arbitrary number of mutational stages. This model is shown to have the property that, at least in the case when the parameters of the model are eventually constant, the excess relative and absolute cancer rates following changes in any of the parameters will eventually tend to zero. It is also shown that when the parameters governing the processes of cell division, death, or additional mutation (whether of the normal sort or that resulting in genomic destabilization) at the penultimate stage are subject to perturbations, there are relatively large fluctuations in the hazard function for the model, which start almost as soon as the parameters are changed. The model is fitted to US Caucasian colon cancer incidence data. A model with five stages and two levels of genomic destabilization fits the data well. Comparison with patterns of excess risk in the Japanese atomic bomb survivor colon cancer incidence data indicate that radiation might act on early mutation rates in the model; a major role for radiation in initiating genomic destabilization is less likely.

  14. Quantitative model for the heterogeneity of atomic position fluctuations in proteins: A simulation study

    SciTech Connect

    Kneller, Gerald R.; Hinsen, Konrad

    2009-07-28

    We propose a simple analytical model for the elastic incoherent structure factor of proteins measured by neutron scattering, which allows extracting the distribution of atomic position fluctuations from a fit of the model to the experimental data. The method is validated by applying it to elastic incoherent structure factors of lysozyme which have been obtained by molecular dynamics simulation and by normal mode analysis, respectively, and for which distributions of the atomic position fluctuations can be generated numerically for direct comparison with the predictions of the model. The comparison shows a remarkable agreement, in particular, concerning the lower limit for the position fluctuations, which is pronounced in the numerical data.

  15. Chempy: A flexible chemical evolution model for abundance fitting

    NASA Astrophysics Data System (ADS)

    Rybizki, J.; Just, A.; Rix, H.-W.; Fouesneau, M.

    2017-02-01

    Chempy models Galactic chemical evolution (GCE); it is a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of 5-10 parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: e.g. the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF) and the incidence of supernova of type Ia (SN Ia). Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets, performing essentially as a chemical evolution fitting tool. Chempy can be used to confront predictions from stellar nucleosynthesis with complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.

  16. Equilibrium Distribution of Mutators in the Single Fitness Peak Model

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Emmanuel; Deeds, Eric J.; Shakhnovich, Eugene I.

    2003-09-01

    This Letter develops an analytically tractable model for determining the equilibrium distribution of mismatch repair deficient strains in unicellular populations. The approach is based on the single fitness peak model, which has been used in Eigen’s quasispecies equations in order to understand various aspects of evolutionary dynamics. As with the quasispecies model, our model for mutator-nonmutator equilibrium undergoes a phase transition in the limit of infinite sequence length. This “repair catastrophe” occurs at a critical repair error probability of ε_r = L_via/L, where L_via denotes the length of the genome controlling viability, while L denotes the overall length of the genome. The repair catastrophe therefore occurs when the repair error probability exceeds the fraction of deleterious mutations. Our model also gives a quantitative estimate for the equilibrium fraction of mutators in Escherichia coli.

  17. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    PubMed

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns.

  18. Atomic force microscopy of model lipid membranes.

    PubMed

    Morandat, Sandrine; Azouzi, Slim; Beauvais, Estelle; Mastouri, Amira; El Kirat, Karim

    2013-02-01

    Supported lipid bilayers (SLBs) are biomimetic model systems that are now widely used to address the biophysical and biochemical properties of biological membranes. Two main methods are usually employed to form SLBs: the transfer of two successive monolayers by Langmuir-Blodgett or Langmuir-Schaefer techniques, and the fusion of preformed lipid vesicles. The transfer of lipid films on flat solid substrates offers the possibility to apply a wide range of surface analytical techniques that are very sensitive. Among them, atomic force microscopy (AFM) has opened new opportunities for determining the nanoscale organization of SLBs under physiological conditions. In this review, we first focus on the different protocols generally employed to prepare SLBs. Then, we describe AFM studies on the nanoscale lateral organization and mechanical properties of SLBs. Lastly, we survey recent developments in the AFM monitoring of bilayer alteration, remodeling, or digestion, by incubation with exogenous agents such as drugs, proteins, peptides, and nanoparticles.

  19. Project Physics Text 5, Models of the Atom.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Basic atomic theories are presented in this fifth unit of the Project Physics text for use by senior high students. The chemical basis of atomic models in the early years of the 19th century is discussed in connection with Dalton's theory, atomic properties, and periodic tables. The discovery of electrons is described by using cathode rays, Millikan's…

  20. Operation of the computer model for microenvironment atomic oxygen exposure

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.

  1. Rapid world modeling: Fitting range data to geometric primitives

    SciTech Connect

    Feddema, J.; Little, C.

    1996-12-31

    For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three dimensional maps of the environment. These maps consist of thousands of range points, which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is reduced by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
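
    As a rough illustration of the planar case only (not the Sandia implementation), the sketch below fits a least-squares plane to a synthetic cloud of range points: the plane passes through the centroid and its normal is the direction of least variance, which collapses thousands of points to a handful of surface parameters.

      import numpy as np

      def fit_plane(points):
          """Least-squares plane through a point cloud: returns (centroid, unit normal).

          The plane minimizes the sum of squared orthogonal distances; its normal is
          the singular vector of the centred data with the smallest singular value.
          """
          centroid = points.mean(axis=0)
          _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
          return centroid, vt[-1]

      # Synthetic "range data": noisy samples of the plane z = 0.3 x - 0.2 y + 5.
      rng = np.random.default_rng(0)
      xy = rng.uniform(-1.0, 1.0, size=(2000, 2))
      z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 5.0 + 0.01 * rng.normal(size=2000)
      points = np.column_stack([xy, z])

      centroid, normal = fit_plane(points)
      print("normal =", normal)
      print("mean |distance to plane| =", np.abs((points - centroid) @ normal).mean())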

  2. Generalized least-squares fit of multiequation models

    NASA Astrophysics Data System (ADS)

    Marshall, Simon L.; Blencoe, James G.

    2005-01-01

    A method for fitting multiequation models to data sets of finite precision is proposed. This is based on the Gauss-Newton algorithm devised by Britt and Luecke (1973); the inclusion of several equations of condition to be satisfied at each data point results in a block diagonal form for the effective weighting matrix. This method allows generalized nonlinear least-squares fitting of functions that are more easily represented in the parametric form (x(t),y(t)) than as an explicit functional relationship of the form y=f(x). The Aitken (1935) formulas appropriate to multiequation weighted nonlinear least squares are recovered in the limiting case where the variances and covariances of the independent variables are zero. Practical considerations relevant to the performance of such calculations, such as the evaluation of the required partial derivatives and matrix products, are discussed in detail, and the operation of the algorithm is illustrated by applying it to the fit of complex permittivity data to the Debye equation.
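
    As a much-simplified sketch of a multiequation fit (ordinary nonlinear least squares only, not the Britt-Luecke errors-in-variables algorithm with its block-diagonal weighting matrix), the example below fits synthetic complex permittivity data to the Debye equation by stacking the real and imaginary residuals, i.e. two equations of condition per data point.

      import numpy as np
      from scipy.optimize import least_squares

      def debye(omega, eps_s, eps_inf, tau):
          """Complex Debye permittivity: eps_inf + (eps_s - eps_inf) / (1 + i*omega*tau)."""
          return eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau)

      def residuals(p, omega, data):
          model = debye(omega, *p)
          # Two equations of condition per data point: real and imaginary parts.
          return np.concatenate([model.real - data.real, model.imag - data.imag])

      # Synthetic data: eps_s = 80, eps_inf = 5, tau = 1 ns, plus noise.
      rng = np.random.default_rng(1)
      omega = np.logspace(7, 11, 60)
      data = debye(omega, 80.0, 5.0, 1e-9) + 0.5 * (rng.normal(size=60) + 1j * rng.normal(size=60))

      start = [60.0, 3.0, 1.0 / omega[omega.size // 2]]
      fit = least_squares(residuals, x0=start, args=(omega, data))
      print("eps_s, eps_inf, tau =", fit.x)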

  3. An atomic model for neutral and singly ionized uranium

    NASA Technical Reports Server (NTRS)

    Maceda, E. L.; Miley, G. H.

    1979-01-01

    A model for the atomic levels above ground state in neutral, U(0), and singly ionized, U(+), uranium is described based on identified atomic transitions. Some 168 states in U(0) and 95 in U(+) are found. A total of 1581 atomic transitions are used to complete this process. Also discussed are the atomic inverse lifetimes and line widths for the radiative transitions as well as the electron collisional cross sections.

  5. Molecular-orbital model for slow hollow atoms colliding with atoms in a solid

    SciTech Connect

    Arnau, A.; Koehrbrueck, R.; Grether, M.; Spieler, A.; Stolterfoht, N.

    1995-05-01

    A model that has previously been used to calculate the molecular orbitals in atomic collisions between neutral atoms and ions is extended to describe hollow atoms colliding with a solid. The energy levels and screening functions are obtained from density-functional calculations. The results show that the inner-shell holes in the hollow projectile, as well as the screening cloud within the solid, create important effects that are essential for the description of the interaction of multicharged ions with solids.

  6. Atomic Oscillator Strengths for Stellar Atmosphere Modeling

    NASA Astrophysics Data System (ADS)

    Ruffoni, Matthew; Pickering, Juliet C.

    2015-08-01

    In order to correctly model stellar atmospheres, fundamental atomic data must be available to describe atomic lines observed in their spectra. Accurate, laboratory-measured oscillator strengths (f-values) for Fe peak elements in neutral or low-ionisation states are particularly important for determining chemical abundances. However, advances in astronomical spectroscopy in recent decades have outpaced those in laboratory astrophysics, with the latter frequently being overlooked at the planning stages of new projects. As a result, numerous big-budget astronomy projects have been, and continue to be, hindered by a lack of suitable, accurately-measured reference data to permit the analysis of expensive astronomical spectra; a problem only likely to worsen in the coming decades as spectrographs at new facilities increasingly move to infrared wavelengths. At Imperial College London - and in collaboration with NIST, Wisconsin University and Lund University - we have been working with the astronomy community in an effort to provide new accurately-measured f-values for a range of projects. In particular, we have been working closely with the Gaia-ESO (GES) and SDSS-III/APOGEE surveys, both of which have discovered that many lines that would make ideal candidates for inclusion in their analyses have poorly defined f-values, or are simply absent from the database. Using high-resolution Fourier transform spectroscopy (R ~ 2,000,000) to provide atomic branching fractions, and combining these with level lifetimes measured with laser induced fluorescence, we have provided new laboratory-measured f-values for a range of Fe-peak elements, most recently including Fe I, Fe II, and V I. For strong, unblended lines, uncertainties are as low as ±0.02 dex. In this presentation, I will describe how experimental f-values are obtained in the laboratory and present our recent work for GES and APOGEE. In particular, I will also discuss the strengths and limitations of current laboratory

  7. Refinement of atomic models in high resolution EM reconstructions using Flex-EM and local assessment.

    PubMed

    Joseph, Agnel Praveen; Malhotra, Sony; Burnley, Tom; Wood, Chris; Clare, Daniel K; Winn, Martyn; Topf, Maya

    2016-05-01

    As the resolutions of Three Dimensional Electron Microscopic reconstructions of biological macromolecules are being improved, there is a need for better fitting and refinement methods at high resolutions and robust approaches for model assessment. Flex-EM/MODELLER has been used for flexible fitting of atomic models in intermediate-to-low resolution density maps of different biological systems. Here, we demonstrate the suitability of the method to successfully refine structures at higher resolutions (2.5-4.5Å) using both simulated and experimental data, including a newly processed map of Apo-GroEL. A hierarchical refinement protocol was adopted where the rigid body definitions are relaxed and atom displacement steps are reduced progressively at successive stages of refinement. For the assessment of local fit, we used the SMOC (segment-based Manders' overlap coefficient) score, while the model quality was checked using the Qmean score. Comparison of SMOC profiles at different stages of refinement helped in detecting regions that are poorly fitted. We also show how initial model errors can have significant impact on the goodness-of-fit. Finally, we discuss the implementation of Flex-EM in the CCP-EM software suite. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
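
    The SMOC score itself is computed segment by segment inside Flex-EM/TEMPy; purely as a sketch of the underlying idea, the snippet below evaluates a Manders'-style overlap coefficient between an "experimental" density segment and a density simulated from a model segment (synthetic arrays, not the actual Flex-EM implementation).

      import numpy as np

      def manders_overlap(map_a, map_b):
          """Manders' overlap coefficient between two density arrays of equal shape:
          sum(a*b) / sqrt(sum(a^2) * sum(b^2)), equal to 1 for perfectly co-varying maps."""
          a, b = map_a.ravel().astype(float), map_b.ravel().astype(float)
          return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

      # Toy segment: a Gaussian blob ("experimental" density) compared with a
      # slightly shifted blob (density simulated from the fitted model segment).
      grid = np.linspace(-1.0, 1.0, 32)
      x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
      exp_map = np.exp(-(x ** 2 + y ** 2 + z ** 2) / 0.1)
      model_map = np.exp(-((x - 0.05) ** 2 + y ** 2 + z ** 2) / 0.1)
      print("segment overlap =", manders_overlap(exp_map, model_map))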

  8. Refinement of atomic models in high resolution EM reconstructions using Flex-EM and local assessment

    PubMed Central

    Joseph, Agnel Praveen; Malhotra, Sony; Burnley, Tom; Wood, Chris; Clare, Daniel K.; Winn, Martyn; Topf, Maya

    2016-01-01

    As the resolutions of Three Dimensional Electron Microscopic reconstructions of biological macromolecules are being improved, there is a need for better fitting and refinement methods at high resolutions and robust approaches for model assessment. Flex-EM/MODELLER has been used for flexible fitting of atomic models in intermediate-to-low resolution density maps of different biological systems. Here, we demonstrate the suitability of the method to successfully refine structures at higher resolutions (2.5–4.5 Å) using both simulated and experimental data, including a newly processed map of Apo-GroEL. A hierarchical refinement protocol was adopted where the rigid body definitions are relaxed and atom displacement steps are reduced progressively at successive stages of refinement. For the assessment of local fit, we used the SMOC (segment-based Manders’ overlap coefficient) score, while the model quality was checked using the Qmean score. Comparison of SMOC profiles at different stages of refinement helped in detecting regions that are poorly fitted. We also show how initial model errors can have significant impact on the goodness-of-fit. Finally, we discuss the implementation of Flex-EM in the CCP-EM software suite. PMID:26988127

  9. Effect of the Number of Variables on Measures of Fit in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Kenny, David A.; McCoach, D. Betsy

    2003-01-01

    Used three approaches to understand the effect of the number of variables in the model on model fit in structural equation modeling through computer simulation. Developed a simple formula for the theoretical value of the comparative fit index. (SLD)
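
    The abstract does not reproduce the authors' formula; for orientation, the sample comparative fit index against which such a theoretical value would be compared is conventionally computed from the model and baseline chi-square statistics, as in the sketch below (standard definition assumed, toy numbers).

      def comparative_fit_index(chi2_model, df_model, chi2_baseline, df_baseline):
          """Standard sample CFI: 1 - max(chi2_M - df_M, 0) / max(chi2_M - df_M, chi2_B - df_B, 0)."""
          d_model = max(chi2_model - df_model, 0.0)
          d_base = max(d_model, chi2_baseline - df_baseline, 0.0)
          return 1.0 - (d_model / d_base if d_base > 0 else 0.0)

      print(comparative_fit_index(85.2, 40, 950.0, 55))   # ~0.95 for these toy inputs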

  10. Issues in Evaluating Model Fit With Missing Data

    ERIC Educational Resources Information Center

    Davey, Adam

    2005-01-01

    Effects of incomplete data on fit indexes remain relatively unexplored. We evaluate a wide set of fit indexes (chi[squared], root mean squared error of approximation, Normed Fit Index [NFI], Tucker-Lewis Index, comparative fit index, gamma-hat, and McDonald's Centrality Index) across varying conditions of sample size (100-1,000 in increments of 50),…

  11. Assessing Model Data Fit of Unidimensional Item Response Theory Models in Simulated Data

    ERIC Educational Resources Information Center

    Kose, Ibrahim Alper

    2014-01-01

    The purpose of this paper is to give an example of how to assess the model-data fit of unidimensional IRT models in simulated data. Also, the present research aims to explain the importance of fit and the consequences of misfit by using simulated data sets. Responses of 1000 examinees to a dichotomously scored 20-item test were simulated with 25…

  12. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    ERIC Educational Resources Information Center

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  14. A Random Number Model for Beer's Law-Atom Shadowing

    NASA Astrophysics Data System (ADS)

    Daniels, R. Scott

    1999-01-01

    A random-number corpuscular-theory-of-light model for teaching Beer's law is presented. In this model, atoms are considered to have photon-capture cross-sectional areas and to exist in some finite volume. Where by chance one atom lies directly behind another, the first atom is said to cast a shadow on the second, thereby preventing the second atom from participating in the attenuation of radiation at that instant. This model not only produces the linear Beer's law relationship, but it also provides a simple and visual model from which the law can be demonstrated with the use of a computer-spreadsheet random number generator.
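
    A minimal sketch of this kind of simulation (not the original spreadsheet) is shown below: atoms are placed at random in a unit area, each with a small capture cross-section, a photon is absorbed if it lands inside any atom's disk, and overlapping atoms shadow one another automatically; -ln T then grows roughly linearly with the number of atoms, as Beer's law requires.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(2)

      def transmission(n_atoms, sigma=1e-4, n_photons=50000):
          """Monte Carlo transmission through a unit-area layer holding n_atoms atoms.

          Each atom is a disk of capture cross-section sigma placed at random; a photon
          is absorbed if it lands inside any disk.  Atoms that happen to lie behind one
          another simply share covered area, which is the shadowing of the model.
          """
          radius = np.sqrt(sigma / np.pi)
          atoms = rng.uniform(0.0, 1.0, size=(n_atoms, 2))
          photons = rng.uniform(0.0, 1.0, size=(n_photons, 2))
          nearest, _ = cKDTree(atoms).query(photons)
          return float((nearest >= radius).mean())

      for n in (1000, 2000, 4000, 8000):
          T = transmission(n)
          print(f"atoms = {n:5d}   T = {T:.3f}   -ln T = {-np.log(T):.3f}")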

  15. Improved atomic model for charge transfer in multielectron ion-atom collisions at intermediate energies

    NASA Astrophysics Data System (ADS)

    Lin, C. D.; Tunnell, L. N.

    1980-07-01

    Electron capture to the K shell of projectiles from the K and other subshells of multielectron target atoms is studied in the intermediate energy region using the single-active-electron approximation and the two-state, two-center atomic eigenfunction expansion method. It is concluded that the theoretical capture cross section is not sensitive to the atomic models used at high collision energies where the projectile velocity v is near or greater than the orbital velocity ve of the active electron. For v below ve, a realistic atomic potential such as the Herman-Skillman potential is needed to represent the target atom. The insufficiency of various simple Coulomb model potentials is illustrated. Capture cross sections for a few collision systems are obtained and compared with experimental data when available to illustrate the reliability of the present model.

  16. Effect of energetic oxygen atoms on neutral density models.

    NASA Technical Reports Server (NTRS)

    Rohrbaugh, R. P.; Nisbet, J. S.

    1973-01-01

    The dissociative recombination of O2(+) and NO(+) in the F region results in the production of atomic oxygen and atomic nitrogen with substantially greater kinetic energy than the ambient atoms. In the exosphere these energetic atoms have long free paths. They can ascend to altitudes of several thousand kilometers and can travel horizontally to distances of the order of the earth's radius. The distribution of energetic oxygen atoms is derived by means of models of the ion and neutral densities for quiet and disturbed solar conditions. A distribution technique is used to study the motion of the atoms in the collision-dominated region. Ballistic trajectories are calculated in the spherical gravitational field of the earth. The present calculations show that the number densities of energetic oxygen atoms predominate over the ambient atomic oxygen densities above 1000 km under quiet solar conditions and above 1600 km under disturbed solar conditions.

  17. Empirical fitness models for hepatitis C virus immunogen design

    NASA Astrophysics Data System (ADS)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

    Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for a HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.

  18. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    USGS Publications Warehouse

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  19. Atomic Forces for Geometry-Dependent Point Multipole and Gaussian Multipole Models

    PubMed Central

    Elking, Dennis M.; Perera, Lalith; Duke, Robert; Darden, Thomas; Pedersen, Lee G.

    2010-01-01

    In standard treatments of atomic multipole models, interaction energies, total molecular forces, and total molecular torques are given for multipolar interactions between rigid molecules. However, if the molecules are assumed to be flexible, two additional multipolar atomic forces arise due to 1) the transfer of torque between neighboring atoms, and 2) the dependence of multipole moment on internal geometry (bond lengths, bond angles, etc.) for geometry-dependent multipole models. In the current study, atomic force expressions for geometry-dependent multipoles are presented for use in simulations of flexible molecules. The atomic forces are derived by first proposing a new general expression for Wigner function derivatives ∂D^l_{m′m}/∂Ω. The force equations can be applied to electrostatic models based on atomic point multipoles or Gaussian multipole charge density. Hydrogen bonded dimers are used to test the inter-molecular electrostatic energies and atomic forces calculated by geometry-dependent multipoles fit to the ab initio electrostatic potential (ESP). The electrostatic energies and forces are compared to their reference ab initio values. It is shown that both static and geometry-dependent multipole models are able to reproduce total molecular forces and torques with respect to ab initio, while geometry-dependent multipoles are needed to reproduce ab initio atomic forces. The expressions for atomic force can be used in simulations of flexible molecules with atomic multipoles. In addition, the results presented in this work should lead to further development of next generation force fields composed of geometry-dependent multipole models. PMID:20839297

  20. Reliable measurements of interfacial slip by colloid probe atomic force microscopy. I. Mathematical modeling.

    PubMed

    Zhu, Liwen; Attard, Phil; Neto, Chiara

    2011-06-07

    We developed a stable spread-sheet algorithm for the calculation of the hydrodynamic forces measured by colloid probe atomic force microscopy to be used in investigations of interfacial slip. The algorithm quantifies the effect on the slip hydrodynamic force for factors commonly encountered in experimental measurements such as nanoparticle contamination, nonconstant drag force due to cantilever bending that varies with different cantilevers, flattening of the microsphere, and calibration at large separations. We found that all of these experimental factors significantly affect the fitted slip length, approximately in the order listed. Our modeling is applied to fit new experimental data reproducibly. Using this new algorithm, it is shown that the fitting of hydrodynamic theories to experimental data is reliable and the fitted slip length is accurate. A "blind test" protocol was developed that produces a reliable estimate of the fitting error in the determination of both the slip length and spring constant. By this blind test, we estimate that our modeling determines the fitted slip length with an average systematic error of 2 nm and the fitted spring constant with a 3% error. Our exact calculation of the drag force may explain previous reports that the fitted slip length depends upon the shape and spring constant of the cantilever used to perform the measurements.

  1. The FIT Model - Fuel-cycle Integration and Tradeoffs

    SciTech Connect

    Steven J. Piet; Nick R. Soelberg; Samuel E. Bays; Candido Pereira; Layne F. Pincock; Eric L. Shaber; Meliisa C Teague; Gregory M Teske; Kurt G Vedros

    2010-09-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010] are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the “system losses study” was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for “minimum fuel treatment” approaches such as melt refining and AIROX, it can help to make an estimate of how performances would have to change to achieve feasibility.

  2. [Remote fitting models analysis of hearing aids from primary hospitals: 45 case reports].

    PubMed

    Wang, Fuqiang; Zhai, Liping; Li, Letian

    2016-01-01

    To study the feasibility and generalizability of remote fitting of hearing aids in primary hospitals, we compared the speech recognition scores and satisfaction of 45 cases fitted with the traditional hearing aid fitting model and 45 cases fitted with the remote model. In the traditional fitting group, 35 cases recovered, for a recovery rate of 77.8%; in the remote fitting group, 42 cases recovered, for a recovery rate of 93.3%, and the difference was statistically significant (P < 0.05). After 6 weeks of hearing aid use, the speech recognition rate increased by 19.40% on average with traditional fitting and by 27.47% on average with remote fitting, a significantly larger gain (by 8.07%) for the remote model. An international hearing aid outcome questionnaire showed that, of the 45 patients fitted with the traditional model, 33 (73.3%) were satisfied and 12 (26.7%) were not after use, whereas of the 45 patients fitted remotely, 40 (88.9%) were satisfied and 5 (11.1%) were not. The curative effect and satisfaction of remote fitting of hearing aids for hearing-impaired patients are better than those of traditional fitting, so remote fitting is worthy of clinical application, especially in primary (basic-level) hospitals.

  3. Using R^2 to compare least-squares fit models: When it must fail

    USDA-ARS?s Scientific Manuscript database

    R^2 can be used correctly to select from among competing least-squares fit models when the data are fitted in common form and with common weighting. In that case, however, R^2 comparisons become equivalent to comparisons of the estimated fit variance s^2 in unweighted fitting, or of the reduced chi-square in...

  4. INFERNO - A better model of atoms in dense plasmas

    NASA Astrophysics Data System (ADS)

    Liberman, D. A.

    1982-03-01

    A self-consistent field model of atoms in dense plasmas has been devised and incorporated in a computer program. In the model there is a uniform positive charge distribution with a hole in it and at the center of the hole an atomic nucleus. There are electrons, in both bound and continuum states, in sufficient number to form an electrically neutral system. The Dirac equation is used so that high Z atoms can be dealt with. A finite temperature is assumed, and a mean field (average atom) approximation is used in statistical averages. Applications have been made to equations of state and to photoabsorption.

  5. Assessing Fit of Latent Regression Models. Research Report. ETS RR-09-50

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Guo, Zhumei; von Davier, Matthias; Veldkamp, Bernard P.

    2009-01-01

    The reporting methods used in large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) rely on a "latent regression model". There is a lack of research on the assessment of fit of latent regression models. This paper suggests a simulation-based model-fit technique to assess the fit of such…

  6. Direct model fitting to combine dithered ACS images

    NASA Astrophysics Data System (ADS)

    Mahmoudian, H.; Wucknitz, O.

    2013-08-01

    The information lost in images of undersampled CCD cameras can be recovered with the technique of "dithering". A number of subexposures is taken with sub-pixel shifts in order to record structures on scales smaller than a pixel. The standard method to combine such exposures, "Drizzle", averages after reversing the displacements, including rotations and distortions. More sophisticated methods are available to produce, e.g., Nyquist sampled representations of band-limited inputs. While the combined images produced by these methods can be of high quality, their use as input for forward-modelling techniques in gravitational lensing is still not optimal, because the residual artefacts still affect the modelling results in unpredictable ways. In this paper we argue for an overall modelling approach that takes into account the dithering and the lensing without the intermediate product of a combined image. As one building block we introduce an alternative approach to combine dithered images by direct model fitting with a least-squares approach including a regularization constraint. We present tests with simulated and real data that show the quality of the results. The additional effects of gravitational lensing and the convolution with an instrumental point spread function can be included in a natural way, avoiding the possible systematic errors of previous procedures.
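
    A minimal one-dimensional analogue of such a direct fit (not the authors' code) is sketched below: each dithered, undersampled exposure is written as a linear operator acting on the unknown finely sampled signal, and the signal is recovered by regularized least squares.

      import numpy as np

      def downsample_matrix(n_hi, shift, factor=2):
          """Linear operator: shift the fine grid by `shift` fine pixels, then
          average blocks of `factor` fine pixels into one coarse (detector) pixel."""
          n_lo = n_hi // factor
          A = np.zeros((n_lo, n_hi))
          for i in range(n_lo):
              for j in range(factor):
                  A[i, (factor * i + j + shift) % n_hi] = 1.0 / factor
          return A

      rng = np.random.default_rng(3)
      n_hi = 64
      truth = np.exp(-0.5 * ((np.arange(n_hi) - 30.0) / 3.0) ** 2)  # a narrow source

      # Two dithered, undersampled exposures with sub-(coarse-)pixel shifts.
      ops = [downsample_matrix(n_hi, s) for s in (0, 1)]
      data = [A @ truth + 0.01 * rng.normal(size=A.shape[0]) for A in ops]

      # Stack all exposures and solve min ||A x - y||^2 + lam ||D x||^2,
      # where D is a first-difference (smoothness) operator.
      A = np.vstack(ops)
      y = np.concatenate(data)
      D = np.eye(n_hi) - np.roll(np.eye(n_hi), 1, axis=1)
      lam = 0.05
      x = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
      print("rms reconstruction error:", np.sqrt(np.mean((x - truth) ** 2)))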

  7. The Quantum Atomic Model "Electronium": A Successful Teaching Tool.

    ERIC Educational Resources Information Center

    Budde, Marion; Niedderer, Hans; Scott, Philip; Leach, John

    2002-01-01

    Focuses on the quantum atomic model Electronium. Outlines the Bremen teaching approach in which this model is used, and analyzes the learning of two students as they progress through the teaching unit. (Author/MM)

  8. Proposed reference models for atomic oxygen in the terrestrial atmosphere

    NASA Technical Reports Server (NTRS)

    Llewellyn, E. J.; Mcdade, I. C.; Lockerbie, M. D.

    1989-01-01

    A provisional Atomic Oxygen Reference model was derived from average monthly ozone profiles and the MSIS-86 reference model atmosphere. The concentrations are presented in tabular form for the altitude range 40 to 130 km.

  9. "Piekara's Chair": Mechanical Model for Atomic Energy Levels.

    ERIC Educational Resources Information Center

    Golab-Meyer, Zofia

    1991-01-01

    Uses the teaching method of models or analogies, specifically the model called "Piekara's chair," to show how teaching classical mechanics can familiarize students with the notion of energy levels in atomic physics. (MDH)

  12. Early atomic models - from mechanical to quantum (1904-1913)

    NASA Astrophysics Data System (ADS)

    Baily, C.

    2013-01-01

    A complete history of early atomic models would fill volumes, but a reasonably coherent tale of the path from mechanical atoms to the quantum can be told by focusing on the relevant work of three great contributors to atomic physics, in the critically important years between 1904 and 1913: J.J. Thomson, Ernest Rutherford and Niels Bohr. We first examine the origins of Thomson's mechanical atomic models, from his ethereal vortex atoms in the early 1880's, to the myriad "corpuscular" atoms he proposed following the discovery of the electron in 1897. Beyond qualitative predictions for the periodicity of the elements, the application of Thomson's atoms to problems in scattering and absorption led to quantitative predictions that were confirmed by experiments with high-velocity electrons traversing thin sheets of metal. Still, the much more massive and energetic α-particles being studied by Rutherford were better suited for exploring the interior of the atom, and careful measurements on the angular dependence of their scattering eventually allowed him to infer the existence of an atomic nucleus. Niels Bohr was particularly troubled by the radiative instability inherent to any mechanical atom, and succeeded in 1913 where others had failed in the prediction of emission spectra, by making two bold hypotheses that were in contradiction to the laws of classical physics, but necessary in order to account for experimental facts.

  13. Early Atomic Models - From Mechanical to Quantum (1904-1913)

    NASA Astrophysics Data System (ADS)

    Baily, Charles

    2012-08-01

    A complete history of early atomic models would fill volumes, but a reasonably coherent tale of the path from mechanical atoms to the quantum can be told by focusing on the relevant work of three great contributors to atomic physics, in the critically important years between 1904 and 1913: J. J. Thomson, Ernest Rutherford and Niels Bohr. We first examine the origins of Thomson's mechanical atomic models, from his ethereal vortex atoms in the early 1880's, to the myriad "corpuscular" atoms he proposed following the discovery of the electron in 1897. Beyond predictions for the periodicity of the elements, the application of Thomson's atoms to problems in scattering and absorption led to quantitative predictions that were confirmed by experiments with high-velocity electrons traversing thin sheets of metal. Still, the much more massive and energetic α-particles being studied by Rutherford were better suited for exploring the interior of the atom, and careful measurements on the angular dependence of their scattering eventually allowed him to infer the existence of an atomic nucleus. Niels Bohr was particularly troubled by the radiative instability inherent to any mechanical atom, and succeeded in 1913 where others had failed in the prediction of emission spectra, by making two bold hypotheses that were in contradiction to the laws of classical physics, but necessary in order to account for experimental facts.

  14. Analytical model of an isolated single-atom electron source.

    PubMed

    Engelen, W J; Vredenbregt, E J D; Luiten, O J

    2014-12-01

    An analytical model of a single-atom electron source is presented, where electrons are created by near-threshold photoionization of an isolated atom. The model considers the classical dynamics of the electron just after the photon absorption, i.e. its motion in the potential of a singly charged ion and a uniform electric field used for acceleration. From closed expressions for the asymptotic transverse electron velocities and trajectories, the effective source temperature and the virtual source size can be calculated. The influence of the acceleration field strength and the ionization laser energy on these properties has been studied. With this model, a single-atom electron source with the optimum electron beam properties can be designed. Furthermore, we show that the model is also applicable to ionization of rubidium atoms, and thus also describes the ultracold electron source, which is based on photoionization of laser-cooled alkali atoms.

  15. Modeling Percentile Rank of Cardiorespiratory Fitness Across the Lifespan

    PubMed Central

    Graves, Rasinio S.; Mahnken, Jonathan D.; Perea, Rodrigo D.; Billinger, Sandra A.; Vidoni, Eric D.

    2016-01-01

    Purpose: The purpose of this investigation was to create an equation for continuous percentile rank of maximal oxygen consumption (VO2 max) from ages 20 to 99. Methods: We used a two-staged modeling approach with existing normative data from the American College of Sports Medicine for VO2 max. First, we estimated intercept and slope parameters for each decade of life as a logistic function. We then modeled change in intercept and slope as functions of age (stage two) using weighted least squares regression. The resulting equations were used to predict fitness percentile rank based on age, sex, and VO2 max, and included estimates for individuals beyond 79 years old. Results: We created a continuous, sex-specific model of VO2 max percentile rank across the lifespan. Conclusions: Percentile ranking of VO2 max can be made continuous and account for adults aged 20 to 99 with reasonable accuracy, improving the utility of this normalization procedure in practical and research settings, particularly in aging populations. PMID:26778922
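
    A toy version of the two-stage procedure is sketched below with made-up normative values standing in for the ACSM tables: stage one fits a logistic curve per decade, stage two models the resulting intercepts and slopes as functions of age (here simply linear), giving a percentile rank that is continuous in both age and VO2 max.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(vo2, b0, b1):
          """Percentile rank (0-1) as a logistic function of VO2 max."""
          return 1.0 / (1.0 + np.exp(-(b0 + b1 * vo2)))

      # Stage one: fit a logistic curve per decade to (VO2 max, percentile) pairs.
      # The numbers below are invented for illustration; the paper used ACSM norms.
      norms = {
          25: ([29, 36, 42, 48, 55], [0.1, 0.3, 0.5, 0.7, 0.9]),
          35: ([27, 33, 39, 45, 52], [0.1, 0.3, 0.5, 0.7, 0.9]),
          45: ([24, 30, 36, 42, 48], [0.1, 0.3, 0.5, 0.7, 0.9]),
          55: ([21, 27, 32, 38, 44], [0.1, 0.3, 0.5, 0.7, 0.9]),
      }
      ages, b0s, b1s = [], [], []
      for age, (vo2, pct) in norms.items():
          (b0, b1), _ = curve_fit(logistic, np.array(vo2, float), np.array(pct, float), p0=[-8.0, 0.2])
          ages.append(age); b0s.append(b0); b1s.append(b1)

      # Stage two: model intercept and slope as (here, linear) functions of age,
      # making the percentile rank continuous in both age and VO2 max.
      c0 = np.polyfit(ages, b0s, 1)
      c1 = np.polyfit(ages, b1s, 1)

      def percentile(age, vo2):
          return 100.0 * logistic(vo2, np.polyval(c0, age), np.polyval(c1, age))

      print(f"age 60, VO2 max 35 ml/kg/min -> {percentile(60, 35):.0f}th percentile (toy numbers)")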

  16. Developing Models: What is the Atom Really Like?

    ERIC Educational Resources Information Center

    Records, Roger M.

    1982-01-01

    Five atomic theory activities feasible for high school students to perform are described based on the following models: (1) Dalton's Uniform Sphere Model; (2) Thomson's Raisin Pudding Model; (3) Rutherford's Nuclear Model; (4) Bohr's Energy Level Model, and (5) Orbital Model from quantum mechanics. (SK)

  18. Methodical fitting for mathematical models of rubber-like materials

    NASA Astrophysics Data System (ADS)

    Destrade, Michel; Saccomandi, Giuseppe; Sgura, Ivonne

    2017-02-01

    A great variety of models can describe the nonlinear response of rubber to uniaxial tension. Yet an in-depth understanding of the successive stages of large extension is still lacking. We show that the response can be broken down in three steps, which we delineate by relying on a simple formatting of the data, the so-called Mooney plot transform. First, the small-to-moderate regime, where the polymeric chains unfold easily and the Mooney plot is almost linear. Second, the strain-hardening regime, where blobs of bundled chains unfold to stiffen the response in correspondence to the `upturn' of the Mooney plot. Third, the limiting-chain regime, with a sharp stiffening occurring as the chains extend towards their limit. We provide strain-energy functions with terms accounting for each stage that (i) give an accurate local and then global fitting of the data; (ii) are consistent with weak nonlinear elasticity theory and (iii) can be interpreted in the framework of statistical mechanics. We apply our method to Treloar's classical experimental data and also to some more recent data. Our method not only provides models that describe the experimental data with a very low quantitative relative error, but also shows that the theory of nonlinear elasticity is much more robust than it seemed at first sight.
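
    The Mooney plot transform underlying this analysis is easy to reproduce; the sketch below applies it to synthetic uniaxial data generated from a Mooney-Rivlin material (stand-in values, not Treloar's measurements) and recovers the two constants from the near-linear small-to-moderate regime.

      import numpy as np

      def mooney_transform(stretch, nominal_stress):
          """Return (1/lambda, reduced stress) for the Mooney plot, where the reduced
          (Mooney) stress is f / (lambda - lambda**-2) for nominal stress f and stretch lambda."""
          lam = np.asarray(stretch, float)
          f = np.asarray(nominal_stress, float)
          return 1.0 / lam, f / (lam - lam ** -2)

      # Synthetic uniaxial data from a Mooney-Rivlin material (C1 = 0.2, C2 = 0.05 MPa),
      # standing in for measurements such as Treloar's.
      lam = np.linspace(1.1, 3.0, 40)
      f = 2.0 * (lam - lam ** -2) * (0.2 + 0.05 / lam)

      inv_lam, g = mooney_transform(lam, f)
      # In the small-to-moderate regime the Mooney plot is nearly the straight line
      # g = 2*C1 + 2*C2/lambda, so a linear fit recovers the two constants.
      slope, intercept = np.polyfit(inv_lam, g, 1)
      print("C1 ~", intercept / 2.0, "  C2 ~", slope / 2.0)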

  19. A Comparison of Four Estimators of a Population Measure of Model Fit in Covariance Structure Analysis

    ERIC Educational Resources Information Center

    Zhang, Wei

    2008-01-01

    A major issue in the utilization of covariance structure analysis is model fit evaluation. Recent years have witnessed increasing interest in various test statistics and so-called fit indexes, most of which are actually based on or closely related to F[subscript 0], a measure of model fit in the population. This study aims to provide a systematic…

  20. Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2011-01-01

    The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…

  3. Project Physics Tests 5, Models of the Atom.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 5 are presented in this booklet. Included are 70 multiple-choice and 23 problem-and-essay questions. Concepts of atomic model are examined on aspects of relativistic corrections, electron emission, photoelectric effects, Compton effect, quantum theories, electrolysis experiments, atomic number and mass,…

  4. 100th anniversary of Bohr's model of the atom.

    PubMed

    Schwarz, W H Eugen

    2013-11-18

    In the fall of 1913 Niels Bohr formulated his atomic models at the age of 27. This Essay traces Bohr's fundamental reasoning regarding atomic structure and spectra, the periodic table of the elements, and chemical bonding. His enduring insights and superseded suppositions are also discussed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Analytical model of atomic-force-microscopy force curves in viscoelastic materials exhibiting power law relaxation

    NASA Astrophysics Data System (ADS)

    de Sousa, J. S.; Santos, J. A. C.; Barros, E. B.; Alencar, L. M. R.; Cruz, W. T.; Ramos, M. V.; Mendes Filho, J.

    2017-01-01

    We propose an analytical model for the force-indentation relationship in viscoelastic materials exhibiting a power law relaxation described by an exponent n, where n = 1 represents the standard viscoelastic solid (SLS) model and n < 1 represents a fractional SLS model. To validate the model, we perform nanoindentation measurements of polyacrylamide gels with atomic force microscopy (AFM) force curves. We found exponents n < 1 that depend on the bisacrylamide concentration. We also demonstrate that the fitting of AFM force curves for varying load speeds can reproduce the dynamic viscoelastic properties of those gels measured with dynamic force modulation methods.

  6. Model-independent fit to Planck and BICEP2 data

    NASA Astrophysics Data System (ADS)

    Barranco, Laura; Boubekeur, Lotfi; Mena, Olga

    2014-09-01

    Inflation is the leading theory to describe elegantly the initial conditions that led to structure formation in our Universe. In this paper, we present a novel phenomenological fit to the Planck, WMAP polarization (WP) and the BICEP2 data sets using an alternative parametrization. Instead of starting from inflationary potentials and computing the inflationary observables, we use a phenomenological parametrization due to Mukhanov, describing inflation by an effective equation of state, in terms of the number of e-folds and two phenomenological parameters α and β. Within such a parametrization, which captures the different inflationary models in a model-independent way, the values of the scalar spectral index ns, its running and the tensor-to-scalar ratio r are predicted, given a set of parameters (α, β). We perform a Markov Chain Monte Carlo analysis of these parameters, and we show that the combined analysis of Planck and WP data favors the Starobinsky and Higgs inflation scenarios. Assuming that the BICEP2 signal is not entirely due to foregrounds, the addition of this last data set prefers instead the ϕ² chaotic models. The constraint we get from Planck and WP data alone on the derived tensor-to-scalar ratio is r < 0.18 at 95% C.L., a value which is consistent with the one quoted from the BICEP2 Collaboration analysis, r = 0.16 (+0.06, −0.05), after foreground subtraction. This is not necessarily at odds with the 2σ tension found between Planck and BICEP2 measurements when analyzing data in terms of the usual ns and r parameters, given that the parametrization used here, for the preferred value ns ≃ 0.96, allows only for a restricted parameter space in the usual (ns, r) plane.

  7. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies because if the variogram model parameters are tainted with uncertainty, the latter will spread in the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of the automatic fitting method on the basis of putting together the geostatistical principles and optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, use has been made of the cross-validation method, and the best model has been introduced to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies have been presented.
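
    A minimal sketch of automatic variogram fitting in this spirit (Python with SciPy's dual_annealing rather than the authors' MATLAB program) is shown below: a single spherical structure plus nugget is fitted to a synthetic experimental variogram using a weighted least-squares objective, with weights taken as the number of pairs per lag.

      import numpy as np
      from scipy.optimize import dual_annealing

      def spherical(h, nugget, sill, a):
          """Spherical variogram model with nugget, partial sill and range a."""
          h = np.asarray(h, float)
          g = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
          return np.where(h < a, g, nugget + sill)

      # Experimental variogram: lag distances, semivariances and pair counts
      # (synthetic numbers used purely for illustration).
      lags = np.array([10, 20, 30, 40, 60, 80, 100, 140], float)
      gamma = np.array([0.21, 0.35, 0.50, 0.61, 0.74, 0.80, 0.82, 0.83])
      npairs = np.array([400, 380, 350, 330, 300, 260, 220, 150], float)

      def objective(p):
          # Weighted least squares; each lag is weighted by its number of pairs.
          resid = gamma - spherical(lags, *p)
          return float(np.sum(npairs * resid ** 2))

      bounds = [(0.0, 0.5), (0.0, 2.0), (10.0, 300.0)]  # nugget, partial sill, range
      result = dual_annealing(objective, bounds, seed=4)
      print("nugget, partial sill, range =", result.x)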

  8. Molecule-specific determination of atomic polarizabilities with the polarizable atomic multipole model.

    PubMed

    Woo Kim, Hyun; Rhee, Young Min

    2012-07-30

    Recently, many polarizable force fields have been devised to describe induction effects between molecules. In popular polarizable models based on induced dipole moments, atomic polarizabilities are the essential parameters and should be derived carefully. Here, we present a parameterization scheme for atomic polarizabilities using a minimization target function containing both molecular and atomic information. The main idea is to adopt reference data only from quantum chemical calculations, to perform atomic polarizability parameterizations even when relevant experimental data are scarce as in the case of electronically excited molecules. Specifically, our scheme assigns the atomic polarizabilities of any given molecule in such a way that its molecular polarizability tensor is well reproduced. We show that our scheme successfully works for various molecules in mimicking dipole responses not only in ground states but also in valence excited states. The electrostatic potential around a molecule with an externally perturbing nearby charge also exhibits a near-quantitative agreement with the reference data from quantum chemical calculations. The limitation of the model with isotropic atoms is also discussed to examine the scope of its applicability.

  9. A nonempirical anisotropic atom-atom model potential for chlorobenzene crystals.

    PubMed

    Day, Graeme M; Price, Sarah L

    2003-12-31

    A nearly nonempirical, transferable model potential is developed for the chlorobenzene molecules (C6ClnH6-n, n = 1 to 6) with anisotropy in the atom-atom form of both electrostatic and repulsion interactions. The potential is largely derived from the charge densities of the molecules, using a distributed multipole electrostatic model and a transferable dispersion model derived from the molecular polarizabilities. A nonempirical transferable repulsion model is obtained by analyzing the overlap of the charge densities in dimers as a function of orientation and separation and then calibrating this anisotropic atom-atom model against a limited number of intermolecular perturbation theory calculations of the short-range energies. The resulting model potential is a significant improvement over empirical model potentials in reproducing the twelve chlorobenzene crystal structures. Further validation calculations of the lattice energies and rigid-body k = 0 phonon frequencies provide satisfactory agreement with experiment, with the discrepancies being primarily due to approximations in the theoretical methods rather than the model intermolecular potential. The potential is able to give a good account of the three polymorphs of p-dichlorobenzene in a detailed crystal structure prediction study. Thus, by introducing repulsion anisotropy into a transferable potential scheme, it is possible to produce a set of potentials for the chlorobenzenes that can account for their crystal properties in an unprecedentedly realistic fashion.

  10. Algebraic direct methods for few-atoms structure models.

    PubMed

    Hauptman, Herbert A; Guo, D Y; Xu, Hongliang; Blessing, Robert H

    2002-07-01

    As a basis for direct-methods phasing at very low resolution for macromolecular crystal structures, normalized structure-factor algebra is presented for few-atoms structure models with N = 1, 2, 3, … equal atoms or polyatomic globs per unit cell. Main results include: [see text]. Triplet discriminant Delta(hk) and triplet weight W(hk) parameters, a ≈ 4.0 and b ≈ 3.0, respectively, were determined empirically in numerical error analyses. Tests with phases calculated for few-atoms 'super-glob' models of the protein apo-D-glyceraldehyde-3-phosphate dehydrogenase (approximately 10000 non-H atoms) showed that low-resolution phases from the new few-atoms tangent formula were much better than conventional tangent formula phases for N = 2 and 3; phases from the two formulae were essentially the same for N ≥ 4.

  11. Atomicrex—a general purpose tool for the construction of atomic interaction models

    NASA Astrophysics Data System (ADS)

    Stukowski, Alexander; Fransson, Erik; Mock, Markus; Erhart, Paul

    2017-07-01

    We introduce atomicrex, an open-source code for constructing interatomic potentials as well as more general types of atomic-scale models. Such effective models are required to simulate extended materials structures comprising many thousands of atoms or more, because electronic structure methods become computationally too expensive at this scale. atomicrex covers a wide range of interatomic potential types and fulfills many needs in atomistic model development. As inputs, it supports experimental property values as well as ab initio energies and forces, to which models can be fitted using various optimization algorithms. The open architecture of atomicrex allows it to be used in custom model development scenarios beyond classical interatomic potentials while thanks to its Python interface it can be readily integrated e.g., with electronic structure calculations or machine learning algorithms.
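
    The snippet below is not atomicrex's own interface; it only illustrates, in generic Python, the kind of fit such a tool performs: adjusting the parameters of a simple Lennard-Jones pair potential so that synthetic "ab initio" dimer energies are reproduced in a least-squares sense.

      import numpy as np
      from scipy.optimize import least_squares

      def lj_energy(r, epsilon, sigma):
          """Lennard-Jones pair energy for separation r."""
          sr6 = (sigma / r) ** 6
          return 4.0 * epsilon * (sr6 ** 2 - sr6)

      # Reference dimer separations and energies (synthetic stand-ins for ab initio data).
      r_ref = np.array([2.8, 3.0, 3.2, 3.4, 3.8, 4.5, 5.5])
      e_ref = lj_energy(r_ref, 0.011, 3.4) + 1e-4 * np.random.default_rng(5).normal(size=r_ref.size)

      def residuals(p):
          epsilon, sigma = p
          return lj_energy(r_ref, epsilon, sigma) - e_ref

      fit = least_squares(residuals, x0=[0.02, 3.0], bounds=([1e-4, 2.0], [1.0, 5.0]))
      print("fitted epsilon, sigma =", fit.x)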

  12. A quantitative confidence signal detection model: 1. Fitting psychometric functions

    PubMed Central

    Yi, Yongwoo

    2016-01-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777

  13. A quantitative confidence signal detection model: 1. Fitting psychometric functions.

    PubMed

    Yi, Yongwoo; Merfeld, Daniel M

    2016-04-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. Copyright © 2016 the American Physiological Society.
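
    For context, the conventional baseline that the confidence-based method improves on is a maximum-likelihood fit of a psychometric function to forced-choice data; a minimal sketch (cumulative-Gaussian model, simulated responses, not the authors' confidence model) is given below.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def neg_log_likelihood(params, level, n_pos, n_trials):
          """Binomial negative log-likelihood of a cumulative-Gaussian psychometric function."""
          mu, sigma = params
          if sigma <= 0:
              return np.inf
          p = np.clip(norm.cdf(level, loc=mu, scale=sigma), 1e-9, 1 - 1e-9)
          return -np.sum(n_pos * np.log(p) + (n_trials - n_pos) * np.log(1 - p))

      # Simulated forced-choice data: 20 trials per level, true mu = 2.0, sigma = 1.5.
      rng = np.random.default_rng(6)
      levels = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
      n_trials = np.full(levels.size, 20)
      n_pos = rng.binomial(n_trials, norm.cdf(levels, loc=2.0, scale=1.5))

      fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(levels, n_pos, n_trials),
                     method="Nelder-Mead")
      print("fitted threshold (mu) and spread (sigma):", fit.x)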

  14. Accurate bolus arrival time estimation using piecewise linear model fitting

    NASA Astrophysics Data System (ADS)

    Abdou, Elhassan; de Mey, Johan; De Ridder, Mark; Vandemeulebroucke, Jef

    2017-02-01

    Dynamic contrast-enhanced computed tomography (DCE-CT) is an emerging radiological technique, which consists in acquiring a rapid sequence of CT images, shortly after the injection of an intravenous contrast agent. The passage of the contrast agent in a tissue results in a varying CT intensity over time, recorded in time-attenuation curves (TACs), which can be related to the contrast supplied to that tissue via the supplying artery to estimate the local perfusion and permeability characteristics. The time delay between the arrival of the contrast bolus in the feeding artery and the tissue of interest, called the bolus arrival time (BAT), needs to be determined accurately to enable reliable perfusion analysis. Its automated identification is however highly sensitive to noise. We propose an accurate and efficient method for estimating the BAT from DCE-CT images. The method relies on a piecewise linear TAC model with four segments and suitable parameter constraints for limiting the range of possible values. The model is fitted to the acquired TACs in a multiresolution fashion using an iterative optimization approach. The performance of the method was evaluated on simulated and real perfusion data of lung and rectum tumours. In both cases, the method was found to be stable, leading to average accuracies in the order of the temporal resolution of the dynamic sequence. For reasonable levels of noise, the results were found to be comparable to those obtained using a previously proposed method, employing a full search algorithm, but requiring an order of magnitude more computation time.
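
    A stripped-down illustration of breakpoint fitting for the BAT (two segments, flat baseline plus linear enhancement, rather than the constrained four-segment model of the paper) is sketched below on a synthetic time-attenuation curve.

      import numpy as np

      def fit_bat(t, tac):
          """Estimate the bolus arrival time as the breakpoint of a two-segment model:
          a flat baseline followed by a linearly rising enhancement."""
          best_bat, best_err = t[0], np.inf
          for i in range(2, t.size - 2):  # candidate breakpoints at sample times
              baseline = tac[:i]
              slope, intercept = np.polyfit(t[i:], tac[i:], 1)
              err = (np.sum((baseline - baseline.mean()) ** 2)
                     + np.sum((tac[i:] - (slope * t[i:] + intercept)) ** 2))
              if err < best_err:
                  best_bat, best_err = t[i], err
          return best_bat

      # Synthetic time-attenuation curve: flat baseline until t = 12 s, then enhancement.
      rng = np.random.default_rng(7)
      t = np.arange(0.0, 40.0, 1.0)
      tac = 40.0 + np.where(t > 12.0, 6.0 * (t - 12.0), 0.0) + 2.0 * rng.normal(size=t.size)
      print("estimated bolus arrival time ~", fit_bat(t, tac), "s")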

  15. Models for identification of erroneous atom-to-atom mapping of reactions performed by automated algorithms.

    PubMed

    Muller, Christophe; Marcou, Gilles; Horvath, Dragos; Aires-de-Sousa, João; Varnek, Alexandre

    2012-12-21

    Machine learning (SVM and JRip rule learner) methods have been used in conjunction with the Condensed Graph of Reaction (CGR) approach to identify errors in the atom-to-atom mapping of chemical reactions produced by an automated mapping tool by ChemAxon. The modeling has been performed on the first three enzymatic classes of metabolic reactions from the KEGG database. Each reaction has been converted into a CGR representing a pseudomolecule with conventional (single, double, aromatic, etc.) bonds and dynamic bonds characterizing chemical transformations. The ChemAxon tool was used to automatically detect the matching atom pairs in reagents and products. These automated mappings were analyzed by the human expert and classified as "correct" or "wrong". ISIDA fragment descriptors generated for CGRs for both correct and wrong mappings were used as attributes in machine learning. The learned models have been validated in n-fold cross-validation on the training set followed by a challenge to detect correct and wrong mappings within an external test set of reactions, never used for learning. Results show that both SVM and JRip models detect most of the wrongly mapped reactions. We believe that this approach could be used to identify erroneous atom-to-atom mapping performed by any automated algorithm.
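
    The classification step itself is standard supervised learning: given precomputed ISIDA fragment descriptors as a feature matrix and expert labels, training and cross-validating an SVM classifier might look like the sketch below. The feature matrix and labels here are random placeholders standing in for the real descriptors, so the score is only around chance.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((500, 300))                 # placeholder for ISIDA fragment descriptor counts
        y = rng.integers(0, 2, size=500)           # 1 = correctly mapped, 0 = wrongly mapped (dummy labels)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=5)  # n-fold cross-validation, as in the paper
        print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))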

  16. Fitness voter model: Damped oscillations and anomalous consensus

    NASA Astrophysics Data System (ADS)

    Woolcock, Anthony; Connaughton, Colm; Merali, Yasmin; Vazquez, Federico

    2017-09-01

    We study the dynamics of opinion formation in a heterogeneous voter model on a complete graph, in which each agent is endowed with an integer fitness parameter k ≥ 0, in addition to its + or - opinion state. The evolution of the distribution of k-values and the opinion dynamics are coupled together, so as to allow the system to dynamically develop heterogeneity and memory in a simple way. When two agents with different opinions interact, their k-values are compared, and with probability p the agent with the lower value adopts the opinion of the one with the higher value, while with probability 1 - p the opposite happens. The agent that keeps its opinion (winning agent) increments its k-value by one. We study the dynamics of the system in the entire 0 ≤ p ≤ 1 range and compare with the case p = 1/2, in which opinions are decoupled from the k-values and the dynamics is equivalent to that of the standard voter model. When 0 ≤ p < 1/2, agents with higher k-values are less persuasive, and the system approaches exponentially fast the consensus state of the initial majority opinion. The mean consensus time τ appears to grow logarithmically with the number of agents N, and it is greatly decreased relative to the linear behavior τ ~ N found in the standard voter model. When 1/2 < p ≤ 1, … the mean consensus time is longer than that of the standard voter model, although it still scales linearly with N. The p = 1 case is special, with a relaxation to coexistence that scales as t^(-2.73) and a consensus time…
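
    The update rule described above translates directly into a Monte Carlo simulation. The sketch below is our own minimal Python implementation on a complete graph (ties in k are broken at random, an assumption not stated in the abstract); it is meant only to illustrate the dynamics, not to reproduce the paper's scaling results.

        import numpy as np

        def fitness_voter(N=1000, p=0.3, max_steps=2_000_000, seed=0):
            rng = np.random.default_rng(seed)
            opinion = rng.choice([-1, 1], size=N)
            k = np.zeros(N, dtype=int)                      # fitness of each agent
            n_plus = int(np.sum(opinion == 1))
            for step in range(max_steps):
                i, j = int(rng.integers(N)), int(rng.integers(N))
                if i == j or opinion[i] == opinion[j]:
                    continue
                lo, hi = (i, j) if k[i] < k[j] else (j, i)  # lower- and higher-fitness agent
                if k[i] == k[j]:
                    lo, hi = (i, j) if rng.random() < 0.5 else (j, i)
                # With probability p the lower-fitness agent adopts the higher-fitness opinion.
                winner, loser = (hi, lo) if rng.random() < p else (lo, hi)
                opinion[loser] = opinion[winner]
                n_plus += 1 if opinion[winner] == 1 else -1
                k[winner] += 1                              # the winning agent gains fitness
                if n_plus in (0, N):                        # consensus reached
                    return step, opinion, k
            return max_steps, opinion, k

        steps, opinion, k = fitness_voter()
        print("update attempts to consensus:", steps, " final opinion:", opinion[0])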

  17. Convergence, Admissibility, and Fit of Alternative Confirmatory Factor Analysis Models for MTMM Data

    ERIC Educational Resources Information Center

    Lance, Charles E.; Fan, Yi

    2016-01-01

    We compared six different analytic models for multitrait-multimethod (MTMM) data in terms of convergence, admissibility, and model fit to 258 samples of previously reported data. Two well-known models, the correlated trait-correlated method (CTCM) and the correlated trait-correlated uniqueness (CTCU) models, were fit for reference purposes in…

  18. Convergence, Admissibility, and Fit of Alternative Confirmatory Factor Analysis Models for MTMM Data

    ERIC Educational Resources Information Center

    Lance, Charles E.; Fan, Yi

    2016-01-01

    We compared six different analytic models for multitrait-multimethod (MTMM) data in terms of convergence, admissibility, and model fit to 258 samples of previously reported data. Two well-known models, the correlated trait-correlated method (CTCM) and the correlated trait-correlated uniqueness (CTCU) models, were fit for reference purposes in…

  19. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  20. An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models

    ERIC Educational Resources Information Center

    Liu, Yanlou; Tian, Wei; Xin, Tao

    2016-01-01

    The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…

  1. An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models

    ERIC Educational Resources Information Center

    Liu, Yanlou; Tian, Wei; Xin, Tao

    2016-01-01

    The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…

  2. Level set methods for modelling field evaporation in atom probe.

    PubMed

    Haley, Daniel; Moody, Michael P; Smith, George D W

    2013-12-01

    Atom probe is a nanoscale technique for creating three-dimensional spatially and chemically resolved point datasets, primarily of metallic or semiconductor materials. While atom probe can achieve local high-level resolution, the spatial coherence of the technique is highly dependent upon the evaporative physics in the material and can often result in large geometric distortions in experimental results. The distortions originate from uncertainties in the projection function between the field evaporating specimen and the ion detector. Here we explore the possibility of continuum numerical approximations to the evaporative behavior during an atom probe experiment, and the subsequent propagation of ions to the detector, with particular emphasis placed on the solution of axisymmetric systems, such as isolated particles and multilayer systems. Ultimately, this method may prove critical in rapid modeling of tip shape evolution in atom probe tomography, which itself is a key factor in the rapid generation of spatially accurate reconstructions in atom probe datasets.

  3. Using symbolic computing in building probabilistic models for atoms

    NASA Astrophysics Data System (ADS)

    Guiasu, Silviu

    This article shows how symbolic computing and the mathematical formalism induced by maximizing entropy and minimizing the mean deviation from statistical equilibrium may be effectively applied to obtaining probabilistic models for the structure of atoms, using trial wave functions compatible with an average shell picture of the atom. The objective is not only to recover the experimental value of the ground state mean energy of the atom, but rather to better approximate the unknown parameters of these trial functions and to calculate both correlations between electrons and the amount of interdependence among different subsets of electrons of the atoms. The examples and numerical results refer to the hydrogen, helium, lithium, and beryllium atoms. The main computer programs, using the symbolic computing software MATHEMATICA, are also given.

  4. A 4096 atom model of amorphous silicon: Structure and dynamics

    NASA Astrophysics Data System (ADS)

    Feldman, Joseph L.; Bickham, Scott R.; Davidson, Brian N.; Wooten, Frederick

    1997-03-01

    We present structural and lattice dynamical information for a 4096 atom model of amorphous silicon. The structural model was obtained, similarly to previously published smaller models, using periodic boundary conditions, the Wooten-Winer-Weaire bond-switching algorithm, and the Broughton-Li relaxation with respect to the Stillinger-Weber potential. The structure is dynamically stable and there is no evidence in the radial distribution function of medium range order. For examining this large model, we use a 1000 processor Connection Machine to compute all the eigenvalues and eigenvectors exactly. The phonon density of states and inverse participation ratio are compared with results for related 216, 432 and 1000-atom models.

  5. Monte Carlo Computational Modeling of Atomic Oxygen Interactions

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Stueber, Thomas J.; Miller, Sharon K.; De Groh, Kim K.

    2017-01-01

    Computational modeling of the erosion of polymers caused by atomic oxygen in low Earth orbit (LEO) is useful for determining areas of concern for spacecraft environment durability. Successful modeling requires that the characteristics of the environment, such as the atomic oxygen energy distribution, flux, and angular distribution, be properly represented in the model. Thus, whether the atomic oxygen arrives normal or inclined to a surface, and whether it arrives from a consistent direction or sweeps across the surface (as in the case of polymeric solar array blankets), is important in determining durability. When atomic oxygen impacts a polymer surface it can react, removing a certain volume per incident atom (the erosion yield), recombine, or be ejected as an active oxygen atom that may either react with other polymer atoms or exit into space. Scattered atoms can also have a lower energy as a result of partial or total thermal accommodation. Many solutions to polymer durability in LEO involve protective thin films of metal oxides such as SiO2 to prevent atomic oxygen erosion. Such protective films also have their own interaction characteristics. A Monte Carlo computational model has been developed which takes into account the various types of atomic oxygen arrival and how the atomic oxygen reacts with a representative polymer (polyimide Kapton H) and at defect sites in an oxide protective coating, such as SiO2, on that polymer. Although this model was initially intended to determine atomic oxygen erosion behavior at defect sites for the International Space Station solar arrays, it has been used to predict atomic oxygen erosion or oxidation behavior on many other spacecraft components, including erosion of polymeric joints, durability of solar array blanket box covers, and scattering of atomic oxygen into telescopes and microwave cavities where oxidation of critical component surfaces can take place. The computational model is a two-dimensional model…

  6. Keratoconus, cross-link-induction, comparison between fitting exponential function and a fitting equation obtained by a mathematical model.

    PubMed

    Albanese, A; Urso, R; Bianciardi, L; Rigato, M; Battisti, E

    2009-11-01

    With reference to experimental data in the literature, we present a model consisting of two elastic elements, conceived to simulate resistance to stretching, at constant velocity of elongation, of corneal tissue affected by keratoconus, treated with riboflavin and ultraviolet irradiation to induce cross-linking. The function describing the model behaviour was fitted to the stress and strain values. It was found that the Young's moduli of the two elastic elements increased in cross-linked tissues and that cross-linking treatment therefore increased corneal rigidity. It is recognized that this observation is substantially in line with the conclusion reported in the literature, obtained using an exponential fitting function. It is observed, however, that the latter function implies a condition of non-zero stresses without strain, and provides no interpretative insight because it lacks any biomechanical basis. Above all, the function fits a singular trend, inexplicably claimed to be viscoelastic, with surprising perfection. In any case, using the reported data, the study demonstrates that a fitting equation obtained by a modelling approach not only shows the evident efficacy of the treatment, but also provides guidance for studying modifications induced in cross-linked fibres.

  7. Percentile Analysis for Goodness-of-Fit Comparisons of Models to Data

    DTIC Science & Technology

    2014-07-01

    … comparisons of theoretical predictions to empirical data reflect an alternating dialectic between theory building and experimentation (cf. McClelland, 2009). A common way of assessing the fit of a model to data is to employ statistical goodness-of-fit measures. One such measure is the … limitations of using R² and RMSE as model evaluation metrics. Consider the fit of two hypothetical cognitive models, Theory A and Theory B … R² = 0.93 …
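
    For reference, the two goodness-of-fit measures discussed in the report are straightforward to compute; the sketch below does so for made-up predictions from a hypothetical model (all numbers are illustrative, not taken from the report).

        import numpy as np

        def r_squared(obs, pred):
            ss_res = np.sum((obs - pred) ** 2)
            ss_tot = np.sum((obs - obs.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        def rmse(obs, pred):
            return float(np.sqrt(np.mean((obs - pred) ** 2)))

        obs    = np.array([0.61, 0.70, 0.78, 0.84, 0.90])   # hypothetical observed accuracies
        pred_a = np.array([0.60, 0.71, 0.77, 0.85, 0.89])   # hypothetical "Theory A" predictions
        print("R^2 =", round(r_squared(obs, pred_a), 3), " RMSE =", round(rmse(obs, pred_a), 4))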

  8. EFFICIENT MODEL-FITTING AND MODEL-COMPARISON FOR HIGH-DIMENSIONAL BAYESIAN GEOSTATISTICAL MODELS. (R826887)

    EPA Science Inventory

    Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...

  9. EFFICIENT MODEL-FITTING AND MODEL-COMPARISON FOR HIGH-DIMENSIONAL BAYESIAN GEOSTATISTICAL MODELS. (R826887)

    EPA Science Inventory

    Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...

  10. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Finch, Maria E. Hernandez

    2016-01-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…
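
    The article works in R; the following is an analogous sketch in Python (scikit-learn) showing the basic point that a cross-validated lasso can still be fit, and selects a sparse set of predictors, when the number of variables exceeds the sample size. The data and dimensions are simulated for illustration only.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p = 30, 100                                    # more predictors than observations
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:3] = [2.0, -1.5, 1.0]                       # only three truly active predictors
        y = X @ beta + rng.normal(scale=0.5, size=n)

        # The L1 penalty shrinks most coefficients exactly to zero, which is what
        # makes the estimator usable in this p > n setting.
        model = LassoCV(cv=5).fit(X, y)
        print("selected predictors:", np.flatnonzero(model.coef_))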

  11. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Finch, Maria E. Hernandez

    2016-01-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…

  12. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    PubMed

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.

  13. A Comparison of Model-Data Fit for Parametric and Nonparametric Item Response Theory Models Using Ordinal-Level Ratings

    ERIC Educational Resources Information Center

    Dyehouse, Melissa A.

    2009-01-01

    This study compared the model-data fit of a parametric item response theory (PIRT) model to a nonparametric item response theory (NIRT) model to determine the best-fitting model for use with ordinal-level alternate assessment ratings. The PIRT Generalized Graded Unfolding Model (GGUM) was compared to the NIRT Mokken model. Chi-square statistics…

  14. Detecting Clusters in Atom Probe Data with Gaussian Mixture Models.

    PubMed

    Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P

    2017-04-01

    Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detect clusters is the maximum separation method, a heuristic that relies heavily upon parameters manually chosen by the user. In this work, a new clustering algorithm, Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.
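
    GEMA itself is a purpose-built algorithm, but its core ingredient, fitting a Gaussian mixture by expectation maximization and assigning atoms to clusters probabilistically, can be illustrated with a standard library implementation. The sketch below uses scikit-learn on toy three-dimensional positions; the geometry, the fixed component count, and the idea of letting one broad component absorb the random matrix are our illustrative assumptions rather than the authors' procedure.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Toy stand-in for solute positions: two clusters embedded in a random matrix.
        matrix   = rng.uniform(0, 50, size=(2000, 3))
        cluster1 = rng.normal([15, 15, 15], 1.0, size=(150, 3))
        cluster2 = rng.normal([35, 30, 20], 1.5, size=(200, 3))
        positions = np.vstack([matrix, cluster1, cluster2])

        # Fit by expectation maximization; in practice the number of components would be
        # chosen by a criterion such as BIC rather than fixed in advance.
        gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
        labels = gmm.fit_predict(positions)
        proba = gmm.predict_proba(positions)        # soft (probabilistic) atom-to-cluster assignments
        print("atoms per component:", np.bincount(labels))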

  15. Nagaoka’s atomic model and hyperfine interactions

    PubMed Central

    INAMURA, Takashi T.

    2016-01-01

    The prevailing view of Nagaoka’s “Saturnian” atom is so misleading that today many people have an erroneous picture of Nagaoka’s vision. They believe it to be a system involving a ‘giant core’ with electrons circulating just outside. Actually, though, in view of the Coulomb potential related to the atomic nucleus, Nagaoka’s model is exactly the same as Rutherford’s. This is true of the Bohr atom, too. To give proper credit, Nagaoka should be remembered together with Rutherford and Bohr in the history of the atomic model. It is also pointed out that Nagaoka was a pioneer of understanding hyperfine interactions in order to study nuclear structure. PMID:27063182

  16. Quantum entanglement in two-electron atomic models

    NASA Astrophysics Data System (ADS)

    Manzano, D.; Plastino, A. R.; Dehesa, J. S.; Koga, T.

    2010-07-01

    We explore the main entanglement properties exhibited by the eigenfunctions of two exactly soluble two-electron models, the Crandall atom and the Hooke atom, and compare them with the entanglement features of helium-like systems. We compute the amount of entanglement associated with the wavefunctions corresponding to the fundamental and first few excited states of these models. We investigate the dependence of the entanglement on the parameters of the models and on the quantum numbers of the eigenstates. It is found that the amount of entanglement of the system tends to increase with energy in both models. In addition, we study the entanglement of a few states of helium-like systems, which we compute using high-quality Kinoshita-like eigenfunctions. The dependence of the entanglement of helium-like atoms on the nuclear charge and on energy is found to be consistent with the trends observed in the previous two model systems.

  17. The Search for "Optimal" Cutoff Properties: Fit Index Criteria in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Sivo, Stephen A.; Xitao, Fan; Witta, E. Lea; Willse, John T.

    2006-01-01

    This study is a partial replication of L. Hu and P. M. Bentler's (1999) fit criteria work. The purpose of this study was twofold: (a) to determine whether cut-off values vary according to which model is the true population model for a dataset and (b) to identify which of 13 fit indexes behave optimally by retaining all of the correct models while…

  18. Fitting Item Response Theory Models to Two Personality Inventories: Issues and Insights.

    ERIC Educational Resources Information Center

    Chernyshenko, Oleksandr S.; Stark, Stephen; Chan, Kim-Yin; Drasgow, Fritz; Williams, Bruce

    2001-01-01

    Compared the fit of several Item Response Theory (IRT) models to two personality assessment instruments using data from 13,059 individuals responding to one instrument and 1,770 individuals responding to the other. Two- and three-parameter logistic models fit some scales reasonably well, but not others, and the graded response model generally did…

  19. Atomic model of the human cardiac muscle myosin filament.

    PubMed

    Al-Khayat, Hind A; Kensler, Robert W; Squire, John M; Marston, Steven B; Morris, Edward P

    2013-01-02

    Of all the myosin filaments in muscle, the most important in terms of human health, and so far the least studied, are those in the human heart. Here we report a 3D single-particle analysis of electron micrograph images of negatively stained myosin filaments isolated from human cardiac muscle in the normal (undiseased) relaxed state. The resulting 28-Å resolution 3D reconstruction shows axial and azimuthal (no radial) myosin head perturbations within the 429-Å axial repeat, with rotations between successive 132 Å-, 148 Å-, and 149 Å-spaced crowns of heads close to 60°, 35°, and 25° (all would be 40° in an unperturbed three-stranded helix). We have defined the myosin head atomic arrangements within the three crown levels and have modeled the organization of myosin subfragment 2 and the possible locations of the 39 Å-spaced domains of titin and the cardiac isoform of myosin-binding protein-C on the surface of the myosin filament backbone. Best fits were obtained with head conformations on all crowns close to the structure of the two-headed myosin molecule of vertebrate chicken smooth muscle in the dephosphorylated relaxed state. Individual crowns show differences in head-pair tilts and subfragment 2 orientations, which, together with the observed perturbations, result in different intercrown head interactions, including one not reported before. Analysis of the interactions between the myosin heads, the cardiac isoform of myosin-binding protein-C, and titin will aid in understanding of the structural effects of mutations in these proteins known to be associated with human cardiomyopathies.

  20. Corrections to the paper "Fitting the Armitage-Doll model to radiation-exposed cohorts and implications for population cancer risks"

    SciTech Connect

    Little, M.P.; Hawkins, M.M.; Charles, M.W.; Hildreth, N.G.

    1994-01-01

    A recent paper analyzed patterns of cancer in the Japanese atomic bomb survivors and three other groups exposed to radiation by fitting the so-called multistage model of Armitage and Doll. The paper concluded that the incidence of solid cancer could be described adequately by a model in which up to two stages affected by radiation were assumed, but that the data for leukemia within the bomb survivors might not be so well fitted. This was in part because of a failure to account for the linear-quadratic dose response that has been observed in the Japanese cohort. It has recently come to our attention that there was a mistake in the fits of the model with two adjacent radiation-affected stages, whereby the quadratic coefficient in dose was being set to zero in all the fits. This paper provides corrections to the calculations for the model and discusses the results.

  1. A fitted neoprene garment to cover dressings in swine models.

    PubMed

    Mino, Matthew J; Mauskar, Neil A; Matt, Sara E; Pavlovich, Anna R; Prindeze, Nicholas J; Moffatt, Lauren T; Shupp, Jeffrey W

    2012-12-17

    Domesticated porcine species are commonly used in studies of wound healing, owing to similarities between porcine skin and human skin. Such studies often involve wound dressings, and keeping these dressings intact on the animal can be a challenge. The authors describe a novel and simple technique for constructing a fitted neoprene garment for pigs that covers dressings and maintains their integrity during experiments.

  2. The FITS model office ergonomics program: a model for best practice.

    PubMed

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can produce positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. To this end, the FITS Model Office Ergonomics Program was developed, drawing on the legislative requirements for promoting the health and safety of workers who use computers for extended periods, as well as on previous research findings. The Model is developed according to practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; and (4) Stretching Exercises and Rest Breaks as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  3. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules

    SciTech Connect

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O. Anatole

    2015-07-01

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models’ predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.

  4. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    ERIC Educational Resources Information Center

    Farmer, Jim

    2010-01-01

    In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days.…
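
    The exercise in question amounts to estimating a two-state (wet/dry) Markov chain from daily rainfall records and then simulating from it. A minimal sketch, with an invented sequence of days standing in for the Darwin Airport data:

        import numpy as np

        days = np.array([0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1])   # 1 = wet, 0 = dry (hypothetical)

        # Estimate transition probabilities by counting day-to-day transitions.
        counts = np.zeros((2, 2))
        for today, tomorrow in zip(days[:-1], days[1:]):
            counts[today, tomorrow] += 1
        P = counts / counts.sum(axis=1, keepdims=True)    # rows: today's state, columns: tomorrow's
        print("P(wet tomorrow | dry today) =", P[0, 1])
        print("P(wet tomorrow | wet today) =", P[1, 1])

        # Simulate a 90-day season from the fitted chain.
        rng = np.random.default_rng(0)
        state, simulated = int(days[-1]), []
        for _ in range(90):
            state = int(rng.choice(2, p=P[state]))
            simulated.append(state)
        print("simulated wet-day fraction:", np.mean(simulated))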

  5. Phenomenological model of spin crossover in molecular crystals as derived from atom-atom potentials.

    PubMed

    Sinitskiy, Anton V; Tchougréeff, Andrei L; Dronskowski, Richard

    2011-08-07

    The method of atom-atom potentials, previously applied to the analysis of pure molecular crystals formed by either low-spin (LS) or high-spin (HS) forms (spin isomers) of Fe(II) coordination compounds (Sinitskiy et al., Phys. Chem. Chem. Phys., 2009, 11, 10983), is used to estimate the lattice enthalpies of mixed crystals containing different fractions of the spin isomers. The crystals under study were formed by LS and HS isomers of Fe(phen)(2)(NCS)(2) (phen = 1,10-phenanthroline), Fe(btz)(2)(NCS)(2) (btz = 5,5',6,6'-tetrahydro-4H,4'H-2,2'-bi-1,3-thiazine), and Fe(bpz)(2)(bipy) (bpz = dihydrobis(1-pyrazolil)borate, and bipy = 2,2'-bipyridine). For the first time the phenomenological parameters Γ pertinent to the Slichter-Drickamer model (SDM) of several materials were independently derived from the microscopic model of the crystals with use of atom-atom potentials of intermolecular interaction. The accuracy of the SDM was checked against the numerical data on the enthalpies of mixed crystals. Fair semiquantitative agreement with the experimental dependence of the HS fraction on temperature was achieved with use of these values. Prediction of trends in Γ values as a function of chemical composition and geometry of the crystals is possible with the proposed approach, which opens a way to rational design of spin crossover materials with desired properties.
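
    For context, the Slichter-Drickamer model referred to above gives the equilibrium high-spin fraction x(T) implicitly through ln((1-x)/x) = [ΔH + Γ(1-2x) - TΔS]/(RT). The sketch below solves this condition numerically; the ΔH, ΔS and Γ values are illustrative placeholders, not the parameters derived in the paper.

        import numpy as np
        from scipy.optimize import brentq

        R = 8.314                                # gas constant, J mol^-1 K^-1

        def hs_fraction(T, dH=12000.0, dS=60.0, Gamma=2000.0):
            # Root of the Slichter-Drickamer equilibrium condition for the HS fraction x.
            f = lambda x: np.log((1 - x) / x) - (dH + Gamma * (1 - 2 * x) - T * dS) / (R * T)
            return brentq(f, 1e-9, 1 - 1e-9)

        for T in (100, 150, 200, 250, 300):
            print(f"T = {T:3d} K   HS fraction = {hs_fraction(T):.3f}")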

  6. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope

    SciTech Connect

    Sader, John E.; Yousefi, Morteza; Friend, James R.

    2014-02-15

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
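
    A minimal illustration of the kind of fit whose uncertainties the paper analyses: least-squares fitting of a thermally driven simple-harmonic-oscillator spectrum plus a white-noise floor, with parameter uncertainties read off the fit covariance. The synthetic spectrum, noise model, and starting values below are our assumptions, and the paper's closed-form uncertainty formulas are not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def sho_psd(f, A, f0, Q, white):
            # Damped simple-harmonic-oscillator power spectral density plus white noise.
            return A * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / Q)**2) + white

        # Synthetic spectrum around a 10 kHz resonance with Q = 100.
        f = np.linspace(5e3, 15e3, 2000)
        rng = np.random.default_rng(0)
        data = sho_psd(f, 1e-12, 1e4, 100.0, 5e-13) * rng.gamma(shape=4, scale=0.25, size=f.size)

        popt, pcov = curve_fit(sho_psd, f, data, p0=[1e-12, 9.5e3, 80.0, 4e-13])
        perr = np.sqrt(np.diag(pcov))            # 1-sigma uncertainties from the covariance matrix
        for name, val, err in zip(("A", "f0", "Q", "white"), popt, perr):
            print(f"{name:5s} = {val:.3g} +/- {err:.3g}")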

  7. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    PubMed

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  8. Building Relativistic Mean-Field Models for Atomic Nuclei and Neutron Stars

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Chia; Piekarewicz, Jorge

    2014-03-01

    Nuclear energy density functional (EDF) theory has been quite successful in describing nuclear systems such as atomic nuclei and nuclear matter. However, when building new models, attention is usually paid to the best-fit parameters only. In recent years, focus has been shifted to the neighborhood around the minimum of the chi-square function as well. This powerful covariance analysis is able to provide important information bridging experiments, observations, and theories. In this work, we attempt to build a specific type of nuclear EDFs, the relativistic mean-field models, which treat atomic nuclei, nuclear matter, and neutron stars on the same footing. The application of covariance analysis can reveal correlations between observables of interest. The purpose is to elucidate the alleged relations between the neutron skin of heavy nuclei and the size of neutron stars, and to develop insight into future investigations.

  9. Physically representative atomistic modeling of atomic-scale friction

    NASA Astrophysics Data System (ADS)

    Dong, Yalin

    Nanotribology is a research field that studies friction, adhesion, wear and lubrication occurring between two sliding interfaces at the nanoscale. This study is motivated by the demanding need to miniaturize mechanical components in Micro Electro Mechanical Systems (MEMS), to improve durability in magnetic storage systems, and by other industrial applications. Overcoming tribological failure and finding ways to control friction at small scales have become key to commercializing MEMS with sliding components, as well as to stimulating the technological innovation associated with the development of MEMS. In addition to the industrial applications, such research is also scientifically fascinating because it opens a door to understanding macroscopic friction from the atomic level up, and therefore serves as a bridge between science and engineering. This thesis focuses on solid/solid atomic friction and its associated energy dissipation through theoretical analysis, atomistic simulation, transition state theory, and close collaboration with experimentalists. Reduced-order models have many advantages owing to their simplicity and their capacity to simulate long-time events. We apply Prandtl-Tomlinson models and their extensions to interpret dry atomic-scale friction. We begin with the fundamental equations and build on them step by step, from the simple quasistatic one-spring, one-mass model for predicting transitions between friction regimes to the two-dimensional and multi-atom models for describing the effect of contact area. Theoretical analysis, numerical implementation, and predicted physical phenomena are all discussed. In the process, we demonstrate the significant potential for this approach to yield new fundamental understanding of atomic-scale friction. The importance of atomistic modeling in the investigation of atomic friction can hardly be overemphasized, since each single atom can play a significant role yet is hard to capture experimentally. In atomic friction, the…
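
    The Prandtl-Tomlinson picture described above is easy to prototype. The sketch below integrates an overdamped one-dimensional tip dragged through a sinusoidal surface potential by a spring and records the lateral (friction) force; all parameter values are illustrative and the zero-temperature overdamped Euler integration is a simplification of the models developed in the thesis.

        import numpy as np

        a     = 0.25e-9        # lattice period (m)
        U0    = 0.5e-19        # corrugation amplitude (J)
        k     = 1.5            # effective lateral spring constant (N/m)
        gamma = 2.0e-6         # damping coefficient (kg/s)
        v     = 1.0e-6         # support velocity (m/s)
        dt    = 2.0e-8         # time step (s)

        def force_on_tip(x, support):
            # -dU/dx for U(x) = -U0*cos(2*pi*x/a) + 0.5*k*(x - support)^2
            return -(2 * np.pi * U0 / a) * np.sin(2 * np.pi * x / a) - k * (x - support)

        x, lateral = 0.0, []
        for n in range(250_000):                         # about 20 lattice periods of sliding
            support = v * n * dt
            x += dt * force_on_tip(x, support) / gamma   # overdamped (T = 0) Euler step
            lateral.append(k * (support - x))            # force sensed by the spring

        print("mean friction force: %.2f nN" % (np.mean(lateral) * 1e9))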

  10. Modeling noncontact atomic force microscopy resolution on corrugated surfaces.

    PubMed

    Burson, Kristen M; Yamamoto, Mahito; Cullen, William G

    2012-01-01

    Key developments in NC-AFM have generally involved atomically flat crystalline surfaces. However, many surfaces of technological interest are not atomically flat. We discuss the experimental difficulties in obtaining high-resolution images of rough surfaces, with amorphous SiO(2) as a specific case. We develop a quasi-1-D minimal model for noncontact atomic force microscopy, based on van der Waals interactions between a spherical tip and the surface, explicitly accounting for the corrugated substrate (modeled as a sinusoid). The model results show an attenuation of the topographic contours by ~30% for tip distances within 5 Å of the surface. Results also indicate a deviation from the Hamaker force law for a sphere interacting with a flat surface.

  11. Systemization and Use of Atomic Data for Astrophysical Modeling

    NASA Astrophysics Data System (ADS)

    Brickhouse, Nancy S.

    The growth of supercomputing capabilities over the past decade has led to tremendous opportunities to produce theoretical atomic data for use in modeling astrophysical plasmas; however, not all data are created equally. Both accurate and complete atomic data are important to astrophysical modeling via plasma codes. Critical theoretical evaluations, experimental benchmarks, and even astrophysical observations themselves are all useful in assessing the temperatures, densities, opacities, abundances, and other physical quantities determined from spectroscopy. The database challenge for the future is to move toward increasing both flexibility and standardization of atomic data formats so that the quality of the data can be taken into account in modeling. Problems in X-ray spectroscopy will illustrate the importance of opening up what has often been a black box to many astronomers.

  12. The atomic approach for the Coqblin-Schrieffer model

    NASA Astrophysics Data System (ADS)

    Figueira, M. S.; Saguia, A.; Foglio, M. E.; Silva-Valencia, J.; Franco, R.

    2014-12-01

    In this work we consider the Coqblin-Schrieffer model when the spin is S = 1/2. The atomic solution has eight states: four conduction and two localized states, and we can then calculate the eigenenergies and eigenstates analytically. From this solution, employing the cumulant Green's function results of the Anderson model, we build a "seed" that works as the input of the atomic approach, developed earlier by some of us. We obtain the T-matrix as well as the conduction Green's function of the model, both for the impurity and the lattice cases. The generalization for other moments within N states follows the same steps. We present results both for the impurity as well as for the lattice case and we indicate possible applications of the method to study ultracold atoms confined in optical superlattices and Kondo insulators. In this last case, our results support an insulator-metal transition as a function of temperature.

  13. Person-Fit Statistics for Joint Models for Accuracy and Speed

    ERIC Educational Resources Information Center

    Fox, Jean-Paul; Marianti, Sukaesi

    2017-01-01

    Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person-fit tests…

  14. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  15. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  16. Performance of the Generalized S-X[Superscript 2] Item Fit Index for Polytomous IRT Models

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2008-01-01

    Orlando and Thissen's S-X[superscript 2] item fit index has performed better than traditional item fit statistics such as Yen's Q[subscript 1] and McKinley and Mills' G[superscript 2] for dichotomous item response theory (IRT) models. This study extends the utility of S-X[superscript 2] to polytomous IRT models, including the generalized partial…

  17. Performance of the Generalized S-X[Superscript 2] Item Fit Index for Polytomous IRT Models

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2008-01-01

    Orlando and Thissen's S-X[superscript 2] item fit index has performed better than traditional item fit statistics such as Yen's Q[subscript 1] and McKinley and Mills' G[superscript 2] for dichotomous item response theory (IRT) models. This study extends the utility of S-X[superscript 2] to polytomous IRT models, including the generalized partial…

  18. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  19. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  20. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  1. Why Should We Assess the Goodness-of-Fit of IRT Models?

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    In this rejoinder, Maydeu-Olivares states that, in item response theory (IRT) measurement applications, the application of goodness-of-fit (GOF) methods informs researchers of the discrepancy between the model and the data being fitted (the room for improvement). By routinely reporting the GOF of IRT models, together with the substantive results…

  2. Why Should We Assess the Goodness-of-Fit of IRT Models?

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    In this rejoinder, Maydeu-Olivares states that, in item response theory (IRT) measurement applications, the application of goodness-of-fit (GOF) methods informs researchers of the discrepancy between the model and the data being fitted (the room for improvement). By routinely reporting the GOF of IRT models, together with the substantive results…

  3. A Model-Based Approach to Goodness-of-Fit Evaluation in Item Response Theory

    ERIC Educational Resources Information Center

    Oberski, Daniel L.; Vermunt, Jeroen K.

    2013-01-01

    These authors congratulate Albert Maydeu-Olivares on his lucid and timely overview of goodness-of-fit assessment in IRT models, a field to which he himself has contributed considerably in the form of limited information statistics. In this commentary, Oberski and Vermunt focus on two aspects of model fit: (1) what causes there may be of misfit;…

  4. A Cautionary Note on the Use of Information Fit Indexes in Covariance Structure Modeling with Means

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases in which models without mean restrictions (i.e.,…

  5. A Model-Based Approach to Goodness-of-Fit Evaluation in Item Response Theory

    ERIC Educational Resources Information Center

    Oberski, Daniel L.; Vermunt, Jeroen K.

    2013-01-01

    These authors congratulate Albert Maydeu-Olivares on his lucid and timely overview of goodness-of-fit assessment in IRT models, a field to which he himself has contributed considerably in the form of limited information statistics. In this commentary, Oberski and Vermunt focus on two aspects of model fit: (1) what causes there may be of misfit;…

  6. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    PubMed

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
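
    The same unequal-variance normal signal detection model can also be fit outside SPSS by direct maximum likelihood. The sketch below does so in Python for hypothetical 6-point confidence-rating counts; the counts, starting values, and the parameterisation of the criteria are illustrative, and this is not the PLUM ordinal-regression route described in the article.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Hypothetical rating counts, category 1 = "sure noise" ... 6 = "sure signal".
        noise_counts  = np.array([174, 172, 104,  92,  41,  17])
        signal_counts = np.array([ 46,  57,  66, 101, 154, 176])

        def category_probs(criteria, mu, sigma):
            cuts = np.concatenate(([-np.inf], criteria, [np.inf]))
            return np.diff(norm.cdf((cuts - mu) / sigma))

        def neg_log_lik(params):
            d, log_s = params[0], params[1]
            # Criteria encoded as a first cut plus positive increments so they stay ordered.
            criteria = params[2] + np.concatenate(([0.0], np.cumsum(np.exp(params[3:]))))
            p_noise  = np.clip(category_probs(criteria, 0.0, 1.0), 1e-12, 1.0)
            p_signal = np.clip(category_probs(criteria, d, np.exp(log_s)), 1e-12, 1.0)
            return -(noise_counts @ np.log(p_noise) + signal_counts @ np.log(p_signal))

        n_criteria = len(noise_counts) - 1
        x0 = np.concatenate(([1.0, 0.0, -1.0], np.zeros(n_criteria - 1)))
        fit = minimize(neg_log_lik, x0, method="Nelder-Mead", options={"maxiter": 20000})
        d_hat, s_hat = fit.x[0], np.exp(fit.x[1])
        print(f"d = {d_hat:.3f}, signal SD = {s_hat:.3f} (SD != 1 indicates unequal variance)")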

  7. Modeling of Turbulence Effects on Liquid Jet Atomization and Breakup

    NASA Technical Reports Server (NTRS)

    Trinh, Huu; Chen, C. P.

    2004-01-01

    Recent experimental investigations and physical modeling studies have indicated that turbulence behavior within a liquid jet has considerable effects on the atomization process. For certain flow regimes, it has been observed that the liquid jet surface is highly turbulent. This turbulence characteristic plays a key role in the breakup of the liquid jet near the injector exit. Other experiments have also shown that the breakup length of the liquid core is sharply shortened as the liquid jet changes from laminar to turbulent flow conditions. In the numerical and physical modeling arena, most commonly used atomization models do not include the turbulence effect. Limited attempts have been made to model the turbulence phenomena in liquid jet disintegration. These correlations and models treat turbulence either as the only source or as a primary driver of the breakup process. This study aims to model the turbulence effect in the atomization process of a cylindrical liquid jet. In the course of this study, two widely used models, Reitz's primary atomization (blob) model and the Taylor Analogy Breakup (TAB) secondary droplet breakup model of O'Rourke et al., are examined. Additional terms are derived and implemented appropriately into these two models to account for the turbulence effect on the atomization process. Since this enhancement effort is based on the framework of the two existing atomization models, it is appropriate to denote the two present models as T-blob and T-TAB for the primary and secondary atomization predictions, respectively. In the primary breakup model, the level of the turbulence effect on the liquid breakup depends on the characteristic time scales and the initial flow conditions. This treatment offers a balance of contributions of individual physical phenomena to the liquid breakup process. For the secondary breakup, an additional turbulence force acting on parent drops is modeled and integrated into the TAB governing equation. The drop size…

  8. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

    In the present work an effort has been made to simultaneously estimate plasma parameters (electron density, electron temperature, ground-state atom density, ground-state ion density and metastable-state density) from the observed visible spectra of a Penning plasma discharge (PPD) source using least-squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition in the analysis. With this condition the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable-state species to the CR-model code analysis, improves the electron temperature estimation in the simultaneous measurement.

  9. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10 h Mpc^-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  10. Atomic-scale modeling of cellulose nanocrystals

    NASA Astrophysics Data System (ADS)

    Wu, Xiawa

    Cellulose nanocrystals (CNCs), the most abundant nanomaterials in nature, are recognized as one of the most promising candidates to meet the growing demand for green, biodegradable and sustainable nanomaterials for future applications. CNCs draw significant interest due to their high axial elasticity and low density-elasticity ratio, both of which have been extensively researched over the years. In spite of the great potential of CNCs as functional nanoparticles for nanocomposite materials, a fundamental understanding of CNC properties and their role in composite property enhancement is not available. In this work, CNCs are studied using the molecular dynamics simulation method to predict their material behavior at the nanoscale. (a) Mechanical properties include tensile deformation in the elastic and plastic regions using molecular mechanics, molecular dynamics and nanoindentation methods. This allows comparisons between the methods and closer connectivity to experimental measurement techniques. The elastic moduli in the axial and transverse directions are obtained and the results are found to be in good agreement with previous research. The ultimate properties in plastic deformation are reported for the first time and failure mechanisms are analyzed in detail. (b) The thermal expansion of CNC crystals and films is studied. It is proposed that CNC film thermal expansion is due primarily to single-crystal expansion and CNC-CNC interfacial motion. The relative contributions of inter- and intra-crystal responses to heating are explored. (c) Friction at cellulose-CNC and diamond-CNC interfaces is studied. The effects of sliding velocity, normal load, and relative angle between sliding surfaces are predicted. The cellulose-CNC model is analyzed in terms of the hydrogen-bonding effect, and the diamond-CNC model complements some of the discussion of the previous model. In summary, CNC material properties and molecular models are both studied in this research, contributing to…

  11. Accurate model annotation of a near-atomic resolution cryo-EM map

    DOE PAGES

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.; ...

    2017-03-07

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  12. Accurate model annotation of a near-atomic resolution cryo-EM map.

    PubMed

    Hryc, Corey F; Chen, Dong-Hua; Afonine, Pavel V; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D; King, Jonathan A; Schmid, Michael F; Chiu, Wah

    2017-03-21

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  13. Accurate model annotation of a near-atomic resolution cryo-EM map

    PubMed Central

    Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D.; King, Jonathan A.; Schmid, Michael F.; Chiu, Wah

    2017-01-01

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages. PMID:28270620
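    The validation step described above, recomputing a map from the model and comparing it with the experimental one, can be illustrated with a minimal numpy sketch. The per-atom Gaussian widths derived from B-factors, the grid spacing, and the simple Pearson correlation used here are illustrative assumptions, not the authors' protocol or its metadata handling.

        import numpy as np

        def model_to_map(coords, b_factors, shape, voxel=1.0):
            """Render an approximate density map by summing isotropic Gaussians.
            coords: (N, 3) positions in angstroms; b_factors: (N,) B-factors,
            converted to widths via sigma = sqrt(B / (8*pi**2))."""
            grid = np.zeros(shape)
            axes = [np.arange(n) * voxel for n in shape]
            X, Y, Z = np.meshgrid(*axes, indexing="ij")
            for (x, y, z), b in zip(coords, b_factors):
                sigma = max(np.sqrt(b / (8.0 * np.pi ** 2)), 0.5)
                grid += np.exp(-((X - x) ** 2 + (Y - y) ** 2 + (Z - z) ** 2)
                               / (2.0 * sigma ** 2))
            return grid

        def map_correlation(map_a, map_b):
            """Pearson correlation between two maps of identical shape."""
            a = (map_a - map_a.mean()) / map_a.std()
            b = (map_b - map_b.mean()) / map_b.std()
            return float((a * b).mean())

        # Usage (hypothetical inputs): a correlation close to 1 indicates that the
        # model and its metadata reproduce the experimental map well.
        # cc = map_correlation(model_to_map(coords, bfac, exp_map.shape), exp_map)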

  14. Atomic-accuracy models from 4.5-Å cryo-electron microscopy data with density-guided iterative local refinement.

    PubMed

    DiMaio, Frank; Song, Yifan; Li, Xueming; Brunner, Matthias J; Xu, Chunfu; Conticello, Vincent; Egelman, Edward; Marlovits, Thomas; Cheng, Yifan; Baker, David

    2015-04-01

    We describe a general approach for refining protein structure models on the basis of cryo-electron microscopy maps with near-atomic resolution. The method integrates Monte Carlo sampling with local density-guided optimization, Rosetta all-atom refinement and real-space B-factor fitting. In tests on experimental maps of three different systems with 4.5-Å resolution or better, the method consistently produced models with atomic-level accuracy largely independently of starting-model quality, and it outperformed the molecular dynamics-based MDFF method. Cross-validated model quality statistics correlated with model accuracy over the three test systems.

  15. The Thorny Relation Between Measurement Quality and Fit Index Cutoffs in Latent Variable Models.

    PubMed

    McNeish, Daniel; An, Ji; Hancock, Gregory R

    2017-03-02

    Latent variable modeling is a popular and flexible statistical framework. Concomitant with fitting latent variable models is assessment of how well the theoretical model fits the observed data. Although firm cutoffs for these fit indexes are often cited, recent statistical proofs and simulations have shown that these fit indexes are highly susceptible to measurement quality. For instance, a root mean square error of approximation (RMSEA) value of 0.06 (conventionally thought to indicate good fit) can actually indicate poor fit with poor measurement quality (e.g., standardized factor loadings of around 0.40). Conversely, an RMSEA value of 0.20 (conventionally thought to indicate very poor fit) can indicate acceptable fit with very high measurement quality (standardized factor loadings around 0.90). Despite the wide-ranging effect on applications of latent variable models, the high level of technical detail involved with this phenomenon has curtailed the exposure of these important findings to empirical researchers who are employing these methods. This article briefly reviews these methodological studies in minimal technical detail and provides a demonstration to easily quantify the large influence measurement quality has on fit index values and how greatly the cutoffs would change if they were derived under an alternative level of measurement quality. Recommendations for best practice are also discussed.
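    As a concrete reminder of the arithmetic behind one of these indexes, the sketch below computes the usual sample RMSEA point estimate from a model chi-square, degrees of freedom, and sample size. The chi-square values are hypothetical; the article's point is that neither resulting RMSEA can be read against a fixed cutoff without also knowing the measurement quality (standardized loadings).

        import math

        def rmsea(chi_square, df, n):
            """Sample point estimate of RMSEA from a model chi-square test."""
            return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

        # Two hypothetical models with the same df and sample size but different misfit.
        print(rmsea(chi_square=95.0, df=50, n=300))   # ~0.055, "good" by the usual cutoff
        print(rmsea(chi_square=350.0, df=50, n=300))  # ~0.142, "poor" by the usual cutoff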

  16. A model to predict image formation in Atom probe Tomography.

    PubMed

    Vurpillot, F; Gaillard, A; Da Costa, G; Deconihout, B

    2013-09-01

    A model of the field evaporation of a tip is presented in this paper. The influence of length scales from the atomic scale to the macroscopic scale is taken into account in this approach. The evolution of the tip shape is modelled at the atomic scale in a three-dimensional geometry with cylindrical symmetry. The projection law of ions is determined using a realistic representation of the tip geometry, including the presence of electrodes in the surrounding area of the specimen. This realistic modelling gives direct access to the voltage required for field evaporation, to the evolving magnification in the microscope and to the understanding of reconstruction artefacts when phases with different evaporation fields and/or different dielectric permittivity constants are modelled. This model has been applied to understand the field evaporation behaviour in bulk dielectric materials. In particular, the role of the residual conductivity of dielectric materials is addressed.

  17. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
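    The Pareto-dominance rule described here is simple to state in code: an input set stays on the frontier unless some other set fits every calibration target at least as well and at least one target strictly better. The sketch below assumes lower goodness-of-fit values are better, and the example array is illustrative rather than taken from either model in the article.

        import numpy as np

        def pareto_frontier(gof):
            """Indices of input sets on the Pareto frontier.
            gof: (n_sets, n_targets) array where lower is better for every target."""
            n = gof.shape[0]
            on_front = np.ones(n, dtype=bool)
            for i in range(n):
                # Set j dominates set i if it is at least as good on all targets
                # and strictly better on at least one.
                dominated = np.any(np.all(gof <= gof[i], axis=1) &
                                   np.any(gof < gof[i], axis=1))
                if dominated:
                    on_front[i] = False
            return np.where(on_front)[0]

        # Five candidate input sets scored against three targets (arbitrary numbers).
        scores = np.array([[1.0, 2.0, 3.0],
                           [0.9, 2.5, 3.1],
                           [1.1, 2.1, 3.2],   # dominated by the first row
                           [2.0, 1.0, 2.0],
                           [0.8, 3.0, 2.9]])
        print(pareto_frontier(scores))  # -> [0 1 3 4]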

  18. Model based control of dynamic atomic force microscope

    SciTech Connect

    Lee, Chibum; Salapaka, Srinivasa M.

    2015-04-15

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and verified by experiments.

  19. Model based control of dynamic atomic force microscope

    NASA Astrophysics Data System (ADS)

    Lee, Chibum; Salapaka, Srinivasa M.

    2015-04-01

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and verified by experiments.

  20. Model based control of dynamic atomic force microscope.

    PubMed

    Lee, Chibum; Salapaka, Srinivasa M

    2015-04-01

    A model-based robust control approach is proposed that significantly improves imaging bandwidth for the dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H(∞) control theory. This design yields a significant improvement when compared to the conventional proportional-integral designs and verified by experiments.

  1. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    NASA Astrophysics Data System (ADS)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  2. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    PubMed Central

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123
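    To make the parameter-oriented criterion concrete, the sketch below fits one common empirical temperature-response form that is parameterised directly by the quantities of interest, the maximum rate Pmax and the thermal optimum Topt. The Gaussian form, the data values and the starting guesses are illustrative assumptions; this is not necessarily one of the twelve published models evaluated in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian_response(T, p_max, t_opt, width):
            """Photosynthetic rate as a Gaussian function of temperature; the
            parameters map directly onto Pmax, Topt and a thermal-tolerance width."""
            return p_max * np.exp(-((T - t_opt) / width) ** 2)

        # Synthetic observations (temperature in deg C, rate in arbitrary units).
        T_obs = np.array([15., 20., 25., 28., 31., 34., 37., 40.])
        P_obs = np.array([2.1, 3.4, 4.6, 5.0, 4.9, 4.2, 2.8, 1.2])

        popt, pcov = curve_fit(gaussian_response, T_obs, P_obs, p0=[5.0, 30.0, 8.0])
        perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on Pmax, Topt, width
        print(dict(zip(["Pmax", "Topt", "width"], popt.round(2))))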

  3. A Fitness Index model for Italian adolescents living in Southern Italy: the ASSO project.

    PubMed

    Bianco, Antonino; Mammina, Caterina; Jemni, Monèm; Filippi, Anna R; Patti, Antonino; Thomas, Ewan; Paoli, Antonio; Palma, Antonio; Tabacchi, Garden

    2016-11-01

    Strong relations between physical fitness and health in adolescents have been established in the last decades. The main objectives of the present investigation were to assess major physical fitness components in a sample of Italian school adolescents, comparing them with international data, and providing a Fitness Index model derived from percentile cut-off values of five considered physical fitness components. A total of 644 school pupils (15.9±1.1 years; M: N.=399; F: N.=245) were tested using the ASSO-Fitness Test Battery (FTB), a tool developed within the Adolescents and Surveillance System for the Obesity prevention project, which included the handgrip, standing broad-jump, sit-up to exhaustion, 4×10-m shuttle run and 20-m shuttle run tests. Stratified percentile values and related smoothed curves were obtained. The method of principal components analysis (PCA) was applied to the five considered fitness components to derive a continuous fitness level score (the Fit-Score). A Likert-type scale on the Fit-Score values was applied to obtain an intuitive classification of the individual level of fitness, from very poor to very good. Boys showed higher fitness levels compared to girls. They also showed an incremental trend amongst fitness levels with age in all physical components. These results overlap with those related to European adolescents. Data revealed high correlations (r>0.5) between the Fit-Score and all the fitness components. The median Fit-Score was equal to 33 for females and 53 for males (on a scale from 0 to 100). The ASSO-FTB allowed the assessment of health-related fitness components in a convenience sample of Italian adolescents and provided a Fitness Index model incorporating all these components for an intuitive classification of fitness levels. If this model is confirmed, the monitoring of these variables will allow early detection of health-related issues.
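    The construction of a single composite score from several fitness tests via PCA can be sketched as follows: standardise the test results, orient any time-based tests so that larger values always mean fitter, take the first principal component, and rescale it to 0-100. The column layout and numbers below are hypothetical, not the ASSO data or its exact scoring rules.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Rows = pupils, columns = five tests (hypothetical values):
        # handgrip (kg), broad jump (cm), sit-ups (count), 4x10-m shuttle (s), 20-m shuttle (laps).
        scores = np.array([[32., 185., 25., 11.8, 45.],
                           [41., 210., 38., 10.9, 70.],
                           [28., 160., 18., 12.6, 30.],
                           [36., 195., 30., 11.2, 55.]])

        signed = scores.copy()
        signed[:, 3] *= -1.0          # timed shuttle run: lower is better, so flip its sign

        z = StandardScaler().fit_transform(signed)
        pc1 = PCA(n_components=1).fit_transform(z).ravel()
        if np.corrcoef(pc1, z.sum(axis=1))[0, 1] < 0:
            pc1 = -pc1                # PC sign is arbitrary; orient so higher = fitter

        fit_score = 100.0 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())
        print(fit_score.round(1))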

  4. Assessing Fit of Cognitive Diagnostic Models: A Case Study

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Almond, Russell G.

    2007-01-01

    A cognitive diagnostic model uses information from educational experts to describe the relationships between item performances and posited proficiencies. When the cognitive relationships can be described using a fully Bayesian model, Bayesian model checking procedures become available. Checking models tied to cognitive theory of the domains…

  6. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

    SciTech Connect

    Quan, Wei; Lv, Lin; Liu, Baiqi

    2014-11-15

    In order to improve the atom spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weak-stability characteristic of the random atomic spin gyroscope (ASG) drift, the hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) technique is established. The time series of random ASG drift is taken as the study object. The time series of random ASG drift is acquired by analyzing and preprocessing the measured data of ASG. The linear section model is established based on AR technique. After that, the nonlinear section model is built based on GP technique and GA is used to optimize the coefficients of the mathematic expression acquired by GP in order to obtain a more accurate model. The simulation result indicates that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Comparing with the AR technique and the GP + GA technique, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.

  7. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope.

    PubMed

    Quan, Wei; Lv, Lin; Liu, Baiqi

    2014-11-01

    In order to improve the atom spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weak-stability characteristic of the random atomic spin gyroscope (ASG) drift, the hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) technique is established. The time series of random ASG drift is taken as the study object. The time series of random ASG drift is acquired by analyzing and preprocessing the measured data of ASG. The linear section model is established based on AR technique. After that, the nonlinear section model is built based on GP technique and GA is used to optimize the coefficients of the mathematic expression acquired by GP in order to obtain a more accurate model. The simulation result indicates that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Comparing with the AR technique and the GP + GA technique, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.
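    The linear (AR) stage of such a hybrid drift model is straightforward to reproduce; the sketch below fits an AR(p) model to a drift time series by ordinary least squares and returns the residual that a nonlinear stage (here, the GP + GA step) would then model. The simulated drift signal and the chosen order are made-up placeholders, not ASG data.

        import numpy as np

        def fit_ar(series, order):
            """Least-squares fit of x[t] = c + a_1*x[t-1] + ... + a_p*x[t-p] + e[t]."""
            p = order
            X = np.column_stack([series[p - i - 1:len(series) - i - 1] for i in range(p)])
            X = np.column_stack([np.ones(len(X)), X])
            coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
            return coef                      # [c, a_1, ..., a_p]

        def ar_residuals(series, coef):
            """Drift left over after removing the AR prediction (input to the nonlinear stage)."""
            p = len(coef) - 1
            pred = [coef[0] + sum(coef[i + 1] * series[t - i - 1] for i in range(p))
                    for t in range(p, len(series))]
            return series[p:] - np.array(pred)

        rng = np.random.default_rng(0)
        drift = np.cumsum(0.01 * rng.standard_normal(500))   # slowly wandering pseudo-drift
        coef = fit_ar(drift, order=2)
        print(coef, ar_residuals(drift, coef).std())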

  8. Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2

    SciTech Connect

    MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.

    1999-11-01

    This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model--exponential or Weibull--is fit.
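    A rough sketch of the same workflow, with a user-chosen threshold, a shifted Weibull fitted to the exceedances, and an extrapolation to the annual maximum, is given below. The synthetic peak data, the quantile used as the threshold, and the independence assumption behind raising the fitted CDF to the power N are illustrative; this is not the FITS code itself.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        peaks = rng.gumbel(loc=10.0, scale=2.0, size=2000)   # hypothetical response peaks
        T_hours = 1000.0                                      # duration of the record

        # Shifted Weibull fit to the upper tail (location pinned at the threshold).
        x_low = np.quantile(peaks, 0.90)
        tail = peaks[peaks > x_low]
        shape, loc, scale = stats.weibull_min.fit(tail, floc=x_low)

        # Annual maximum: if N tail peaks are expected per year and are treated as
        # independent draws from the fitted model, the annual max has CDF F(x)**N.
        N_per_year = len(tail) * (8766.0 / T_hours)
        x = np.linspace(x_low, tail.max() * 1.5, 400)
        annual_max_cdf = stats.weibull_min.cdf(x, shape, loc, scale) ** N_per_year
        print(x[np.searchsorted(annual_max_cdf, 0.5)])        # rough median annual maximum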

  9. Model Fitting Versus Curve Fitting: A Model of Renormalization Provides a Better Account of Age Aftereffects Than a Model of Local Repulsion

    PubMed Central

    Mac, Amy; Rhodes, Gillian; Webster, Michael A.

    2015-01-01

    Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged by arguing that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, then the two analyses both agree in pointing to a pattern of renormalization in age aftereffects. PMID:27551353

  10. Model Fitting Versus Curve Fitting: A Model of Renormalization Provides a Better Account of Age Aftereffects Than a Model of Local Repulsion.

    PubMed

    O'Neil, Sean F; Mac, Amy; Rhodes, Gillian; Webster, Michael A

    2015-12-01

    Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged by arguing that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, then the two analyses both agree in pointing to a pattern of renormalization in age aftereffects.

  11. A comparison of united atom, explicit atom, and coarse-grained simulation models for poly(ethylene oxide).

    PubMed

    Chen, Chunxia; Depa, Praveen; Sakai, Victoria García; Maranas, Janna K; Lynn, Jeffrey W; Peral, Inmaculada; Copley, John R D

    2006-06-21

    We compare static and dynamic properties obtained from three levels of modeling for molecular dynamics simulation of poly(ethylene oxide) (PEO). Neutron scattering data are used as a test of each model's accuracy. The three simulation models are an explicit atom (EA) model (all the hydrogens are taken into account explicitly), a united atom (UA) model (CH(2) and CH(3) groups are considered as a single unit), and a coarse-grained (CG) model (six united atoms are taken as one bead). All three models accurately describe the PEO static structure factor as measured by neutron diffraction. Dynamics are assessed by comparison to neutron time of flight data, which follow self-motion of protons. Hydrogen atom motion from the EA model and carbon/oxygen atom motion from the UA model closely follow the experimental hydrogen motion, while hydrogen atoms reinserted in the UA model are too fast. The EA and UA models provide a good description of the orientation properties of C-H vectors measured by nuclear magnetic resonance experiments. Although dynamic observables in the CG model are in excellent agreement with their united atom counterparts, they cannot be compared to neutron data because the time after which the CG model is valid is greater than the neutron decay times.

  12. A Nonlinear Model for Fuel Atomization in Spray Combustion

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey (Technical Monitor); Ibrahim, Essam A.; Sree, Dave

    2003-01-01

    Most gas turbine combustion codes rely on ad-hoc statistical assumptions regarding the outcome of fuel atomization processes. The modeling effort proposed in this project is aimed at developing a realistic model to produce accurate predictions of fuel atomization parameters. The model involves application of the nonlinear stability theory to analyze the instability and subsequent disintegration of the liquid fuel sheet that is produced by fuel injection nozzles in gas turbine combustors. The fuel sheet is atomized into a multiplicity of small drops of large surface area to volume ratio to enhance the evaporation rate and combustion performance. The proposed model will effect predictions of fuel sheet atomization parameters such as drop size, velocity, and orientation as well as sheet penetration depth, breakup time and thickness. These parameters are essential for combustion simulation codes to perform a controlled and optimized design of gas turbine fuel injectors. Optimizing fuel injection processes is crucial to improving combustion efficiency and hence reducing fuel consumption and pollutants emissions.

  14. NMR shielding tensors for density fitted local second-order Møller-Plesset perturbation theory using gauge including atomic orbitals

    NASA Astrophysics Data System (ADS)

    Loibl, Stefan; Schütz, Martin

    2012-08-01

    An efficient method for the calculation of nuclear magnetic resonance (NMR) shielding tensors is presented, which treats electron correlation at the level of second-order Møller-Plesset perturbation theory. It uses spatially localized functions to span occupied and virtual molecular orbital spaces, respectively, which are expanded in a basis of gauge including atomic orbitals (GIAOs or London atomic orbitals). Doubly excited determinants are restricted to local subsets of the virtual space and pair energies with an interorbital distance beyond a certain threshold are omitted. Furthermore, density fitting is employed to factorize the electron repulsion integrals. Ordinary Gaussians are employed as fitting functions. It is shown that the errors in the resulting NMR shielding constant, introduced (i) by the local approximation and (ii) by density fitting, are very small or even negligible. The capabilities of the new program are demonstrated by calculations on some extended molecular systems, such as the cyclobutane pyrimidine dimer photolesion with adjacent nucleobases in the native intrahelical DNA double strand (ATTA sequence). Systems of that size were not accessible to correlated ab initio calculations of NMR spectra before. The presented method thus opens the door to new and interesting applications in this area.

  15. NMR shielding tensors for density fitted local second-order Møller-Plesset perturbation theory using gauge including atomic orbitals.

    PubMed

    Loibl, Stefan; Schütz, Martin

    2012-08-28

    An efficient method for the calculation of nuclear magnetic resonance (NMR) shielding tensors is presented, which treats electron correlation at the level of second-order Møller-Plesset perturbation theory. It uses spatially localized functions to span occupied and virtual molecular orbital spaces, respectively, which are expanded in a basis of gauge including atomic orbitals (GIAOs or London atomic orbitals). Doubly excited determinants are restricted to local subsets of the virtual space and pair energies with an interorbital distance beyond a certain threshold are omitted. Furthermore, density fitting is employed to factorize the electron repulsion integrals. Ordinary Gaussians are employed as fitting functions. It is shown that the errors in the resulting NMR shielding constant, introduced (i) by the local approximation and (ii) by density fitting, are very small or even negligible. The capabilities of the new program are demonstrated by calculations on some extended molecular systems, such as the cyclobutane pyrimidine dimer photolesion with adjacent nucleobases in the native intrahelical DNA double strand (ATTA sequence). Systems of that size were not accessible to correlated ab initio calculations of NMR spectra before. The presented method thus opens the door to new and interesting applications in this area.

  16. Generalized separable parameter space techniques for fitting 1K-5K serial compartment models

    PubMed Central

    Kadrmas, Dan J.; Oktay, M. Bugrahan

    2013-01-01

    Purpose: Kinetic modeling is widely used to analyze dynamic imaging data, estimating kinetic parameters that quantify functional or physiologic processes in vivo. Typical kinetic models give rise to nonlinear solution equations in multiple dimensions, presenting a complex fitting environment. This work generalizes previously described separable nonlinear least-squares techniques for fitting serial compartment models with up to three tissue compartments and five rate parameters. Methods: The approach maximally separates the linear and nonlinear aspects of the modeling equations, using a formulation modified from previous basis function methods to avoid a potential mathematical degeneracy. A fast and robust algorithm for solving the linear subproblem with full user-defined constraints is also presented. The generalized separable parameter space technique effectively reduces the dimensionality of the nonlinear fitting problem to one dimension for 2K-3K compartment models, and to two dimensions for 4K-5K models. Results: Exhaustive search fits, which guarantee identification of the true global minimum fit, required approximately 10 ms for 2K-3K and 1.1 s for 4K-5K models, respectively. The technique is also amenable to fast gradient-descent iterative fitting algorithms, where the reduced dimensionality offers improved convergence properties. The objective function for the separable parameter space nonlinear subproblem was characterized and found to be generally well-behaved with a well-defined global minimum. Separable parameter space fits with the Levenberg-Marquardt algorithm required fewer iterations than comparable fits for conventional model formulations, averaging 1 and 7 ms for 2K-3K and 4K-5K models, respectively. Sensitivity to initial conditions was likewise reduced. Conclusions: The separable parameter space techniques described herein generalize previously described techniques to encompass 1K-5K compartment models, enable robust solution of the linear

  17. Generalized separable parameter space techniques for fitting 1K-5K serial compartment models.

    PubMed

    Kadrmas, Dan J; Oktay, M Bugrahan

    2013-07-01

    Kinetic modeling is widely used to analyze dynamic imaging data, estimating kinetic parameters that quantify functional or physiologic processes in vivo. Typical kinetic models give rise to nonlinear solution equations in multiple dimensions, presenting a complex fitting environment. This work generalizes previously described separable nonlinear least-squares techniques for fitting serial compartment models with up to three tissue compartments and five rate parameters. The approach maximally separates the linear and nonlinear aspects of the modeling equations, using a formulation modified from previous basis function methods to avoid a potential mathematical degeneracy. A fast and robust algorithm for solving the linear subproblem with full user-defined constraints is also presented. The generalized separable parameter space technique effectively reduces the dimensionality of the nonlinear fitting problem to one dimension for 2K-3K compartment models, and to two dimensions for 4K-5K models. Exhaustive search fits, which guarantee identification of the true global minimum fit, required approximately 10 ms for 2K-3K and 1.1 s for 4K-5K models, respectively. The technique is also amenable to fast gradient-descent iterative fitting algorithms, where the reduced dimensionality offers improved convergence properties. The objective function for the separable parameter space nonlinear subproblem was characterized and found to be generally well-behaved with a well-defined global minimum. Separable parameter space fits with the Levenberg-Marquardt algorithm required fewer iterations than comparable fits for conventional model formulations, averaging 1 and 7 ms for 2K-3K and 4K-5K models, respectively. Sensitivity to initial conditions was likewise reduced. The separable parameter space techniques described herein generalize previously described techniques to encompass 1K-5K compartment models, enable robust solution of the linear subproblem with full user-defined constraints
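    The separation of linear and nonlinear parameters can be illustrated on a toy two-exponential curve rather than the actual compartment-model equations: for every candidate pair of nonlinear rate constants, the linear coefficients are obtained exactly from a constrained linear solve, so the exhaustive search runs over a grid of only two dimensions. The model form, grid, and noise level below are assumptions for illustration.

        import numpy as np
        from itertools import product
        from scipy.optimize import nnls

        # Hypothetical time-activity curve: y(t) = c1*exp(-k1*t) + c2*exp(-k2*t) + noise.
        t = np.linspace(0.0, 60.0, 40)
        rng = np.random.default_rng(2)
        y = 3.0 * np.exp(-0.30 * t) + 1.0 * np.exp(-0.02 * t) + 0.05 * rng.standard_normal(t.size)

        def solve_linear(k_pair):
            """For fixed nonlinear rates, the coefficients enter linearly; solve the
            nonnegativity-constrained linear subproblem exactly."""
            A = np.column_stack([np.exp(-k * t) for k in k_pair])
            coeffs, resid = nnls(A, y)
            return coeffs, resid

        # Exhaustive search over the (reduced, two-dimensional) nonlinear space.
        k_grid = np.geomspace(0.005, 1.0, 60)
        best = min((solve_linear((k1, k2)) + ((k1, k2),)
                    for k1, k2 in product(k_grid, repeat=2) if k1 < k2),
                   key=lambda r: r[1])
        coeffs, resid, rates = best
        print(rates, coeffs)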

  18. Local correlation energies of atoms, ions and model systems

    NASA Astrophysics Data System (ADS)

    Umrigar, Cyrus; Huang, Chien-Jung

    1997-03-01

    We present nearly local definitions of the correlation energy density, and its potential and kinetic components, and evaluate them for several atoms, ions and model systems. This information provides valuable guidance in constructing better correlation functionals than those in common use, such as the local density approximation (LDA) and the various generalized gradient approximations (GGAs). The true local correlation energy per electron has oscillations, reflecting the shell-structure, whereas the LDA approximation to it is monotonic. In addition we demonstrate that, for two-electron systems, the quantum chemistry and the density functional definitions of the correlation energy approach each other with increasing atomic number as 1/Z^3.

  19. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    PubMed

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Exact Person Fit Indexes for the Rasch Model for Arbitrary Alternatives.

    ERIC Educational Resources Information Center

    Ponocny, Ivo

    2000-01-01

    Introduces a new algorithm for obtaining exact person fit indexes for the Rasch model. The algorithm realizes most tests for a general family of alternative hypotheses, including tests concerning differential item functioning. The method is also used as a goodness-of-fit test in some circumstances. Simulated examples and an empirical investigation…

  1. Goodness of Fit Confirmatory Factor Analysis: The Effects of Sample Size and Model Parsimony.

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Balla, John

    The influence of sample size (N) and model parsimony on a set of 22 goodness of fit indices was investigated, including those typically used in confirmatory factor analysis and some recently developed indices. For sample data simulated from 2 known population data structures, values for 6 of 22 fit indices were reasonably independent of N and were…

  2. Exact Person Fit Indexes for the Rasch Model for Arbitrary Alternatives.

    ERIC Educational Resources Information Center

    Ponocny, Ivo

    2000-01-01

    Introduces a new algorithm for obtaining exact person fit indexes for the Rasch model. The algorithm realizes most tests for a general family of alternative hypotheses, including tests concerning differential item functioning. The method is also used as a goodness-of-fit test in some circumstances. Simulated examples and an empirical investigation…

  3. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

    The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
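    The comparison at the heart of this study can be mimicked with a small simulation: generate responses from a true 2PL item, estimate a nonparametric ICC by Gaussian-kernel regression of the 0/1 responses on ability, and look at the discrepancy from the parametric curve. The kernel estimator, the bandwidth, and the use of true abilities as stand-ins for estimates are simplifying assumptions, not the specific procedures evaluated in the article.

        import numpy as np

        def icc_2pl(theta, a, b):
            """Two-parameter logistic item characteristic curve."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def icc_kernel(theta_grid, theta_hat, responses, bandwidth=0.4):
            """Nonparametric ICC: kernel-weighted average of 0/1 responses vs ability."""
            diffs = (theta_grid[:, None] - theta_hat[None, :]) / bandwidth
            w = np.exp(-0.5 * diffs ** 2)
            return (w * responses).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(3)
        theta = rng.standard_normal(2000)                    # abilities (stand-in for estimates)
        resp = (rng.random(2000) < icc_2pl(theta, a=1.2, b=0.3)).astype(float)

        grid = np.linspace(-3, 3, 61)
        misfit = np.max(np.abs(icc_kernel(grid, theta, resp) - icc_2pl(grid, 1.2, 0.3)))
        print(misfit)    # small for an item that truly follows the 2PL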

  4. The Relation among Fit Indexes, Power, and Sample Size in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Kim, Kevin H.

    2005-01-01

    The relation among fit indexes, power, and sample size in structural equation modeling is examined. The noncentrality parameter is required to compute power. The 2 existing methods of computing power have estimated the noncentrality parameter by specifying an alternative hypothesis or alternative fit. These methods cannot be implemented easily and…

  5. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  6. Modeling Protein Structure at Near Atomic Resolutions With Gorgon

    PubMed Central

    Baker, Matthew L.; Abeysinghe, Sasakthi S.; Schuh, Stephen; Coleman, Ross A.; Abrams, Austin; Marsh, Michael P.; Hryc, Corey F.; Ruths, Troy; Chiu, Wah; Ju, Tao

    2011-01-01

    Electron cryo-microscopy (cryo-EM) has played an increasingly important role in elucidating the structure and function of macromolecular assemblies in near native solution conditions. Typically, however, only non-atomic resolution reconstructions have been obtained for these large complexes, necessitating computational tools for integrating and extracting structural details. With recent advances in cryo-EM, maps at near-atomic resolutions have been achieved for several macromolecular assemblies from which models have been manually constructed. In this work, we describe a new interactive modeling toolkit called Gorgon targeted at intermediate to near-atomic resolution density maps (10-3.5 Å), particularly from cryo-EM. Gorgon's de novo modeling procedure couples sequence-based secondary structure prediction with feature detection and geometric modeling techniques to generate initial protein backbone models. Beyond model building, Gorgon is an extensible interactive visualization platform with a variety of computational tools for annotating a wide variety of 3D volumes. Examples from cryo-EM maps of Rotavirus and Rice Dwarf Virus are used to demonstrate its applicability to modeling protein structure. PMID:21296162

  7. Application of the model of delocalized atoms to metallic glasses

    NASA Astrophysics Data System (ADS)

    Sanditov, D. S.; Darmaev, M. V.; Sanditov, B. D.

    2017-01-01

    The parameters of the model of delocalized atoms applied to metallic glasses have been calculated using the data on empirical constants of the Vogel-Fulcher-Tammann equation (for the temperature dependence of viscosity). It has been shown that these materials obey the same glass-formation criterion as amorphous organic polymers and inorganic glasses. This fact qualitatively confirms the universality of the main regularities of the liquid-glass transition process for all amorphous materials regardless of their origin. The energy of the delocalization of an atom in metallic glasses, Δε_e ≈ 20-25 kJ/mol, coincides with the results obtained for oxide inorganic glasses. It is substantially lower than the activation energies for a viscous flow and for ion diffusion. The delocalization of an atom (its displacement from the equilibrium position) for amorphous metallic alloys is a low-energy small-scale process similar to that for other glass-like systems.
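    The Vogel-Fulcher-Tammann (VFT) equation referred to here relates viscosity to temperature through three empirical constants, log10 η(T) = A + B/(T - T0), and those constants are the inputs from which the delocalized-atom parameters are derived. The sketch below extracts A, B and T0 from synthetic viscosity data; the numbers and starting guesses are placeholders, not values for any particular metallic glass.

        import numpy as np
        from scipy.optimize import curve_fit

        def vft_log_viscosity(T, A, B, T0):
            """Vogel-Fulcher-Tammann equation: log10(eta) = A + B / (T - T0)."""
            return A + B / (T - T0)

        # Hypothetical viscosity data for a glass-forming melt (T in K, eta in Pa*s).
        T = np.array([650., 700., 750., 800., 900., 1000., 1100.])
        log_eta = np.array([11.8, 9.5, 7.8, 6.5, 4.7, 3.5, 2.6])

        (A, B, T0), _ = curve_fit(vft_log_viscosity, T, log_eta, p0=[-3.0, 4000.0, 350.0])
        print(A, B, T0)   # empirical VFT constants used in the delocalized-atom analysis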

  8. Atomic detection in microwave cavity experiments: A dynamical model

    SciTech Connect

    Rossi, R. Jr.; Nemes, M. C.; Peixoto de Faria, J. G.

    2007-06-15

    We construct a model for the atomic detection in the context of cavity quantum electrodynamics (QED) used to study coherence properties of superpositions of states of an electromagnetic mode. Analytic expressions for the atomic ionization are obtained, considering the imperfections of the measurement process due to the probabilistic nature of the interactions between the ionization field and the atoms. We provide a dynamical content for the available expressions for the counting rates, considering the limited efficiency of detectors. Moreover, we include false countings. The influence of these imperfections on the information about the state of the cavity mode is obtained. In order to test the adequacy of our approach, we investigate a recent experiment reported by Maitre [X. Maitre et al., Phys. Rev. Lett. 79, 769 (1997)] and we obtain excellent agreement with the experimental results.

  9. Atomic Data and Modelling for Fusion: the ADAS Project

    NASA Astrophysics Data System (ADS)

    Summers, H. P.; O'Mullane, M. G.

    2011-05-01

    The paper is an update on the Atomic Data and Analysis Structure, ADAS, since ICAM-DATA06 and a forward look to its evolution in the next five years. ADAS is an international project supporting principally magnetic confinement fusion research. It has participant laboratories throughout the world, including ITER and all its partner countries. In parallel with ADAS, the ADAS-EU Project provides enhanced support for fusion research at Associated Laboratories and Universities in Europe and ITER. OPEN-ADAS, sponsored jointly by the ADAS Project and IAEA, is the mechanism for open access to principal ADAS atomic data classes and facilitating software for their use. EXTENDED-ADAS comprises a variety of special, integrated application software, beyond the purely atomic bounds of ADAS, tuned closely to specific diagnostic analyses and plasma models. The current scientific content and scope of these various ADAS and ADAS related activities are briefly reviewed. These span a number of themes including heavy element spectroscopy and models, charge exchange spectroscopy, beam emission spectroscopy and special features which provide a broad baseline of atomic modelling and support. Emphasis will be placed on `lifting the fundamental data baseline'—a principal ADAS task for the next few years. This will include discussion of ADAS and ADAS-EU coordinated and shared activities and some of the methods being exploited.

  10. Atomic Data and Modelling for Fusion: the ADAS Project

    SciTech Connect

    Summers, H. P.; O'Mullane, M. G.

    2011-05-11

    The paper is an update on the Atomic Data and Analysis Structure, ADAS, since ICAM-DATA06 and a forward look to its evolution in the next five years. ADAS is an international project supporting principally magnetic confinement fusion research. It has participant laboratories throughout the world, including ITER and all its partner countries. In parallel with ADAS, the ADAS-EU Project provides enhanced support for fusion research at Associated Laboratories and Universities in Europe and ITER. OPEN-ADAS, sponsored jointly by the ADAS Project and IAEA, is the mechanism for open access to principal ADAS atomic data classes and facilitating software for their use. EXTENDED-ADAS comprises a variety of special, integrated application software, beyond the purely atomic bounds of ADAS, tuned closely to specific diagnostic analyses and plasma models. The current scientific content and scope of these various ADAS and ADAS related activities are briefly reviewed. These span a number of themes including heavy element spectroscopy and models, charge exchange spectroscopy, beam emission spectroscopy and special features which provide a broad baseline of atomic modelling and support. Emphasis will be placed on 'lifting the fundamental data baseline'--a principal ADAS task for the next few years. This will include discussion of ADAS and ADAS-EU coordinated and shared activities and some of the methods being exploited.

  11. Individual Differences and Fitting Methods for the Two-Choice Diffusion Model of Decision Making

    PubMed Central

    Ratcliff, Roger; Childers, Russ

    2015-01-01

    Methods of fitting the diffusion model were examined with a focus on what the model can tell us about individual differences. Diffusion model parameters were obtained from the fits to data from two experiments and consistency of parameter values, individual differences, and practice effects were examined using different numbers of observations from each subject. Two issues were examined, first, what sizes of differences between groups can be obtained to distinguish between groups and second, what sizes of differences would be needed to find individual subjects that had a deficit relative to a control group. The parameter values from the experiments provided ranges that were used in a simulation study to examine recovery of individual differences. This study used several diffusion model fitting programs, fitting methods, and published packages. In a second simulation study, 64 sets of simulated data from each of 48 sets of parameter values (spanning the range of typical values obtained from fits to data) were fit with the different methods and biases and standard deviations in recovered model parameters were compared across methods. Finally, in a third simulation study, a comparison between a standard chi-square method and a hierarchical Bayesian method was performed. The results from these studies can be used as a starting point for selecting fitting methods and as a basis for understanding the strengths and weaknesses of using diffusion model analyses to examine individual differences in clinical, neuropsychological, and educational testing. PMID:26236754
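    All of the fitting methods compared in this kind of study share the same forward model, the two-choice diffusion process, which can be simulated directly. The bare-bones Euler simulator below (arbitrary parameter values, fixed noise scale) generates the response-time quantiles and choice proportions that a chi-square or Bayesian routine would then match against data; it is an illustration of the generative model, not any of the fitting packages examined.

        import numpy as np

        def simulate_ddm(n_trials, v, a, z, ter, dt=0.001, s=1.0, rng=None):
            """Euler simulation of the two-choice diffusion model.
            v: drift rate, a: boundary separation, z: starting point (0..a),
            ter: non-decision time (s), s: within-trial noise scale."""
            rng = rng or np.random.default_rng()
            rts = np.empty(n_trials)
            choices = np.empty(n_trials, dtype=int)
            for i in range(n_trials):
                x, t = z, 0.0
                while 0.0 < x < a:
                    x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts[i], choices[i] = t + ter, int(x >= a)
            return rts, choices

        rts, choices = simulate_ddm(2000, v=1.5, a=1.2, z=0.6, ter=0.3)
        print(choices.mean(), np.quantile(rts[choices == 1], [0.1, 0.5, 0.9]))
        # A chi-square fit searches over (v, a, z, ter) to match such quantiles to data.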

  12. Atomic scale modeling of boron transient diffusion in silicon

    SciTech Connect

    Caturla, M. J.; Diaz de la Rubia, T.; Foad, M.; Giles, M.; Johnson, M. D.; Law, M.; Lilak, A.

    1998-06-17

    We present results from a predictive atomic level simulation of boron diffusion in silicon under a wide variety of implant and annealing conditions. The parameters for this simulation have been extracted from first-principles approximation models and molecular dynamics simulations. The results are compared with experiments, showing good agreement in all cases. The parameters and reactions used have been implemented into a continuum-level model simulator.

  13. Theory and modelling of diamond fracture from an atomic perspective.

    PubMed

    Brenner, Donald W; Shenderova, Olga A

    2015-03-28

    Discussed in this paper are several theoretical and computational approaches that have been used to better understand the fracture of both single-crystal and polycrystalline diamond at the atomic level. The studies, which include first principles calculations, analytic models and molecular simulations, have been chosen to illustrate the different ways in which this problem has been approached, the conclusions and their reliability that have been reached by these methods, and how these theory and modelling methods can be effectively used together.

  14. An experimentally determined evolutionary model dramatically improves phylogenetic fit.

    PubMed

    Bloom, Jesse D

    2014-08-01

    All modern approaches to molecular phylogenetics require a quantitative model for how genes evolve. Unfortunately, existing evolutionary models do not realistically represent the site-heterogeneous selection that governs actual sequence change. Attempts to remedy this problem have involved augmenting these models with a burgeoning number of free parameters. Here, I demonstrate an alternative: Experimental determination of a parameter-free evolutionary model via mutagenesis, functional selection, and deep sequencing. Using this strategy, I create an evolutionary model for influenza nucleoprotein that describes the gene phylogeny far better than existing models with dozens or even hundreds of free parameters. Emerging high-throughput experimental strategies such as the one employed here provide fundamentally new information that has the potential to transform the sensitivity of phylogenetic and genetic analyses.

  15. Fringe Fitting

    NASA Astrophysics Data System (ADS)

    Cotton, W. D.

    Fringe Fitting Theory; Correlator Model Delay Errors; Fringe Fitting Techniques; Baseline; Baseline with Closure Constraints; Global; Solution Interval; Calibration Sources; Source Structure; Phase Referencing; Multi-band Data; Phase-Cals; Multi- vs. Single-band Delay; Sidebands; Filtering; Establishing a Common Reference Antenna; Smoothing and Interpolating Solutions; Bandwidth Synthesis; Weights; Polarization; Fringe Fitting Practice; Phase Slopes in Time and Frequency; Phase-Cals; Sidebands; Delay and Rate Fits; Signal-to-Noise Ratios; Delay and Rate Windows; Details of Global Fringe Fitting; Multi- and Single-band Delays; Phase-Cal Errors; Calibrator Sources; Solution Interval; Weights; Source Model; Suggested Procedure; Bandwidth Synthesis

  16. Nuclei-selected atomic-orbital response-theory formulation for the calculation of NMR shielding tensors using density-fitting

    NASA Astrophysics Data System (ADS)

    Kumar, Chandan; Kjærgaard, Thomas; Helgaker, Trygve; Fliegl, Heike

    2016-12-01

    An atomic orbital density matrix based response formulation of the nuclei-selected approach of Beer, Kussmann, and Ochsenfeld [J. Chem. Phys. 134, 074102 (2011)] to calculate nuclear magnetic resonance (NMR) shielding tensors has been developed and implemented into LSDalton allowing for a simultaneous solution of the response equations, which significantly improves the performance. The response formulation to calculate nuclei-selected NMR shielding tensors can be used together with the density-fitting approximation that allows efficient calculation of Coulomb integrals. It is shown that using density-fitting does not lead to a significant loss in accuracy for both the nuclei-selected and the conventional ways to calculate NMR shielding constants and should thus be used for applications with LSDalton.

  17. Nuclei-selected atomic-orbital response-theory formulation for the calculation of NMR shielding tensors using density-fitting.

    PubMed

    Kumar, Chandan; Kjærgaard, Thomas; Helgaker, Trygve; Fliegl, Heike

    2016-12-21

    An atomic orbital density matrix based response formulation of the nuclei-selected approach of Beer, Kussmann, and Ochsenfeld [J. Chem. Phys. 134, 074102 (2011)] to calculate nuclear magnetic resonance (NMR) shielding tensors has been developed and implemented into LSDalton allowing for a simultaneous solution of the response equations, which significantly improves the performance. The response formulation to calculate nuclei-selected NMR shielding tensors can be used together with the density-fitting approximation that allows efficient calculation of Coulomb integrals. It is shown that using density-fitting does not lead to a significant loss in accuracy for both the nuclei-selected and the conventional ways to calculate NMR shielding constants and should thus be used for applications with LSDalton.

  18. Low Resolution Refinement of Atomic Models Against Crystallographic Data.

    PubMed

    Nicholls, Robert A; Kovalevskiy, Oleg; Murshudov, Garib N

    2017-01-01

    This review describes some of the problems encountered during low-resolution refinement and map calculation. Refinement is considered as an application of Bayes' theorem, allowing combination of information from various sources including crystallographic experimental data and prior chemical and structural knowledge. The sources of prior knowledge relevant to macromolecules include basic chemical information such as bonds and angles, structural information from reference models of known homologs, knowledge about secondary structures, hydrogen bonding patterns, and similarity of non-crystallographically related copies of a molecule. Additionally, prior information encapsulating local conformational conservation is exploited, keeping local interatomic distances similar to those in the starting atomic model. The importance of designing an accurate likelihood function-the only link between model parameters and observed data-is emphasized. The review also reemphasizes the importance of phases, and describes how the use of raw observed amplitudes could give a better correlation between the calculated and "true" maps. It is shown that very noisy or absent observations can be replaced by calculated structure factors, weighted according to the accuracy of the atomic model. This approach helps to smoothen the map. However, such replacement should be used sparingly, as the bias toward errors in the model could be too much to avoid. It is in general recommended that, whenever a new map is calculated, map quality should be judged by inspection of the parts of the map where there is no atomic model. It is also noted that it is advisable to work with multiple blurred and sharpened maps, as different parts of a crystal may exhibit different degrees of mobility. Doing so can allow accurate building of atomic models, accounting for overall shape as well as finer structural details. Some of the results described in this review have been implemented in the programs REFMAC5, Pro

  19. Fitting and Testing Conditional Multinormal Partial Credit Models

    ERIC Educational Resources Information Center

    Hessen, David J.

    2012-01-01

    A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item…

  20. New Data for Modeling Hypersonic Entry into Earth's Atmosphere: Electron-impact Ionization of Atomic Nitrogen

    NASA Astrophysics Data System (ADS)

    Savin, Daniel Wolf; Ciccarino, Christopher

    2017-06-01

    Meteors passing through Earth’s atmosphere and space vehicles returning to Earth from beyond orbit enter the atmosphere at hypersonic velocities (greater than Mach 5). The resulting shock front generates a high temperature reactive plasma around the meteor or vehicle (with temperatures greater than 10,000 K). This intense heat is transferred to the entering object by radiative and convective processes. Modeling the processes a meteor undergoes as it passes through the atmosphere and designing vehicles to withstand these conditions requires an accurate understanding of the underlying non-equilibrium high temperature chemistry. Nitrogen chemistry is particularly important given the abundance of nitrogen in Earth's atmosphere. Line emission by atomic nitrogen is a major source of radiative heating during atmospheric entry. Our ability to accurately calculate this heating is hindered by uncertainties in the electron-impact ionization (EII) rate coefficient for atomic nitrogen. Here we present new EII calculations for atomic nitrogen. The atom is treated as a 69-level system, incorporating Rydberg values up to n=20. Level-specific cross sections are from published B-Spline R-Matrix-with-Pseudostates results for the first three levels and binary-encounter Bethe (BEB) calculations that we have carried out for the remaining 59 levels. These cross section data have been convolved into level-specific rate coefficients and fit with the commonly-used Arrhenius-Kooij formula for ease of use in hypersonic chemical models. The rate coefficient data can be readily scaled by the relevant atomic nitrogen partition function which varies in time and space around the meteor or reentry vehicle. Providing data up to n=20 also enables modelers to account for the density-dependent lowering of the continuum.
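    The Arrhenius-Kooij form mentioned above is the modified Arrhenius expression k(T) = A (T/300)^n exp(-E/T), with E expressed in kelvin. The sketch below fits its three constants to a level-specific rate coefficient in log space, where the large dynamic range is handled more gracefully; the temperature grid and rate values are placeholders, not the computed nitrogen data.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical level-specific EII rate coefficients (cm^3 s^-1) over the
        # temperature range relevant to hypersonic entry plasmas.
        T = np.array([8.0e3, 1.0e4, 1.5e4, 2.0e4, 3.0e4, 5.0e4])
        k = np.array([1.0e-10, 3.9e-10, 2.3e-9, 5.8e-9, 1.5e-8, 3.4e-8])

        def log_kooij(T, log_A, n, E):
            """log of k(T) = A * (T/300)**n * exp(-E/T), with E in kelvin."""
            return log_A + n * np.log(T / 300.0) - E / T

        (log_A, n, E), _ = curve_fit(log_kooij, T, np.log(k),
                                     p0=[np.log(1.0e-8), 0.5, 6.0e4])
        print(np.exp(log_A), n, E)   # Arrhenius-Kooij constants for this level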

  1. A constructive model potential method for atomic interactions

    NASA Technical Reports Server (NTRS)

    Bottcher, C.; Dalgarno, A.

    1974-01-01

    A model potential method is presented that can be applied to many electron single centre and two centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.

  3. GISAXS studies of model nanocatalysts synthesized by atomic cluster deposition.

    SciTech Connect

    Vajda, S.; Winans, R. E.; Ballentine, G. E.; Elam, J. W.; Lee, B.; Pellin, M. J.; Seifert, S.; Tikhonov, G. Y.; Tomczyk, N. A.

    2006-01-01

    Small nanoparticles possess unique, strongly size-dependent chemical and physical properties that make these particles ideal candidates for a number of applications, including catalysts and sensors, due to their significantly higher activity and selectivity than their more bulk-like analogs. In the smallest size regime, nanocluster catalytic activity changes by orders of magnitude with the addition or removal of a single atom, thus allowing the properties of these particles to be tuned atom by atom. Equally effective tuning knobs for these model catalysts are the composition and morphology of the support, which can dramatically change the electronic structure of these particles, leading to drastic changes in both activity and specificity. However, the Achilles heel of these particles remains their sintering at elevated temperatures or when exposed to mixtures of reactive gases. In this paper, the issues of thermal stability, isomerization, and growth of models of catalytically active sites, namely atomic gold and platinum clusters and nanoparticles produced by cluster deposition on technologically relevant oxide surfaces, are addressed by employing synchrotron X-ray radiation techniques.

  4. Modeling of Turbulence Effect on Liquid Jet Atomization

    NASA Technical Reports Server (NTRS)

    Trinh, H. P.

    2007-01-01

    Recent studies indicate that turbulence behaviors within a liquid jet have a considerable effect on the atomization process. Such turbulent flow phenomena are encountered in most practical applications of common liquid spray devices. This research aims to model the effects of turbulence occurring inside a cylindrical liquid jet on its atomization process. Two widely used atomization models, the Kelvin-Helmholtz (KH) instability model of Reitz and the Taylor analogy breakup (TAB) model of O'Rourke and Amsden, portraying primary liquid jet disintegration and secondary droplet breakup, respectively, are examined. Additional terms are formulated and appropriately implemented into these two models to account for the turbulence effect. Results for the flow conditions examined in this study indicate that the turbulence terms are significant in comparison with other terms in the models. In the primary breakup regime, the turbulent liquid jet tends to break up into large drops while its intact core is slightly shorter than those without turbulence. In contrast, the secondary droplet breakup with the inside liquid turbulence consideration produces smaller drops. Computational results indicate that the proposed models provide predictions that agree reasonably well with available measured data.
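
    For orientation, the baseline (turbulence-free) TAB model treats drop distortion as a forced, damped oscillator; the sketch below integrates that standard equation with illustrative property values and commonly quoted constants (the turbulence terms added in this record are not reproduced here):

```python
# Standard TAB oscillator (without the turbulence terms added in the study).
import numpy as np
from scipy.integrate import solve_ivp

rho_g, rho_l = 1.2, 1000.0       # gas / liquid density, kg/m^3 (illustrative)
u, r = 100.0, 50e-6              # relative velocity (m/s), drop radius (m)
sigma, mu_l = 0.072, 1.0e-3      # surface tension (N/m), liquid viscosity (Pa s)
CF, Cb, Ck, Cd = 1.0/3.0, 0.5, 8.0, 5.0   # commonly quoted TAB constants

def tab_rhs(t, state):
    y, ydot = state
    forcing = (CF / Cb) * rho_g * u**2 / (rho_l * r**2)
    restoring = Ck * sigma / (rho_l * r**3) * y
    damping = Cd * mu_l / (rho_l * r**2) * ydot
    return [ydot, forcing - restoring - damping]

sol = solve_ivp(tab_rhs, (0.0, 2.0e-4), [0.0, 0.0], max_step=1.0e-6)
print("breakup criterion y > 1 reached:", bool(np.any(sol.y[0] > 1.0)))
```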

  5. AtomDB and PyAtomDB: Atomic Data and Modelling Tools for High Energy and Non-Maxwellian Plasmas

    NASA Astrophysics Data System (ADS)

    Foster, Adam; Smith, Randall K.; Brickhouse, Nancy S.; Cui, Xiaohong

    2016-04-01

    The release of AtomDB 3 included a large wealth of inner shell ionization and excitation data, allowing accurate modeling of non-equilibrium plasmas. We describe the newly calculated data and compare them with published literature data. We apply the new models to existing supernova remnant data such as W49B and N132D. We further outline progress towards AtomDB 3.1, including new energy-dependent charge exchange cross sections. We present newly developed models for the spectra of electron-electron bremsstrahlung and those due to non-Maxwellian electron distributions. Finally, we present our new atomic database access tools, released as PyAtomDB, allowing powerful use of the underlying fundamental atomic data as well as the spectral emissivities.

  6. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2015-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows: given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  7. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2010-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows: given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  8. Revisiting the global electroweak fit of the Standard Model and beyond with Gfitter

    NASA Astrophysics Data System (ADS)

    Flächer, H.; Goebel, M.; Haller, J.; Hoecker, A.; Mönig, K.; Stelzer, J.

    2009-04-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing for flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plug-ins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model (SM), and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. In the SM fit including the direct Higgs searches, we find M_H = 116.4 (-1.3/+18.3) GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [113,168] and [180,225

  9. A stochastic carcinogenesis model incorporating multiple types of genomic instability fitted to colon cancer data.

    PubMed

    Little, Mark P; Vineis, Paolo; Li, Guangquan

    2008-09-21

    A generalization of the two-mutation stochastic carcinogenesis model of Moolgavkar, Venzon and Knudson and certain models constructed by Little [Little, M.P. (1995). Are two mutations sufficient to cause cancer? Some generalizations of the two-mutation model of carcinogenesis of Moolgavkar, Venzon, and Knudson, and of the multistage model of Armitage and Doll. Biometrics 51, 1278-1291] and Little and Wright [Little, M.P., Wright, E.G. (2003). A stochastic carcinogenesis model incorporating genomic instability fitted to colon cancer data. Math. Biosci. 183, 111-134] is developed; the model incorporates multiple types of progressive genomic instability and an arbitrary number of mutational stages. The model is fitted to US Caucasian colon cancer incidence data. On the basis of the comparison of fits to the population-based data, there is little evidence to support the hypothesis that the model with more than one type of genomic instability fits better than models with a single type of genomic instability. Given the good fit of the model to this large dataset, it is unlikely that further information on presence of genomic instability or of types of genomic instability can be extracted from age-incidence data by extensions of this model.

  10. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
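
    The matrix-algebra idea behind the macros can be sketched outside SPSS: the difference between two fitted values is c'b with c the difference of the two predictor vectors, and its standard error is sqrt(c' Cov(b) c). A minimal Python illustration with invented data:

```python
# Difference between two fitted values: c'b with c = x1 - x2, SE = sqrt(c'Vc).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x - 0.03 * x**2 + rng.normal(0, 0.5, 200)

X = sm.add_constant(np.column_stack([x, x**2]))   # model with a quadratic term
fit = sm.OLS(y, X).fit()

x1 = np.array([1.0, 2.0, 4.0])                    # predictor vector at x = 2
x2 = np.array([1.0, 7.0, 49.0])                   # predictor vector at x = 7
c = x1 - x2
diff = c @ fit.params
se = np.sqrt(c @ fit.cov_params() @ c)
print(f"difference = {diff:.3f}, SE = {se:.3f}, "
      f"95% CI = ({diff - 1.96*se:.3f}, {diff + 1.96*se:.3f})")
```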

  11. Quasi-atomic model of bacteriophage t7 procapsid shell: insights into the structure and evolution of a basic fold.

    PubMed

    Agirrezabala, Xabier; Velázquez-Muriel, Javier A; Gómez-Puertas, Paulino; Scheres, Sjors H W; Carazo, José M; Carrascosa, José L

    2007-04-01

    The existence of similar folds among major structural subunits of viral capsids has shown unexpected evolutionary relationships suggesting common origins irrespective of the capsids' host life domain. Tailed bacteriophages are emerging as one such family, and we have studied the possible existence of the HK97-like fold in bacteriophage T7. The procapsid structure at approximately 10 Å resolution was used to obtain a quasi-atomic model by fitting a homology model of the T7 capsid protein gp10 that was based on the atomic structure of the HK97 capsid protein. A number of fold similarities, such as the fitting of domains A and P into the L-shaped procapsid subunit, are evident between both viral systems. A different feature is related to the presence of the amino-terminal domain of gp10 found at the inner surface of the capsid that might play an important role in the interaction of capsid and scaffolding proteins.

  12. Using proper regression methods for fitting the Langmuir model to sorption data

    USDA-ARS?s Scientific Manuscript database

    The Langmuir model, originally developed for the study of gas sorption to surfaces, is one of the most commonly used models for fitting phosphorus sorption data. There are good theoretical reasons, however, against applying this model to describe P sorption to soils. Nevertheless, the Langmuir model...
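
    Although the record is truncated, the Langmuir fit it discusses is straightforward nonlinear regression; a minimal sketch with invented sorption data:

```python
# Nonlinear fit of the Langmuir isotherm S = Smax*K*C / (1 + K*C).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, Smax, K):
    return Smax * K * C / (1.0 + K * C)

C = np.array([0.5, 1, 2, 5, 10, 20, 50.0])       # equilibrium concentration (mg/L)
S = np.array([12, 21, 33, 52, 64, 72, 78.0])     # sorbed amount (mg/kg)

popt, pcov = curve_fit(langmuir, C, S, p0=[80.0, 0.1])
print("Smax = %.1f mg/kg, K = %.3f L/mg" % tuple(popt))
```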

  13. XECT--a least squares curve fitting program for generalized radiotracer clearance model.

    PubMed

    Szczesny, S; Turczyński, B

    1991-01-01

    The program uses the joint Monte Carlo-Simplex algorithm for fitting the generalized, non-monoexponential model of externally detected decay of radiotracer activity in the tissue. The optimal values of the model parameters (together with the rate of the blood flow) are calculated. A table and plot of the experimental points and the fitted curve are generated. The program was written in Borland's Turbo Pascal 5.5 for the IBM PC XT/AT and compatible microcomputers.

  14. Fitness model for the Italian interbank money market

    NASA Astrophysics Data System (ADS)

    de Masi, G.; Iori, G.; Caldarelli, G.

    2006-12-01

    We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth we can determine the formation of different groups of banks characterized by different business strategies. The model, based on Pareto's law, makes no use of growth or preferential attachment, and it correctly reproduces all the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies in the market of liquidity.

  15. A no-scale inflationary model to fit them all

    SciTech Connect

    Ellis, John; García, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V. E-mail: garciagarcia@physics.umn.edu E-mail: olive@physics.umn.edu

    2014-08-01

    The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic m²φ²/2 potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio r that is highly consistent with the Starobinsky R + R² model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction n_s ≅ 0.96.

  16. Fitness model for the Italian interbank money market.

    PubMed

    De Masi, G; Iori, G; Caldarelli, G

    2006-12-01

    We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth we can determine the formation of different groups of banks characterized by different business strategies. The model, based on Pareto's law, makes no use of growth or preferential attachment, and it correctly reproduces all the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies in the market of liquidity.

  17. CHARMM36 united atom chain model for lipids and surfactants.

    PubMed

    Lee, Sarah; Tran, Alan; Allsopp, Matthew; Lim, Joseph B; Hénin, Jérôme; Klauda, Jeffery B

    2014-01-16

    Molecular simulations of lipids and surfactants require accurate parameters to reproduce and predict experimental properties. Previously, a united atom (UA) chain model was developed for the CHARMM27/27r lipids (Hénin, J., et al. J. Phys. Chem. B. 2008, 112, 7008-7015) but suffers from the flaw that bilayer simulations using the model require an imposed surface area ensemble, which limits its use to pure bilayer systems. A UA-chain model has been developed based on the CHARMM36 (C36) all-atom lipid parameters, termed C36-UA, and agreed well with bulk, lipid membrane, and micelle formation of a surfactant. Molecular dynamics (MD) simulations of alkanes (heptane and pentadecane) were used to test the validity of C36-UA on density, heat of vaporization, and liquid self-diffusion constants. Then, simulations using C36-UA resulted in accurate properties (surface area per lipid, X-ray and neutron form factors, and chain order parameters) of various saturated- and unsaturated-chain bilayers. When mixed with the all-atom cholesterol model and tested with a series of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC)/cholesterol mixtures, the C36-UA model performed well. Simulations of self-assembly of a surfactant (dodecylphosphocholine, DPC) using C36-UA suggest an aggregation number of 53 ± 11 DPC molecules at 0.45 M of DPC, which agrees well with experimental estimates. Therefore, the C36-UA force field offers a useful alternative to the all-atom C36 lipid force field by requiring less computational cost while still maintaining the same level of accuracy, which may prove useful for large systems with proteins.

  18. Atomically precise gold nanoclusters as new model catalysts.

    PubMed

    Li, Gao; Jin, Rongchao

    2013-08-20

    Many industrial catalysts involve nanoscale metal particles (typically 1-100 nm), and understanding their behavior at the molecular level is a major goal in heterogeneous catalyst research. However, conventional nanocatalysts have a nonuniform particle size distribution, while catalytic activity of nanoparticles is size dependent. This makes it difficult to relate the observed catalytic performance, which represents the average of all particle sizes, to the structure and intrinsic properties of individual catalyst particles. To overcome this obstacle, catalysts with well-defined particle size are highly desirable. In recent years, researchers have made remarkable advances in solution-phase synthesis of atomically precise nanoclusters, notably thiolate-protected gold nanoclusters. Such nanoclusters are composed of a precise number of metal atoms (n) and of ligands (m), denoted as Aun(SR)m, with n ranging up to a few hundred atoms (equivalent size up to 2-3 nm). These protected nanoclusters are well-defined to the atomic level (i.e., to the point of molecular purity), rather than defined based on size as in conventional nanoparticle synthesis. The Aun(SR)m nanoclusters are particularly robust under ambient or thermal conditions (<200 °C). In this Account, we introduce Aun(SR)m nanoclusters as a new, promising class of model catalyst. Research on the catalytic application of Aun(SR)m nanoclusters is still in its infancy, but we use Au₂₅(SR)₁₈ as an example to illustrate the promising catalytic properties of Aun(SR)m nanoclusters. Compared with conventional metallic nanoparticle catalysts, Aun(SR)m nanoclusters possess several distinct features. First of all, while gold nanoparticles typically adopt a face-centered cubic (fcc) structure, Aun(SR)m nanoclusters (<2 nm) tend to adopt different atom-packing structures; for example, Au₂₅(SR)₁₈ (1 nm metal core, Au atomic center to center distance) has an icosahedral structure. Secondly, their ultrasmall

  19. Design of spatial experiments: Model fitting and prediction

    SciTech Connect

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  20. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  2. MEAMfit: A reference-free modified embedded atom method (RF-MEAM) energy and force-fitting code

    NASA Astrophysics Data System (ADS)

    Duff, Andrew Ian

    2016-06-01

    MEAMfit v1.02. Changes: various bug fixes; speed of single-shot energy and force calculations (not optimization) increased by a factor of 10; elements up to Cn (Z = 112) now correctly read from vasprun.xml files; EAM fits now produce Camelion output files; maximum number of vasprun.xml files changed to 10,000 (the previous version allowed an unnecessarily low limit of 10).

  3. On assessing model fit for distribution-free longitudinal models under missing data.

    PubMed

    Wu, P; Tu, X M; Kowalski, J

    2014-01-15

    The generalized estimating equation (GEE), a distribution-free, or semi-parametric, approach for modeling longitudinal data, is used in a wide range of behavioral, psychotherapy, pharmaceutical drug safety, and healthcare-related research studies. Most popular methods for assessing model fit are based on the likelihood function for parametric models, rendering them inappropriate for distribution-free GEE. One rare exception is a score statistic initially proposed by Tsiatis for logistic regression (1980) and later extended by Barnhart and Williamson to GEE (1998). Because GEE only provides valid inference under the missing completely at random assumption, and missing values arising in most longitudinal studies do not follow such a restricted mechanism, this GEE-based score test has very limited applications in practice. We propose extensions of this goodness-of-fit test to address missing data under the missing at random assumption, a more realistic model that applies to most studies in practice. We examine the performance of the proposed tests using simulated data and demonstrate the utilities of such tests with data from a real study on geriatric depression and associated medical comorbidities.

  4. Semirelativistic model for ionization of atomic hydrogen by electron impact

    NASA Astrophysics Data System (ADS)

    Attaourti, Y.; Taj, S.; Manaut, B.

    2005-06-01

    We present a semirelativistic model for the description of the ionization process of atomic hydrogen by electron impact in the first Born approximation by using the Darwin wave function to describe the bound state of atomic hydrogen and the Sommerfeld-Maue wave function to describe the ejected electron. This model, accurate to first order in Z/c in the relativistic correction, shows that, even at low kinetic energies of the incident electron, spin effects are small but not negligible. These effects become noticeable with increasing incident electron energies. All analytical calculations are exact and our semirelativistic results are compared with the results obtained in the nonrelativistic Coulomb Born approximation both for the coplanar asymmetric and the binary coplanar geometries.

  5. Extended Bose-Hubbard models with ultracold magnetic atoms.

    PubMed

    Baier, S; Mark, M J; Petter, D; Aikawa, K; Chomaz, L; Cai, Z; Baranov, M; Zoller, P; Ferlaino, F

    2016-04-08

    The Hubbard model underlies our understanding of strongly correlated materials. Whereas its standard form only comprises interactions between particles at the same lattice site, extending it to encompass long-range interactions is predicted to profoundly alter the quantum behavior of the system. We realize the extended Bose-Hubbard model for an ultracold gas of strongly magnetic erbium atoms in a three-dimensional optical lattice. Controlling the orientation of the atomic dipoles, we reveal the anisotropic character of the onsite interaction and hopping dynamics and their influence on the superfluid-to-Mott insulator quantum phase transition. Moreover, we observe nearest-neighbor interactions, a genuine consequence of the long-range nature of dipolar interactions. Our results lay the groundwork for future studies of exotic many-body quantum phases.

  6. Extended Bose-Hubbard models with ultracold magnetic atoms

    NASA Astrophysics Data System (ADS)

    Baier, S.; Mark, M. J.; Petter, D.; Aikawa, K.; Chomaz, L.; Cai, Z.; Baranov, M.; Zoller, P.; Ferlaino, F.

    2016-04-01

    The Hubbard model underlies our understanding of strongly correlated materials. Whereas its standard form only comprises interactions between particles at the same lattice site, extending it to encompass long-range interactions is predicted to profoundly alter the quantum behavior of the system. We realize the extended Bose-Hubbard model for an ultracold gas of strongly magnetic erbium atoms in a three-dimensional optical lattice. Controlling the orientation of the atomic dipoles, we reveal the anisotropic character of the onsite interaction and hopping dynamics and their influence on the superfluid-to-Mott insulator quantum phase transition. Moreover, we observe nearest-neighbor interactions, a genuine consequence of the long-range nature of dipolar interactions. Our results lay the groundwork for future studies of exotic many-body quantum phases.

  7. Semirelativistic model for ionization of atomic hydrogen by electron impact

    SciTech Connect

    Attaourti, Y.; Taj, S.; Manaut, B.

    2005-06-15

    We present a semirelativistic model for the description of the ionization process of atomic hydrogen by electron impact in the first Born approximation by using the Darwin wave function to describe the bound state of atomic hydrogen and the Sommerfeld-Maue wave function to describe the ejected electron. This model, accurate to first order in Z/c in the relativistic correction, shows that, even at low kinetic energies of the incident electron, spin effects are small but not negligible. These effects become noticeable with increasing incident electron energies. All analytical calculations are exact and our semirelativistic results are compared with the results obtained in the nonrelativistic Coulomb Born approximation both for the coplanar asymmetric and the binary coplanar geometries.

  8. Empirical model of atomic nitrogen in the upper thermosphere

    NASA Technical Reports Server (NTRS)

    Engebretson, M. J.; Mauersberger, K.; Kayser, D. C.; Potter, W. E.; Nier, A. O.

    1977-01-01

    Atomic nitrogen number densities in the upper thermosphere measured by the open source neutral mass spectrometer (OSS) on Atmosphere Explorer-C during 1974 and part of 1975 have been used to construct a global empirical model at an altitude of 375 km based on a spherical harmonic expansion. The most evident features of the model are large diurnal and seasonal variations of atomic nitrogen and only a moderate and latitude-dependent density increase during periods of geomagnetic activity. Maximum and minimum N number densities at 375 km for periods of low solar activity are 3.6 × 10⁶ cm⁻³ at 1500 LST (local solar time) and low latitude in the summer hemisphere and 1.5 × 10⁵ cm⁻³ at 0200 LST at mid-latitudes in the winter hemisphere.

  10. A KIM-compliant potfit for fitting sloppy interatomic potentials: application to the EDIP model for silicon

    NASA Astrophysics Data System (ADS)

    Wen, Mingjian; Li, Junhao; Brommer, Peter; Elliott, Ryan S.; Sethna, James P.; Tadmor, Ellad B.

    2017-01-01

    Fitted interatomic potentials are widely used in atomistic simulations thanks to their ability to compute the energy and forces on atoms quickly. However, the simulation results crucially depend on the quality of the potential being used. Force matching is a method aimed at constructing reliable and transferable interatomic potentials by matching the forces computed by the potential as closely as possible, with those obtained from first principles calculations. The potfit program is an implementation of the force-matching method that optimizes the potential parameters using a global minimization algorithm followed by a local minimization polish. We extended potfit in two ways. First, we adapted the code to be compliant with the KIM Application Programming Interface (API) standard (part of the Knowledgebase of Interatomic Models project). This makes it possible to use potfit to fit many KIM potential models, not just those prebuilt into the potfit code. Second, we incorporated the geodesic Levenberg-Marquardt (LM) minimization algorithm into potfit as a new local minimization algorithm. The extended potfit was tested by generating a training set using the KIM environment-dependent interatomic potential (EDIP) model for silicon and using potfit to recover the potential parameters from different initial guesses. The results show that EDIP is a ‘sloppy model’ in the sense that its predictions are insensitive to some of its parameters, which makes fitting more difficult. We find that the geodesic LM algorithm is particularly efficient for this case. The extended potfit code is the first step in developing a KIM-based fitting framework for interatomic potentials for bulk and two-dimensional materials. The code is available for download via https://www.potfit.net.
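
    The force-matching objective that potfit minimizes can be illustrated schematically: stack the differences between model forces and reference forces over all training configurations and hand them to a least-squares optimizer. The toy below uses a Lennard-Jones-style pair potential in place of EDIP/MEAM, with made-up configurations standing in for the first-principles training set:

```python
# Toy force-matching: recover pair-potential parameters from reference forces.
import numpy as np
from scipy.optimize import least_squares

def pair_forces(positions, eps, sigma):
    # Forces from a Lennard-Jones pair potential (toy stand-in for EDIP/MEAM).
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            r = np.linalg.norm(rij)
            dudr = 4.0 * eps * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)
            f = -dudr * rij / r          # force on atom i from atom j
            forces[i] += f
            forces[j] -= f
    return forces

rng = np.random.default_rng(0)
base = 1.5 * np.array([[i, j, k] for i in range(2)
                       for j in range(2) for k in range(2)], dtype=float)
configs = [base + 0.1 * rng.standard_normal(base.shape) for _ in range(10)]
reference = [pair_forces(p, eps=0.2, sigma=1.1) for p in configs]   # "DFT" forces

def residuals(theta):
    eps, sigma = theta
    return np.concatenate([(pair_forces(p, eps, sigma) - f).ravel()
                           for p, f in zip(configs, reference)])

fit = least_squares(residuals, x0=[0.1, 1.0])
print("recovered (eps, sigma):", np.round(fit.x, 4))
```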

  11. The Chocolate Shop and Atomic Orbitals: A New Atomic Model Created by High School Students to Teach Elementary Students

    ERIC Educational Resources Information Center

    Liguori, Lucia

    2014-01-01

    Atomic orbital theory is a difficult subject for many high school and beginning undergraduate students, as it includes mathematical concepts not yet covered in the school curriculum. Moreover, it requires a certain ability for abstraction and imagination. A new atomic orbital model, "the chocolate shop," created by students…

  13. An interface capturing scheme for modeling atomization in compressible flows

    NASA Astrophysics Data System (ADS)

    Garrick, Daniel P.; Hagen, Wyatt A.; Regele, Jonathan D.

    2017-09-01

    The study of atomization in supersonic flow is critical to ensuring reliable ignition of scramjet combustors under startup conditions. Numerical methods incorporating surface tension effects have largely focused on the incompressible regime as most atomization applications occur at low Mach numbers. Simulating surface tension effects in compressible flow requires robust numerical methods that can handle discontinuities caused by both shocks and material interfaces with high density ratios. In this work, a shock and interface capturing scheme is developed that uses the Harten-Lax-van Leer-Contact (HLLC) Riemann solver while a Tangent of Hyperbola for INterface Capturing (THINC) interface reconstruction scheme retains the fluid immiscibility condition in the volume fraction and phasic densities in the context of the five equation model. The approach includes the effects of compressibility, surface tension, and molecular viscosity. One and two-dimensional benchmark problems demonstrate the desirable interface sharpening and conservation properties of the approach. Simulations of secondary atomization of a cylindrical water column after its interaction with a shockwave show good qualitative agreement with experimentally observed behavior. Three-dimensional examples of primary atomization of a liquid jet in a Mach 2 crossflow demonstrate the robustness of the method.

  14. Fitting the Balding-Nichols model to forensic databases.

    PubMed

    Rohlfs, Rori V; Aguiar, Vitor R C; Lohmueller, Kirk E; Castro, Amanda M; Ferreira, Alessandro C S; Almeida, Vanessa C O; Louro, Iuri D; Nielsen, Rasmus

    2015-11-01

    Large forensic databases provide an opportunity to compare observed empirical rates of genotype matching with those expected under forensic genetic models. A number of researchers have taken advantage of this opportunity to validate some forensic genetic approaches, particularly to ensure that estimated rates of genotype matching between unrelated individuals are indeed slight overestimates of those observed. However, these studies have also revealed systematic error trends in genotype probability estimates. In this analysis, we investigate these error trends and show how they result from inappropriate implementation of the Balding-Nichols model in the context of database-wide matching. Specifically, we show that in addition to accounting for increased allelic matching between individuals with recent shared ancestry, studies must account for relatively decreased allelic matching between individuals with more ancient shared ancestry.

  15. Fit for consumption: zebrafish as a model for tuberculosis.

    PubMed

    Cronan, Mark R; Tobin, David M

    2014-07-01

    Despite efforts to generate new vaccines and antibiotics for tuberculosis, the disease remains a public health problem worldwide. The zebrafish Danio rerio has emerged as a useful model to investigate mycobacterial pathogenesis and treatment. Infection of zebrafish with Mycobacterium marinum, the closest relative of the Mycobacterium tuberculosis complex, recapitulates many aspects of human tuberculosis. The zebrafish model affords optical transparency, abundant genetic tools and in vivo imaging of the progression of infection. Here, we review how the zebrafish-M. marinum system has been deployed to make novel observations about the role of innate immunity, the tuberculous granuloma, and crucial host and bacterial genes. Finally, we assess how these findings relate to human disease and provide a framework for novel strategies to treat tuberculosis. © 2014. Published by The Company of Biologists Ltd.

  16. Mechanical Response of Polycarbonate with Strength Model Fits

    DTIC Science & Technology

    2012-02-01

    Proceedings of the Seventh International Symposium on Ballistics. The Hague (Netherlands), 1983, 541–547. 4. Zerilli, F.; Armstrong, R. A. ...Constitutive Model Equation for the Dynamic Deformation Behavior of Polymers. J. Material Sci. 2007, 42 (12), 4562–4574. 5. Zerilli, F.; Armstrong, R. ...Polyvinylidene Difluoride. J. Polymer. 2005, 46, 12546–12555. 19. Richeton, J.; Ahzi, S.; Vecchio, K. S.; Jiang, F. C.; Adharapurapu, R. R. Influence of

  17. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

    A difficult issue in the synthesis of piano tones by physical models is to choose the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesis sounds to be unrealistic. An original approach that estimates the parameters of a piano model, from the measurement of the string vibration, by minimizing a perceptual criterion is proposed. The minimization process that was used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process has been run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.

  18. Testing the Rasch model by means of the mixture fit index.

    PubMed

    Formann, Anton K

    2006-05-01

    Rudas, Clogg, and Lindsay (RCL) proposed a new index of fit for contingency table analysis. Using the overparametrized two-component mixture, where the first component with weight 1-w represents the model to be tested and the second component with weight w is unstructured, the mixture index of fit was defined to be the smallest w compatible with the saturated two-component mixture. This index of fit, which is insensitive to sample size, is applied to the problem of assessing the fit of the Rasch model. In this application, use is made of the equivalence of the semi-parametric version of the Rasch model to specifically restricted latent class models. Therefore, the Rasch model can be represented by the structured component of the RCL mixture, with this component itself consisting of two or more subcomponents corresponding to the classes, and the unstructured component capturing the discrepancies between the data and the model. An empirical example demonstrates the application of this approach. Based on four-item data, the one- and two-class unrestricted latent class models and the one- to three-class models restricted according to the Rasch model are considered, with respect to both their chi-squared statistics and their mixture fit indices.

  19. Modeling of Atomic Processes for X-Ray Laser Plasmas

    DTIC Science & Technology

    1988-07-01

    temperatures where models such as Thomas-Fermi or Debye-Hückel are known to be inadequate. The calculations done here show that, with increasing plasma... theory. Comparison of experimental data with the IPA calculations shows that for some simple systems such as a neutral few-electron atom (Lithium, for... linear fashion, unlike Debye screening, which is known to be inadequate for screening by bound electrons. The two-component DFM is applicable for

  20. Network growth models: A behavioural basis for attachment proportional to fitness

    NASA Astrophysics Data System (ADS)

    Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris

    2017-02-01

    Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example.
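
    The attachment rule analysed in the record (probability proportional to node fitness rather than degree) is easy to simulate; the following toy model with Pareto-distributed fitness is an illustration, not the paper's exact construction:

```python
# Toy growth process: each new node links to existing nodes with probability
# proportional to their fitness (not their degree).
import numpy as np

rng = np.random.default_rng(42)

def grow_network(n_nodes, m_links=2):
    fitness = [rng.pareto(2.0) + 1.0]                 # fitness of node 0
    edges = []
    for new in range(1, n_nodes):
        f = np.array(fitness)
        targets = rng.choice(new, size=min(m_links, new),
                             replace=False, p=f / f.sum())
        edges.extend((new, int(t)) for t in targets)
        fitness.append(rng.pareto(2.0) + 1.0)
    return np.array(fitness), edges

fitness, edges = grow_network(2000)
degree = np.zeros(len(fitness), dtype=int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
# Fitter nodes should accumulate more links on average.
print("corr(fitness, degree) =", round(np.corrcoef(fitness, degree)[0, 1], 3))
```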

  1. Network growth models: A behavioural basis for attachment proportional to fitness

    PubMed Central

    Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris

    2017-01-01

    Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example. PMID:28205599

  2. Genetic algorithm with an improved fitness function for (N)ARX modelling

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Worden, K.; Peng, P.; Leung, A. Y. T.

    2007-02-01

    In this article a new fitness function is introduced in an attempt to improve the quality of the auto-regressive with exogenous inputs (ARX) model using a genetic algorithm (GA). The GA is employed to identify the coefficients and the number of time lags of the models of dynamic systems with the new fitness function which is based on the prediction error and the correlation functions between the prediction error and the input and output signals. The new fitness function provides the GA with a better performance in the evolution process. Two examples of the ARX modelling of a linear and a non-linear (NARX) simulated dynamic system are examined using the proposed fitness function.
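
    A stripped-down version of such a fitness function (one-step ARX prediction error plus penalties for residual/input correlation and residual autocorrelation; the exact weighting used in the article is not reproduced) might look as follows:

```python
# Illustrative ARX fitness: prediction error plus correlation penalties.
import numpy as np

def arx_predict(y, u, a, b):
    # One-step prediction: y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]
    na, nb = len(a), len(b)
    start = max(na, nb)
    yhat = np.zeros_like(y)
    for k in range(start, len(y)):
        yhat[k] = np.dot(a, y[k - na:k][::-1]) + np.dot(b, u[k - nb:k][::-1])
    return yhat, start

def fitness(y, u, a, b):
    yhat, start = arx_predict(y, u, a, b)
    e = y[start:] - yhat[start:]
    mse = np.mean(e**2) / np.var(y)
    r_eu = abs(np.corrcoef(e, u[start:])[0, 1])   # residual-input correlation
    r_ee = abs(np.corrcoef(e[1:], e[:-1])[0, 1])  # residual autocorrelation
    return 1.0 / (1.0 + mse + r_eu + r_ee)        # higher is better

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6*y[k-1] - 0.2*y[k-2] + 0.8*u[k-1] + 0.05*rng.standard_normal()

print("true coefficients :", round(fitness(y, u, [0.6, -0.2], [0.8]), 3))
print("wrong coefficients:", round(fitness(y, u, [0.3, 0.0], [0.4]), 3))
```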

  3. Atomic Data and Spectral Model for Fe II

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel A.; Fivet, Vanessa; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R.

    2015-08-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe II. The data sets involve 52 levels from the 3d⁷, 3d⁶4s, and 3d⁵4s² configurations. Computations of A-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree-Fock, Thomas-Fermi-Dirac potential, and Dirac-Fock methods, while the R-matrix plus intermediate coupling frame transformation, Breit-Pauli R-matrix, and Dirac R-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe II] emission in the IR and optical regions as well as in the UV Fe II absorption spectra. Finally, we provide diagnostic line ratios and line emissivities for emission spectroscopy as well as column densities for absorption spectroscopy. All atomic data and models are available online and through the AtomPy atomic data curation environment.

  4. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507
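
    A numerical toy version of the RMF construction (an additive slope toward a reference genotype plus i.i.d. noise; the constants here are arbitrary) that counts local fitness maxima:

```python
# Rough Mount Fuji toy landscape: additive slope toward a reference genotype
# plus i.i.d. noise; the constant c tunes the ruggedness.
import itertools
import numpy as np

L, c = 10, 0.5
rng = np.random.default_rng(3)
reference = np.zeros(L, dtype=int)
genotypes = list(itertools.product([0, 1], repeat=L))

fitness = {}
for g in genotypes:
    d = int(np.sum(np.array(g) != reference))      # Hamming distance to reference
    fitness[g] = -c * d + rng.standard_normal()

def is_local_maximum(g):
    for i in range(L):                              # compare with all single mutants
        neighbour = list(g)
        neighbour[i] ^= 1
        if fitness[tuple(neighbour)] >= fitness[g]:
            return False
    return True

n_maxima = sum(is_local_maximum(g) for g in genotypes)
print(f"{n_maxima} local maxima among {2**L} genotypes (c = {c})")
```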

  5. High precision measurements of atom column positions using model-based exit wave reconstruction.

    PubMed

    De Backer, A; Van Aert, S; Van Dyck, D

    2011-01-01

    This paper investigates how to measure atom column positions as accurately and precisely as possible using a focal series of images. In theory, it is expected that the precision would considerably improve using a maximum likelihood estimator based on the full series of focal images. As such, the theoretical lower bound on the variances of the unknown atom column positions can be attained. However, this approach is numerically demanding. Therefore, maximum likelihood estimation has been compared with the results obtained by fitting a model to a reconstructed exit wave rather than to the full series of focal images. Hence, a real space model-based exit wave reconstruction technique based on the channelling theory is introduced. Simulations show that the reconstructed complex exit wave contains the same amount of information concerning the atom column positions as the full series of focal images. Only for thin samples, which act as weak phase objects, can this information be retrieved from the phase of the reconstructed complex exit wave.

  6. CPOPT : optimization for fitting CANDECOMP/PARAFAC models.

    SciTech Connect

    Dunlavy, Daniel M.; Kolda, Tamara Gibson; Acar, Evrim

    2008-10-01

    Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
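
    For context, the ALS baseline that CPOPT is compared against can be written in a few lines of NumPy; the sketch below is a bare-bones CP-ALS (no normalization, stopping test, or line search) applied to an exactly rank-3 tensor:

```python
# Bare-bones CP-ALS; a toy counterpart to the ALS baseline discussed above.
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product (rows ordered with B's index fastest).
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for mode in range(T.ndim):
            others = [A[m] for m in range(T.ndim) if m != mode]
            kr = others[0]
            for M in others[1:]:
                kr = khatri_rao(kr, M)
            V = np.ones((rank, rank))
            for m, M in enumerate(A):
                if m != mode:
                    V *= M.T @ M
            A[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(V)
    return A

# Recover an exactly rank-3 tensor and report the reconstruction error.
rng = np.random.default_rng(1)
true = [rng.standard_normal((d, 3)) for d in (6, 7, 8)]
T = np.einsum('ir,jr,kr->ijk', *true)
A = cp_als(T, rank=3)
relerr = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', *A)) / np.linalg.norm(T)
print("relative reconstruction error:", relerr)
```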

  7. Are pollination "syndromes" predictive? Asian dalechampia fit neotropical models.

    PubMed

    Armbruster, W Scott; Gong, Yan-Bing; Huang, Shuang-Quan

    2011-07-01

    Using pollination syndrome parameters and pollinator correlations with floral phenotype from the Neotropics, we predicted that Dalechampia bidentata Blume (Euphorbiaceae) in southern China would be pollinated by female resin-collecting bees between 12 and 20 mm in length. Observations in southwestern Yunnan Province, China, revealed pollination primarily by resin-collecting female Megachile (Callomegachile) faceta Bingham (Hymenoptera: Megachilidae). These bees, at 14 mm in length, were in the predicted size range, confirming the utility of syndromes and models developed in distant regions. Phenotypic selection analyses and estimation of adaptive surfaces and adaptive accuracies together suggest that the blossoms of D. bidentata are well adapted to pollination by their most common floral visitors.

  8. Variable Transformation in Nonlinear Least Squares model Fitting

    DTIC Science & Technology

    1981-07-01

    Chemistry, Vol. 10, pp. 91-104, 1973. 11. H.J. Britt and R.H. Luecke, "The Estimation of Parameters in Nonlinear, Implicit Models", Technometrics, Vol... respect to the unknown C, 6, and K. This yields the following set of normal equations. 11 H.J. Britt and R.H. Luecke, "The Estimation of... Carbide Corporation Chemicals and Plastics ATTN: H.J. Britt P.O. Box 8361 Charleston, WV 25303 California Institute of Tech Guggenheim Aeronautical

  9. Fitting meta-analytic structural equation models with complex datasets.

    PubMed

    Wilson, Sandra Jo; Polanin, Joshua R; Lipsey, Mark W

    2016-06-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation coefficients that exhibit substantial heterogeneity, some of which obscures the relationships between the constructs of interest or undermines the comparability of the correlations across the cells. One component of this approach is a three-level random effects model capable of synthesizing a pooled correlation matrix with dependent correlation coefficients. Another component is a meta-regression that can be used to generate covariate-adjusted correlation coefficients that reduce the influence of selected unevenly distributed moderator variables. A non-technical presentation of these techniques is given, along with an illustration of the procedures with a meta-analytic dataset. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Use of a simulated annealing algorithm to fit compartmental models with an application to fractal pharmacokinetics.

    PubMed

    Marsh, Rebeccah E; Riauka, Terence A; McQuarrie, Steve A

    2007-01-01

    Increasingly, fractals are being incorporated into pharmacokinetic models to describe transport and chemical kinetic processes occurring in confined and heterogeneous spaces. However, fractal compartmental models lead to differential equations with power-law time-dependent kinetic rate coefficients that currently are not accommodated by common commercial software programs. This paper describes a parameter optimization method for fitting individual pharmacokinetic curves based on a simulated annealing (SA) algorithm, which always converged towards the global minimum and was independent of the initial parameter values and parameter bounds. In a comparison using a classical compartmental model, similar fits by the Gauss-Newton and Nelder-Mead simplex algorithms required stringent initial estimates and ranges for the model parameters. The SA algorithm is ideal for fitting a wide variety of pharmacokinetic models to clinical data, especially those for which there is weak prior knowledge of the parameter values, such as the fractal models.
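
    The kind of fit described (a compartmental model with a power-law, time-dependent rate coefficient, optimized by simulated annealing) can be sketched with SciPy's dual_annealing; the model form, data, and bounds below are invented for illustration and are not the study's clinical data:

```python
# Toy one-compartment model with fractal elimination rate k(t) = k * t**(-h).
import numpy as np
from scipy.optimize import dual_annealing

def conc(t, C0, k, h):
    # Analytic solution of dC/dt = -k * t**(-h) * C for 0 <= h < 1.
    return C0 * np.exp(-k * t**(1.0 - h) / (1.0 - h))

t = np.linspace(0.25, 24.0, 20)                      # sampling times, h
rng = np.random.default_rng(7)
data = conc(t, 100.0, 0.4, 0.3) * (1.0 + 0.05 * rng.standard_normal(t.size))

def sse(theta):
    return np.sum((conc(t, *theta) - data)**2)

result = dual_annealing(sse, bounds=[(10.0, 200.0), (0.01, 2.0), (0.0, 0.9)],
                        seed=1)
print("fitted C0, k, h:", np.round(result.x, 3))
```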

  11. Modeling of Turbulence Effects on Liquid Jet Atomization and Breakup

    NASA Technical Reports Server (NTRS)

    Trinh, Huu P.; Chen, C. P.

    2005-01-01

    Recent experimental investigations and physical modeling studies have indicated that turbulence behaviors within a liquid jet have considerable effects on the atomization process. This study aims to model the turbulence effect in the atomization process of a cylindrical liquid jet. Two widely used models, the Kelvin-Helmholtz (KH) instability model of Reitz (blob model) and the Taylor-Analogy-Breakup (TAB) secondary droplet breakup model of O'Rourke et al., are further extended to include turbulence effects. In the primary breakup model, the level of the turbulence effect on the liquid breakup depends on the characteristic scales and the initial flow conditions. For the secondary breakup, an additional turbulence force acting on parent drops is modeled and integrated into the TAB governing equation. The drop size formed from this breakup regime is estimated based on the energy balance before and after the breakup occurrence. This paper describes the theoretical development of the current models, called "T-blob" and "T-TAB", for primary and secondary breakup, respectively. Several assessment studies are also presented in this paper.

  12. Chemical domain of QSAR models from atom-centered fragments.

    PubMed

    Kühne, Ralph; Ebert, Ralf-Uwe; Schüürmann, Gerrit

    2009-12-01

    A methodology to characterize the chemical domain of qualitative and quantitative structure-activity relationship (QSAR) models based on the atom-centered fragment (ACF) approach is introduced. ACFs decompose the molecule into structural pieces, with each non-hydrogen atom of the molecule acting as an ACF center. ACFs vary with respect to their size in terms of the path length covered in each bonding direction starting from a given central atom and how comprehensively the neighbor atoms (including hydrogen) are described in terms of element type and bonding environment. In addition to these different levels of ACF definitions, the ACF match mode as degree of strictness of the ACF comparison between a test compound and a given ACF pool (such as from a training set) has to be specified. Analyses of the prediction statistics of three QSAR models with their training sets as well as with external test sets and associated subsets demonstrate a clear relationship between the prediction performance and the levels of ACF definition and match mode. The findings suggest that second-order ACFs combined with a borderline match mode may serve as a generic and at the same time a mechanistically sound tool to define and evaluate the chemical domain of QSAR models. Moreover, four standard categories of the ACF-based membership to a given chemical domain (outside, borderline outside, borderline inside, inside) are introduced that provide more specific information about the expected QSAR prediction performance. As such, the ACF-based characterization of the chemical domain appears to be particularly useful for QSAR applications in the context of REACH and other regulatory schemes addressing the safety evaluation of chemical compounds.

  13. Revisiting a Statistical Shortcoming When Fitting the Langmuir Model to Sorption Data

    USDA-ARS?s Scientific Manuscript database

    The Langmuir model is commonly used for describing sorption behavior of reactive solutes to surfaces. Fitting the Langmuir model to sorption data requires either the use of nonlinear regression or, alternatively, linear regression using one of the linearized versions of the model. Statistical limit...
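
    As a simple illustration of the two fitting routes mentioned (hypothetical data; the Langmuir form S = Smax*K*C/(1 + K*C) and the double-reciprocal linearization are the standard ones), a short sketch:

    import numpy as np
    from scipy.optimize import curve_fit

    C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # solution concentration (hypothetical)
    S = np.array([2.1, 3.6, 5.5, 8.2, 9.6, 10.4])    # sorbed amount (hypothetical)

    def langmuir(C, Smax, K):
        return Smax * K * C / (1.0 + K * C)

    # Nonlinear least-squares fit
    popt, pcov = curve_fit(langmuir, C, S, p0=[10.0, 0.5])
    print("nonlinear fit  Smax, K:", popt)

    # Double-reciprocal linearization: 1/S = (1/(Smax*K)) * (1/C) + 1/Smax
    slope, intercept = np.polyfit(1.0 / C, 1.0 / S, 1)
    Smax_lin = 1.0 / intercept
    K_lin = 1.0 / (slope * Smax_lin)
    print("linearized fit Smax, K:", Smax_lin, K_lin)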

  14. Hydrothermal germination models: comparison of two data-fitting approaches with probit optimization

    USDA-ARS?s Scientific Manuscript database

    Probit models for estimating hydrothermal germination rate yield model parameters that have been associated with specific physiological processes. The desirability of linking germination response to seed physiology must be weighed against expectations of model fit and the relative accuracy of predi...

  15. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X[superscript 2]. This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X[superscript 2] are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  16. Atomic collision processes for modelling cool star spectra

    NASA Astrophysics Data System (ADS)

    Barklem, Paul

    2015-05-01

    The abundances of chemical elements in cool stars are very important in many problems in modern astrophysics. They provide unique insight into the chemical and dynamical evolution of the Galaxy, stellar processes such as mixing and gravitational settling, the Sun and its place in the Galaxy, and planet formation, to name just a few examples. Modern telescopes and spectrographs measure stellar spectral lines with a precision of order 1 per cent, and planned surveys will provide such spectra for millions of stars. However, systematic errors in the interpretation of observed spectral lines lead to abundances with uncertainties greater than 20 per cent. Greater precision in the interpreted abundances should reasonably be expected to lead to significant discoveries, and improvements in atomic data used in stellar atmosphere models play a key role in achieving such advances in precision. In particular, departures from the classical assumption of local thermodynamic equilibrium (LTE) represent a significant uncertainty in the modelling of stellar spectra and thus derived chemical abundances. Non-LTE modelling requires large amounts of radiative and collisional data for the atomic species of interest. I will focus on inelastic collision processes due to electron and hydrogen atom impacts, the important perturbers in cool stars, and the progress that has been made. I will discuss the impact on non-LTE modelling, and what the modelling tells us about the types of collision processes that are important and the accuracy required. More specifically, processes of fundamentally quantum mechanical nature such as spin-changing collisions and charge transfer have been found to be very important in the non-LTE modelling of spectral lines of lithium, oxygen, sodium and magnesium.

  17. Development of a program to fit data to a new logistic model for microbial growth.

    PubMed

    Fujikawa, Hiroshi; Kano, Yoshihiro

    2009-06-01

    Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various temperature patterns. In this study, we developed a program to fit data to the model with a spreadsheet program, Microsoft Excel. Users can instantly obtain curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program can also estimate growth parameters, including the rate constant of growth and the lag period. This program would be a useful tool for analyzing growth data and further predicting microbial growth.
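
    A rough sketch of the same fitting workflow in Python follows; it is not the authors' Excel program, and it uses the standard logistic curve with a simple lag term rather than their new model. The growth data are hypothetical log10 counts.

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0, 2, 4, 6, 8, 10, 12])                        # time (h)
    logN = np.array([3.0, 3.1, 3.6, 4.8, 6.0, 6.7, 6.9])         # log10 CFU/g (hypothetical)

    def logistic(t, n0, nmax, r, lag):
        # Standard logistic curve delayed by a lag period
        tt = np.clip(t - lag, 0.0, None)
        return nmax / (1.0 + (nmax / n0 - 1.0) * np.exp(-r * tt))

    popt, _ = curve_fit(logistic, t, logN, p0=[3.0, 7.0, 1.0, 2.0])
    print("N0, Nmax, rate constant, lag (h):", popt)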

  18. What is the "best" atomic charge model to describe through-space charge-transfer excitations?

    PubMed

    Jacquemin, Denis; Le Bahers, Tangui; Adamo, Carlo; Ciofini, Ilaria

    2012-04-28

    We investigate the efficiency of several partial atomic charge models (Mulliken, Hirshfeld, Bader, Natural, Merz-Kollman and ChelpG) for investigating the through-space charge-transfer in push-pull organic compounds with Time-Dependent Density Functional Theory approaches. The results of these models are compared to benchmark values obtained by determining the difference of total densities between the ground and excited states. Both model push-pull oligomers and two classes of "real-life" organic dyes (indoline and diketopyrrolopyrrole) used as sensitisers in solar cell applications have been considered. Though the difference of dipole moments between the ground and excited states is reproduced by most approaches, no atomic charge model is fully satisfactory for reproducing the distance and amount of charge transferred that are provided by the density picture. Overall, the partitioning schemes fitting the electrostatic potential (e.g. Merz-Kollman) stand as the most consistent compromises in the framework of simulating through-space charge-transfer, whereas the other models tend to yield qualitatively inconsistent values.
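
    For illustration, the sketch below computes two common charge-based descriptors of through-space charge transfer, the amount of charge transferred and the distance between the barycenters of charge gain and loss, from hypothetical ground- and excited-state partial atomic charges; the barycenter definitions used here are the common ones and may differ from the density-based benchmark of the paper.

    import numpy as np

    # Hypothetical atomic coordinates (Angstrom) and partial charges for a push-pull chromophore
    coords = np.array([[0.0, 0.0, 0.0],
                       [1.4, 0.0, 0.0],
                       [2.8, 0.0, 0.0],
                       [4.2, 0.0, 0.0]])
    q_gs = np.array([-0.20, 0.05, 0.05, 0.10])   # ground-state charges
    q_es = np.array([ 0.15, 0.05, 0.00, -0.20])  # excited-state charges

    dq = q_es - q_gs                   # change in atomic charge upon excitation
    gain = np.clip(dq, 0.0, None)      # atoms losing electron density
    loss = np.clip(-dq, 0.0, None)     # atoms gaining electron density

    q_ct = gain.sum()                  # amount of charge transferred (equals loss.sum())
    r_plus = (gain[:, None] * coords).sum(axis=0) / gain.sum()
    r_minus = (loss[:, None] * coords).sum(axis=0) / loss.sum()
    d_ct = np.linalg.norm(r_plus - r_minus)      # charge-transfer distance

    print(f"q_CT = {q_ct:.2f} e, d_CT = {d_ct:.2f} Angstrom")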

  19. EFFICIENT FITTING OF MULTIPLANET KEPLERIAN MODELS TO RADIAL VELOCITY AND ASTROMETRY DATA

    SciTech Connect

    Wright, J. T.; Howard, A. W.

    2009-05-15

    We describe a technique for solving for the orbital elements of multiple planets from radial velocity (RV) and/or astrometric data taken with 1 m s^-1 and μas precision, appropriate for efforts to detect Earth-massed planets in their stars' habitable zones, such as NASA's proposed Space Interferometry Mission. We include details of calculating analytic derivatives for use in the Levenberg-Marquardt (LM) algorithm for the problems of fitting RV and astrometric data separately and jointly. We also explicate the general method of separating the linear and nonlinear components of a model fit in the context of an LM fit, show how explicit derivatives can be calculated in such a model, and demonstrate the speed-up and convergence improvements of such a scheme in the case of a five-planet fit to published RV data for 55 Cnc.
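
    The separation of linear and nonlinear fit components mentioned above can be illustrated with a much simpler single-planet, circular-orbit toy problem (hypothetical data): for each trial period the amplitudes and velocity offset are linear parameters solved exactly by least squares, leaving only the period to be searched nonlinearly.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 300, 60))                       # observation epochs (days)
    v = 12.0 * np.cos(2 * np.pi * t / 41.3 + 0.7) + 3.0 + rng.normal(0, 1.0, t.size)

    def chi2_at_period(P):
        # v(t) = A cos(2*pi*t/P) + B sin(2*pi*t/P) + gamma is linear in (A, B, gamma)
        X = np.column_stack([np.cos(2 * np.pi * t / P),
                             np.sin(2 * np.pi * t / P),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, v, rcond=None)
        return np.sum((v - X @ coef) ** 2)

    # Outer search over the single nonlinear parameter (the period)
    periods = np.linspace(20.0, 80.0, 2001)
    chi2 = np.array([chi2_at_period(P) for P in periods])
    print("best-fit period (days):", periods[np.argmin(chi2)])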

  20. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes

    NASA Astrophysics Data System (ADS)

    Shekhar, Karthik; Ruberman, Claire F.; Ferguson, Andrew L.; Barton, John P.; Kardar, Mehran; Chakraborty, Arup K.

    2013-12-01

    Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses.

  1. Elastic properties of compressed cryocrystals in a deformed atom model

    NASA Astrophysics Data System (ADS)

    Gorbenko, Ie. Ie.; Zhikharev, I. V.; Troitskaya, E. P.; Chabanenko, Val. V.; Pilipenko, E. A.

    2013-06-01

    A model with deformed atom shells was built to investigate the elastic properties of rare-gas Ne and Kr crystals under high pressure. It is shown that the observed deviation from the Cauchy relation δ cannot be adequately reproduced when taking into account only the many-body interaction. The individual pressure dependence of δ results from the competition between the many-body interaction and the quadrupole interaction associated with the quadrupole-type deformation of the electron shells of the atoms during the displacement of the nuclei. Each kind of interaction makes a strongly pressure-dependent contribution to δ. In the case of Ne and Kr, the contributions of these interactions compensate each other to good precision, so that δ remains almost constant with pressure.

  2. Theoretical model for electrophilic oxygen atom insertion into hydrocarbons

    SciTech Connect

    Bach, R.D.; Su, M.D.; Andres, J.L. (Wayne State Univ., Detroit, MI); McDouall, J.J.W.

    1993-06-30

    A theoretical model suggesting the mechanistic pathway for the oxidation of saturated alkanes to their corresponding alcohols and ketones is described. Water oxide (H2O-O) is employed as a model singlet oxygen atom donor. Molecular orbital calculations with the 6-31G basis set at the MP2, QCISD, QCISD(T), CASSCF, and MRCI levels of theory suggest that oxygen insertion by water oxide occurs by the interaction of an electrophilic oxygen atom with a doubly occupied hydrocarbon fragment orbital. The electrophilic oxygen approaches the hydrocarbon along the axis of the atomic carbon p orbital comprising a π(CH2) or π(CHCH3) fragment orbital to form a carbon-oxygen σ bond. A concerted hydrogen migration to an adjacent oxygen lone pair of electrons affords the alcohol insertion product in a stereoselective fashion with predictable stereochemistry. Subsequent oxidation of the alcohol to a ketone (or aldehyde) occurs in a similar fashion and has a lower activation barrier. The calculated (MP4/6-31G*//MP2/6-31G*) activation barriers for oxygen atom insertion into the C-H bonds of methane, ethane, propane, butane, isobutane, and methanol are 10.7, 8.2, 3.9, 4.8, 4.5, and 3.3 kcal/mol, respectively. We use ab initio molecular orbital calculations in support of a frontier MO theory that provides a unique rationale for both the stereospecificity and the stereoselectivity of insertion of electrophilic oxygen and related electrophiles into the carbon-hydrogen bond. 13 refs., 7 figs., 2 tabs.

  3. Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling

    PubMed Central

    Lovell, Nigel H.; Dokos, Socrates

    2013-01-01

    A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue. PMID:23935704

  4. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
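
    A minimal sketch of the piecewise-constant-hazard "explosion" and the Poisson fit with an offset follows (toy data; the log-normal frailty term itself is omitted for brevity, and this is not the %PCFrailty macro):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # toy survival data: follow-up time, event indicator, one covariate (all hypothetical)
    df = pd.DataFrame({"time": [2.3, 5.1, 7.8, 1.2, 9.4, 4.4],
                       "event": [1, 0, 1, 1, 0, 1],
                       "x": [0, 1, 1, 0, 1, 0]})
    cuts = [0.0, 3.0, 6.0, 10.0]                 # three pieces for the baseline hazard

    # "explode" each subject into one row per hazard piece it survives into
    rows = []
    for _, r in df.iterrows():
        for j in range(len(cuts) - 1):
            start, stop = cuts[j], cuts[j + 1]
            if r["time"] <= start:
                break
            rows.append({"piece": j,
                         "x": r["x"],
                         "exposure": min(r["time"], stop) - start,            # time at risk in piece j
                         "event": int(r["event"] and start < r["time"] <= stop)})
    long = pd.DataFrame(rows)

    # Poisson GLM on the event indicator with log(exposure) as offset;
    # the piece dummies play the role of the piecewise-constant baseline log-hazard
    X = pd.get_dummies(long["piece"], prefix="piece").astype(float)
    X["x"] = long["x"].astype(float)
    fit = sm.GLM(long["event"], X, family=sm.families.Poisson(),
                 offset=np.log(long["exposure"])).fit()
    print(fit.params)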

  5. Efficient fitting of conductance-based model neurons from somatic current clamp.

    PubMed

    Lepora, Nathan F; Overton, Paul G; Gurney, Kevin

    2012-02-01

    Estimating biologically realistic model neurons from electrophysiological data is a key issue in neuroscience that is central to understanding neuronal function and network behavior. However, directly fitting detailed Hodgkin-Huxley type model neurons to somatic membrane potential data is a notoriously difficult optimization problem that can require hours/days of supercomputing time. Here we extend an efficient technique that indirectly matches neuronal currents derived from somatic membrane potential data to two-compartment model neurons with passive dendrites. In consequence, this approach can fit semi-realistic detailed model neurons in a few minutes. For validation, fits are obtained to model-derived data for various thalamo-cortical neuron types, including fast/regular spiking and bursting neurons. A key aspect of the validation is sensitivity testing to perturbations arising in experimental data, including sampling rates, inadequately estimated membrane dynamics/channel kinetics and intrinsic noise. We find that maximal conductance estimates and the resulting membrane potential fits diverge smoothly and monotonically from near-perfect matches when unperturbed. Curiously, some perturbations have little effect on the error because they are compensated by the fitted maximal conductances. Therefore, the extended current-based technique applies well under moderately inaccurate model assumptions, as required for application to experimental data. Furthermore, the accompanying perturbation analysis gives insights into neuronal homeostasis, whereby tuning intrinsic neuronal properties can compensate changes from development or neurodegeneration.

  6. Atmospheric Properties Of T Dwarfs Inferred From Model Fits At Low Spectral Resolution

    NASA Astrophysics Data System (ADS)

    Giorla Godfrey, Paige A.; Rice, Emily L.; Filippazzo, Joseph C.; Douglas, Stephanie E.

    2016-09-01

    Brown dwarf spectral types (M, L, T, Y) correlate with spectral morphology, and generally appear to correspond with decreasing mass and effective temperature (Teff). Model fits to observed spectra suggest, however, that spectral subclasses do not share this monotonic temperature correlation, indicating that secondary parameters (gravity, metallicity, dust) significantly influence spectral morphology. We seek to disentangle the fundamental parameters that underlie the spectral type sequence of the coolest fully populated spectral class of brown dwarfs using atmosphere models. We investigate the relationship between spectral type and best-fit model parameters for a sample of over 150 T dwarfs with low-resolution (R ~ 75-100) near-infrared (~0.8-2.5 micron) SpeX Prism spectra. We use synthetic spectra from four model grids (Saumon & Marley 2008, Morley+ 2012, Saumon+ 2012, BT Settl 2013) and a Markov-Chain Monte Carlo (MCMC) analysis to determine robust best-fit parameters and their uncertainties. We compare the consistency of each model grid by performing our analysis on the full spectrum and also on individual wavelength bands (Y, J, H, K). We find more consistent results between the J-band and full-spectrum fits, and our best-fit spectral type-Teff results agree with the polynomial relationships of Stephens+ 2009 and Filippazzo+ 2015 using bolometric luminosities. Our analysis constitutes the most extensive low-resolution T dwarf model comparison to date, and lays the foundation for interpretation of cool brown dwarf and exoplanet spectra.

  7. Is Model Fitting Necessary for Model-Based fMRI?

    PubMed

    Wilson, Robert C; Niv, Yael

    2015-06-01

    Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.

  8. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is central to model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to describe the distribution of the transformed errors. The BC-GED model unifies all recent distance-based goodness-of-fit indicators and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) implicitly assume that model errors follow a zero-mean Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted with the BC-GED model; for example, the sensitivity to high flows of indicators with a large power of model errors results from the low probability assigned to large errors in the assumed error distribution. To assess the effect of the BC-GED parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators shows that the indicators separate clearly into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the β > 1 indicators captures high flows very well but reproduces baseflow poorly, whereas calibration with the β ≤ 1 indicators reproduces baseflow very well, because the larger the value of β, the greater emphasis is put on
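
    The following sketch illustrates the unified objective described above with hypothetical flows: Box-Cox transform both series, then penalize the transformed errors with power β, so that β = 2 corresponds to an MSE-type (Gaussian) criterion and β = 1 to an MAE-type (Laplace) criterion; constants of the GED likelihood are dropped.

    import numpy as np

    def boxcox(y, lam):
        return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

    def bc_ged_objective(obs, sim, lam=0.3, beta=1.0):
        """Mean |e|**beta of Box-Cox transformed model errors (scale constants dropped)."""
        e = boxcox(sim, lam) - boxcox(obs, lam)
        return np.mean(np.abs(e) ** beta)

    obs = np.array([12.0, 30.0, 5.0, 80.0, 2.5, 15.0])   # observed flows (hypothetical)
    sim = np.array([10.0, 33.0, 6.0, 60.0, 3.0, 14.0])   # simulated flows (hypothetical)

    for beta in (0.5, 1.0, 2.0):
        print(f"beta={beta}: objective = {bc_ged_objective(obs, sim, beta=beta):.4f}")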

  9. Bohr model and dimensional scaling analysis of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Urtekin, Kerim

    It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to many-electron systems such as molecules and nonhydrogenic atoms. It is the central theme of this dissertation to display, with examples and applications, the implementation of a simple and successful extension of Bohr's planetary model of the hydrogenic atom, which has recently been developed by an atomic and molecular theory group from Texas A&M University. This "extended" Bohr model, which can be derived from quantum mechanics using the well-known dimensional scaling technique, is used to yield potential energy curves of H2 and several more complicated molecules, such as LiH, Li2, BeH, He2 and H3, with accuracies strikingly comparable to those obtained from the more lengthy and rigorous "ab initio" computations, and with the added advantage that it provides a rather insightful and pictorial description of how electrons behave to form chemical bonds, a theme not central to "ab initio" quantum chemistry. Further investigation directed to CH, and the four-atom system H4 (with both linear and square configurations), via the interpolated Bohr model and the constrained Bohr model (with an effective potential), respectively, is reported. The extended model is also used to calculate correlation energies. The model is readily applicable to the study of molecular species in the presence of strong magnetic fields, as is the case in the vicinities of white dwarfs and neutron stars. We find that the magnetic field increases the binding energy and decreases the bond length. Finally, an elaborative review of doubly coupled quantum dots for a derivation of the electron exchange energy, a straightforward application of the Heitler-London method of quantum molecular chemistry, concludes the dissertation. The highlights of the research are (1) a bridging together of the pre- and post-quantum-mechanical descriptions of the chemical bond (Bohr-Sommerfeld vs. Heisenberg-Schrodinger), and

  10. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background: To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods: We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results: For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions: Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
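
    The Pareto-frontier idea can be sketched in a few lines (hypothetical per-target errors; lower is better): keep every input set that is not dominated, i.e. for which no other set is at least as good on all calibration targets and strictly better on at least one.

    import numpy as np

    # rows = candidate input sets, columns = per-target calibration errors (hypothetical)
    errors = np.array([[0.10, 0.80],
                       [0.20, 0.30],
                       [0.50, 0.20],
                       [0.60, 0.60],    # dominated by the second row
                       [0.05, 0.95]])

    def pareto_frontier(err):
        n = err.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            others = err[np.arange(n) != i]
            # dominated if some other set is <= on every target and < on at least one
            dominated = np.any(np.all(others <= err[i], axis=1) &
                               np.any(others < err[i], axis=1))
            keep[i] = not dominated
        return np.flatnonzero(keep)

    print("indices of input sets on the Pareto frontier:", pareto_frontier(errors))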

  11. Embedded-atom method potential for modeling hydrogen and hydrogen-defect interaction in tungsten.

    PubMed

    Wang, Li-Fang; Shu, Xiaolin; Lu, Guang-Hong; Gao, Fei

    2017-08-17

    An embedded-atom method potential has been developed for modeling hydrogen in body-centered-cubic (bcc) tungsten by fitting to an extensive database of density functional theory (DFT) calculations. Comprehensive evaluations of the new potential are conducted by comparing various hydrogen properties with DFT calculations and available experimental data, as well as with all the other tungsten-hydrogen potentials. The new potential accurately reproduces the point defect properties of hydrogen, the interaction among hydrogen atoms, the interplay between hydrogen and a monovacancy, and the thermal diffusion of hydrogen in tungsten. The successful validation of the new potential confirms its good reliability and transferability, which enables large-scale atomistic simulations of the tungsten-hydrogen system. The new potential is afterward employed to investigate the interplay between hydrogen and other defects, including [111] self-interstitial atoms (SIAs) and vacancy clusters in tungsten. It is found that both the [111] SIAs and the vacancy clusters exhibit considerable attraction for hydrogen. Hydrogen solution and diffusion in strained tungsten are also studied using the present potential, which demonstrates that tensile (compressive) stress facilitates (impedes) hydrogen solution, and isotropic tensile (compressive) stress impedes (facilitates) hydrogen diffusion while anisotropic tensile (compressive) stress facilitates (impedes) hydrogen diffusion. © 2017 IOP Publishing Ltd.

  12. Soft X-ray spectral fits of Geminga with model neutron star atmospheres

    NASA Technical Reports Server (NTRS)

    Meyer, R. D.; Pavlov, G. G.; Meszaros, P.

    1994-01-01

    The spectrum of the soft X-ray pulsar Geminga consists of two components, a softer one which can be interpreted as thermal-like radiation from the surface of the neutron star, and a harder one interpreted as radiation from a polar cap heated by relativistic particles. We have fitted the soft spectrum using a detailed magnetized hydrogen atmosphere model. The fitting parameters are the hydrogen column density, the effective temperature T_eff, the gravitational redshift z, and the distance-to-radius ratio, for different values of the magnetic field B. The best fits for this model are obtained when B is less than or approximately 1 x 10^12 G and z lies on the upper boundary of the explored range (z = 0.45). The values of T_eff, approximately (2-3) x 10^5 K, are a factor of 2-3 lower than the value of T_eff obtained for blackbody fits with the same z. The lower T_eff increases the compatibility with some proposed schemes for fast neutrino cooling of neutron stars (NSs) by the direct Urca process or by exotic matter, but conventional cooling cannot be excluded. The hydrogen atmosphere fits also imply a smaller distance to Geminga than that inferred from a blackbody fit. An accurate evaluation of the distance would require a better knowledge of the ROSAT Position Sensitive Proportional Counter (PSPC) response to the low-energy region of the incident spectrum. Our modeling of the soft component with a cooler magnetized atmosphere also implies that the hard-component fit requires a characteristic temperature which is higher (by a factor of approximately 2-3) and a surface area which is smaller (by a factor of 10^3), compared to previous blackbody fits.

  14. Finite population size effects in quasispecies models with single-peak fitness landscape

    NASA Astrophysics Data System (ADS)

    Saakian, David B.; Deem, Michael W.; Hu, Chin-Kun

    2012-04-01

    We consider finite population size effects for the Crow-Kimura and Eigen quasispecies models with a single-peak fitness landscape. We accurately formulate the iteration procedure for the finite population models, then derive the Hamilton-Jacobi equation (HJE) to describe the dynamics of the probability distribution. The steady-state solution of the HJE gives the variance of the mean fitness. Our results are useful for understanding for which virus population sizes the infinite-population models can give reliable results for biological evolution problems.

  15. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of neurosciences, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA) and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior in this particular application than the gradient following methods. This is likely to be the case in many further complex systems, as are often found in neuroscience.
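
    The comparison can be illustrated with a toy problem (hypothetical tuning-curve data and a 3-parameter Gaussian stand-in for the 9-parameter model): a local gradient-based optimizer started from a poor guess may stall in a local minimum, while SciPy's differential evolution needs only parameter bounds.

    import numpy as np
    from scipy.optimize import minimize, differential_evolution

    rng = np.random.default_rng(2)
    x = np.linspace(-90, 90, 37)                              # stimulus orientation (deg)
    resp = 20.0 * np.exp(-0.5 * ((x - 15.0) / 18.0) ** 2) + rng.normal(0, 1.0, x.size)

    def sse(p):
        amp, mu, sigma = p
        return np.sum((resp - amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)) ** 2)

    gf = minimize(sse, x0=[5.0, -60.0, 5.0], method="BFGS")   # poor start, may stall
    ea = differential_evolution(sse, bounds=[(0, 50), (-90, 90), (1, 60)], seed=2)
    print("gradient-based fit:", np.round(gf.x, 2), " SSE:", round(float(gf.fun), 1))
    print("evolutionary fit:  ", np.round(ea.x, 2), " SSE:", round(float(ea.fun), 1))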

  16. First-Order Quantum Phase Transition for Dicke Model Induced by Atom-Atom Interaction

    NASA Astrophysics Data System (ADS)

    Zhao, Xiu-Qin; Liu, Ni; Liang, Jiu-Qing

    2017-05-01

    In this article, we use the spin coherent state transformation and the ground-state variational method to calculate the ground-state wave function theoretically. To examine the influence of the atom-atom interaction on the ground-state properties of the extended Dicke model, the mean photon number, the scaled atomic population and the average ground-state energy are displayed. Treating the atom-atom interaction with self-consistent field theory, we find that the system undergoes a first-order quantum phase transition from the normal phase to the superradiant phase, whereas without the atom-atom interaction it exhibits the well-known second-order Dicke-type quantum phase transition. Moreover, the atom-atom interaction shifts the phase transition point to a lower atom-photon collective coupling strength. Supported by the National Natural Science Foundation of China under Grant Nos. 11275118, 11404198, 91430109, 61505100, 51502189, and the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi Province (STIP) under Grant No. 2014102, and the Launch of the Scientific Research of Shanxi University under Grant No. 011151801004, and the National Fundamental Fund of Personnel Training under Grant No. J1103210. The Natural Science Foundation of Shanxi Province under Grant No. 2015011008

  17. Modeling Emission of Heavy Energetic Neutral Atoms from the Heliosphere

    NASA Astrophysics Data System (ADS)

    Swaczyna, Paweł; Bzowski, Maciej

    2017-09-01

    Observations of energetic neutral atoms (ENAs) are a fruitful tool for remote diagnosis of the plasma in the heliosphere and its vicinity. So far, instruments detecting ENAs from the heliosphere were configured for observations of hydrogen atoms. Here, we estimate emissions of ENAs of the heavy chemical elements helium, oxygen, nitrogen, and neon. A large portion of the heliospheric ENAs is created in the inner heliosheath from neutralized interstellar pick-up ions (PUIs). We modeled this process and calculated full-sky intensities of ENAs for energies 0.2–130 keV/nuc. We found that the largest fluxes among considered species are expected for helium, smaller for oxygen and nitrogen, and smallest for neon. The obtained intensities are 50–10^6 times smaller than the hydrogen ENA intensities observed by IBEX. The detection of heavy ENAs will be possible if a future ENA detector is equipped with the capability to measure the masses of observed atoms. Because of different reaction cross-sections among the different species, observations of heavy ENAs can allow for a better understanding of global structure of the heliosphere as well as the transport and energization of PUIs in the heliosphere.

  18. Assessment of Some Atomization Models Used in Spray Calculations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Bulzin, Dan

    2011-01-01

    The paper presents the results from a validation study undertaken as a part of NASA's fundamental aeronautics initiative on high-altitude emissions in order to assess the accuracy of several atomization models used in both non-superheat and superheat spray calculations. As a part of this investigation we have undertaken the validation based on four different cases to investigate the spray characteristics of (1) a flashing jet generated by the sudden release of pressurized R134A from a cylindrical nozzle, (2) a liquid jet atomizing in a subsonic cross flow, (3) a Parker-Hannifin pressure swirl atomizer, and (4) a single-element Lean Direct Injector (LDI) combustor experiment. These cases were chosen because of their importance in some aerospace applications. The validation is based on some 3D and axisymmetric calculations involving both reacting and non-reacting sprays. In general, the predicted results provide reasonable agreement for both mean droplet sizes (D32) and average droplet velocities, but mostly underestimate the droplet sizes in the inner radial region of a cylindrical jet.

  19. Modelling laser-atom interactions in the strong field regime

    NASA Astrophysics Data System (ADS)

    Galstyan, Alexander; Popov, Yuri V.; Mota-Furtado, Francisca; O'Mahony, Patrick F.; Janssens, Noël; Jenkins, Samuel D.; Chuluunbaatar, Ochbadrakh; Piraux, Bernard

    2017-04-01

    We consider the ionisation of atomic hydrogen by a strong infrared field. We extend and study in more depth an existing semi-analytical model. Starting from the time-dependent Schrödinger equation in momentum space and in the velocity gauge we substitute the kernel of the non-local Coulomb potential by a sum of N separable potentials, each of them supporting one hydrogen bound state. This leads to a set of N coupled one-dimensional linear Volterra integral equations to solve. We analyze the gauge problem for the model, the different ways of generating the separable potentials and establish a clear link with the strong field approximation which turns out to be a limiting case of the present model. We calculate electron energy spectra as well as the time evolution of electron wave packets in momentum space. We compare and discuss the results obtained with the model and with the strong field approximation and examine in this context the role of excited states. Contribution to the Topical Issue "Many Particle Spectroscopy of Atoms, Molecules, Clusters and Surfaces", edited by A.N. Grum-Grzhimailo, E.V. Gryzlova, Yu V. Popov, and A.V. Solov'yov.

  20. Quantum Rabi model in the Brillouin zone with ultracold atoms

    NASA Astrophysics Data System (ADS)

    Felicetti, Simone; Rico, Enrique; Sabin, Carlos; Ockenfels, Till; Koch, Johannes; Leder, Martin; Grossert, Christopher; Weitz, Martin; Solano, Enrique

    2017-01-01

    The quantum Rabi model describes the interaction between a two-level quantum system and a single bosonic mode. We propose a method to perform a quantum simulation of the quantum Rabi model, introducing an implementation of the two-level system provided by the occupation of Bloch bands in the first Brillouin zone by ultracold atoms in tailored optical lattices. The effective qubit interacts with a quantum harmonic oscillator implemented in an optical dipole trap. Our realistic proposal allows one to experimentally investigate the quantum Rabi model for extreme parameter regimes, which are not achievable with natural light-matter interactions. When the simulated wave function exceeds the validity region of the simulation, we identify a generalized version of the quantum Rabi model in a periodic phase space.

  1. Back Fitting, Multi-Model Ensembles and Post-Normal Science

    NASA Astrophysics Data System (ADS)

    Rogers, N. L.

    2011-12-01

    The IPCC projections/predictions of the future climate rest on a dual foundation: 1) the climate models are calibrated and tested by back fitting them to the observed 20th century climate, and 2) the model outputs from various modeling groups are averaged together into a multi-model ensemble. There are some problems. It is well known that a good back fit to historical data is not necessarily proof that a model can predict the future, and climate models cannot be tested against the future in the near term. The IPCC further fogs the situation by taking the extraordinary step of allowing each climate modeling group to invent its own historical 20th century data, for example aerosol history and forcing. (1) Inconsistent physical assumptions concerning the physics of the Earth's climate are also permitted, for example ocean heat storage and coupling of the oceans and the atmosphere. (2) The use of multi-model ensembles is justified principally on the grounds that the ensemble gives a better back fit than any individual model. This is itself confused, since the different models use different 20th century data sets. Further, the better fit to some historical data, for example temperature, is easily explained by elementary curve-fitting theory, a more economical explanation than the one offered, translated from IPCC jargon, that it works for reasons not fully understood. Post-normal science is the doctrine that it is permissible to manipulate science in the pursuit of political goals; in other words, the political goals are so important that it is permissible to fudge the science to promote them. Of course the advocates of post-normal science have a much longer and more persuasive explanation, but that is the essence. One must ask whether the IPCC is practicing post-normal science.

  2. Multiple likelihood estimation for calibration: tradeoffs in goodness-of-fit metrics for watershed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Price, K.; Purucker, T.; Kraemer, S.; Babendreier, J. E.

    2011-12-01

    Four nested sub-watersheds (21 to 10100 km^2) of the Neuse River in North Carolina are used to investigate calibration tradeoffs in goodness-of-fit metrics using multiple likelihood methods. Calibration of watershed hydrologic models is commonly achieved by optimizing a single goodness-of-fit metric to characterize simulated versus observed flows (e.g., R^2 and Nash-Sutcliffe Efficiency Coefficient, or NSE). However, each of these objective functions heavily weights a particular aspect of streamflow. For example, NSE and R^2 both emphasize high flows in evaluating simulation fit, while the Modified Nash-Sutcliffe Efficiency Coefficient (MNSE) emphasizes low flows. Other metrics, such as the ratio of the simulated versus observed flow standard deviations (SDR), prioritize overall flow variability. In this comparison, we use informal likelihood methods to investigate the tradeoffs of calibrating streamflow on three standard goodness-of-fit metrics (NSE, MNSE, and SDR), as well as an index metric that equally weights these three objective functions to address a range of flow characteristics. We present a flexible method that allows calibration targets to be determined by modeling goals. In this process, we begin by using Latin Hypercube Sampling (LHS) to reduce the simulations required to explore the full parameter space. The correlation structure of a large suite of goodness-of-fit metrics is explored to select metrics for use in an index function that incorporates a range of flow characteristics while avoiding redundancy. An iterative informal likelihood procedure is used to narrow parameter ranges after each simulation set to areas of the range with the most support from the observed data. A stopping rule is implemented to characterize the overall goodness-of-fit associated with the parameter set for each pass, with the best-fit pass distributions used as the calibrated set for the next simulation set. This process allows a great deal of flexibility. The process is
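
    For reference, a sketch of the three metrics named above and an equally weighted index (hypothetical flows; the MNSE here is the common absolute-error variant, and the index construction is illustrative rather than necessarily the study's exact form):

    import numpy as np

    def nse(obs, sim):
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def mnse(obs, sim):
        # modified NSE with absolute errors, which emphasizes low flows
        return 1.0 - np.sum(np.abs(obs - sim)) / np.sum(np.abs(obs - obs.mean()))

    def sdr(obs, sim):
        # ratio of simulated to observed flow standard deviations (1 is ideal)
        return sim.std() / obs.std()

    obs = np.array([5.0, 7.0, 40.0, 12.0, 3.0, 2.5, 60.0, 8.0])
    sim = np.array([6.0, 6.5, 35.0, 14.0, 3.5, 3.0, 50.0, 9.0])

    index = (nse(obs, sim) + mnse(obs, sim) + (1.0 - abs(1.0 - sdr(obs, sim)))) / 3.0
    print(f"NSE={nse(obs, sim):.3f}  MNSE={mnse(obs, sim):.3f}  "
          f"SDR={sdr(obs, sim):.3f}  index={index:.3f}")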

  3. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  4. Status Characteristics and Expectation States: Fitting and Testing a Recent Model.

    ERIC Educational Resources Information Center

    Fox, John; Moore, James C., Jr.

    1979-01-01

    Fourteen experimental studies were reviewed using linear model of Berger et al. The model fits the data from these experiments remarkably well. These results demonstrate the utility and apparent validity of this theory of status-organizing processes. (Author/RD)

  5. Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses

    ERIC Educational Resources Information Center

    Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu

    2011-01-01

    Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…

  6. Parameter Recovery and Model Fit Using Multidimensional Composites: A Comparison of Four Empirical Parceling Algorithms

    ERIC Educational Resources Information Center

    Rogers, William M.; Schmitt, Neal

    2004-01-01

    Manifest variables in covariance structure analysis are often combined to form parcels for use as indicators in a measurement model. The purpose of the present study was to evaluate four empirical algorithms for creating such parcels, focusing on the effects of dimensionality on accuracy of parameter estimation and model fit. Results suggest that…

  7. Genetic Model Fitting in IQ, Assortative Mating & Components of IQ Variance.

    ERIC Educational Resources Information Center

    Capron, Christiane; Vetta, Adrian R.; Vetta, Atam

    1998-01-01

    The biometrical school of scientists who fit models to IQ data traces their intellectual ancestry to R. Fisher (1918), but their genetic models have no predictive value. Fisher himself was critical of the concept of heritability, because assortative mating, such as for IQ, introduces complexities into the study of a genetic trait. (SLD)

  8. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  12. A Short Commentary on "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    Gentry, Marcia

    2010-01-01

    This article presents the author's brief comment on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib (2010) takes the reader through an interesting history of human innovation and processes and situates his theory within a productivist model. The deliberate attention to…

  13. Automated fit quantification of tibial nail designs during the insertion using computer three-dimensional modelling.

    PubMed

    Amarathunga, Jayani P; Schuetz, Michael A; Yarlagadda, Prasad Kvd; Schmutz, Beat

    2014-12-01

    Intramedullary nailing is the standard fixation method for displaced diaphyseal fractures of the tibia. An optimal nail design should both facilitate insertion and anatomically fit the bone geometry at its final position in order to reduce the risk of stress fractures and malalignments. Due to the nonexistence of suitable commercial software, we developed a software tool for the automated fit assessment of nail designs. Furthermore, we demonstrated that an optimised nail, which fits better at the final position, is also easier to insert. Three-dimensional models of two nail designs and 20 tibiae were used. The fitting was quantified in terms of surface area, maximum distance, sum of surface areas and sum of maximum distances by which the nail was protruding into the cortex. The software was programmed to insert the nail into the bone model and to quantify the fit at defined increment levels. On average, the misfit during the insertion in terms of the four fitting parameters was smaller for the Expert Tibial Nail Proximal bend (476.3 mm², 1.5 mm, 2029.8 mm², 6.5 mm) than the Expert Tibial Nail (736.7 mm², 2.2 mm, 2491.4 mm², 8.0 mm). The differences were statistically significant (p ≤ 0.05). The software could be used by nail implant manufacturers for the purpose of implant design validation.

  14. Model dependence of single-energy fits to pion photoproduction data

    NASA Astrophysics Data System (ADS)

    Workman, R. L.; Paris, M. W.; Briscoe, W. J.; Tiator, L.; Schumann, S.; Ostrick, M.; Kamalov, S. S.

    2011-11-01

    Model dependence of multipole analysis has been explored through energy-dependent and single-energy fits to pion photoproduction data. The MAID energy-dependent solution has been used as input for an event generator producing realistic pseudo data. These were fitted using the SAID parametrization approach to determine single-energy and energy-dependent solutions over a range of lab photon energies from 200 to 1200 MeV. The resulting solutions were found to be consistent with the input amplitudes from MAID. Fits with a χ-squared per datum of unity or less were generally achieved. We discuss energy regions where consistent results are expected, and explore the sensitivity of fits to the number of included single- and double-polarization observables. The influence of Watson's theorem is examined in detail.

  15. Development and design of a late-model fitness test instrument based on LabView

    NASA Astrophysics Data System (ADS)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating our nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical plans for exercising according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, built on LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featured by modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  16. An all-atom structure-based potential for proteins: bridging minimal models with all-atom empirical forcefields.

    PubMed

    Whitford, Paul C; Noel, Jeffrey K; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y; Onuchic, José N

    2009-05-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Go) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase, and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a C(alpha) structure-based model and an all-atom empirical forcefield. Key findings include: (1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature, (2) folding mechanisms are robust to variations of the energetic parameters, (3) protein folding free-energy barriers can be manipulated through parametric modifications, (4) the global folding mechanisms in a C(alpha) model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model, and (5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Because this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function.

  17. An All-atom Structure-Based Potential for Proteins: Bridging Minimal Models with All-atom Empirical Forcefields

    PubMed Central

    Whitford, Paul C.; Noel, Jeffrey K.; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y.; Onuchic, José N.

    2012-01-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Gō) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a Cα structure-based model and an all-atom empirical forcefield. Key findings include 1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature 2) folding mechanisms are robust to variations of the energetic parameters 3) protein folding free energy barriers can be manipulated through parametric modifications 4) the global folding mechanisms in a Cα model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model 5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Since this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function. PMID:18837035

  18. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  19. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    PubMed Central

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  20. Space charge modeling in electron-beam irradiated polyethylene: Fitting model and experiments

    SciTech Connect

    Le Roy, S.; Laurent, C.; Teyssedre, G.; Baudoin, F.; Griseri, V.

    2012-07-15

    A numerical model for describing charge accumulation in electron-beam irradiated low density polyethylene has been put forward recently. It encompasses the generation of positive and negative charges due to impinging electrons and their transport in the insulation. However, the model was not optimized to fit all the data available regarding space charge dynamics obtained using up-to-date pulsed electro-acoustic techniques. In the present approach, model outputs are compared with experimental space charge distributions obtained during irradiation and post-irradiation, the irradiated samples being in short circuit conditions or with the irradiated surface at a floating potential. A unique set of parameters has been used for all the simulations, and it encompasses the transport parameters already optimized for charge transport in polyethylene under an external electric field. The evolution of the model consists in describing the recombination between positive and negative charges according to the Langevin formula, which is physically more accurate than the previous description and has the advantage of reducing the number of adjustable parameters of the model. This also provides a better description of the experimental behavior, underlining the importance of recombination processes in irradiated materials.
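
    A minimal sketch (not the authors' code) of the Langevin recombination coefficient invoked above, R = e(mu_p + mu_n)/epsilon; the mobility and permittivity values below are illustrative placeholders rather than fitted parameters.

      E_CHARGE = 1.602e-19   # elementary charge, C
      EPS0 = 8.854e-12       # vacuum permittivity, F/m

      def langevin_coefficient(mu_pos, mu_neg, eps_r=2.3):
          """Langevin recombination coefficient (m^3/s); mobilities in m^2/(V s).
          eps_r is the relative permittivity (2.3 is typical for polyethylene)."""
          return E_CHARGE * (mu_pos + mu_neg) / (EPS0 * eps_r)

      # Illustrative mobilities for carriers in LDPE (order of magnitude only).
      print(langevin_coefficient(1e-14, 1e-14))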

  1. Space charge modeling in electron-beam irradiated polyethylene: Fitting model and experiments

    NASA Astrophysics Data System (ADS)

    Le Roy, S.; Baudoin, F.; Griseri, V.; Laurent, C.; Teyssèdre, G.

    2012-07-01

    A numerical model for describing charge accumulation in electron-beam irradiated low density polyethylene has been put forward recently. It encompasses the generation of positive and negative charges due to impinging electrons and their transport in the insulation. However, the model was not optimized to fit all the data available regarding space charge dynamics obtained using up-to-date pulsed electro-acoustic techniques. In the present approach, model outputs are compared with experimental space charge distributions obtained during irradiation and post-irradiation, the irradiated samples being in short circuit conditions or with the irradiated surface at a floating potential. A unique set of parameters has been used for all the simulations, and it encompasses the transport parameters already optimized for charge transport in polyethylene under an external electric field. The evolution of the model consists in describing the recombination between positive and negative charges according to the Langevin formula, which is physically more accurate than the previous description and has the advantage of reducing the number of adjustable parameters of the model. This also provides a better description of the experimental behavior, underlining the importance of recombination processes in irradiated materials.

  2. Modelling metabolic evolution on phenotypic fitness landscapes: a case study on C4 photosynthesis.

    PubMed

    Heckmann, David

    2015-12-01

    How did the complex metabolic systems we observe today evolve through adaptive evolution? The fitness landscape is the theoretical framework to answer this question. Since experimental data on natural fitness landscapes is scarce, computational models are a valuable tool to predict landscape topologies and evolutionary trajectories. Careful assumptions about the genetic and phenotypic features of the system under study can simplify the design of such models significantly. The analysis of C4 photosynthesis evolution provides an example for accurate predictions based on the phenotypic fitness landscape of a complex metabolic trait. The C4 pathway evolved multiple times from the ancestral C3 pathway and models predict a smooth 'Mount Fuji' landscape accordingly. The modelled phenotypic landscape implies evolutionary trajectories that agree with data on modern intermediate species, indicating that evolution can be predicted based on the phenotypic fitness landscape. Future directions will have to include changes in metabolic fitness landscape structure with changing environments. This will not only answer important evolutionary questions about reversibility of metabolic traits, but also suggest strategies to increase crop yields by engineering the C4 pathway into C3 plants. © 2015 Authors; published by Portland Press Limited.

  3. Shot model parameters for Cygnus X-1 through phase portrait fitting

    NASA Technical Reports Server (NTRS)

    Lochner, James C.; Swank, J. H.; Szymkowiak, A. E.

    1991-01-01

    Shot models for systems having about 1/f power density spectrum are developed by utilizing a distribution of shot durations. Parameters of the distribution are determined by fitting the power spectrum either with analytic forms for the spectrum of a shot model with a given shot profile, or with the spectrum derived from numerical realizations of trial shot models. The shot fraction is specified by fitting the phase portrait, which is a plot of intensity at a given time versus intensity at a delayed time and in principle is sensitive to different shot profiles. These techniques have been extensively applied to the X-ray variability of Cygnus X-1, using HEAO 1 A-2 and an Exosat ME observation. The power spectra suggest models having characteristic shot durations lasting from milliseconds to a few seconds, while the phase portrait fits give shot fractions of about 50 percent. Best fits to the portraits are obtained if the amplitude of the shot is a power-law function of the duration of the shot. These fits prefer shots having a symmetric exponential rise and decay. Results are interpreted in terms of a distribution of magnetic flares in the accretion disk.
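
    A hedged illustration of the shot-model construction described above, with invented parameter values rather than the HEAO 1 or Exosat analysis: synthesize a light curve from exponentially decaying shots whose durations span a broad distribution and whose amplitudes scale as a power of duration, then estimate its power spectrum.

      import numpy as np

      rng = np.random.default_rng(0)
      dt, n = 0.01, 2**16                          # time resolution (s), number of bins
      t = np.arange(n) * dt
      n_shots = 3000                               # illustrative number of shots
      durations = 10 ** rng.uniform(-2, 0.5, n_shots)   # ~0.01 s to ~3 s
      starts = rng.uniform(0, n * dt, n_shots)

      lc = np.zeros(n)
      for t0, tau in zip(starts, durations):
          i0 = int(t0 / dt)
          i1 = min(n, i0 + int(10 * tau / dt) + 1)       # truncate the exponential tail
          amp = tau ** 0.5                         # amplitude as a power of duration
          lc[i0:i1] += amp * np.exp(-(t[i0:i1] - t0) / tau)

      freq = np.fft.rfftfreq(n, dt)[1:]
      psd = np.abs(np.fft.rfft(lc - lc.mean()))[1:] ** 2
      print(freq[:3], psd[:3])                     # roughly 1/f-like at low frequencies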

  4. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    PubMed

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAMs fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, outperforming other methods while having superior convergence properties.

  5. Fitting of adaptive neuron model to electrophysiological recordings using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Zhang, Lvxia; Deng, Bin; Wei, Xile

    2017-02-01

    In order to fit a neural model’s spiking features to electrophysiological recordings, in this paper, a fitting framework based on the particle swarm optimization (PSO) algorithm is proposed to estimate the model parameters of an augmented multi-timescale adaptive threshold (AugMAT) model. PSO is an advanced evolutionary computation method based on iteration. Selecting a reasonable criterion function will ensure the effectiveness of the PSO algorithm. In this work, firing rate information is used as the main spiking feature and the estimation error of the firing rate is selected as the criterion for fitting. A series of simulations are presented to verify the performance of the framework. The first step is model validation; an artificial training data set is introduced to test the fitting procedure. Then we discuss suitable PSO parameters, which strike an adequate compromise between speed and accuracy. Lastly, the framework is used to fit the electrophysiological recordings; after three adjustment steps, the features of the experimental data are translated into a realistic spiking neuron model.
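
    A minimal particle swarm optimization sketch under stated assumptions: the AugMAT model itself is not reproduced here; a hypothetical stand-in rate function, toy_rate, is fitted by minimizing a firing-rate error criterion, mirroring the criterion choice described above.

      import numpy as np

      rng = np.random.default_rng(1)

      def toy_rate(params, stim):
          # Hypothetical stand-in for the adaptive-threshold model's rate response.
          gain, thresh = params
          return np.maximum(0.0, gain * (stim - thresh))

      def criterion(params, stim, target_rate):
          # Firing-rate estimation error used as the PSO criterion function.
          return np.mean((toy_rate(params, stim) - target_rate) ** 2)

      def pso(obj, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
          lo, hi = np.array(bounds).T
          x = rng.uniform(lo, hi, size=(n_particles, lo.size))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([obj(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      stim = np.linspace(0, 1, 50)
      target = toy_rate([12.0, 0.3], stim)          # synthetic "recording"
      best, err = pso(lambda p: criterion(p, stim, target), bounds=[(0, 50), (0, 1)])
      print(best, err)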

  6. Can a first-order exponential decay model fit heart rate recovery after resistance exercise?

    PubMed

    Bartels-Ferreira, Rhenan; de Sousa, Élder D; Trevizani, Gabriela A; Silva, Lilian P; Nakamura, Fábio Y; Forjaz, Cláudia L M; Lima, Jorge Roberto P; Peçanha, Tiago

    2015-03-01

    The time-constant of postexercise heart rate recovery (HRRτ) obtained by fitting the heart rate decay curve with a first-order exponential has been used to assess cardiac autonomic recovery after endurance exercise. The feasibility of this model was not tested after resistance exercise (RE). The aim of this study was to test the goodness of fit of the first-order exponential decay model to fit heart rate recovery (HRR) after RE. Ten healthy subjects participated in the study. The experimental sessions occurred on two separate days and consisted of performing one set of 10 repetitions at 50% or 80% of the load achieved on the one-repetition maximum test [low-intensity (LI) and high-intensity (HI) sessions, respectively]. Heart rate (HR) was continuously registered before and during exercise and also for 10 min of recovery. A monoexponential equation was used to fit the HRR curve during the postexercise period using different time windows (i.e. 30, 60, 90, … 600 s). For each time window, (i) HRRτ was calculated and (ii) the variation of HR explained by the model (R(2) goodness of fit index) was assessed. The HRRτ showed stabilization from 360 and 420 s on LI and HI, respectively. Acceptable R(2) values were observed from 360 s on LI (R(2) > 0.65) and at all tested time windows on HI (R(2) > 0.75). In conclusion, this study showed that, using a minimum length of monitoring (~420 s), HRR after RE can be adequately modelled by a first-order exponential fitting. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
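
    A minimal sketch with synthetic data (not the study's recordings): fit the recovery curve with HR(t) = HR_end + A exp(-t/tau) and report HRRtau and the R^2 goodness-of-fit index.

      import numpy as np
      from scipy.optimize import curve_fit

      def hr_recovery(t, hr_end, amplitude, tau):
          return hr_end + amplitude * np.exp(-t / tau)

      # Synthetic recovery data standing in for a 10-min recording (values illustrative).
      t = np.arange(0, 600, 5.0)                    # seconds after exercise
      rng = np.random.default_rng(2)
      hr = hr_recovery(t, 75, 60, 80) + rng.normal(0, 2, t.size)

      popt, _ = curve_fit(hr_recovery, t, hr, p0=(80, 50, 60))
      resid = hr - hr_recovery(t, *popt)
      r2 = 1 - np.sum(resid**2) / np.sum((hr - hr.mean())**2)
      print("HRRtau = %.1f s, R^2 = %.3f" % (popt[2], r2))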

  7. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    NASA Astrophysics Data System (ADS)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of the second multiplication curve fitting, the number and scale of Chinese E-commerce sites are analyzed. A preventing increase model is introduced in this paper, and the model parameters are solved with the software Matlab. The validity of the preventing increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing increase model is ideal.

  8. Curve fitting toxicity test data: Which comes first, the dose response or the model?

    SciTech Connect

    Gully, J.; Baird, R.; Bottomley, J.

    1995-12-31

    The probit model frequently does not fit the concentration-response curve of NPDES toxicity test data and non-parametric models must be used instead. The non-parametric models, trimmed Spearman-Karber, IC{sub p}, and linear interpolation, all require a monotonic concentration-response. Any deviation from a monotonic response is smoothed to obtain the desired concentration-response characteristics. Inaccurate point estimates may result from such procedures and can contribute to imprecision in replicate tests. The following study analyzed reference toxicant and effluent data from giant kelp (Macrocystis pyrifera), purple sea urchin (Strongylocentrotus purpuratus), red abalone (Haliotis rufescens), and fathead minnow (Pimephales promelas) bioassays using commercially available curve fitting software. The purpose was to search for alternative parametric models which would reduce the use of non-parametric models for point estimate analysis of toxicity data. Two non-linear models, power and logistic dose-response, were selected as possible alternatives to the probit model based upon their toxicological plausibility and ability to model most data sets examined. Unlike non-parametric procedures, these and all parametric models can be statistically evaluated for fit and significance. The use of the power or logistic dose-response models increased the percentage of parametric model fits for each protocol and toxicant combination examined. The precision of the selected non-linear models was also compared with the EPA recommended point estimation models at several effect levels. In general, precision of the alternative models was equal to or better than the traditional methods. Finally, use of the alternative models usually produced more plausible point estimates in data sets where the effects of smoothing and non-parametric modeling made the point estimate results suspect.
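
    A hedged sketch of the alternative parametric route described above: fit a logistic concentration-response function to bioassay data (the values below are synthetic placeholders, not an NPDES test result) and derive a point estimate such as the EC25 directly from the fitted curve.

      import numpy as np
      from scipy.optimize import curve_fit, brentq

      def logistic(conc, top, ec50, slope):
          return top / (1.0 + (conc / ec50) ** slope)

      conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])    # e.g. percent effluent
      resp = np.array([0.98, 0.93, 0.71, 0.35, 0.08])     # fraction of control response

      popt, _ = curve_fit(logistic, conc, resp, p0=(1.0, 30.0, 2.0))
      top, ec50, slope = popt
      # EC25: concentration giving a 25% reduction from the fitted control response.
      ec25 = brentq(lambda c: logistic(c, *popt) - 0.75 * top, conc.min(), conc.max())
      print("EC50 = %.1f, EC25 = %.1f" % (ec50, ec25))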

  9. Quantum Rabi model for N-state atoms.

    PubMed

    Albert, Victor V

    2012-05-04

    A tractable N-state Rabi Hamiltonian is introduced by extending the parity symmetry of the two-state model. The single-mode case provides a few-parameter description of a novel class of periodic systems, predicting that the ground state of certain four-state atom-cavity systems will undergo parity change at strong-coupling. A group-theoretical treatment provides physical insight into dynamics and a modified rotating wave approximation obtains accurate analytical energies. The dissipative case can be applied to study excitation energy transfer in molecular rings or chains.

  10. The disconnected values model improves mental well-being and fitness in an employee wellness program.

    PubMed

    Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo

    2010-01-01

    This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.

  11. Model fit to experimental data for foam-assisted deep vadose zone remediation.

    PubMed

    Roostapour, A; Lee, G; Zhong, L; Kam, S I

    2014-01-15

    This study investigates how a foam model, developed in Roostapour and Kam [1], can be applied to make a fit to a set of existing laboratory flow experiments in an application relevant to deep vadose zone remediation. This study reveals a few important insights regarding foam-assisted deep vadose zone remediation: (i) the mathematical framework established for foam modeling can fit typical flow experiments matching wave velocities, saturation history, and pressure responses; (ii) the set of input parameters may not be unique for the fit, and therefore conducting experiments to measure basic model parameters related to relative permeability, initial and residual saturations, surfactant adsorption and so on should not be overlooked; and (iii) gas compressibility plays an important role for data analysis, thus should be handled carefully in laboratory flow experiments. Foam kinetics, causing foam texture to reach its steady-state value slowly, may impose additional complications.

  12. Bounds on collapse models from cold-atom experiments

    NASA Astrophysics Data System (ADS)

    Bilardello, Marco; Donadi, Sandro; Vinante, Andrea; Bassi, Angelo

    2016-11-01

    The spontaneous localization mechanism of collapse models induces a Brownian motion in all physical systems. This effect is very weak, but experimental progress in creating ultracold atomic systems can be used to detect it. In this paper, we considered a recent experiment (Kovachy et al., 2015), where an atomic ensemble was cooled down to picokelvins. Any Brownian motion induces an extra increase of the position variance of the gas. We study this effect by solving the dynamical equations for the Continuous Spontaneous Localization (CSL) model, as well as for its non-Markovian and dissipative extensions. The resulting bounds, at a 95% confidence level, are beaten only by measurements of spontaneous X-ray emission and by experiments with cantilevers (in the latter case, only for rC ≥ 10^-7 m, where rC is one of the two collapse parameters of the CSL model). We show that, contrary to the bounds given by X-ray measurements, non-Markovian effects do not change the bounds, for any reasonable choice of a frequency cutoff in the spectrum of the collapse noise. Therefore the bounds here considered are more robust. We also show that dissipative effects are unimportant for a large range of temperatures of the noise, while for low temperatures the excluded region in parameter space shrinks as the temperature decreases.

  13. Fully variational average atom model with ion-ion correlations.

    PubMed

    Starrett, C E; Saumon, D

    2012-02-01

    An average atom model for dense ionized fluids that includes ion correlations is presented. The model assumes spherical symmetry and is based on density functional theory, the integral equations for uniform fluids, and a variational principle applied to the grand potential. Starting from density functional theory for a mixture of classical ions and quantum mechanical electrons, an approximate grand potential is developed, with an external field being created by a central nucleus fixed at the origin. Minimization of this grand potential with respect to electron and ion densities is carried out, resulting in equations for effective interaction potentials. A third condition resulting from minimizing the grand potential with respect to the average ion charge determines the noninteracting electron chemical potential. This system is coupled to a system of point ions and electrons with an ion fixed at the origin, and a closed set of equations is obtained. Solution of these equations results in a self-consistent electronic and ionic structure for the plasma as well as the average ionization, which is continuous as a function of temperature and density. Other average atom models are recovered by application of simplifying assumptions.

  14. Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.

    PubMed

    Mohammed, Goran Abdulrahman; Hou, Ming

    2016-03-01

    The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model along with several other existing models by using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit reasonably well with the simulated and experimental data, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function are better than the other functions in terms of the statistical scores root mean squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6%) for simulated data and (0.2-5%) for experimental data. The proposed asymmetric Gaussian model and the method of parametrization of this and the other force-length models mentioned above can be used in studies on active force-length relationships of skeletal muscles that generate forces to cause movements of human and animal bodies.
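
    A minimal sketch under an assumed functional form and synthetic data (not the rabbit data set): an asymmetric Gaussian with separate width parameters below and above the optimal length, fitted by least squares, with RMSE as the fit score.

      import numpy as np
      from scipy.optimize import curve_fit

      def asym_gaussian(L, f_max, L_opt, w_left, w_right):
          # Separate widths on either side of the optimal length L_opt.
          w = np.where(L < L_opt, w_left, w_right)
          return f_max * np.exp(-((L - L_opt) / w) ** 2)

      L = np.linspace(0.6, 1.6, 21)                 # normalized muscle length
      f = asym_gaussian(L, 1.0, 1.05, 0.25, 0.40) \
          + np.random.default_rng(3).normal(0, 0.02, L.size)

      popt, _ = curve_fit(asym_gaussian, L, f, p0=(1.0, 1.0, 0.3, 0.3))
      rmse = np.sqrt(np.mean((f - asym_gaussian(L, *popt)) ** 2))
      print(popt, rmse)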

  15. A goodness-of-fit test for occupancy models with correlated within-season revisits

    USGS Publications Warehouse

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and

  16. Low energy neutral atoms in the earth's magnetosphere: Modeling

    SciTech Connect

    Moore, K.R.; McComas, D.J.; Funsten, H.O.; Thomsen, M.F.

    1992-01-01

    Detection of low energy neutral atoms (LENAs) produced by the interaction of the Earth's geocorona with ambient space plasma has been proposed as a technique to obtain global information about the magnetosphere. Recent instrumentation advances reported previously and in these proceedings provide an opportunity for detecting LENAs in the energy range of <1 keV to approximately 50 keV. In this paper, we present results from a numerical model which calculates line of sight LENA fluxes expected at a remote orbiting spacecraft for various magnetospheric plasma regimes. This model uses measured charge exchange cross sections, either of two neutral hydrogen geocorona models, and various empirical models of the ring current and plasma sheet to calculate the contribution to the integrated directional flux from each point along the line of sight of the instrument. We discuss implications for LENA imaging of the magnetosphere based on these simulations. 22 refs.

  17. Numerical modeling for primary atomization of liquid jets

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Chuech, S. G.; Singhal, A. K.

    1989-01-01

    In the proposed numerical model for primary atomization, surface-wave dispersion equations are solved in conjunction with the jet-embedding technique of solving mean flow equations of a liquid jet. Linear and approximate nonlinear models have been considered. In each case, the dispersion equation is solved over the whole wavelength spectrum to predict drop sizes, frequency, and liquid-mass breakup rates without using any empirical constants. The present model has been applied to several low-speed and high-speed jets. For the high-speed case (the LOX/H2 coaxial injector of the Space Shuttle Main Engine Preburner), predicted drop sizes and liquid breakup rates are in good agreement with the results of the CICM code, which have been calibrated against measured data.

  18. Mathematical Modeling of Allelopathy. III. A Model for Curve-Fitting Allelochemical Dose Responses.

    PubMed

    Liu, De Li; An, Min; Johnson, Ian R; Lovett, John V

    2003-01-01

    Bioassay techniques are often used to study the effects of allelochemicals on plant processes, and it is generally observed that the processes are stimulated at low allelochemical concentrations and inhibited as the concentrations increase. A simple empirical model is presented to analyze this type of response. The stimulation-inhibition properties of allelochemical-dose responses can be described by the parameters in the model. The indices, p% reductions, are calculated to assess the allelochemical effects. The model is compared with experimental data for the response of lettuce seedling growth to Centaurepensin, the olfactory response of weevil larvae to alpha-terpineol, and the responses of annual ryegrass (Lolium multiflorum Lam.), creeping red fescue (Festuca rubra L., cv. Ensylva), Kentucky bluegrass (Poa pratensis L., cv. Kenblue), perennial ryegrass (L. perenne L., cv. Manhattan), and Rebel tall fescue (F. arundinacea Schreb) seedling growth to leachates of Rebel and Kentucky 31 tall fescue. The results show that the model gives a good description of the observations and can be used to fit a wide range of dose responses. Assessments of the effects of leachates of Rebel and Kentucky 31 tall fescue clearly differentiate the properties of the allelopathic sources and the relative sensitivities of indicators such as the length of root and leaf.
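
    The paper's own equation is not reproduced here; as a stand-in, the sketch below fits a Brain-Cousens-style hormesis curve (stimulation at low dose, inhibition at high dose) to synthetic data, to illustrate curve fitting of this type of dose response.

      import numpy as np
      from scipy.optimize import curve_fit

      def hormesis(x, d, f, e, b):
          # d: control response, f: low-dose stimulation, e: scale, b: steepness.
          return (d + f * x) / (1.0 + (x / e) ** b)

      dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
      resp = np.array([1.00, 1.08, 1.15, 1.02, 0.70, 0.35, 0.12])  # relative root length

      popt, _ = curve_fit(hormesis, dose, resp, p0=(1.0, 0.5, 2.0, 2.0),
                          bounds=([0, 0, 1e-3, 0.1], [10, 10, 100, 10]))
      print(popt)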

  19. Mathematical Modeling of Allelopathy. III. A Model for Curve-Fitting Allelochemical Dose Responses

    PubMed Central

    Liu, De Li; An, Min; Johnson, Ian R.; Lovett, John V.

    2003-01-01

    Bioassay techniques are often used to study the effects of allelochemicals on plant processes, and it is generally observed that the processes are stimulated at low allelochemical concentrations and inhibited as the concentrations increase. A simple empirical model is presented to analyze this type of response. The stimulation-inhibition properties of allelochemical-dose responses can be described by the parameters in the model. The indices, p% reductions, are calculated to assess the allelochemical effects. The model is compared with experimental data for the response of lettuce seedling growth to Centaurepensin, the olfactory response of weevil larvae to α-terpineol, and the responses of annual ryegrass (Lolium multiflorum Lam.), creeping red fescue (Festuca rubra L., cv. Ensylva), Kentucky bluegrass (Poa pratensis L., cv. Kenblue), perennial ryegrass (L. perenne L., cv. Manhattan), and Rebel tall fescue (F. arundinacea Schreb) seedling growth to leachates of Rebel and Kentucky 31 tall fescue. The results show that the model gives a good description of the observations and can be used to fit a wide range of dose responses. Assessments of the effects of leachates of Rebel and Kentucky 31 tall fescue clearly differentiate the properties of the allelopathic sources and the relative sensitivities of indicators such as the length of root and leaf. PMID:19330111

  20. Fitting and circuit modeling to the attenuation characteristics of logging cable

    NASA Astrophysics Data System (ADS)

    Yan, Jingfu; Zou, Qingyan; Liu, Dejun

    2017-09-01

    In order to gain a preliminary understanding of data transmission through a long well-logging cable (about 7000 m), a lumped-parameter circuit simulating the attenuation characteristics of the cable was designed in this paper. Based on actual attenuation measurements at discrete frequency points, an analytic expression for the transfer function of the cable was first obtained by a nonlinear fitting method; a physically realizable circuit model corresponding to this transfer function was then established. According to the simulation results, the circuit model fits the cable attenuation characteristics well.
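
    A hedged sketch of the first step described above, with invented data points and an assumed functional form: fit attenuation versus frequency with the classic two-term expression a*sqrt(f) + b*f (skin effect plus dielectric loss). The authors' actual transfer-function expression and circuit synthesis are not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      def atten_db(f, a, b):
          # Skin-effect (sqrt(f)) plus dielectric (linear in f) loss terms.
          return a * np.sqrt(f) + b * f

      freq = np.array([1e3, 5e3, 1e4, 5e4, 1e5, 2e5])     # Hz
      loss = np.array([1.2, 2.8, 4.1, 9.7, 14.2, 20.5])   # dB over the cable length

      popt, _ = curve_fit(atten_db, freq, loss, p0=(1e-2, 1e-5))
      print(popt)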

  1. Efficient occupancy model-fitting for extensive citizen-science data.

    PubMed

    Dennis, Emily B; Morgan, Byron J T; Freeman, Stephen N; Ridout, Martin S; Brereton, Tom M; Fox, Richard; Powney, Gary D; Roy, David B

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species' range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
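
    A minimal sketch of classical occupancy model-fitting of the kind described above (not the authors' code): occupancy probability is a logistic function of a site covariate, detection probability is constant, and the likelihood is maximized directly; the data are simulated for illustration.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(4)
      n_sites, n_visits = 300, 4
      x = rng.normal(size=n_sites)                  # site covariate
      psi_true = expit(-0.5 + 1.2 * x)
      z = rng.random(n_sites) < psi_true            # true occupancy state
      p_true = 0.4
      y = (rng.random((n_sites, n_visits)) < p_true) & z[:, None]
      det = y.sum(axis=1)                           # detections per site

      def negloglik(theta):
          b0, b1, logit_p = theta
          psi, p = expit(b0 + b1 * x), expit(logit_p)
          lik_occ = psi * p**det * (1 - p)**(n_visits - det)
          lik = lik_occ + (1 - psi) * (det == 0)    # sites never detected may be unoccupied
          return -np.sum(np.log(lik))

      fit = minimize(negloglik, x0=np.zeros(3), method="L-BFGS-B")
      print(fit.x)                                  # estimates of (b0, b1, logit p)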

  2. Efficient occupancy model-fitting for extensive citizen-science data

    PubMed Central

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen

  3. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    PubMed

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.

  4. A comparison of fitting growth models with a genetic algorithm and nonlinear regression.

    PubMed

    Roush, W B; Branton, S L

    2005-03-01

    A genetic algorithm (GA), an optimization procedure based on the theory of evolution, was compared with nonlinear regression for the ability of the 2 algorithms to fit the coefficients of poultry growth models. It was hypothesized that the nonlinear approach of using GA to define the parameters of growth equations would better fit the growth equations than the use of nonlinear regression. Two sets of growth data from the literature, consisting of male broiler BW grown for 168 and 170 d, were used in the study. The growth data were fit to 2 forms of the logistic model, the Gompertz, the Gompertz-Laird, and the saturated kinetic models using the SAS nonlinear algorithm (NLIN) procedure and a GA. There were no statistical differences for the comparison of the residuals (the difference between observed and predicted BWs) of growth models fit by a GA or nonlinear regression. The plotted residuals for the nonlinear regression and GA-determined growth values confirmed observations of others that the residuals have oscillations resembling sine waves that are not represented by the growth models. It was found that GA could successfully determine the coefficients of growth equations. A disadvantage of slowness in converging to the solution was found for the GA. The advantage of GA over traditional nonlinear regression is that only ranges need be specified for the parameters of the growth equations, whereas estimates of the coefficients need to be determined, and in some programs the derivatives of the growth equations need to be identified. Depending on the goal of the research, solving multivariable complex functions with an algorithm that considers several solutions at the same time in an evolutionary mode can be considered an advantage especially where there is a chance for the solution to converge on a local optimum when a global optimum is desired. It was concluded that the fitting of the growth equations was not so much a problem with the fitting methodology as it is
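
    A minimal sketch with synthetic growth data: the Gompertz curve fitted by nonlinear least squares. The same residual-sum-of-squares objective could instead be handed to a genetic algorithm, with only parameter ranges specified, for the comparison described above.

      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, w_max, k, t_i):
          # Asymptotic weight w_max, growth-rate constant k, inflection time t_i.
          return w_max * np.exp(-np.exp(-k * (t - t_i)))

      t = np.arange(0, 170, 7.0)                    # age in days
      w = gompertz(t, 5500, 0.04, 60) \
          + np.random.default_rng(5).normal(0, 50, t.size)

      popt, _ = curve_fit(gompertz, t, w, p0=(5000, 0.05, 50))
      resid = w - gompertz(t, *popt)
      print(popt, np.sqrt(np.mean(resid**2)))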

  5. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    PubMed

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.

  6. A generalized model of atomic processes in dense plasmas

    NASA Astrophysics Data System (ADS)

    Chung, Hyun-Kyung; Chen, M.; Ciricosta, O.; Vinko, S.; Wark, J.; Lee, R. W.

    2015-11-01

    A generalized model of atomic processes in plasmas, FLYCHK, has been developed over a decade to provide experimentalists with fast and simple but reasonable predictions of the atomic properties of plasmas. For a given plasma condition, it provides charge state distributions and spectroscopic properties, which have been extensively used for experimental design and data analysis and are currently available through the NIST web site. In recent years, highly transient and non-equilibrium plasmas have been created with X-ray free electron lasers (XFEL). As high intensity x-rays interact with matter, the inner-shell electrons are ionized and Auger electrons and photoelectrons are generated. With time, electrons participate in the ionization processes and collisional ionization by these electrons dominates photoionization as the electron density increases. To study highly complex XFEL-produced plasmas, SCFLY, an extended version of the FLYCHK code, has been used. The code accepts the time-dependent history of x-ray energy and intensity to compute the population distribution and ionization distribution self-consistently with electron temperature and density, assuming an instantaneous equilibration. The model and its applications to XFEL experiments will be presented as well as its limitations.

  7. Estimating the pi* goodness of fit index for finite mixtures of item response models.

    PubMed

    Revuelta, Javier

    2008-05-01

    Testing the fit of finite mixture models is a difficult task, since asymptotic results on the distribution of likelihood ratio statistics do not hold; for this reason, alternative statistics are needed. This paper applies the pi* goodness of fit statistic to finite mixture item response models. The pi* statistic assumes that the population is composed of two subpopulations - those that follow a parametric model and a residual group outside the model; pi* is defined as the proportion of population in the residual group. The population was divided into two or more groups, or classes. Several groups followed an item response model and there was also a residual group. The paper presents maximum likelihood algorithms for estimating item parameters, the probabilities of the groups and pi*. The paper also includes a simulation study on goodness of recovery for the two- and three-parameter logistic models and an example with real data from a multiple choice test.

  8. Hierarchical Shrinkage Priors and Model Fitting for High-dimensional Generalized Linear Models

    PubMed Central

    Yi, Nengjun; Ma, Shuangge

    2013-01-01

    Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:23192052

  9. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved. Thus the reliability of predicting in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo model predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  10. Comparing PyMorph and SDSS photometry. I. Background sky and model fitting effects

    NASA Astrophysics Data System (ADS)

    Fischer, J.-L.; Bernardi, M.; Meert, A.

    2017-01-01

    A number of recent estimates of the total luminosities of galaxies in the SDSS are significantly larger than those reported by the SDSS pipeline. This is because of a combination of three effects: one is simply a matter of defining the scale out to which one integrates the fit when defining the total luminosity, and amounts on average to ≤0.1 mags even for the most luminous galaxies. The other two are less trivial and tend to be larger; they are due to differences in how the background sky is estimated and what model is fit to the surface brightness profile. We show that PyMorph sky estimates are fainter than those of the SDSS DR7 or DR9 pipelines, but are in excellent agreement with the estimates of Blanton et al. (2011). Using the SDSS sky biases luminosities by more than a few tenths of a magnitude for objects with half-light radii ≥7 arcseconds. In the SDSS main galaxy sample these are typically luminous galaxies, so they are not necessarily nearby. This bias becomes worse when allowing the model more freedom to fit the surface brightness profile. When PyMorph sky values are used, then two component Sersic-Exponential fits to E+S0s return more light than single component deVaucouleurs fits (up to ˜0.2 mag), but less light than single Sersic fits (0.1 mag). Finally, we show that PyMorph fits of Meert et al. (2015) to DR7 data remain valid for DR9 images. Our findings show that, especially at large luminosities, these PyMorph estimates should be preferred to the SDSS pipeline values.
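
    A minimal sketch (not the pymorph pipeline) of the first, "how far to integrate" effect: the fraction of a Sersic model's total flux enclosed within a finite radius, expressed as a magnitude difference relative to the total. The Sersic index and radii below are illustrative.

      import numpy as np
      from scipy.special import gammainc, gammaincinv

      def enclosed_fraction(r_over_re, n):
          """Fraction of total Sersic-model flux inside radius r (in units of r_e)."""
          b_n = gammaincinv(2 * n, 0.5)             # standard b_n definition
          return gammainc(2 * n, b_n * r_over_re ** (1.0 / n))

      for k in (4, 8, 20):                          # integrate the fit out to k effective radii
          frac = enclosed_fraction(k, n=4.0)        # de Vaucouleurs-like profile
          print(f"out to {k} r_e: {frac:.3f} of total flux "
                f"({-2.5 * np.log10(frac):.2f} mag)")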

  11. Comparing pymorph and SDSS photometry - I. Background sky and model fitting effects

    NASA Astrophysics Data System (ADS)

    Fischer, J.-L.; Bernardi, M.; Meert, A.

    2017-05-01

    A number of recent estimates of the total luminosities of galaxies in the SDSS are significantly larger than those reported by the Sloan Digital Sky Survey (SDSS) pipeline. This is because of a combination of three effects: one is simply a matter of defining the scale out to which one integrates the fit when defining the total luminosity, and amounts on average to ≤0.1 mag even for the most luminous galaxies. The other two are less trivial and tend to be larger; they are due to differences in how the background sky is estimated and what model is fit to the surface brightness profile. We show that pymorph sky estimates are fainter than those of the Sloan Digital Sky Servey Data Release 7 or Data Release 9 pipelines, but are in excellent agreement with the estimates of Blanton et al. Using the SDSS sky biases luminosities by more than a few tenths of a magnitude for objects with half-light radii ≥7 arcsec. In the SDSS main galaxy sample, these are typically luminous galaxies, so they are not necessarily nearby. This bias becomes worse when allowing the model more freedom to fit the surface brightness profile. When pymorph sky values are used, then two-component Sérsic-exponential fits to E+S0s return more light than single component deVaucouleurs fits (up to ˜0.2 mag), but less light than single Sérsic fits (0.1 mag). Finally, we show that pymorph fits of Meert et al. to DR7 data remain valid for DR9 images. Our findings show that, especially at large luminosities, these pymorph estimates should be preferred to the SDSS pipeline values.

  12. Non-Uniqueness of the Geometry of Interplanetary Magnetic Flux Ropes Obtained from Model-Fitting

    NASA Astrophysics Data System (ADS)

    Marubashi, K.; Cho, K.-S.

    2015-12-01

    Since the early recognition of the important role of interplanetary magnetic flux ropes (IPFRs) in carrying southward magnetic fields to the Earth, many attempts have been made to determine the structure of the IPFRs by model-fitting analyses of the interplanetary magnetic field variations. This paper describes the results of fitting analyses for three selected solar wind structures in the latter half of 2014. In the fitting analysis special attention was paid to identification of all the possible models or geometries that can reproduce the observed magnetic field variation. As a result, three or four geometries have been found for each of the three cases. The non-uniqueness of the fitted results includes (1) the different geometries naturally stemming from the difference in the models used for fitting, and (2) an unexpected result that, in some cases, either magnetic field chirality, left-handed or right-handed, can reproduce the observation. Thus we conclude that model-fitting cannot always give us a unique geometry of the observed magnetic flux rope. In addition, we have found that the magnetic field chirality of a flux rope cannot be uniquely inferred from the sense of field vector rotation observed in the plane normal to the Earth-Sun line; the sense of rotation changes depending on the direction of the flux rope axis. These findings exert an important impact on studies aimed at the geometrical relationships between the flux ropes and the magnetic field structures in the solar corona where the flux ropes were produced, such studies being an important step toward predicting geomagnetic storms based on observations of solar eruption phenomena.

  13. Sampling Kinetic Protein Folding Pathways using All-Atom Models

    NASA Astrophysics Data System (ADS)

    Bolhuis, P. G.

    This chapter summarizes several computational strategies to study the kinetics of two-state protein folding using all atom models. After explaining the background of two state folding using energy landscapes I introduce common protein models and computational tools to study folding thermodynamics and kinetics. Free energy landscapes are able to capture the thermodynamics of two-state protein folding, and several methods for efficient sampling of these landscapes are presented. An accurate estimate of folding kinetics, the main topic of this chapter, is more difficult to achieve. I argue that path sampling methods are well suited to overcome the problems connected to the sampling of folding kinetics. Some of the major issues are illustrated in the case study on the folding of the GB1 hairpin.

  14. A Comparison of Isoconversional and Model-Fitting Approaches to Kinetic Parameter Estimation and Application Predictions

    SciTech Connect

    Burnham, A K

    2006-05-17

    Chemical kinetic modeling has been used for many years in process optimization, estimating real-time material performance, and lifetime prediction. Chemists have tended towards developing detailed mechanistic models, while engineers have tended towards global or lumped models. Many, if not most, applications use global models by necessity, since it is impractical or impossible to develop a rigorous mechanistic model. Model fitting acquired a bad name in the thermal analysis community after that community realized, a decade after other disciplines, that deriving kinetic parameters for an assumed model from a single heating rate produced unreliable and sometimes nonsensical results. In its place, advanced isoconversional methods (1), which have their roots in the Friedman (2) and Ozawa-Flynn-Wall (3) methods of the 1960s, have become increasingly popular. In fact, as pointed out by the ICTAC kinetics project in 2000 (4), valid kinetic parameters can be derived by both isoconversional and model fitting methods as long as a diverse set of thermal histories are used to derive the kinetic parameters. The current paper extends the understanding from that project to give a better appreciation of the strengths and weaknesses of isoconversional and model-fitting approaches. Examples are given from a variety of sources, including the former and current ICTAC round-robin exercises, data sets for materials of interest, and simulated data sets.
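
    A minimal sketch of an isoconversional (Friedman-type) analysis of the kind discussed above, with invented kinetic parameters: simulate first-order decomposition at several heating rates and recover the activation energy from the slope of ln(dalpha/dt) versus 1/T at fixed conversion.

      import numpy as np

      R = 8.314              # J/(mol K)
      Ea, A = 150e3, 1e13    # assumed activation energy (J/mol) and prefactor (1/s)

      def simulate(beta, T0=300.0, T1=900.0, n=20000):
          """Integrate dalpha/dT = (A/beta) exp(-Ea/RT)(1-alpha) for heating rate beta (K/s)."""
          T = np.linspace(T0, T1, n)
          dT = T[1] - T[0]
          alpha = np.zeros(n)
          for i in range(1, n):
              rate = (A / beta) * np.exp(-Ea / (R * T[i - 1])) * (1 - alpha[i - 1])
              alpha[i] = min(alpha[i - 1] + rate * dT, 1.0 - 1e-12)
          return T, alpha

      points = []
      for beta in (0.05, 0.1, 0.2):                 # heating rates, K/s
          T, alpha = simulate(beta)
          i = np.searchsorted(alpha, 0.5)           # fixed conversion alpha = 0.5
          dadt = A * np.exp(-Ea / (R * T[i])) * (1 - alpha[i])
          points.append((1.0 / T[i], np.log(dadt)))

      x, y = np.array(points).T
      slope = np.polyfit(x, y, 1)[0]                # slope = -Ea/R
      print("recovered Ea = %.0f kJ/mol" % (-slope * R / 1e3))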

  15. Testing the Fitness Consequences of the Thermoregulatory and Parental Care Models for the Origin of Endothermy

    PubMed Central

    Clavijo-Baque, Sabrina; Bozinovic, Francisco

    2012-01-01

    The origin of endothermy is a puzzling phenomenon in the evolution of vertebrates. To address this issue several explicative models have been proposed. The main models proposed for the origin of endothermy are the aerobic capacity, the thermoregulatory and the parental care models. Our main proposal is that, to compare the alternative models, a critical aspect is to determine how strongly natural selection was influenced by body temperature and by basal and maximum metabolic rates during the evolution of endothermy. We evaluate these relationships in the context of three main hypotheses aimed at explaining the evolution of endothermy, namely the parental care hypothesis and two hypotheses related to the thermoregulatory model (thermogenic capacity and higher body temperature models). We used data on basal and maximum metabolic rates and body temperature from 17 rodent populations, and used intrinsic population growth rate (Rmax) as a global proxy of fitness. We found greater support for the thermogenic capacity version of the thermoregulatory model. In other words, greater thermogenic capacity is associated with increased fitness in rodent populations. To our knowledge, this is the first test of the fitness consequences of the thermoregulatory and parental care models for the origin of endothermy. PMID:22606328

  16. The FIT 2.0 Model - Fuel-cycle Integration and Tradeoffs

    SciTech Connect

    Steven J. Piet; Nick R. Soelberg; Layne F. Pincock; Eric L. Shaber; Gregory M Teske

    2011-06-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010b] are steps by the Fuel Cycle Technology program toward an analysis that accounts for the requirements and capabilities of each fuel cycle component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. This report describes FIT 2, an update of the original FIT model [Piet2010c]. FIT is a method to analyze different fuel cycles; in particular, to determine how changes in one part of a fuel cycle (say, fuel burnup, cooling, or separation efficiencies) chemically affect other parts of the fuel cycle. FIT provides the following: (1) a rough estimate of the physics and mass-balance feasibility of combinations of technologies and, if feasibility is an issue, an estimate of how performance would have to change to achieve it; and (2) an estimate of impurities in fuel and in waste as a function of separation performance, fuel fabrication, reactor, uranium source, etc.
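
    The following toy mass balance is only meant to illustrate the kind of stream-by-stream bookkeeping described above; it is not the FIT model, and all inventories and separation efficiencies are invented for the example.

```python
# Toy separation mass balance in the spirit of FIT-style bookkeeping (not the FIT model).
# Element inventories (kg per tonne of used fuel) and routing fractions are made up.
feed = {"U": 955.0, "Pu": 9.0, "Lanthanides": 10.0, "Tc": 0.8}
# Fraction of each element routed to the recycled-fuel product stream.
to_product = {"U": 0.999, "Pu": 0.999, "Lanthanides": 0.001, "Tc": 0.02}

product = {el: m * to_product[el] for el, m in feed.items()}
waste   = {el: m * (1.0 - to_product[el]) for el, m in feed.items()}

total_product = sum(product.values())
impurity_ppm = 1e6 * (product["Lanthanides"] + product["Tc"]) / total_product
print(f"Lanthanide+Tc impurity in product: {impurity_ppm:.0f} ppm (compare with a feedstock limit)")
print("Waste stream (kg/t):", {el: round(m, 3) for el, m in waste.items()})
```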

  17. Beyond modeling: all-atom olfactory receptor model simulations.

    PubMed

    Lai, Peter C; Crasto, Chiquito J

    2012-01-01

    Olfactory receptors (ORs) are a type of GTP-binding protein-coupled receptor (GPCR). These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions mark the first step in the process that leads to olfaction. Computational studies on model OR structures can generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level beyond inferences that are drawn merely from static docking. Here we have shown the specific advantages of simulating the dynamic environment associated with OR-odorant interactions. We present a rigorous protocol which ranges from the creation of a computationally derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that our OR-developed methodology will serve as a model for the computational structural biology of all GPCRs.

  18. Model of spacecraft atomic oxygen and solar exposure microenvironments

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Pippin, H. G.

    1993-01-01

    Computer models of environmental conditions in Earth orbit are needed for the following reasons: (1) derivation of material performance parameters from orbital test data, (2) evaluation of spacecraft hardware designs, (3) prediction of material service life, and (4) scheduling spacecraft maintenance. To meet these needs, Boeing has developed programs for modeling atomic oxygen (AO) and solar radiation exposures. The model allows determination of AO and solar ultraviolet (UV) radiation exposures for spacecraft surfaces (1) in arbitrary orientations with respect to the direction of spacecraft motion, (2) overall ranges of solar conditions, and (3) for any mission duration. The models have been successfully applied to prediction of experiment environments on the Long Duration Exposure Facility (LDEF) and for analysis of selected hardware designs for deployment on other spacecraft. The work on these models has been reported at previous LDEF conferences. Since publication of these reports, a revision has been made to the AO calculation for LDEF, and further work has been done on the microenvironments model for solar exposure.
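
    A minimal sketch of the geometric part of such an exposure calculation is shown below: the ram-direction atomic oxygen flux is projected onto a surface of arbitrary orientation and integrated over the mission. The flux value and mission length are assumptions, and the actual Boeing model also accounts for thermal molecular velocities, co-rotation, and solar-activity-driven atmospheric density variations.

```python
# Minimal cosine-projection sketch of atomic-oxygen fluence on an oriented surface.
import numpy as np

ao_flux_ram = 5.0e13                     # assumed mission-averaged ram flux, atoms/cm^2/s
mission_seconds = 5.8 * 365.25 * 86400   # e.g. an LDEF-like 5.8-year exposure

def fluence(theta_deg):
    """AO fluence for a surface whose normal makes angle theta with the ram vector."""
    cos_t = np.cos(np.radians(theta_deg))
    return ao_flux_ram * max(cos_t, 0.0) * mission_seconds  # wake-facing surfaces get ~0

for theta in (0, 30, 60, 90, 120):
    print(f"theta = {theta:3d} deg -> fluence ~ {fluence(theta):.2e} atoms/cm^2")
```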

  19. Four-component united-atom model of bitumen.

    PubMed

    Hansen, J S; Lemarchand, Claire A; Nielsen, Erik; Dyre, Jeppe C; Schrøder, Thomas

    2013-03-07

    We propose a four-component united-atom molecular model of bitumen. The model includes realistic chemical constituents and introduces a coarse graining level that suppresses the highest frequency modes. Molecular dynamics simulations of the model are carried out using graphics-processing-unit-based software over time spans on the order of microseconds, which enables the study of slow relaxation processes characterizing bitumen. This paper also presents results of the model dynamics as expressed through the mean-square displacement, the stress autocorrelation function, and rotational relaxation. The diffusivity of the individual molecules changes little as a function of temperature and reveals distinct dynamical time scales. Different time scales are also observed for the rotational relaxation. The stress autocorrelation function features a slow non-exponential decay for all temperatures studied. From the stress autocorrelation function, the shear viscosity and shear modulus are evaluated, showing a viscous response at frequencies below 100 MHz. The model predictions of viscosity and diffusivities are compared to experimental data, giving reasonable agreement. The model shows that the asphaltene, resin, and resinous oil tend to form nano-aggregates. The characteristic dynamical relaxation time of these aggregates is larger than that of the homogeneously distributed parts of the system, leading to strong dynamical heterogeneity.
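
    The viscosity evaluation mentioned above follows the standard Green-Kubo route. The sketch below shows that route generically, using a synthetic single-exponential stress autocorrelation function rather than data from the paper; the volume, temperature, amplitude, and relaxation time are illustrative assumptions.

```python
# Generic Green-Kubo sketch: shear viscosity from a stress autocorrelation function.
import numpy as np

kB = 1.380649e-23            # Boltzmann constant, J/K
T = 400.0                    # temperature, K (illustrative)
V = 50.0e-27                 # simulation box volume, m^3 (illustrative)

t = np.linspace(0.0, 50.0e-9, 50001)          # 0-50 ns
C0, tau = 2.0e14, 2.0e-9                      # ACF amplitude (Pa^2) and relaxation time (s), assumed
acf = C0 * np.exp(-t / tau)                   # stand-in for <sigma_xy(0) sigma_xy(t)>

# Green-Kubo: eta = V/(kB T) * integral_0^inf <sigma_xy(0) sigma_xy(t)> dt
integral = np.sum(0.5 * (acf[1:] + acf[:-1]) * np.diff(t))   # trapezoidal rule
eta = V / (kB * T) * integral
print(f"shear viscosity ~ {eta:.2f} Pa s")
```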

  20. Four-component united-atom model of bitumen

    NASA Astrophysics Data System (ADS)

    Hansen, J. S.; Lemarchand, Claire A.; Nielsen, Erik; Dyre, Jeppe C.; Schrøder, Thomas

    2013-03-01

    We propose a four-component united-atom molecular model of bitumen. The model includes realistic chemical constituents and introduces a coarse graining level that suppresses the highest frequency modes. Molecular dynamics simulations of the model are carried out using graphics-processing-unit-based software over time spans on the order of microseconds, which enables the study of slow relaxation processes characterizing bitumen. This paper also presents results of the model dynamics as expressed through the mean-square displacement, the stress autocorrelation function, and rotational relaxation. The diffusivity of the individual molecules changes little as a function of temperature and reveals distinct dynamical time scales. Different time scales are also observed for the rotational relaxation. The stress autocorrelation function features a slow non-exponential decay for all temperatures studied. From the stress autocorrelation function, the shear viscosity and shear modulus are evaluated, showing a viscous response at frequencies below 100 MHz. The model predictions of viscosity and diffusivities are compared to experimental data, giving reasonable agreement. The model shows that the asphaltene, resin, and resinous oil tend to form nano-aggregates. The characteristic dynamical relaxation time of these aggregates is larger than that of the homogeneously distributed parts of the system, leading to strong dynamical heterogeneity.

  1. Source localization with acoustic sensor arrays using generative model based fitting with sparse constraints.

    PubMed

    Velasco, Jose; Pizarro, Daniel; Macias-Guarasa, Javier

    2012-10-15

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.
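
    The sketch below illustrates, in one dimension and with invented data, the generative-model idea: an SRP-like power profile is modeled as a nonnegative combination of source kernels placed on a grid, and nonnegative least squares yields a small set of active kernels. The Gaussian kernel shape and all numbers are assumptions; the actual method additionally uses explicit sparsity constraints and subspace filtering on real SRP-PHAT maps.

```python
# Toy generative-model fit to a 1-D SRP-like power profile (not the paper's algorithm).
import numpy as np
from scipy.optimize import nnls

x = np.linspace(0.0, 5.0, 200)                 # candidate source positions (m)
kernel_width = 0.15                            # assumed spatial spread of one source

def kernel(center):
    return np.exp(-0.5 * ((x - center) / kernel_width) ** 2)

# Synthetic "observed" power map: two sources plus noise.
rng = np.random.default_rng(0)
observed = 1.0 * kernel(1.2) + 0.6 * kernel(3.7) + 0.05 * rng.standard_normal(x.size)

# Dictionary of kernels centered on every grid point; nonnegative least squares
# tends to activate few columns, which acts as a mild sparsity constraint.
A = np.column_stack([kernel(c) for c in x])
weights, _ = nnls(A, observed)

peaks = x[weights > 0.3 * weights.max()]
print("estimated source positions (m):", np.round(peaks, 2))
```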

  2. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    PubMed Central

    Velasco, Jose; Pizarro, Daniel; Macias-Guarasa, Javier

    2012-01-01

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies. PMID:23202021

  3. Aeroelastic modeling for the FIT team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.

  4. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  5. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  6. Model fitting of the kinematics of ten superluminal components in blazar 3C 279

    NASA Astrophysics Data System (ADS)

    Qian, Shan-Jie

    2013-07-01

    The kinematics of ten superluminal components (C11- C16, C18, C20, C21 and C24) of blazar 3C 279 are studied from VLBI observations. It is shown that their initial trajectory, distance from the core and apparent speed can be well fitted by the precession model proposed by Qian. Combined with the results of the model fit for the six superluminal components (C3, C4, C7a, C8, C9 and C10) already published, the kinematics of sixteen superluminal components can now be consistently interpreted in the precession scenario with their ejection times spanning more than 25 yr (or more than one precession period). The results from model fitting show the possible existence of a common precessing trajectory for these knots within a projected core distance of ~0.2-0.4 mas. In the framework of the jet-precession scenario, we can, for the first time, identify three classes of trajectories which are characterized by their collimation parameters. These different trajectories could be related to the helical structure of magnetic fields in the jet. Through fitting the model, the bulk Lorentz factor, Doppler factor and viewing angle of these knots are derived. It is found that there is no evidence for any correlation between the bulk Lorentz factor of the components and their precession phase (or ejection time). In a companion paper, the kinematics of another seven components (C5a, C6, C7, C17, C19, C22 and C23) have been derived from model fitting, and a binary black-hole/jet scenario was envisaged. The precession model proposed by Qian would be useful for understanding the kinematics of superluminal components in blazar 3C 279 derived from VLBI observations, by disentangling different mechanisms and ingredients. More generally, it might also be helpful for studying the mechanism of jet swing (wobbling) in other blazars.
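
    For readers unfamiliar with the quantities being fitted, the snippet below evaluates the standard relativistic-jet relations linking the bulk Lorentz factor and viewing angle to the apparent superluminal speed and Doppler factor; the input values are illustrative, not fitted parameters from the paper.

```python
# Standard relativistic-jet kinematics for superluminal components (illustrative values).
import numpy as np

def apparent_speed(gamma, theta_deg):
    """beta_app = beta sin(theta) / (1 - beta cos(theta)), in units of c."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    th = np.radians(theta_deg)
    return beta * np.sin(th) / (1.0 - beta * np.cos(th))

def doppler_factor(gamma, theta_deg):
    """delta = 1 / (Gamma (1 - beta cos(theta)))."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(np.radians(theta_deg))))

for gamma, theta in [(10.0, 2.0), (15.0, 3.0), (20.0, 1.0)]:
    print(f"Gamma={gamma:4.1f}, theta={theta:3.1f} deg -> "
          f"beta_app={apparent_speed(gamma, theta):5.1f} c, "
          f"delta={doppler_factor(gamma, theta):5.1f}")
```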

  7. Hybrid fitting of a hydrosystem model: Long-term insight into the Beauce aquifer functioning (France)

    NASA Astrophysics Data System (ADS)

    Flipo, N.; Monteil, C.; Poulin, M.; de Fouquet, C.; Krimissa, M.

    2012-05-01

    This study aims at analyzing the water budget of the unconfined Beauce aquifer (8000 km2) over a 35-year period, by modeling the hydrological functioning and quantifying exchanged water fluxes inside the system. A distributed process-based model (DPBM) is implemented to model the surface, the unsaturated zone and the aquifer subsystems. Based on an extensive literature review on multiparameter optimization and inverse problems, a pragmatic hybrid fitting method that couples manual and automatic calibration is developed. Three data subsets are used for calibration (10 years), validation (10 years) and test (35 years). The global piezometric head root-mean-square error is around 2.5 m for the three subsets and is rather uniformly spatially distributed over 78 piezometers. The sensitivity of the simulation to the different steps of the calibration process is investigated. The transmissivity field permits the fitting of the low-frequency signal for long-term filtering of the recharge signal, whereas the storage coefficient filters the signal with a higher frequency. For long-term insight into aquifer system functioning, the priority is thus to first fit the transmissivity field and to assess the distributed aquifer recharge accurately. The fitted DPBM, coupled with a linear model of coregionalization, is then used to quantify the hydrosystem water mass balance between 1974 and 2009, indicating that there is as yet no trend of decreasing water resources attributable to either climate or human activities.

  8. Comments on Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    McCluskey, Ken W.

    2010-01-01

    This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…

  9. The Fit Between Strong-Campbell Interest Inventory General Occupational Themes and Holland's Hexagonal Model.

    ERIC Educational Resources Information Center

    Rounds, James B., Jr.; And Others

    Using a multidimensional scaling procedure, this study examined the fit of Holland's RIASEC hexagon model to the internal relationships among the Strong-Campbell Interest Inventory (SCII) General Occupational Theme scales. SCII intercorrelation matrices for both sexes as reported in the SCII Manual were submitted, separately for each sex, to…

  10. Fitting Item Response Models to the Maryland Functional Reading Test Results.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    The potential of item response theory (IRT) for solving a number of testing problems in the Maryland Functional Reading Program would appear to be substantial in view of the many other promising applications of the theory. But, it is well-known that the advantages derived from an IRT model cannot be achieved when the fit between an item response…

  11. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    PubMed

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
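
    The sketch below conveys the general residual idea with a simplified binned comparison of observed and model-implied proportions correct for a 2PL item; it is not the specific residual statistic derived in the paper, and abilities and item parameters are treated as known for brevity.

```python
# Simplified binned item-fit residual check for a dichotomous IRT item (illustrative only).
import numpy as np

def icc_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(1)
n = 5000
theta_hat = rng.normal(0.0, 1.0, n)          # (assumed known) ability estimates
a_hat, b_hat = 1.2, 0.3                      # estimated item parameters (assumed)
responses = rng.random(n) < icc_2pl(theta_hat, a_hat, b_hat)   # data consistent with the model

# Compare observed proportion correct with the model ICC within ability bins.
bins = np.quantile(theta_hat, np.linspace(0, 1, 11))
for lo, hi in zip(bins[:-1], bins[1:]):
    idx = (theta_hat >= lo) & (theta_hat < hi)
    obs = responses[idx].mean()
    exp = icc_2pl(theta_hat[idx], a_hat, b_hat).mean()
    se = np.sqrt(exp * (1.0 - exp) / idx.sum())
    print(f"bin [{lo:5.2f},{hi:5.2f}): standardized residual = {(obs - exp) / se:5.2f}")
```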

  12. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  13. Comments on Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    McCluskey, Ken W.

    2010-01-01

    This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…

  14. Critique of "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    ERIC Educational Resources Information Center

    Harris, Carole Ruth

    2010-01-01

    This article presents the author's comments on Hisham Ghassib's article entitled "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" In his article, Ghassib (2010) provides an overview of the philosophical foundations that led to exact science, its role in what was later to become a driving force in the modern…

  15. Super Kids--Superfit. A Comprehensive Fitness Intervention Model for Elementary Schools.

    ERIC Educational Resources Information Center

    Virgilio, Stephen J.; Berenson, Gerald S.

    1988-01-01

    Objectives and activities of the cardiovascular (CV) fitness program Super Kids--Superfit are related in this article. This exercise program is one component of the Heart Smart Program, a CV health intervention model for elementary school students. Program evaluation, parent education, and school and community intervention strategies are…

  16. Review of Hisham Ghassib: Where Does Creativity Fit into the Productivist Industrial Model of Knowledge Production?

    ERIC Educational Resources Information Center

    Neber, Heinz

    2010-01-01

    In this article, the author presents his comments on Hisham Ghassib's article entitled "Where Does Creativity Fit into the Productivist Industrial Model of Knowledge Production?" Ghassib (2010) describes historical transformations of science from a marginal and non-autonomous activity which had been constrained by traditions to a self-autonomous,…

  17. IRT Model Fit Evaluation from Theory to Practice: Progress and Some Unanswered Questions

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2013-01-01

    In this commentary, the authors congratulate Professor Alberto Maydeu-Olivares on his article [EJ1023617: "Goodness-of-Fit Assessment of Item Response Theory Models, Measurement: Interdisciplinary Research and Perspectives," this issue] as it provides a much needed overview on the mathematical underpinnings of the theory behind the…

  18. Longitudinal Changes in Physical Fitness Performance in Youth: A Multilevel Latent Growth Curve Modeling Approach

    ERIC Educational Resources Information Center

    Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong

    2013-01-01

    Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e. four years) in young adolescents aged from 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…

  19. Universal Screening for Emotional and Behavioral Problems: Fitting a Population-Based Model

    ERIC Educational Resources Information Center

    Schanding, G. Thomas, Jr.; Nowell, Kerri P.

    2013-01-01

    Schools have begun to adopt a population-based method to conceptualizing assessment and intervention of students; however, little empirical evidence has been gathered to support this shift in service delivery. The present study examined the fit of a population-based model in identifying students' behavioral and emotional functioning using a…

  20. Impact of Missing Data on Person-Model Fit and Person Trait Estimation

    ERIC Educational Resources Information Center

    Zhang, Bo; Walker, Cindy M.

    2008-01-01

    The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…

  1. Longitudinal Changes in Physical Fitness Performance in Youth: A Multilevel Latent Growth Curve Modeling Approach

    ERIC Educational Resources Information Center

    Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong

    2013-01-01

    Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e. four years) in young adolescents aged from 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…

  2. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    PubMed Central

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for multilevel models fit to binary data but there are reasons to believe that the results will not always generalize to the ordinal case. This paper thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ when instead fitting multilevel cumulative logit models to ordinal data, Maximum Likelihood (ML) or Penalized Quasi-Likelihood (PQL). ML and PQL are compared across variations in sample size, magnitude of variance components, number of outcome categories, and distribution shape. Fitting a multilevel linear model to ordinal outcomes is shown to be inferior in virtually all circumstances. PQL performance improves markedly with the number of ordinal categories, regardless of distribution shape. In contrast to binary data, PQL often performs as well as ML when used with ordinal data. Further, the performance of PQL is typically superior to ML when the data includes a small to moderate number of clusters (i.e., ≤ 50 clusters). PMID:22040372

  3. Dynamical modeling and multi-experiment fitting with PottersWheel.

    PubMed

    Maiwald, Thomas; Timmer, Jens

    2008-09-15

    Modelers in Systems Biology need a flexible framework that allows them to easily create new dynamic models, investigate their properties and fit several experimental datasets simultaneously. Multi-experiment fitting is a powerful approach to estimate parameter values, to check the validity of a given model, and to discriminate competing model hypotheses. It requires high-performance integration of ordinary differential equations and robust optimization. Here we present the comprehensive modeling framework PottersWheel (PW), including novel functionalities to satisfy these requirements, with strong emphasis on the inverse problem, i.e. data-based modeling of partially observed and noisy systems like signal transduction pathways and metabolic networks. PW is designed as a MATLAB toolbox and includes numerous user interfaces. Deterministic and stochastic optimization routines are combined by fitting in logarithmic parameter space, allowing for robust parameter calibration. Model investigation includes statistical tests for model-data compliance, model discrimination, identifiability analysis and calculation of Hessian- and Monte-Carlo-based parameter confidence limits. A rich application programming interface is available for customization within the user's own MATLAB code. Within an extensive performance analysis, we identified and significantly improved an integrator-optimizer pair which decreases the fitting duration for a realistic benchmark model by a factor of more than 3000 compared to MATLAB with the optimization toolbox. PottersWheel is freely available for academic usage at http://www.PottersWheel.de/. The website contains a detailed documentation and introductory videos. The program has been intensively used since 2005 on Windows, Linux and Macintosh computers and does not require special MATLAB toolboxes. Supplementary data are available at Bioinformatics online.
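
    PottersWheel itself is a MATLAB toolbox; the sketch below merely illustrates, in Python, two of the ideas described above: fitting several experiments simultaneously with shared parameters, and optimizing in logarithmic parameter space. The reaction scheme, rate constants, and noise level are invented for the example.

```python
# Multi-experiment ODE fitting in log-parameter space (concept illustration, not PottersWheel).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, k1, k2):
    # simple two-step conversion A -> B -> C; y = [A, B]
    A, B = y
    return [-k1 * A, k1 * A - k2 * B]

def simulate(logp, t_eval, A0):
    k1, k2 = np.exp(logp)                      # fit in log space for positivity and robustness
    sol = solve_ivp(model, (0.0, t_eval[-1]), [A0, 0.0], t_eval=t_eval, args=(k1, k2))
    return sol.y[1]                            # observe B(t)

# Two experiments with different initial conditions sharing the same rate constants.
t = np.linspace(0.0, 10.0, 25)
true_logp = np.log([0.8, 0.3])
rng = np.random.default_rng(2)
data = [simulate(true_logp, t, A0) + 0.01 * rng.standard_normal(t.size) for A0 in (1.0, 2.5)]

def residuals(logp):
    return np.concatenate([simulate(logp, t, A0) - d for A0, d in zip((1.0, 2.5), data)])

fit = least_squares(residuals, x0=np.log([0.1, 0.1]))
print("estimated k1, k2:", np.round(np.exp(fit.x), 3))
```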

  4. Modeling a semiconductor laser with an intracavity atomic absorber

    SciTech Connect

    Masoller, C.; Vilaseca, R.; Oria, M.

    2009-07-15

    The dynamics of a semiconductor laser with an intracavity atomic absorber is studied numerically. The study is motivated by the experiments of Barbosa et al. [Opt. Lett. 32, 1869 (2007)], using a semiconductor junction as an active medium, with its output face being antireflection coated, and a cell containing cesium vapor placed in a cavity that was closed by a diffraction grating (DG). The DG allowed scanning the lasing frequency across the D2 line in the Cs spectrum, and different regimes such as frequency bistability or dynamic instability were observed depending on the operating conditions. Here we propose a rate-equation model that takes into account the dispersive losses and the dispersive refractive index change in the laser cavity caused by the presence of the Cs vapor cell. These effects are described through a modification of the complex susceptibility. The numerical results are found to be in good qualitative agreement with some of the observations; however, some discrepancies are also noticed, which can be attributed to multi-longitudinal-mode emission in the experiments. The simulations clearly show the relevant role of the Lamb dips and crossover resonances, which arise on top of the Doppler-broadened D2 line in the Cs spectrum and are due to the forward and backward intracavity fields interacting resonantly with the Cs atoms. When the laser frequency is locked in a dip, a reduction of both the frequency noise and the intensity noise is demonstrated.

  5. Beyond Modeling: All-Atom Olfactory Receptor Model Simulations

    PubMed Central

    Lai, Peter C.; Crasto, Chiquito J.

    2012-01-01

    Olfactory receptors (ORs) are a type of GTP-binding protein-coupled receptor (GPCR). These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions mark the first step in the process that leads to olfaction. Computational studies on model OR structures can generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level beyond inferences that are drawn merely from static docking. Here we have shown the specific advantages of simulating the dynamic environment associated with OR-odorant interactions. We present a rigorous protocol which ranges from the creation of a computationally derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that our OR-developed methodology will serve as a model for the computational structural biology of all GPCRs. PMID:22563330

  6. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    PubMed

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with the assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their pairwise combinations, against Microtox® and XenoScreen YES/YAS® bioassays. Because the quantitative assessment of the ecotoxicity of drug mixtures is a complex topic, two major approaches were used in the present study to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each of the experiments carried out. The second approach was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function, with calculation of a weight (regression coefficient or other model parameter) for each of the participants in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure of the impact of drug A on drug B or, vice versa, of B on A. The results justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action and only a few of the mixtures show synergistic or dependent action.
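
    The snippet below illustrates the two reference models and the model deviation ratio (MDR) for a hypothetical binary mixture; the EC50 values, mixture ratio, slope, and "observed" mixture EC50 are all invented, and the concentration-response shape is an assumed logistic.

```python
# Concentration addition (CA), independent action (IA) and the model deviation ratio (MDR)
# for a binary mixture -- illustrative numbers, not the paper's data.

ec50_A, ec50_B = 2.0, 8.0        # mg/L, single-substance EC50s (assumed)
p_A, p_B = 0.5, 0.5              # mixture composition by concentration fraction

# CA: the mixture EC50 follows from summing toxic units.
ec50_mix_CA = 1.0 / (p_A / ec50_A + p_B / ec50_B)

# IA at the component concentrations present when the mixture is dosed at its CA-predicted
# EC50, assuming a simple logistic concentration-response with slope h for each substance.
def effect(conc, ec50, h=2.0):
    return 1.0 / (1.0 + (ec50 / conc) ** h)

cA, cB = p_A * ec50_mix_CA, p_B * ec50_mix_CA
effect_IA = 1.0 - (1.0 - effect(cA, ec50_A)) * (1.0 - effect(cB, ec50_B))

observed_ec50_mix = 2.4                  # hypothetical measured mixture EC50
mdr = ec50_mix_CA / observed_ec50_mix    # MDR ~ 1: additivity; >> 1: synergy; << 1: antagonism
print(f"CA-predicted mixture EC50 = {ec50_mix_CA:.2f} mg/L, IA effect there = {effect_IA:.2f}")
print(f"MDR = {mdr:.2f}")
```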

  7. A bootstrap approach to evaluating person and item fit to the Rasch model.

    PubMed

    Wolfe, Edward W

    2013-01-01

    Historically, rule-of-thumb critical values have been employed for interpreting fit statistics that depict anomalous person and item response patterns in applications of the Rasch model. Unfortunately, prior research has shown that these values are not appropriate in many contexts. This article introduces a bootstrap procedure for identifying reasonable critical values for Rasch fit statistics and compares the results of that procedure to applications of rule-of-thumb critical values for three example datasets. The results indicate that rule-of-thumb values may over- or under-identify the number of misfitting items or persons.
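
    A parametric bootstrap of the kind described can be sketched as follows: data are repeatedly simulated from the Rasch model at the estimated parameters, and the empirical quantiles of the item outfit mean-square statistics serve as critical values. For brevity the sketch treats the person and item parameters as known rather than re-estimating them in each replicate, which a full implementation would do.

```python
# Parametric bootstrap of critical values for the Rasch item outfit mean-square (simplified).
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items, n_boot = 500, 20, 1000
theta = rng.normal(0.0, 1.0, n_persons)        # person abilities (assumed estimates)
beta = np.linspace(-2.0, 2.0, n_items)         # item difficulties (assumed estimates)

P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))   # Rasch probabilities

def outfit(X, P):
    Z2 = (X - P) ** 2 / (P * (1.0 - P))        # squared standardized residuals
    return Z2.mean(axis=0)                     # one outfit mean-square per item

stats = np.empty((n_boot, n_items))
for b in range(n_boot):
    X = (rng.random(P.shape) < P).astype(float)   # simulate model-consistent responses
    stats[b] = outfit(X, P)

crit = np.quantile(stats, [0.025, 0.975], axis=0)
print("item 1 outfit 95% interval:", np.round(crit[:, 0], 3),
      "(compare with the 0.7-1.3 rule of thumb)")
```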

  8. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
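
    A minimal version of the local fit described above, applied to a synthetic two-second window rather than CapnoBase recordings, is sketched below; the sampling rate, decay constant, and noise level are assumptions, and the frequency is constrained to a physiological range.

```python
# Local fit of an exponentially decaying cosine to a short PPG-like window (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

fs = 100.0                                      # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)                 # a 2 s analysis window
rng = np.random.default_rng(4)
true_hr_hz = 1.25                               # 75 BPM
signal = 1.0 * np.exp(-0.8 * t) * np.cos(2 * np.pi * true_hr_hz * t + 0.4)
signal += 0.05 * rng.standard_normal(t.size)

def damped_cos(t, amp, gamma, f, phi, offset):
    return amp * np.exp(-gamma * t) * np.cos(2 * np.pi * f * t + phi) + offset

# Constrain the frequency to a physiological range (0.5-3 Hz, i.e. 30-180 BPM).
p0 = [1.0, 0.5, 1.0, 0.0, 0.0]
bounds = ([0.0, 0.0, 0.5, -np.pi, -1.0], [5.0, 5.0, 3.0, np.pi, 1.0])
popt, _ = curve_fit(damped_cos, t, signal, p0=p0, bounds=bounds)
print(f"estimated heart rate: {popt[2] * 60:.1f} BPM")
```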

  9. Polynomial fitting model for phase reconstruction: interferograms with high fringe density

    NASA Astrophysics Data System (ADS)

    Téllez-Quiñones, Alejandro; Malacara-Doblado, Daniel; García-Márquez, Jorge

    2012-09-01

    A data fitting model is proposed to estimate phases from their cosine and sine. The a priori assumption is that the phases to be reconstructed should be expressed by polynomials. The cosine and sine of the phases are obtained from interferograms with high fringe density by generalized phase-shifting techniques. The proposed method is employed for phase reconstruction by line integration of the phase gradient or any other phase-unwrapping technique, and the fit is achieved by a least-squares minimization.
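
    A one-dimensional sketch of the idea, with invented coefficients and noise, is given below: polynomial phase coefficients are estimated by least squares directly from the cosine and sine of the phase, with an initial guess taken from a coarse unwrapped arctangent phase. Interferogram analysis would use the two-dimensional analogue.

```python
# Fitting a polynomial phase to its cosine and sine (1-D illustration with invented data).
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-1.0, 1.0, 400)

def poly_phase(c, x):
    # phi(x) = c0 + c1*x + c2*x^2 + c3*x^3
    return sum(ci * x**i for i, ci in enumerate(c))

true_c = np.array([0.5, 40.0, 8.0, -3.0])       # radians; the linear term gives high fringe density
phi_true = poly_phase(true_c, x)
rng = np.random.default_rng(5)
cos_obs = np.cos(phi_true) + 0.03 * rng.standard_normal(x.size)
sin_obs = np.sin(phi_true) + 0.03 * rng.standard_normal(x.size)

def residuals(c):
    phi = poly_phase(c, x)
    return np.concatenate([np.cos(phi) - cos_obs, np.sin(phi) - sin_obs])

# Initial guess from a coarse unwrapped arctangent phase, then refine on cos/sin directly.
phi0 = np.unwrap(np.arctan2(sin_obs, cos_obs))
c0 = np.polynomial.polynomial.polyfit(x, phi0, 3)
fit = least_squares(residuals, x0=c0)
print("recovered coefficients (constant term defined only modulo 2*pi):", np.round(fit.x, 2))
```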

  10. Phylogenetic Tree Reconstruction Accuracy and Model Fit when Proportions of Variable Sites Change across the Tree

    PubMed Central

    Grievink, Liat Shavit; Penny, David; Hendy, Michael D.; Holland, Barbara R.

    2010-01-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction. PMID:20525636

  11. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to the anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounding and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen in the cervical vertebra, the circle model is extended along the z-axis to a cylinder model to take into account additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. From the experiments, the proposed method provides accurate results without bone artifacts and eroded vessels in the cervical vertebra.

  12. Goodness-of-fit methods for additive-risk models in tumorigenicity experiments.

    PubMed

    Ghosh, Debashis

    2003-09-01

    In tumorigenicity experiments, a complication is that the time to event is generally not observed, so that the time to tumor is subject to interval censoring. One of the goals in these studies is to properly model the effect of dose on risk. Thus, it is important to have goodness of fit procedures available for assessing the model fit. While several estimation procedures have been developed for current-status data, relatively little work has been done on model-checking techniques. In this article, we propose numerical and graphical methods for the analysis of current-status data using the additive-risk model, primarily focusing on the situation where the monitoring times are dependent. The finite-sample properties of the proposed methodology are examined through numerical studies. The methods are then illustrated with data from a tumorigenicity experiment.

  13. Erroneous Arrhenius: Modified Arrhenius model best explains the temperature dependence of ectotherm fitness.

    PubMed

    Knies, Jennifer L; Kingsolver, Joel G

    2010-08-01

    The initial rise of fitness that occurs with increasing temperature is attributed to Arrhenius kinetics, in which rates of reaction increase exponentially with increasing temperature. Models based on Arrhenius typically assume single rate-limiting reactions over some physiological temperature range for which all the rate-limiting enzymes are in 100% active conformation. We test this assumption using data sets for microbes that have measurements of fitness (intrinsic rate of population growth) at many temperatures and over a broad temperature range and for diverse ectotherms that have measurements at fewer temperatures. When measurements are available at many temperatures, strictly Arrhenius kinetics are rejected over the physiological temperature range. However, over a narrower temperature range, we cannot reject strictly Arrhenius kinetics. The temperature range also affects estimates of the temperature dependence of fitness. These results indicate that Arrhenius kinetics only apply over a narrow range of temperatures for ectotherms, complicating attempts to identify general patterns of temperature dependence.
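
    The range dependence noted above can be demonstrated with a few lines of code: synthetic fitness data that rise in Arrhenius fashion and then decline at high temperature yield very different fitted activation energies depending on the temperature window used. All parameter values below are illustrative, not taken from the paper's datasets.

```python
# Why the fitted "Arrhenius" temperature dependence depends on the range used (synthetic data).
import numpy as np

k = 8.617e-5                                   # Boltzmann constant, eV/K
T = np.linspace(285.0, 315.0, 31)              # roughly 12-42 C
E = 0.65                                       # eV, assumed activation energy of the rise
r = np.exp(-E / (k * T)) / np.exp(-E / (k * 300.0))      # strictly Arrhenius rise
r *= 1.0 / (1.0 + np.exp((T - 308.0) / 1.5))   # smooth decline above ~35 C (assumed)

def fitted_E(Tmin, Tmax):
    m = (T >= Tmin) & (T <= Tmax)
    slope = np.polyfit(1.0 / T[m], np.log(r[m]), 1)[0]
    return -slope * k                           # eV

print("E fitted over 12-27 C:", round(fitted_E(285, 300), 2), "eV (true rise: 0.65 eV)")
print("E fitted over 12-42 C:", round(fitted_E(285, 315), 2), "eV (strongly range-dependent)")
```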

  14. Erroneous Arrhenius: Modified Arrhenius model best explains the temperature dependence of ectotherm fitness

    PubMed Central

    Knies, Jennifer L.; Kingsolver, Joel G.

    2013-01-01

    The initial rise of fitness that occurs with increasing temperature is attributed to Arrhenius kinetics, in which rates of reaction increase exponentially with increasing temperature. Models based on Arrhenius typically assume single rate-limiting reaction(s) over some physiological temperature range for which all the rate-limiting enzymes are in 100% active conformation. We test this assumption using datasets for microbes that have measurements of fitness (intrinsic rate of population growth) at many temperatures and over a broad temperature range, and for diverse ectotherms that have measurements at fewer temperatures. When measurements are available at many temperatures, strictly Arrhenius kinetics is rejected over the physiological temperature range. However, over a narrower temperature range, we cannot reject strictly Arrhenius kinetics. The temperature range also affects estimates of the temperature dependence of fitness. These results indicate that Arrhenius kinetics only apply over a narrow range of temperatures for ectotherms, complicating attempts to identify general patterns of temperature dependence. PMID:20528477

  15. Modelling of the toe trajectory during normal gait using circle-fit approximation.

    PubMed

    Fang, Juan; Hunt, Kenneth J; Xie, Le; Yang, Guo-Yuan

    2016-10-01

    This work aimed to validate the approach of using a circle to fit the toe trajectory relative to the hip, and to investigate linear regression models for describing such toe trajectories in normal gait. Twenty-four subjects walked at seven speeds. Best-fit circle algorithms were developed to approximate the relative toe trajectory with a circle. It was found that the mean approximation error between the toe trajectory and its best-fit circle was less than 4%. Regarding the best-fit circles for the toe trajectories from all subjects, the normalised radius was constant, while the normalised centre offset decreased as the walking cadence increased; the curve range generally had a positive linear relationship with the walking cadence. The regression functions of the circle radius, the centre offset and the curve range with respect to leg length and walking cadence were explicitly defined. This study demonstrated that circle-fit approximation of the relative toe trajectories is generally applicable in normal gait. The functions provide a quantitative description of the relative toe trajectories. These results have potential application in the design of gait rehabilitation technologies.
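
    A best-fit-circle computation of the kind used here can be sketched with the algebraic (Kasa-style) least-squares fit below, applied to a synthetic noisy arc rather than gait data; the arc geometry and noise level are assumptions.

```python
# Algebraic (Kasa-style) least-squares circle fit applied to a synthetic noisy arc.
import numpy as np

def fit_circle(xy):
    """Solve x^2 + y^2 + D x + E y + F = 0 in the least-squares sense."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), radius

rng = np.random.default_rng(6)
theta = np.linspace(-0.6 * np.pi, -0.4 * np.pi, 100)     # a short arc below the "hip" origin
true_center, true_r = np.array([0.0, 0.0]), 0.85         # metres, a leg-length-like radius
pts = true_center + true_r * np.column_stack([np.cos(theta), np.sin(theta)])
pts += 0.005 * rng.standard_normal(pts.shape)

center, radius = fit_circle(pts)
print(f"fitted center = ({center[0]:.3f}, {center[1]:.3f}), radius = {radius:.3f}")
```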

  16. Fitting Models of the Population Consequences of Acoustic Disturbance to Data from Marine Mammal Populations

    DTIC Science & Technology

    2011-09-30

    Report documentation fragment; only partial text is recoverable. The surviving fragments indicate that a Gibbs sampler is used to factor the high-dimensional model into a series of lower-dimensional conditionals and to initialize the MCMC chains, and that individual health time series for animals catalogued at NEAq (including body fat codes and entanglement episodes, Figure 4) illustrate the approach.

  17. The Role of Theoretical Atomic Physics in Astrophysical Plasma Modeling

    NASA Astrophysics Data System (ADS)

    Gorczyca, Tom

    2008-05-01

    The interpretation of cosmic spectra relies on a vast sea of atomic data which are not readily obtainable from analytic expressions or simple calculations. Since experimental determination of the multitude of atomic excitation, ionization, and recombination rates is clearly impossible, theoretical calculations are required for all transitions of all ionization stages of all elements through the iron peak elements, and to achieve the accuracy necessary for interpreting the most recently observed, high-resolution spectra, state-of-the-art atomic theoretical techniques need to be used. In this talk, I will give an overview of the latest status of the theoretical treatments of atomic processes in astrophysical plasmas, including a description of the available atomic databases. The successes of atomic theory, as assessed by benchmarking computational results with experimental measurements, where available, will be discussed as well as the present challenges facing the theoretical atomic laboratory astrophysics community.

  18. Atomic scale modelling of hexagonal structured metallic fission product alloys

    PubMed Central

    Middleburgh, S. C.; King, D. M.; Lumpkin, G. R.

    2015-01-01

    Noble metal particles in the Mo-Pd-Rh-Ru-Tc system have been simulated on the atomic scale using density functional theory techniques for the first time. The composition and behaviour of the epsilon phases are consistent with high-entropy alloys (or multi-principal component alloys)—making the epsilon phase the only hexagonally close packed high-entropy alloy currently described. Configurational entropy effects were considered to predict the stability of the alloys with increasing temperatures. The variation of Mo content was modelled to understand the change in alloy structure and behaviour with fuel burnup (Mo molar content decreases in these alloys as burnup increases). The predicted structures compare extremely well with experimentally ascertained values. Vacancy formation energies and the behaviour of extrinsic defects (including iodine and xenon) in the epsilon phase were also investigated to further understand the impact that the metallic precipitates have on fuel performance. PMID:26064629
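
    The configurational entropy argument mentioned above can be illustrated with the ideal-mixing expression S_conf = -R * sum(x_i ln x_i); the composition and temperatures in the snippet below are assumptions chosen only to show the size of the -T*S_conf contribution, not values from the paper.

```python
# Ideal configurational entropy of mixing for a five-component alloy (illustrative composition).
import numpy as np

R = 8.314                                  # gas constant, J/(mol K)
x = {"Mo": 0.30, "Ru": 0.30, "Tc": 0.15, "Rh": 0.15, "Pd": 0.10}   # mole fractions (assumed)

frac = np.array(list(x.values()))
S_conf = -R * np.sum(frac * np.log(frac))  # J/(mol K)

for T in (600.0, 1000.0, 1500.0):
    print(f"T = {T:6.0f} K: -T*S_conf = {-T * S_conf / 1000:7.2f} kJ/mol "
          f"(S_conf = {S_conf:.2f} J/mol/K)")
```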

  19. Atomic scale modelling of hexagonal structured metallic fission product alloys.

    PubMed

    Middleburgh, S C; King, D M; Lumpkin, G R

    2015-04-01

    Noble metal particles in the Mo-Pd-Rh-Ru-Tc system have been simulated on the atomic scale using density functional theory techniques for the first time. The composition and behaviour of the epsilon phases are consistent with high-entropy alloys (or multi-principal component alloys)-making the epsilon phase the only hexagonally close packed high-entropy alloy currently described. Configurational entropy effects were considered to predict the stability of the alloys with increasing temperatures. The variation of Mo content was modelled to understand the change in alloy structure and behaviour with fuel burnup (Mo molar content decreases in these alloys as burnup increases). The predicted structures compare extremely well with experimentally ascertained values. Vacancy formation energies and the behaviour of extrinsic defects (including iodine and xenon) in the epsilon phase were also investigated to further understand the impact that the metallic precipitates have on fuel performance.

  20. Simulating and Modeling Transport Through Atomically Thin Membranes

    NASA Astrophysics Data System (ADS)

    Ostrowski, Joseph; Eaves, Joel

    2014-03-01

    The world is running out of clean potable water. The efficacy of water desalination technologies using porous materials is a balance between membrane selectivity and solute throughput. These properties are just starting to be understood on the nanoscale, but in the limit of atomically thin membranes it is unclear whether one can apply typical continuous-time random walk models. Depending on the size of the pore and the thickness of the membrane, mass transport can range from single stochastic passage events to continuous flow describable by the usual hydrodynamic equations. We present a study of mass transport through membranes of various pore geometries using reverse nonequilibrium simulations, and analyze transport rates using stochastic master equations.