Sample records for simple linear fit

  1. Fitting program for linear regressions according to Mahon (1996)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trappitsch, Reto G.

    2018-01-09

    This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.

  2. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of the present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
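    The same iterative least-squares idea can be sketched outside a spreadsheet. The Gauss-Newton loop below is a minimal stand-in for SOLVER's routine, with an invented exponential-decay model rather than any function from the paper; it repeatedly linearizes a user-supplied y = f(x; p) and solves for parameter updates:

```python
import numpy as np

# A minimal Gauss-Newton sketch of the iterative least-squares idea the
# paper implements with Excel's SOLVER: any user-supplied y = f(x; p).
def gauss_newton(f, x, y, p0, n_iter=50, eps=1e-7):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - f(p, x)                      # residuals
        # numerical Jacobian of the model w.r.t. the parameters
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (f(p + dp, x) - f(p, x)) / eps
        # solve the linearized least-squares problem for the update
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step
    return p

# Hypothetical example model (not from the paper): exponential decay.
f = lambda p, x: p[0] * np.exp(-p[1] * x)
x = np.linspace(0, 4, 40)
y = f([3.0, 0.7], x)                         # noise-free synthetic data
print(gauss_newton(f, x, y, p0=[2.5, 0.8]))  # ≈ [3.0, 0.7]
```

    With noise-free data the loop recovers the generating parameters; a practical routine would add a convergence test and damping, as Levenberg-Marquardt does.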

  3. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.

  4. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate simple linear regression. The first is based on Galton's famous data set on heredity. We use the R command lm and obtain coefficient estimates, the residual standard error, R2, residuals, etc. In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and look ahead to multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
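    The quantities the session reads off from lm() can be reproduced with the closed-form least-squares formulas; the sketch below uses made-up numbers rather than Galton's data:

```python
import numpy as np

# Ordinary least-squares by hand: slope/intercept estimates, residuals,
# and R^2, the same quantities R's lm() reports (illustrative data).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
intercept = y.mean() - slope * x.mean()
residuals = y - (intercept + slope * x)
r_squared = 1 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
print(slope, intercept, r_squared)
```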

  5. Comparison of all atom, continuum, and linear fitting empirical models for charge screening effect of aqueous medium surrounding a protein molecule

    NASA Astrophysics Data System (ADS)

    Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki

    2002-05-01

    To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant that increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum model was found to be better than the respective correlations of linear fits to the two models. This confirms that the continuum model treats the complicated shapes of protein conformations better than the simple linear fitting empirical model does. We also tried a sigmoid fitting empirical model in addition to the linear one. When all data were weighted equally, the sigmoid model, which requires two fitting parameters, fit the results of both the all atom and continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values were used as weighting factors, the fitting error of the sigmoid model became smaller, and the slopes of both linear fitting curves decreased. This suggests that the screening effect of an aqueous medium at short range, where potential values are relatively large, is smaller than expected from the linear fitting curve, whose slope is almost 4.
    To investigate the linear increase of the effective relative dielectric constant, the Poisson equation for a low-dielectric sphere in a high-dielectric medium was solved; charges distributed near the molecular surface were shown to account for the apparent linearity.

  6. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion - a large number of events - is not easy to satisfy in practice. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting.
The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches return roughly 10,000 references - this doesn't include those who use it but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand, downward gradient methods have a much wider domain of convergence, but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on.
Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive non-linear least squares fits to obtain similarly unbiased results, but this procedure is justified only by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates with convergence domains and rates comparable to those of non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
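    The bias at issue can be demonstrated in a few lines. The sketch below uses simulated counts for a constant-rate model, not the paper's fluorescence data; it compares the Poisson MLE with a Neyman chi-square least-squares estimate, which for this model reduces to the harmonic mean and sits low by about one count:

```python
import numpy as np

# Illustrates the bias discussed in the abstract: fitting Poisson counts
# by weighted least squares (Neyman chi-square) versus the Poisson MLE,
# for the simplest possible model, a constant rate lam.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=10000)

# Poisson MLE for a constant model: the sample mean
# (it maximizes sum(n*log(lam) - lam)).
lam_mle = counts.mean()

# Neyman chi-square estimate: minimizing sum((n - lam)^2 / n) gives the
# harmonic mean, which is biased low by roughly one count.
lam_neyman = counts.size / np.sum(1.0 / counts)

print(lam_mle, lam_neyman)  # MLE near 20, chi-square estimate near 19
```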

  7. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Su¨lzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  8. Deriving the Regression Equation without Using Calculus

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    2004-01-01

    Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…

  9. Parametric resonance in the early Universe—a fitting analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear, and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. To remedy this situation in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  10. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

    We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to that of the linear technique over a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
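    The "simple data transformation" can be illustrated with a one-pole Padé form R = a·m/(1 + b·m), which is an assumption for this sketch (the abstract does not give the actual UNCL model); dividing through turns it into a straight line that ordinary least squares can fit:

```python
import numpy as np

# Linearizing a one-pole Pade calibration model (hypothetical a, b):
# R = a*m / (1 + b*m)  =>  m/R = 1/a + (b/a)*m, which is linear in m.
a_true, b_true = 50.0, 0.8
m = np.linspace(0.5, 5.0, 10)          # linear density (arbitrary units)
R = a_true * m / (1.0 + b_true * m)    # noise-free synthetic rates

slope, intercept = np.polyfit(m, m / R, 1)
a_fit = 1.0 / intercept                # intercept = 1/a
b_fit = slope * a_fit                  # slope = b/a
print(a_fit, b_fit)  # ≈ 50.0, 0.8
```

    Note the paper's conclusion, though: with sizable errors in the measured rate (the predictor), this transformation is not actually preferable to the direct nonlinear fit.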

  11. A discrete spectral analysis for determining quasi-linear viscoelastic properties of biological materials

    PubMed Central

    Babaei, Behzad; Abramowitch, Steven D.; Elson, Elliot L.; Thomopoulos, Stavros; Genin, Guy M.

    2015-01-01

    The viscoelastic behaviour of a biological material is central to its functioning and is an indicator of its health. The Fung quasi-linear viscoelastic (QLV) model, a standard tool for characterizing biological materials, provides excellent fits to most stress–relaxation data by imposing a simple form upon a material's temporal relaxation spectrum. However, model identification is challenging because the Fung QLV model's ‘box’-shaped relaxation spectrum, predominant in biomechanics applications, can provide an excellent fit even when it is not a reasonable representation of a material's relaxation spectrum. Here, we present a robust and simple discrete approach for identifying a material's temporal relaxation spectrum from stress–relaxation data in an unbiased way. Our ‘discrete QLV’ (DQLV) approach identifies ranges of time constants over which the Fung QLV model's typical box spectrum provides an accurate representation of a particular material's temporal relaxation spectrum, and is effective at providing a fit to this model. The DQLV spectrum also reveals when other forms or discrete time constants are more suitable than a box spectrum. After validating the approach against idealized and noisy data, we applied the methods to analyse medial collateral ligament stress–relaxation data and identify the strengths and weaknesses of an optimal Fung QLV fit. PMID:26609064

  12. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
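    The computational saving from a direct-product representation can be sketched with a two-dimensional grid fit. The polynomial bases and data below are illustrative, not the paper's PES setup; the point is that the Kronecker-structured least-squares problem factorizes into two small pseudo-inverses:

```python
import numpy as np

# For grid data F(x_i, y_j) ~ sum_kl C[k,l] P_k(x_i) Q_l(y_j), the full
# design matrix is kron(A, B); exploiting that structure, the coefficients
# follow from two small pseudo-inverses instead of one huge one.
x = np.linspace(-1, 1, 20)
y = np.linspace(-1, 1, 25)
A = np.vander(x, 4)                    # polynomial basis in x (20 x 4)
B = np.vander(y, 3)                    # polynomial basis in y (25 x 3)

C_true = np.arange(12.0).reshape(4, 3)
F = A @ C_true @ B.T                   # grid data built from known coefficients

# Kronecker trick: C = pinv(A) @ F @ pinv(B).T is equivalent to solving
# kron(A, B) @ vec(C) = vec(F), but far cheaper in time and memory.
C_fit = np.linalg.pinv(A) @ F @ np.linalg.pinv(B).T
print(np.allclose(C_fit, C_true))  # True
```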

  13. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  14. Induction of Chromosomal Aberrations at Fluences of Less Than One HZE Particle per Cell Nucleus

    NASA Technical Reports Server (NTRS)

    Hada, Megumi; Chappell, Lori J.; Wang, Minli; George, Kerry A.; Cucinotta, Francis A.

    2014-01-01

    The assumption of a linear dose response used to describe the biological effects of high LET radiation is fundamental in radiation protection methodologies. We investigated the dose response for chromosomal aberrations for exposures corresponding to less than one particle traversal per cell nucleus by high energy and charge (HZE) nuclei. Human fibroblast and lymphocyte cells were irradiated with several low doses of <0.1 Gy, and several higher doses of up to 1 Gy, with O (77 keV/μm), Si (99 keV/μm), Fe (175 keV/μm), Fe (195 keV/μm), or Fe (240 keV/μm) particles. Chromosomal aberrations at first mitosis were scored using fluorescence in situ hybridization (FISH) with chromosome specific paints for chromosomes 1, 2, and 4 and DAPI staining of background chromosomes. Non-linear regression models were used to evaluate possible linear and non-linear dose response models based on these data. Dose responses for simple exchanges for human fibroblasts irradiated under confluent culture conditions were best fit by non-linear models motivated by a non-targeted effect (NTE). For human lymphocytes irradiated in blood tubes, an NTE model fit best for O, while a linear response model fit best for Si and Fe particles. Additional evidence for NTE was found in low dose experiments measuring gamma-H2AX foci, a marker of double strand breaks (DSB), and in split-dose experiments with human fibroblasts. Our results suggest that simple exchanges in normal human fibroblasts have an important NTE contribution at low particle fluence. The current and prior experimental studies provide important evidence against the linear dose response assumption used in radiation protection for HZE particles and other high LET radiation at the relevant range of low doses.

  15. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
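    The key identity can be sketched as follows: for y = Ae^(Bt) + C, the derivative satisfies dy/dt = B(y - C), so a discrete-calculus estimate of dy/dt is a linear function of y, and a single linear fit recovers B and C without iteration. This mirrors the idea described, not the exact NASA implementation:

```python
import numpy as np

# Non-iterative exponential fit via discrete calculus and one linear fit:
# dy/dt = B*y - B*C, so regressing a difference estimate of dy/dt on y
# gives B (slope) and C (from the intercept); A then follows directly.
A_true, B_true, C_true = 2.0, -0.5, 1.0
t = np.linspace(0, 6, 200)
y = A_true * np.exp(B_true * t) + C_true   # noise-free synthetic data

dydt = np.gradient(y, t)                   # discrete derivative estimate
B_fit, k = np.polyfit(y, dydt, 1)          # dy/dt = B*y + k, with k = -B*C
C_fit = -k / B_fit
A_fit = np.mean((y - C_fit) * np.exp(-B_fit * t))
print(A_fit, B_fit, C_fit)  # ≈ 2.0, -0.5, 1.0
```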

  16. Evolution of complex dynamics

    NASA Astrophysics Data System (ADS)

    Wilds, Roy; Kauffman, Stuart A.; Glass, Leon

    2008-09-01

    We study the evolution of complex dynamics in a model of a genetic regulatory network. The fitness is associated with the topological entropy in a class of piecewise linear equations, and the mutations are associated with changes in the logical structure of the network. We compare hill climbing evolution, in which only mutations that increase the fitness are allowed, with neutral evolution, in which mutations that leave the fitness unchanged are allowed. The simple structure of the fitness landscape enables us to estimate analytically the rates of hill climbing and neutral evolution. In this model, allowing neutral mutations accelerates the rate of evolutionary advancement for low mutation frequencies. These results are applicable to evolution in natural and technological systems.

  17. Auxiliary basis expansions for large-scale electronic structure calculations.

    PubMed

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.

  18. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
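    A fixed-knot regression spline of the kind placed in the fixed effects can be sketched with a truncated power basis; ordinary least squares below stands in for the mixed-model machinery, and the data are synthetic:

```python
import numpy as np

# Fixed-knot quadratic regression spline via a truncated power basis:
# continuous, with a continuous first derivative at the knot.
def truncated_power_basis(t, knot):
    return np.column_stack([np.ones_like(t), t, t**2,
                            np.clip(t - knot, 0, None) ** 2])

t = np.linspace(0, 10, 120)
# synthetic response whose curvature changes at t = 4 (illustrative data)
y = 1.0 + 0.5 * t - 0.2 * t**2 + 0.35 * np.clip(t - 4, 0, None) ** 2

X = truncated_power_basis(t, knot=4.0)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))  # ≈ [1.0, 0.5, -0.2, 0.35]
```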

  19. Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.

    ERIC Educational Resources Information Center

    Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman

    1998-01-01

    Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…

  20. Catmull-Rom Curve Fitting and Interpolation Equations

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2010-01-01

    Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
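    The simplified constant-coefficient form of the Catmull-Rom segment between p1 and p2 (with neighbors p0 and p3) can be written directly; the control points below are made up for illustration:

```python
import numpy as np

# Standard Catmull-Rom segment in constant-coefficient polynomial form;
# the curve interpolates p1 at t = 0 and p2 at t = 1.
def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

# 2D control points (invented for illustration).
p0, p1, p2, p3 = (np.array(p) for p in [(0, 0), (1, 2), (3, 3), (4, 1)])
print(catmull_rom(p0, p1, p2, p3, 0.0))  # [1. 2.]  (equals p1)
print(catmull_rom(p0, p1, p2, p3, 1.0))  # [3. 3.]  (equals p2)
```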

  1. Accumulation of nucleopolyhedrosis virus of the European pine sawfly (Hymenoptera: Diprionidae) as a function of larval weight

    Treesearch

    M.A. Mohamed; H.C. Coppel; J.D. Podgwaite; W.D. Rollinson

    1983-01-01

    Disease-free larvae of Neodiprion sertifer (Geoffroy) treated with its nucleopolyhedrosis virus in the field and under laboratory conditions showed a high correlation between virus accumulation and body weight. Simple linear regression models were found to fit viral accumulation versus body weight under either circumstance.

  2. JMOSFET: A MOSFET parameter extractor with geometry-dependent terms

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Moore, B. T.

    1985-01-01

    The parameters of the metal-oxide-silicon field-effect transistors (MOSFETs) included on the Combined Release and Radiation Effects Satellite (CRRES) test chips need to be extracted with a method that is simple but comprehensive enough for use in wafer acceptance, and accurate enough for use in integrated circuit design. A set of MOSFET parameter extraction procedures is developed that is directly linked to the MOSFET model equations and that facilitates the use of simple, direct curve-fitting techniques. In addition, the major physical effects that influence MOSFET operation in the linear and saturation regions are included for devices fabricated in 1.2 to 3.0 μm CMOS technology. The fitting procedures were designed to establish single values for parameters such as threshold voltage and transconductance and to provide for slope matching between the linear and saturation regions of the MOSFET output current-voltage curves. Four different sizes of transistors that cover a rectangular region of the channel length-width plane are analyzed.
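    The "simple, direct curve-fitting" step can be sketched for the linear (triode) region, where Id ≈ k(Vgs - Vt)Vds at small Vds, so a straight-line fit of Id against Vgs yields k and the threshold voltage Vt. Device values are invented, and this is the textbook extraction rather than the JMOSFET geometry-dependent model:

```python
import numpy as np

# Linear-region MOSFET parameter extraction: at small Vds the drain
# current is Id = k*(Vgs - Vt)*Vds, so one straight-line fit of Id vs Vgs
# gives the transconductance parameter k and threshold voltage Vt.
k_true, Vt_true, Vds = 2.0e-4, 0.7, 0.1      # A/V^2, V, V (hypothetical)
Vgs = np.linspace(1.0, 3.0, 9)
Id = k_true * (Vgs - Vt_true) * Vds          # idealized linear-region data

slope, intercept = np.polyfit(Vgs, Id, 1)
k_fit = slope / Vds                          # slope = k*Vds
Vt_fit = -intercept / slope                  # zero-crossing of the line
print(k_fit, Vt_fit)  # ≈ 2e-4, 0.7
```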

  3. An Application of the H-Function to Curve-Fitting and Density Estimation.

    DTIC Science & Technology

    1983-12-01

    equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability

  4. Linearization of the bradford protein assay.

    PubMed

    Ernst, Orna; Zor, Tsaffrir

    2010-04-12

    Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
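    Why the 590/450 ratio linearizes the assay can be seen in a toy mass-action model (parameter values invented, not from the paper): if bound dye B = Ka·P·F and only free dye F absorbs at 450 nm, the ratio A590/A450 is a linear function of protein P even though A590 alone saturates:

```python
import numpy as np

# Toy mass-action illustration of the ratiometric linearization:
# B/F = Ka*P, so if only free dye contributes at 450 nm, the absorbance
# ratio is exactly linear in protein amount P.
Ka, D = 0.04, 1.0                      # binding constant, total dye (invented)
P = np.linspace(0, 50, 11)             # protein amounts
B = Ka * P * D / (1 + Ka * P)          # bound dye (saturates at high P)
F = D - B                              # free dye

A590 = 1.5 * B + 0.3 * F               # both forms absorb at 590 nm
A450 = 0.8 * F                         # only free dye absorbs at 450 nm

ratio = A590 / A450                    # = 0.3/0.8 + (1.5/0.8)*Ka*P
resid = ratio - np.polyval(np.polyfit(P, ratio, 1), P)
print(np.max(np.abs(resid)))  # ~0: the ratio is exactly linear in P
```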

  5. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.

  6. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.

  7. Tight-binding study of stacking fault energies and the Rice criterion of ductility in the fcc metals

    NASA Astrophysics Data System (ADS)

    Mehl, Michael J.; Papaconstantopoulos, Dimitrios A.; Kioussis, Nicholas; Herbranson, M.

    2000-02-01

    We have used the Naval Research Laboratory (NRL) tight-binding (TB) method to calculate the generalized stacking fault energy and the Rice ductility criterion in the fcc metals Al, Cu, Rh, Pd, Ag, Ir, Pt, Au, and Pb. The method works well for all classes of metals, i.e., simple metals, noble metals, and transition metals. We compared our results with full potential linear-muffin-tin orbital and embedded atom method (EAM) calculations, as well as experiment, and found good agreement. This is impressive, since the NRL-TB approach only fits to first-principles full-potential linearized augmented plane-wave equations of state and band structures for cubic systems. Comparable accuracy with EAM potentials can be achieved only by fitting to the stacking fault energy.

  8. An alternative to the breeder's and Lande's equations.

    PubMed

    Houchmandzadeh, Bahram

    2014-01-10

    The breeder's equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Denoting the mean phenotypes of the parental, selected-parent, and progeny populations by E(Z0), E(ZW), and E(Z1), this equation relates the response to selection R = E(Z1) - E(Z0) to the selection differential S = E(ZW) - E(Z0) through a simple proportionality relation R = h²S, where the heritability coefficient h² is a simple function of the genotype and environment variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean μ, an alternative, exact linear equation of the form R′ = j²S′ can be derived, regardless of the parental genotype distribution. Here R′ = E(Z1) - μ and S′ = E(ZW) - μ stand for the mean phenotypic lags with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient j² is a simple function of the selection-function and environment variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotypes in the selected and offspring populations and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande's equation Δz̄ = GP⁻¹S. The linearity coefficient of the alternative equation is not changed by Gaussian selection.

  9. What Physical Fitness Component Is Most Closely Associated With Adolescents' Blood Pressure?

    PubMed

    Nunes, Heloyse E G; Alves, Carlos A S; Gonçalves, Eliane C A; Silva, Diego A S

    2017-12-01

    This study aimed to determine which of four selected physical fitness variables was most associated with blood pressure changes (systolic and diastolic) in a large sample of adolescents. This was a descriptive, cross-sectional epidemiological study of 1,117 adolescents aged 14-19 years from southern Brazil. Systolic and diastolic blood pressure were measured with a digital pressure device, and the selected physical fitness variables were body composition (body mass index), flexibility (sit-and-reach test), muscle strength/resistance (manual dynamometer), and aerobic fitness (Modified Canadian Aerobic Fitness Test). Simple and multiple linear regression analyses revealed that aerobic fitness and muscle strength/resistance best explained variations in systolic blood pressure for boys (17.3% and 7.4% of variance) and girls (7.4% of variance). Aerobic fitness, body composition, and muscle strength/resistance are all important indicators of blood pressure control, but aerobic fitness was a stronger predictor of systolic blood pressure in boys and of diastolic blood pressure in both sexes.
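The simple and multiple linear regressions reported above can be sketched on synthetic data (not the study's data; all predictor names, effect sizes, and noise levels below are invented for illustration) to show how the percentage of variance explained (R²) is computed, and why the multiple model explains at least as much variance as the simple one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for the study's predictors (illustrative only):
aerobic = rng.normal(size=n)    # aerobic fitness score
strength = rng.normal(size=n)   # muscle strength/resistance score
sbp = 110 + 5.0 * aerobic + 2.0 * strength + rng.normal(scale=4.0, size=n)

def r_squared(predictors, y):
    """Fraction of variance in y explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_simple = r_squared([aerobic], sbp)              # simple regression
r2_multiple = r_squared([aerobic, strength], sbp)  # multiple regression
print(f"simple R^2 = {r2_simple:.3f}, multiple R^2 = {r2_multiple:.3f}")
```

For nested ordinary-least-squares models, adding a regressor can never decrease R², which is why studies like this one also report significance tests rather than R² alone.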

  10. Auxiliary basis expansions for large-scale electronic structure calculations

    PubMed Central

    Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin

    2005-01-01

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767

  11. Dose Response for Chromosome Aberrations in Human Lymphocytes and Fibroblasts after Exposure to Very Low Doses of High LET Radiation

    NASA Technical Reports Server (NTRS)

    Hada, M.; George, Kerry; Cucinotta, Francis A.

    2011-01-01

    The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high-LET radiation exposure. Estimates of risks from low doses and low dose rates are often extrapolated from data on Japanese atomic bomb survivors using either linear or linear-quadratic model fits. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (1-20 cGy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole-chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving more than 2 breaks in 2 or more chromosomes). The curves for doses above 10 cGy were fitted with linear or linear-quadratic functions. For Si-28 ions no dose response was observed in the 2-10 cGy dose range, suggesting a non-targeted effect in this range.

  12. An approximation of herd effect due to vaccinating children against seasonal influenza - a potential solution to the incorporation of indirect effects into static models.

    PubMed

    Van Vlaenderen, Ilse; Van Bellinghen, Laure-Anne; Meier, Genevieve; Nautrup, Barbara Poulsen

    2013-01-22

    Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. 
children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses.

  13. Do Adaptive Representations of the Item-Position Effect in APM Improve Model Fit? A Simulation Study

    ERIC Educational Resources Information Center

    Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl

    2017-01-01

    The item-position effect describes how an item's position within a test, that is, the number of previous completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Due to the inflexibility of these representations our aim was to examine…

  14. Using Confidence as Feedback in Multi-Sized Learning Environments

    ERIC Educational Resources Information Center

    Hench, Thomas L.

    2014-01-01

    This paper describes the use of existing confidence and performance data to provide feedback by first demonstrating the data's fit to a simple linear model. The paper continues by showing how the model's use as a benchmark provides feedback to allow current or future students to infer either the difficulty or the degree of under or over…

  15. Simplified large African carnivore density estimators from track indices.

    PubMed

    Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J

    2016-01-01

    The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. A conventional linear model with intercept need not pass through zero, but it may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We fitted simple linear regressions with intercept and through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, so the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. 
    The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
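The model comparison described above (with-intercept versus through-origin, evaluated by AIC) can be sketched on synthetic track-count data; the 3.26 slope mirrors the paper's general formula, but the densities and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
density = rng.uniform(0.3, 2.0, size=30)                  # carnivores/100 km^2
tracks = 3.26 * density + rng.normal(scale=0.3, size=30)  # observed track index

def fit_ols(x, y, intercept):
    """Least-squares fit; returns coefficients and AIC (Gaussian errors)."""
    X = np.column_stack([x, np.ones_like(x)]) if intercept else x[:, None]
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, k = len(y), X.shape[1]
    aic = n * np.log(float(rss[0]) / n) + 2 * (k + 1)  # +1 for error variance
    return beta, aic

beta_origin, aic_origin = fit_ols(density, tracks, intercept=False)
beta_int, aic_int = fit_ols(density, tracks, intercept=True)
print(beta_origin[0], aic_origin, aic_int)  # through-origin slope near 3.26
```

Because the through-origin model spends one fewer parameter, its AIC is lower whenever the fitted intercept adds little, which is the pattern the paper reports.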

  16. Conditional statistical inference with multistage testing designs.

    PubMed

    Zwitser, Robert J; Maris, Gunter

    2015-03-01

    In this paper it is demonstrated how statistical inference from multistage test designs can be made based on the conditional likelihood. Special attention is given to parameter estimation, as well as the evaluation of model fit. Two reasons are provided why the fit of simple measurement models is expected to be better in adaptive designs, compared to linear designs: more parameters are available for the same number of observations; and undesirable response behavior, like slipping and guessing, might be avoided owing to a better match between item difficulty and examinee proficiency. The results are illustrated with simulated data, as well as with real data.

  17. An Alternative to the Breeder’s and Lande’s Equations

    PubMed Central

    Houchmandzadeh, Bahram

    2013-01-01

    The breeder’s equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Denoting the mean phenotypes of the parental, selected-parent, and progeny populations by E(Z0), E(ZW), and E(Z1), this equation relates the response to selection R = E(Z1) − E(Z0) to the selection differential S = E(ZW) − E(Z0) through a simple proportionality relation R = h²S, where the heritability coefficient h² is a simple function of the genotype and environment variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean μ, an alternative, exact linear equation of the form R′ = j²S′ can be derived, regardless of the parental genotype distribution. Here R′ = E(Z1) − μ and S′ = E(ZW) − μ stand for the mean phenotypic lags with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient j² is a simple function of the selection-function and environment variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotypes in the selected and offspring populations and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande’s equation Δz̄ = GP⁻¹S. The linearity coefficient of the alternative equation is not changed by Gaussian selection. PMID:24212080

  18. SU-E-T-75: A Simple Technique for Proton Beam Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgdorf, B; Kassaee, A; Garver, E

    2015-06-15

    Purpose: To develop a measurement-based technique to verify the range of proton beams for quality assurance (QA). Methods: We developed a simple technique to verify the proton beam range with in-house fabricated devices. Two separate devices were fabricated: a clear acrylic rectangular cuboid and a solid polyvinyl chloride (PVC) step wedge. For efficiency in our clinic, we used the rectangular cuboid for double scattering (DS) beams and the step wedge for pencil beam scanning (PBS) beams. These devices were added to our QA phantom to measure dose points along the distal fall-off region (between 80% and 20%) in addition to dose at mid-SOBP (spread-out Bragg peak) using a two-dimensional parallel plate chamber array (MatriXX™, IBA Dosimetry, Schwarzenbruck, Germany). This method relies on the fact that the slope of the distal fall-off is linear and does not vary with small changes in energy. Using a multi-layer ionization chamber (Zebra™, IBA Dosimetry), percent depth dose (PDD) curves were measured for our standard daily QA beams. The range (energy) for each beam was then varied (i.e. ±2 mm and ±5 mm) and additional PDD curves were measured. The distal fall-off of each PDD curve was fitted to a linear equation. The distal fall-off dose measured for a particular beam was then used in our linear equation to determine the beam range. Results: The linear fits of the fall-off region for the PDD curves, when the range was varied by a few millimeters for a specific QA beam, yielded identical slopes. The range calculated from measured point dose(s) in the fall-off region using this slope agreed within ±1 mm with the expected beam range. Conclusion: We developed a simple technique for accurately verifying the beam range for proton therapy QA programs.
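The core of the method, fitting a line to the distal fall-off and inverting it to recover depth, can be sketched with hypothetical depth-dose points (the depths, doses, and slope below are invented for illustration, not the paper's measurements):

```python
import numpy as np

# Hypothetical distal fall-off points (depth in mm, dose in % of mid-SOBP);
# the fall-off between ~80% and ~20% is treated as linear.
depth = np.array([150.0, 152.0, 154.0, 156.0])
dose = np.array([80.0, 60.0, 40.0, 20.0])

slope, offset = np.polyfit(depth, dose, 1)  # dose = slope*depth + offset

def depth_at(dose_meas):
    """Invert the linear fall-off fit to recover depth (range surrogate)."""
    return (dose_meas - offset) / slope

# A measured 50% dose point should land midway through the fall-off:
print(depth_at(50.0))  # -> 153.0 mm for these illustrative points
```

Because the fall-off slope is assumed constant across small energy shifts, a single measured point dose in this region suffices to locate the range.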

  19. Induction of chromosomal aberrations at fluences of less than one HZE particle per cell nucleus.

    PubMed

    Hada, Megumi; Chappell, Lori J; Wang, Minli; George, Kerry A; Cucinotta, Francis A

    2014-10-01

    The assumption of a linear dose response used to describe the biological effects of high-LET radiation is fundamental in radiation protection methodologies. We investigated the dose response for chromosomal aberrations for exposures corresponding to less than one particle traversal per cell nucleus by high-energy charged (HZE) nuclei. Human fibroblast and lymphocyte cells were irradiated with several low doses of <0.1 Gy, and several higher doses of up to 1 Gy, with oxygen (77 keV/μm), silicon (99 keV/μm) or iron (175, 195 or 240 keV/μm) particles. Chromosomal aberrations at first mitosis were scored using fluorescence in situ hybridization (FISH) with chromosome-specific paints for chromosomes 1, 2 and 4 and DAPI staining of background chromosomes. Nonlinear regression was used to evaluate possible linear and nonlinear dose-response models for these data. Dose responses for simple exchanges in human fibroblasts irradiated under confluent culture conditions were best fit by nonlinear models motivated by a nontargeted effect (NTE). The dose-response data for human lymphocytes irradiated in blood tubes were best fit by a linear model for all particles. Our results suggest that simple exchanges in normal human fibroblasts have an important NTE contribution at low particle fluence. The current and prior experimental studies provide important evidence against the linear dose-response assumption used in radiation protection for HZE particles and other high-LET radiation in the relevant range of low doses.

  20. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
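The analytical approach VisKin describes for first-order schemes, solving the linear ODE system with standard linear algebra, can be sketched for a hypothetical A ⇌ B → C scheme (the rate constants below are illustrative, not taken from the paper):

```python
import numpy as np

# First-order scheme A <-> B -> C with illustrative rate constants (1/s):
k1, k_1, k2 = 2.0, 1.0, 0.5
K = np.array([
    [-k1,        k_1, 0.0],  # dA/dt = -k1*A + k_1*B
    [ k1, -(k_1 + k2), 0.0], # dB/dt =  k1*A - (k_1 + k2)*B
    [0.0,         k2, 0.0],  # dC/dt =  k2*B
])

def concentrations(t, c0):
    """Analytical solution c(t) = V exp(L t) V^-1 c0 via eigendecomposition."""
    L, V = np.linalg.eig(K)
    return (V @ np.diag(np.exp(L * t)) @ np.linalg.inv(V) @ c0).real

c0 = np.array([1.0, 0.0, 0.0])  # start with pure A
c = concentrations(3.0, c0)
print(c)  # columns of K sum to zero, so total mass is conserved
```

The eigenvalues of K are the observed first-order rates, and the eigenvectors set the amplitudes, which is what makes comparing rates and amplitudes across schemes fast.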

  1. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting-optimization method, and a univariate linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.

  2. Phytoplankton productivity in relation to light intensity: A simple equation

    USGS Publications Warehouse

    Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.

    1987-01-01

    A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 − e^(−αI)). The parameter α (= I_k⁻¹) is derived by a simultaneous curve-fitting method, where I is the incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with the photosynthetic parameters are calculated. A simplified statistical (Poisson) model of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (a) which is less ambiguous than subjective methods, which assume that a linear region of the P vs. I curve is readily identifiable. The photosynthetic parameters α and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
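The exponential P-vs-I model and a simultaneous (non-subjective) curve-fitting estimate of α can be sketched with SciPy on synthetic data; Pmax, α, and the noise level below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_vs_i(I, p_max, alpha):
    """Exponential saturation: P = Pmax * (1 - exp(-alpha * I))."""
    return p_max * (1.0 - np.exp(-alpha * I))

rng = np.random.default_rng(2)
I = np.linspace(0.0, 400.0, 25)  # quantum-flux density (arbitrary units)
P = p_vs_i(I, 10.0, 0.02) + rng.normal(scale=0.2, size=I.size)

popt, pcov = curve_fit(p_vs_i, I, P, p0=[8.0, 0.01])
p_max_fit, alpha_fit = popt
efficiency = p_max_fit * alpha_fit  # initial slope: a = Pmax * alpha
print(p_max_fit, alpha_fit, efficiency)
```

Fitting both parameters at once avoids having to eyeball a "linear region" at low light, which is the ambiguity the abstract criticizes in subjective methods.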

  3. More memory under evolutionary learning may lead to chaos

    NASA Astrophysics Data System (ADS)

    Diks, Cees; Hommes, Cars; Zeppini, Paolo

    2013-02-01

    We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.

  4. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Liang; Abild-Pedersen, Frank

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, yet achieves transition state energies as a function of simple descriptors with an error smaller than that of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  6. Linear Models for Systematics and Nuisances

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.

    2017-12-01

    The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (e.g., stellar time series, high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data---and often the science data themselves---can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
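A minimal numpy sketch of the idea (toy predictors and signal, not K2 data; every value below is invented): with a Gaussian prior on the weights of the linear nuisance model, the posterior-mean weights follow from pure linear algebra, and subtracting the fitted systematics exposes the small signal:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 300)

# Toy "systematics" built from housekeeping-style predictors, plus a small
# sinusoidal "astrophysical" signal and white noise.
A = np.column_stack([np.ones_like(t), t, np.sin(0.7 * t)])  # predictor matrix
w_true = np.array([5.0, -0.3, 2.0])
signal = 0.1 * np.sin(4.0 * t)
y = A @ w_true + signal + rng.normal(scale=0.05, size=t.size)

sigma2 = 0.05 ** 2                   # white-noise variance
Lam = 100.0 * np.eye(A.shape[1])     # Gaussian prior covariance on weights

# Posterior mean of the weights -- one linear-algebra step:
S = np.linalg.inv(A.T @ A / sigma2 + np.linalg.inv(Lam))
w_hat = S @ (A.T @ y / sigma2)
residual = y - A @ w_hat             # systematics-subtracted time series

corr = np.corrcoef(residual, signal)[0, 1]
print(corr)                          # residual now tracks the hidden signal
```

Equivalently, the marginal likelihood of the data is Gaussian with covariance C + AΛAᵀ, which is what makes the marginalization a (fast) pure linear-algebra operation.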

  7. An approximation of herd effect due to vaccinating children against seasonal influenza – a potential solution to the incorporation of indirect effects into static models

    PubMed Central

    2013-01-01

    Background Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Methods Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. Results The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. Conclusions This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. 
children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses. PMID:23339290

  8. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
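The distinction the paper draws, classical calibration (fit y on x, then invert) versus inverse regression (fit x directly on y), can be sketched numerically; the calibration line, noise level, and new reading below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 50)                           # reference values
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=x.size)   # instrument readings

# Classical calibration: fit y = a + b*x, then invert for an unknown x0.
b, a = np.polyfit(x, y, 1)
def classical(y0):
    return (y0 - a) / b

# Inverse regression: regress x directly on y.
d, c = np.polyfit(y, x, 1)
def inverse(y0):
    return c + d * y0

y0 = 17.0                           # a new reading; true x is (17 - 2)/3 = 5
print(classical(y0), inverse(y0))
```

The two estimators differ most for readings far from the calibration data's center (inverse regression shrinks toward the mean of x), which is the kind of discrepancy the proposed "reversed inverse regression" is meant to resolve.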

  9. Predicting location of recurrence using FDG, FLT, and Cu-ATSM PET in canine sinonasal tumors treated with radiotherapy

    NASA Astrophysics Data System (ADS)

    Bradshaw, Tyler; Fu, Rau; Bowen, Stephen; Zhu, Jun; Forrest, Lisa; Jeraj, Robert

    2015-07-01

    Dose painting relies on the ability of functional imaging to identify resistant tumor subvolumes to be targeted for additional boosting. This work assessed the ability of FDG, FLT, and Cu-ATSM PET imaging to predict the locations of residual FDG PET uptake in canine tumors following radiotherapy. Nineteen canines with spontaneous sinonasal tumors underwent PET/CT imaging with radiotracers FDG, FLT, and Cu-ATSM prior to hypofractionated radiotherapy. Therapy consisted of 10 fractions of 4.2 Gy to the sinonasal cavity with or without an integrated boost of 0.8 Gy to the GTV. Patients had an additional FLT PET/CT scan after fraction 2, a Cu-ATSM PET/CT scan after fraction 3, and follow-up FDG PET/CT scans after radiotherapy. Following image registration, simple and multiple linear and logistic voxel regressions were performed to assess how well pre- and mid-treatment PET imaging predicted post-treatment FDG uptake. R² and pseudo-R² were used to assess the goodness of fit. For simple linear regression models, regression coefficients for all pre- and mid-treatment PET images were significantly positive across the population (P < 0.05). However, there was large variability among patients in goodness of fit: R² ranged from 0.00 to 0.85, with a median of 0.12. Results for logistic regression models were similar. Multiple linear regression models resulted in better fits (median R² = 0.31), but there was still large variability between patients in R². The R² from regression models for different predictor variables were highly correlated across patients (R ≈ 0.8), indicating tumors that were poorly predicted with one tracer were also poorly predicted by other tracers. In conclusion, the high inter-patient variability in goodness of fit indicates that PET was able to predict locations of residual tumor in some patients, but not others. This suggests not all patients would be good candidates for dose painting based on a single biological target.
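The voxel-wise R² comparison described above can be sketched as follows, with synthetic arrays standing in for registered PET voxel values; the two "patients" and all numbers are invented to illustrate why R² varies so widely between subjects.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(x, y):
    """R^2 of a simple linear least-squares fit of y on x."""
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    return 1.0 - resid.var() / y.var()

# Hypothetical voxel values: pre-treatment uptake predicting post-treatment uptake.
pre = rng.uniform(1, 10, 500)
post_good = 0.8 * pre + rng.normal(0, 0.5, 500)   # strongly predictive "patient"
post_poor = rng.uniform(1, 10, 500)               # unpredictable "patient"

print(r_squared(pre, post_good), r_squared(pre, post_poor))
```

The first patient yields a high R², the second one near zero, mirroring the paper's 0.00-0.85 spread.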

  10. Predicting location of recurrence using FDG, FLT, and Cu-ATSM PET in canine sinonasal tumors treated with radiotherapy.

    PubMed

    Bradshaw, Tyler; Fu, Rau; Bowen, Stephen; Zhu, Jun; Forrest, Lisa; Jeraj, Robert

    2015-07-07

    Dose painting relies on the ability of functional imaging to identify resistant tumor subvolumes to be targeted for additional boosting. This work assessed the ability of FDG, FLT, and Cu-ATSM PET imaging to predict the locations of residual FDG PET uptake in canine tumors following radiotherapy. Nineteen canines with spontaneous sinonasal tumors underwent PET/CT imaging with radiotracers FDG, FLT, and Cu-ATSM prior to hypofractionated radiotherapy. Therapy consisted of 10 fractions of 4.2 Gy to the sinonasal cavity with or without an integrated boost of 0.8 Gy to the GTV. Patients had an additional FLT PET/CT scan after fraction 2, a Cu-ATSM PET/CT scan after fraction 3, and follow-up FDG PET/CT scans after radiotherapy. Following image registration, simple and multiple linear and logistic voxel regressions were performed to assess how well pre- and mid-treatment PET imaging predicted post-treatment FDG uptake. R² and pseudo-R² were used to assess the goodness of fit. For simple linear regression models, regression coefficients for all pre- and mid-treatment PET images were significantly positive across the population (P < 0.05). However, there was large variability among patients in goodness of fit: R² ranged from 0.00 to 0.85, with a median of 0.12. Results for logistic regression models were similar. Multiple linear regression models resulted in better fits (median R² = 0.31), but there was still large variability between patients in R². The R² from regression models for different predictor variables were highly correlated across patients (R ≈ 0.8), indicating tumors that were poorly predicted with one tracer were also poorly predicted by other tracers. In conclusion, the high inter-patient variability in goodness of fit indicates that PET was able to predict locations of residual tumor in some patients, but not others. This suggests not all patients would be good candidates for dose painting based on a single biological target.

  11. Actinide electronic structure and atomic forces

    NASA Astrophysics Data System (ADS)

    Albers, R. C.; Rudin, Sven P.; Trinkle, Dallas R.; Jones, M. D.

    2000-07-01

    We have developed a new method[1] of fitting tight-binding parameterizations based on functional forms developed at the Naval Research Laboratory.[2] We have applied these methods to actinide metals and report our success using them (see below). The fitting procedure uses first-principles local-density-approximation (LDA) linear augmented plane-wave (LAPW) band structure techniques[3] to first calculate an electronic-structure band structure and total energy for fcc, bcc, and simple cubic crystal structures for the actinide of interest. The tight-binding parameterization is then chosen to fit the detailed energy eigenvalues of the bands along symmetry directions, and the symmetry of the parameterization is constrained to agree with the correct symmetry of the LDA band structure at each eigenvalue and k-vector included in the fit. By fitting to a range of different volumes and the three different crystal structures, we find that the resulting parameterization is robust and appears to accurately calculate other crystal structures and properties of interest.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otake, M.; Schull, W.J.

    The occurrence of lenticular opacities among atomic bomb survivors in Hiroshima and Nagasaki detected in 1963-1964 has been examined in reference to their γ and neutron doses. A lenticular opacity in this context implies an ophthalmoscopic and slit lamp biomicroscopic defect in the axial posterior aspect of the lens which may or may not measurably interfere with visual acuity. Several different dose-response models were fitted to the data after the effects of age at time of bombing (ATB) were examined. Some postulate the existence of a threshold(s), others do not. All models assume a "background" exists, that is, that some number of posterior lenticular opacities are ascribable to events other than radiation exposure. Among these alternatives we can show that a simple linear γ-neutron relationship which assumes no threshold does not fit the data adequately under the T65 dosimetry, but does fit the recent Oak Ridge and Lawrence Livermore estimates. Other models which envisage quadratic terms in gamma and which may or may not assume a threshold are compatible with the data. The "best" fit, that is, the one with the smallest χ² and largest tail probability, is with a "linear gamma:linear neutron" model which postulates a γ threshold but no threshold for neutrons. It should be noted that the greatest difference in the dose-response models associated with the three different sets of doses involves the neutron component, as is, of course, to be expected. No effect of neutrons on the occurrence of lenticular opacities is demonstrable with either the Lawrence Livermore or Oak Ridge estimates.
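The threshold-versus-no-threshold comparison above can be illustrated with a toy least-squares fit. All dose and incidence values below are invented (they are not the survivor data), and the threshold is found by a simple grid search rather than the authors' procedure.

```python
import numpy as np

# Hypothetical dose-response data: background incidence plus a response
# that begins only above a threshold dose of about 1 Gy.
dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
incidence = np.array([0.020, 0.021, 0.019, 0.035, 0.060, 0.110, 0.160])

def chi2_for_threshold(d0):
    """Residual sum of squares of incidence = bg + slope*max(dose - d0, 0)."""
    x = np.clip(dose - d0, 0.0, None)
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, incidence, rcond=None)
    resid = incidence - A @ coef
    return (resid ** 2).sum()

chi2_linear = chi2_for_threshold(0.0)           # no-threshold (simple linear) model
thresholds = np.linspace(0.0, 2.0, 201)
chi2_best = min(chi2_for_threshold(d) for d in thresholds)
print(chi2_linear, chi2_best)
```

Because the threshold model nests the no-threshold model (d0 = 0), its best fit can never be worse; the question in the paper is whether the improvement is statistically meaningful.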

  13. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
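The contrast between the extrapolation ("graph") method and whole-curve fitting can be sketched numerically. The titration model and all constants below are invented for illustration (a hyperbolic flux decline with true initial slope -2); the comparison shows why a fit over all points is less sensitive to random error than a slope read from the first two points.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inhibitor titration: flux J falls hyperbolically with inhibitor i.
i = np.linspace(0.0, 1.0, 11)
J_true = 1.0 / (1.0 + 2.0 * i)            # true initial slope dJ/di at i=0 is -2
J_obs = J_true + rng.normal(0, 0.01, i.size)

# "Graph" method: slope estimated from the first two noisy points only.
slope_graph = (J_obs[1] - J_obs[0]) / (i[1] - i[0])

# Fitting method: least-squares fit of J = 1/(1 + k*i) over all points (grid over k).
ks = np.linspace(0.5, 5.0, 451)
sse = [((J_obs - 1.0 / (1.0 + k * i)) ** 2).sum() for k in ks]
k_fit = ks[int(np.argmin(sse))]
slope_fit = -k_fit                         # dJ/di at i=0 for this model

print(slope_graph, slope_fit)
```

The two-point estimate carries both random error and a curvature bias, while the fitted estimate stays close to the true slope.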

  14. Changes in Clavicle Length and Maturation in Americans: 1840-1980.

    PubMed

    Langley, Natalie R; Cridlin, Sandra

    2016-01-01

    Secular changes refer to short-term biological changes ostensibly due to environmental factors. Two well-documented secular trends in many populations are earlier age of menarche and increasing stature. This study synthesizes data on maximum clavicle length and fusion of the medial epiphysis in 1840-1980 American birth cohorts to provide a comprehensive assessment of developmental and morphological change in the clavicle. Clavicles from the Hamann-Todd Human Osteological Collection (n = 354), McKern and Stewart Korean War males (n = 341), Forensic Anthropology Data Bank (n = 1,239), and the McCormick Clavicle Collection (n = 1,137) were used in the analysis. Transition analysis was used to evaluate fusion of the medial epiphysis (scored as unfused, fusing, or fused). Several statistical treatments were used to assess fluctuations in maximum clavicle length. First, Durbin-Watson tests were used to evaluate autocorrelation, and a local regression (LOESS) was used to identify visual shifts in the regression slope. Next, piecewise regression was used to fit linear regression models before and after the estimated breakpoints. Multiple starting parameters were tested in the range determined to contain the breakpoint, and the model with the smallest mean squared error was chosen as the best fit. The parameters from the best-fit models were then used to derive the piecewise models, which were compared with the initial simple linear regression models to determine which model provided the best fit for the secular change data. The epiphyseal union data indicate a decline in the age at onset of fusion since the early twentieth century. Fusion commences approximately four years earlier in mid- to late twentieth-century birth cohorts than in late nineteenth- and early twentieth-century birth cohorts. However, fusion is completed at roughly the same age across cohorts. 
The most significant decline in age at onset of epiphyseal union appears to have occurred since the mid-twentieth century. LOESS plots show a breakpoint in the clavicle length data around the mid-twentieth century in both sexes, and piecewise regression models indicate a significant decrease in clavicle length in the American population after 1940. The piecewise model provides a slightly better fit than the simple linear model. Since the linear model's standard error is not substantially different from the piecewise model's, an argument could be made to select the less complex linear model. However, we chose the piecewise model because it detects changes in clavicle length that a single linear model smooths over. The decrease in maximum clavicle length is in line with a documented narrowing of the American skeletal form, as shown by analyses of cranial and facial breadth and bi-iliac breadth of the pelvis. Environmental influences on skeletal form include increases in body mass index, health improvements, improved socioeconomic status, and the elimination of infectious diseases. Secular changes in bony dimensions and skeletal maturation dictate that medical and forensic standards used to deduce information about growth, health, and biological traits must be derived from modern populations.
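The breakpoint search described above (fit a piecewise model at each candidate breakpoint, keep the one with the smallest error, then compare against a simple linear model) can be sketched as follows. The birth-year series and the 1940 breakpoint are synthetic stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical secular-trend data: mean clavicle length by birth year,
# flat before 1940 and declining afterwards, plus noise.
year = np.arange(1840, 1981)
length = 150.0 - 0.05 * np.clip(year - 1940, 0, None) + rng.normal(0, 0.3, year.size)

def sse_piecewise(breakpoint):
    """SSE of a continuous piecewise-linear fit with a hinge at `breakpoint`."""
    A = np.column_stack([np.ones_like(year, float), year,
                         np.clip(year - breakpoint, 0, None)])
    coef, *_ = np.linalg.lstsq(A, length, rcond=None)
    return ((length - A @ coef) ** 2).sum()

candidates = np.arange(1900, 1971)
best = min(candidates, key=sse_piecewise)      # breakpoint with smallest SSE

# Compare against a simple linear model via mean squared error.
lin_coef = np.polyfit(year, length, 1)
mse_linear = ((length - np.polyval(lin_coef, year)) ** 2).mean()
mse_piece = sse_piecewise(best) / year.size
print(best, mse_linear, mse_piece)
```

Since the piecewise design contains the linear model as a special case (hinge coefficient zero), its MSE is never larger; model choice then rests on whether the improvement justifies the extra parameters.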

  15. Constraining Solar Wind Heating Processes by Kinetic Properties of Heavy Ions

    NASA Astrophysics Data System (ADS)

    Tracy, Patrick J.; Kasper, Justin C.; Raines, Jim M.; Shearer, Paul; Gilbert, Jason A.; Zurbuchen, Thomas H.

    2016-06-01

    We analyze the heavy ion components (A > 4 amu) in collisionally young solar wind plasma and show that there is a clear, stable dependence of temperature on mass, probably reflecting the conditions in the solar corona. We consider both linear and power law forms for the dependence and find that a simple linear fit of the form Ti/Tp = (1.35 ± 0.02) mi/mp describes the observations twice as well as the equivalent best fit power law of the form Ti/Tp = (mi/mp)^(1.07 ± 0.01). Most importantly we find that current model predictions based on turbulent transport and kinetic dissipation are in agreement with observed nonthermal heating in intermediate collisional age plasma for m/q < 3.5, but are not in quantitative or qualitative agreement with the lowest collisional age results. These dependencies provide new constraints on the physics of ion heating in multispecies plasmas, along with predictions to be tested by the upcoming Solar Probe Plus and Solar Orbiter missions to the near-Sun environment.
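The two functional forms compared above (linear through the origin versus a power law fitted on a log-log scale) can be demonstrated on idealized data. The mass ratios below are approximate values for a few heavy ions and the temperature ratios are generated exactly from the paper's linear relation, purely to show the fitting mechanics.

```python
import numpy as np

# Approximate mass ratios m_i/m_p for He, C, O, Fe, with temperature ratios
# generated (noiselessly) from the linear trend T_i/T_p = 1.35 * m_i/m_p.
m = np.array([4.0, 12.0, 16.0, 56.0])
T = 1.35 * m

# Linear fit through the origin: slope = sum(m*T) / sum(m^2).
slope = (m * T).sum() / (m ** 2).sum()

# Power-law fit T = m**b: exponent from a log-log regression through the origin.
b = (np.log(m) * np.log(T)).sum() / (np.log(m) ** 2).sum()
print(slope, b)
```

The linear fit recovers 1.35 exactly, while forcing a pure power law onto linearly generated data yields an exponent slightly above 1, analogous to the paper's 1.07 ± 0.01.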

  16. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
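The simple linear regression through the origin mentioned above has a closed form worth spelling out. The coordinates below are synthetic stand-ins for adjusted-variable residuals (the true coefficient 0.7 is invented); the formulas are the standard no-intercept least-squares slope and its standard error, not the Cox-specific construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical adjusted-variable coordinates: residuals of a covariate (x)
# and of the outcome/score (y) after adjusting for the other covariates.
x = rng.normal(0, 1, 200)
y = 0.7 * x + rng.normal(0, 0.5, 200)

# Simple linear regression through the origin: beta = sum(x*y)/sum(x^2),
# as used to read the coefficient off an adjusted variable plot.
beta = (x * y).sum() / (x ** 2).sum()
se = np.sqrt(((y - beta * x) ** 2).sum() / (x.size - 1) / (x ** 2).sum())
print(beta, se)
```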

  17. A direct method of extracting surface recombination velocity from an electron beam induced current line scan

    NASA Astrophysics Data System (ADS)

    Ong, Vincent K. S.

    1998-04-01

    The extraction of diffusion length and surface recombination velocity in a semiconductor with the use of an electron beam induced current line scan has traditionally been done by fitting the line scan to complicated theoretical equations. It was recently shown that a much simpler equation is sufficient for the extraction of diffusion length, with the linearization coefficient being the only variable that needs to be adjusted in the curve-fitting process. However, complicated equations are still necessary for the extraction of surface recombination velocity. It is shown in this article that it is indeed possible to extract surface recombination velocity with a simple equation, using only one variable, the linearization coefficient. An intuitive explanation of the reasoning behind the method is discussed. The accuracy of the method was verified with the use of three-dimensional computer simulation, and was found to be even slightly better than that of the best existing method.

  18. Comparison of Two Methods for Calculating the Frictional Properties of Articular Cartilage Using a Simple Pendulum and Intact Mouse Knee Joints

    PubMed Central

    Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.

    2009-01-01

    In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ that will have applications investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
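The two decay models compared in the abstract (a Stanton-style linear amplitude decay versus an exponential decay reflecting viscous damping) can be contrasted on a synthetic amplitude series. The amplitudes below are generated from an exact exponential with invented constants, so the exponential fit should win decisively by RMSE, mirroring the paper's qualitative finding.

```python
import numpy as np

# Hypothetical peak amplitudes (degrees) of successive pendulum swings,
# generated from an exponential decay: amp_n = 10 * exp(-0.1 * n).
n = np.arange(20)
amp = 10.0 * np.exp(-0.1 * n)

# Stanton-style linear decay fit: amp = a0 - k*n.
lin = np.polyfit(n, amp, 1)
rmse_lin = np.sqrt(((amp - np.polyval(lin, n)) ** 2).mean())

# Exponential fit via linear regression on log(amp): amp = A * exp(c*n).
c, logA = np.polyfit(n, np.log(amp), 1)
rmse_exp = np.sqrt(((amp - np.exp(logA) * np.exp(c * n)) ** 2).mean())
print(rmse_lin, rmse_exp)
```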

  19. Bond Order Conservation Strategies in Catalysis Applied to the NH3 Decomposition Reaction

    DOE PAGES

    Yu, Liang; Abild-Pedersen, Frank

    2016-12-14

    On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.

  20. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed in polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acids content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent for accuracy and repeatability (t-test, F-test). This method is applicable in analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
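The linear relationship reported above (uronic acid content against the summed peak areas at 1745, 1715, and 1600 cm⁻¹) amounts to an ordinary calibration line. The area and content values below are invented for illustration; the code simply fits the line and checks the correlation coefficient, as in the paper's 0.98 figure.

```python
import numpy as np

# Hypothetical calibration data: summed peak areas (arbitrary units) at
# 1745, 1715 and 1600 cm^-1 vs uronic acid content (%, as polygalacturonic acid).
area_sum = np.array([1.2, 2.1, 3.3, 4.0, 5.2, 6.1])
uronic = np.array([2.0, 3.4, 5.1, 6.3, 8.0, 9.4])

slope, intercept = np.polyfit(area_sum, uronic, 1)
r = np.corrcoef(area_sum, uronic)[0, 1]
print(slope, intercept, r)
```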

  1. Simple stochastic birth and death models of genome evolution: was there enough time for us to evolve?

    PubMed

    Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V

    2003-10-12

    The distributions of many genome-associated quantities, including the membership of paralogous gene families, can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much better compatible with the current estimates of the rates of individual duplication/loss events.

  2. Enhancements of Bayesian Blocks; Application to Large Light Curve Databases

    NASA Technical Reports Server (NTRS)

    Scargle, Jeff

    2015-01-01

    Bayesian Blocks are optimal piecewise linear representations (step function fits) of light-curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations. (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv: 0809.0339; Walkowicz et al., in progress).

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thappily, Praveen, E-mail: pravvmon@gmail.com; Shiju, K., E-mail: shiiuvenus@gmail.com

    Green synthesis of silver nanoparticles was achieved by simple visible light irradiation using Aloe barbadensis leaf extract as the reducing agent. UV-Vis spectroscopic analysis was used to confirm the successful formation of nanoparticles. The effect of light irradiation time on the light absorption of the nanoparticles was investigated. It is observed that up to 25 minutes of light irradiation the absorption increases linearly with time, after which it saturates. Finally, the time-absorption curve was fitted theoretically and a relation between the two was modeled with the help of simulation software.
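A curve that rises roughly linearly at first and then saturates, as described above, is commonly modeled as A(t) = A_max·(1 − exp(−t/τ)); the abstract does not state which functional form the authors used, so this is only a plausible stand-in, with synthetic data and a brute-force grid fit.

```python
import numpy as np

# Synthetic "measured" absorbance vs irradiation time (minutes), generated
# from A(t) = A_max * (1 - exp(-t/tau)) with A_max = 1.0 and tau = 12.0.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 40.0, 50.0])
absorb = 1.0 * (1.0 - np.exp(-t / 12.0))

# Least-squares fit of (A_max, tau) by grid search.
best = None
for A in np.linspace(0.8, 1.2, 41):
    for tau in np.linspace(6.0, 20.0, 141):
        sse = ((absorb - A * (1.0 - np.exp(-t / tau))) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, A, tau)
sse, A_fit, tau_fit = best
print(A_fit, tau_fit)
```

For small t the model is approximately linear, A(t) ≈ A_max·t/τ, which matches the observed linear regime before saturation.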

  4. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique.
This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
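The graphical inverse-rate technique can be sketched for the α = 2 case, where Ω̈ = AΩ̇² integrates to Ω̇(t) = 1/(A(t_f − t)), so the inverse rate 1/Ω̇ = A(t_f − t) is linear and crosses zero at the failure time t_f. The constants A = 0.5 and t_f = 100 are invented and the data are noise-free, purely to show the extrapolation.

```python
import numpy as np

# FFM with alpha = 2: rate(t) = 1 / (A * (t_f - t)), so the inverse rate
# is linear in time and extrapolates to zero at the eruption onset t_f.
A, t_f = 0.5, 100.0
t = np.linspace(0.0, 80.0, 30)
rate = 1.0 / (A * (t_f - t))

# Fit a straight line to the inverse rate and extrapolate to zero.
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_forecast = -intercept / slope
print(t_forecast)
```

With noisy real data the extrapolation would instead yield an eruption window, as the abstract describes, from the confidence band of the least-squares line.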

  5. Numerical simulation of a relaxation test designed to fit a quasi-linear viscoelastic model for temporomandibular joint discs.

    PubMed

    Commisso, Maria S; Martínez-Reina, Javier; Mayo, Juana; Domínguez, Jaime

    2013-02-01

    The main objectives of this work are: (a) to introduce an algorithm for adjusting the quasi-linear viscoelastic model to fit a material using a stress relaxation test and (b) to validate a protocol for performing such tests in temporomandibular joint discs. This algorithm is intended for fitting the Prony series coefficients and the hyperelastic constants of the quasi-linear viscoelastic model by considering that the relaxation test is performed with an initial ramp loading at a certain rate. This algorithm was validated before being applied to achieve the second objective. Generally, the complete three-dimensional formulation of the quasi-linear viscoelastic model is very complex. Therefore, it is necessary to design an experimental test to ensure a simple stress state, such as uniaxial compression to facilitate obtaining the viscoelastic properties. This work provides some recommendations about the experimental setup, which are important to follow, as an inadequate setup could produce a stress state far from uniaxial, thus, distorting the material constants determined from the experiment. The test considered is a stress relaxation test using unconfined compression performed in cylindrical specimens extracted from temporomandibular joint discs. To validate the experimental protocol, the test was numerically simulated using finite-element modelling. The disc was arbitrarily assigned a set of quasi-linear viscoelastic constants (c1) in the finite-element model. Another set of constants (c2) was obtained by fitting the results of the simulated test with the proposed algorithm. The deviation of constants c2 from constants c1 measures how far the stresses are from the uniaxial state. 
The effects of the following features of the experimental setup on this deviation have been analysed: (a) the friction coefficient between the compression plates and the specimen (which should be as low as possible); (b) the portion of the specimen glued to the compression plates (smaller areas glued are better); and (c) the variation in the thickness of the specimen. The specimen's faces should be parallel to ensure a uniaxial stress state. However, this is not possible in real specimens, and a criterion must be defined to accept the specimen in terms of the specimen's thickness variation and the deviation of the fitted constants arising from such a variation.
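The Prony-series fitting mentioned above can be illustrated in its simplest one-term form, G(t) = g∞ + g₁·exp(−t/τ): for each candidate τ the coefficients g∞ and g₁ enter linearly and are found by least squares, and τ itself by a grid search. The relaxation curve and constants below are synthetic, and this sketch ignores the initial loading ramp that the paper's algorithm explicitly accounts for.

```python
import numpy as np

# Synthetic stress-relaxation curve: sigma(t) = g_inf + g1 * exp(-t/tau),
# with g_inf = 0.4, g1 = 0.6, tau = 2.0.
t = np.linspace(0.0, 10.0, 50)
sigma = 0.4 + 0.6 * np.exp(-t / 2.0)

def fit_for_tau(tau):
    """Linear least-squares fit of (g_inf, g1) for a fixed relaxation time tau."""
    A = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
    coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)
    sse = ((sigma - A @ coef) ** 2).sum()
    return sse, coef

taus = np.linspace(0.5, 5.0, 451)
best_sse, best_tau = min((fit_for_tau(tau)[0], tau) for tau in taus)
_, (g_inf, g1) = fit_for_tau(best_tau)
print(best_tau, g_inf, g1)
```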

  6. Sex differences of anthropometric indices of obesity by age among Iranian adults in northern Iran: A predictive regression model.

    PubMed

    Hajian-Tilaki, Karimollah; Heidari, Behzad

    2015-01-01

    The biological variation of body mass index (BMI) and waist circumference (WC) with age may vary by gender. The objective of this study was to investigate the functional relationship of anthropometric measures with age and sex. The data were collected from a population-based cross-sectional study of 1800 men and 1800 women aged 20-70 years in northern Iran. The linear and quadratic pattern of age on weight, height, BMI and WC and WHR were tested statistically and the interaction effect of age and gender was also formally tested. The quadratic model (age²) provided a significantly better fit than the simple linear model for weight, BMI and WC. BMI, WC and weight explained a greater variance using quadratic form for women compared with men (for BMI, R² = 0.18, p<0.001 vs R² = 0.059, p<0.001 and for WC, R² = 0.17, p<0.001 vs R² = 0.047, p<0.001). For height, there is an inverse linear relationship while for WHR, a positive linear association was apparent by aging, the quadratic form did not add to better fit. These findings indicate the different patterns of weight gain, fat accumulation for visceral adiposity and loss of muscle mass between men and women in the early and middle adulthood.
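The linear-versus-quadratic comparison above reduces to fitting both polynomials and comparing R². The age range matches the study (20-70 years), but the BMI curve and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1.0 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Hypothetical BMI-vs-age data with a quadratic (inverted-U) shape.
age = rng.uniform(20, 70, 400)
bmi = 18.0 + 0.45 * age - 0.004 * age ** 2 + rng.normal(0, 1.0, 400)

r2_lin = r2(bmi, np.polyval(np.polyfit(age, bmi, 1), age))
r2_quad = r2(bmi, np.polyval(np.polyfit(age, bmi, 2), age))
print(r2_lin, r2_quad)
```

The quadratic fit always has R² at least as large as the linear fit (it nests it); the study's finding is that the improvement is substantial for weight, BMI, and WC, especially in women.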

  7. Age- and sex-dependent regression models for predicting the live weight of West African Dwarf goat from body measurements.

    PubMed

    Sowande, O S; Oyewale, B F; Iyasere, O S

    2010-06-01

    The relationships between live weight and eight body measurements of West African Dwarf (WAD) goats were studied using 211 animals under farm condition. The animals were categorized based on age and sex. Data obtained on height at withers (HW), heart girth (HG), body length (BL), head length (HL), and length of hindquarter (LHQ) were fitted into simple linear, allometric, and multiple-regression models to predict live weight from the body measurements according to age group and sex. Results showed that live weight, HG, BL, LHQ, HL, and HW increased with the age of the animals. In multiple-regression model, HG and HL best fit the model for goat kids; HG, HW, and HL for goat aged 13-24 months; while HG, LHQ, HW, and HL best fit the model for goats aged 25-36 months. Coefficients of determination (R²) values for linear and allometric models for predicting the live weight of WAD goat increased with age in all the body measurements, with HG being the most satisfactory single measurement in predicting the live weight of WAD goat. Sex had significant influence on the model with R² values consistently higher in females except the models for LHQ and HW.
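An allometric model of the kind used above, W = a·HG^b, is conventionally fitted as a linear regression on the log-log scale. The heart-girth range, exponent 2.8, and noise level below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical heart girth (cm) and live weight (kg) following an
# allometric law W = a * HG**b with multiplicative noise.
hg = rng.uniform(40, 75, 150)
w = 0.0004 * hg ** 2.8 * np.exp(rng.normal(0, 0.05, 150))

# Allometric fit: log W = log a + b * log HG, i.e. ordinary linear
# regression on the log-log scale.
b, log_a = np.polyfit(np.log(hg), np.log(w), 1)
print(np.exp(log_a), b)
```

A simple linear model W = c0 + c1·HG could be fitted to the same data with `np.polyfit(hg, w, 1)` and the two compared by R², which is essentially the study's model comparison.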

  8. Variational and robust density fitting of four-center two-electron integrals in local metrics

    NASA Astrophysics Data System (ADS)

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł

    2008-09-01

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  9. Variational and robust density fitting of four-center two-electron integrals in local metrics.

    PubMed

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjaergaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Host, Stinne; Salek, Paweł

    2008-09-14

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  10. The entrainment matrix of a superfluid nucleon mixture at finite temperatures

    NASA Astrophysics Data System (ADS)

    Leinson, Lev B.

    2018-06-01

A closed system of non-linear equations is considered for the entrainment matrix of a non-relativistic mixture of superfluid nucleons at arbitrary temperatures below the onset of neutron superfluidity; it takes into account the essential dependence of the superfluid energy gap in the nucleon spectra on the velocities of the superfluid flows. It is assumed that the protons condense into the isotropic 1S0 state, and the neutrons are paired into the spin-triplet 3P2 state. An analytic solution to the non-linear equations for the entrainment matrix is derived for temperatures just below the critical value for the onset of neutron superfluidity. In the general case of an arbitrary temperature of the superfluid mixture, the non-linear equations are solved numerically and fitted by simple formulas convenient for practical use with an arbitrary set of Landau parameters.

  11. Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation.

    PubMed

    Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T

    2015-01-01

Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in a series. The ratio R_max of a_max − a_min to S_n − a_min·n, and the ratio R_min of a_max − a_min to a_max·n − S_n, are always equal to 2/n for a linear series, where a_max is the maximum element, a_min is the minimum element and S_n is the sum of all elements. If a series expected to follow y = c contains data that do not agree with the y = c form, then R_max > 2/n and R_min > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as 2/n·(1 + k1) and 2/n·(1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 − 1. Given this relation and a transformation technique that transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit even when the percentage of data agreeing with the linear fit is less than 50% and the deviation of data that do not agree with the linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
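A minimal sketch of the R_max/R_min indicators, using only the formulas quoted above (the transformation to the y = c form and the thresholds k1, k2 are omitted):

```python
def linear_fit_indicators(a):
    """R_max, R_min and the reference value 2/n for a series `a`.

    For an exactly linear (arithmetic) series both ratios equal 2/n;
    an element that deviates from the linear fit pushes the
    corresponding ratio above 2/n.
    """
    n = len(a)
    s = sum(a)                  # S_n
    spread = max(a) - min(a)    # a_max - a_min
    r_max = spread / (s - min(a) * n)   # sensitive to the maximum element
    r_min = spread / (max(a) * n - s)   # sensitive to the minimum element
    return r_max, r_min, 2 / n
```

For the arithmetic series [1, 2, 3, 4, 5] all three values are 0.4; replacing the last element with 50 raises R_max well above 2/n, flagging the maximum as an outlier.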

  12. Inference of epistatic effects in a key mitochondrial protein

    NASA Astrophysics Data System (ADS)

    Nelson, Erik D.; Grishin, Nick V.

    2018-06-01

We use Potts model inference to predict pair epistatic effects in a key mitochondrial protein, cytochrome c oxidase subunit 2, for ray-finned fishes. We examine the effect of phylogenetic correlations on our predictions using a simple exact fitness model, and we find that, although epistatic effects are underpredicted, they maintain a roughly linear relationship to their true (model) values. After accounting for this correction, epistatic effects in the protein are still relatively weak, leading to fitness valleys of depth 2Ns ≃ −5 in compensatory double mutants. Interestingly, positive epistasis is more pronounced than negative epistasis, and the strongest positive effects capture nearly all sites subject to positive selection in fishes, similar to virus proteins evolving under selection pressure in the context of drug therapy.

  13. Simulation Study on Fit Indexes in CFA Based on Data with Slightly Distorted Simple Structure

    ERIC Educational Resources Information Center

    Beauducel, Andre; Wittmann, Werner W.

    2005-01-01

Fit indexes were compared with respect to a specific type of model misspecification. Simple structure was violated with some secondary loadings that were present in the true models but were not specified in the estimated models. The χ² test, Comparative Fit Index, Goodness-of-Fit Index, Incremental Fit Index, Nonnormed Fit Index, root mean…

  14. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  15. A systematic way for the cost reduction of density fitting methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kállay, Mihály, E-mail: kallay@mail.bme.hu

    2014-12-28

We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.

  16. Power dependence of reflectivity of metallic films.

    PubMed

    Yeh, Y C; Stafsudd, O M

    1976-01-01

The reflectivity of vacuum-deposited gold films on quartz glass substrates was studied as a function of 10.6-μm radiation power density. A simple linear model of the temperature dependence of the absorptivity of the gold film is developed. This temperature dependence is coupled with a three-dimensional heat flow analysis and fits the experimental data well. The absorptivity α is written as α₀(1 + βT), and the values of α₀ and β are determined, respectively, as (0.88 ± 0.01) × 10⁻² and 12 × 10⁻⁴/°C.

  17. Second-harmonic diffraction from holographic volume grating.

    PubMed

    Nee, Tsu-Wei

    2006-10-01

    The full polarization property of holographic volume-grating enhanced second-harmonic diffraction (SHD) is investigated theoretically. The nonlinear coefficient is derived from a simple atomic model of the material. By using a simple volume-grating model, the SHD fields and Mueller matrices are first derived. The SHD phase-mismatching effect for a thick sample is analytically investigated. This theory is justified by fitting with published experimental SHD data of thin-film samples. The SHD of an existing polymethyl methacrylate (PMMA) holographic 2-mm-thick volume-grating sample is investigated. This sample has two strong coupling linear diffraction peaks and five SHD peaks. The splitting of SHD peaks is due to the phase-mismatching effect. The detector sensitivity and laser power needed to measure these peak signals are quantitatively estimated.

  18. A Simple Simulation Technique for Nonnormal Data with Prespecified Skewness, Kurtosis, and Covariance Matrix.

    PubMed

    Foldnes, Njål; Olsson, Ulf Henning

    2016-01-01

    We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.

  19. Relationship between the frequency magnitude distribution and the visibility graph in the synthetic seismicity generated by a simple stick-slip system with asperities.

    PubMed

    Telesca, Luciano; Lovallo, Michele; Ramirez-Rojas, Alejandro; Flores-Marquez, Leticia

    2014-01-01

The synthetic seismicity generated by a simple stick-slip system with asperities is analysed using the visibility graph (VG) method. The stick-slip system mimics the interaction between tectonic plates, whose asperities are represented by sandpapers of different granularity. The VG properties of the seismic sequences are related to the typical seismological parameter, the b-value of the Gutenberg-Richter law. A close linear relationship is found between the b-value of the synthetic seismicity and the slope of the least-squares line fitting the k-M plot (the relationship between the magnitude M of each synthetic event and its connectivity degree k), a relationship also verified in real seismicity.
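The natural visibility graph underlying such a k-M analysis can be sketched in a few lines: two samples are connected when the straight line joining them passes above every intermediate sample (toy series below are hypothetical, not the paper's synthetic catalogs):

```python
def visibility_degrees(y):
    """Connectivity degree k of each point in the natural visibility graph.

    Points a and b (a < b) are mutually visible if every intermediate
    point c satisfies y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a).
    Brute force; fine for a sketch.
    """
    n = len(y)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg
```

In the analysis described above, each event's magnitude M would then be paired with its degree k and a least-squares line fitted to the (k, M) points.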

  20. Umklapp scattering as the origin of T -linear resistivity in the normal state of high- T c cuprate superconductors

    DOE PAGES

    Rice, T. Maurice; Robinson, Neil J.; Tsvelik, Alexei M.

    2017-12-11

Here, the high-temperature normal state of the unconventional cuprate superconductors has resistivity linear in temperature T, which persists to values well beyond the Mott-Ioffe-Regel upper bound. At low temperatures, within the pseudogap phase, the resistivity is instead quadratic in T, as would be expected from Fermi liquid theory. Developing an understanding of these normal phases of the cuprates is crucial to explain the unconventional superconductivity. We present a simple explanation for this behavior, in terms of the umklapp scattering of electrons. This fits within the general picture emerging from functional renormalization group calculations that spurred the Yang-Rice-Zhang ansatz: umklapp scattering is at the heart of the behavior in the normal phase.

  1. Simple, Internally Adjustable Valve

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.

    1990-01-01

    Valve containing simple in-line, adjustable, flow-control orifice made from ordinary plumbing fitting and two allen setscrews. Construction of valve requires only simple drilling, tapping, and grinding. Orifice installed in existing fitting, avoiding changes in rest of plumbing.

  2. Angular Size Test on the Expansion of the Universe

    NASA Astrophysics Data System (ADS)

    López-Corredoira, Martín

Assuming the standard cosmological model to be correct, the average linear size of galaxies with the same luminosity is six times smaller at z = 3.2 than at z = 0, and their average angular size for a given luminosity is approximately proportional to z⁻¹. Neither the hypothesis that galaxies which formed earlier have much higher densities, nor their luminosity evolution, merger ratio, and massive outflows due to a quasar feedback mechanism, are enough to justify such a strong size evolution. Moreover, at high redshift the intrinsic ultraviolet surface brightness would be prohibitively high under this evolution, and the velocity dispersion much higher than observed. We explore here another possibility of overcoming this problem: considering different cosmological scenarios which might make the observed angular sizes compatible with a weaker evolution. One of the explored models, a very simple phenomenological extrapolation of the linear Hubble law in a Euclidean static universe, fits the angular size versus redshift dependence quite well, which is also approximately proportional to z⁻¹ in this cosmological model. There are no free parameters derived ad hoc, although the error bars allow a slight size/luminosity evolution. The supernova Ia Hubble diagram can also be explained in terms of this model without any ad-hoc-fitted parameter. NB: I do not argue here that the true universe is static. My intention is just to discuss which theoretical models fit some observational-cosmology data better.

  3. Spectral embedding finds meaningful (relevant) structure in image and microarray data

    PubMed Central

    Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L

    2006-01-01

Background: Accurate methods for extraction of meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results: We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, that minimizes the requirements for such parameter optimization, for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and is a general, rather than data type or experiment specific, approach for the two datasets analyzed here. Tuning parameter optimization is minimized in the DR step relative to each subsequent classification method, enabling valid cross-experiment comparisons. Conclusion: Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359

  4. A simple approach to lifetime learning in genetic programming-based symbolic regression.

    PubMed

    Azad, Raja Muhammad Atif; Ryan, Conor

    2014-01-01

Genetic programming (GP) coarsely models natural evolution to evolve computer programs. Unlike in nature, where individuals can often improve their fitness through lifetime experience, the fitness of GP individuals generally does not change during their lifetime, and there is usually no opportunity to pass on acquired knowledge. This paper introduces the Chameleon system to address this discrepancy and augment GP with lifetime learning by adding a simple local search that operates by tuning the internal nodes of individuals. Although not the first attempt to combine local search with GP, its simplicity means that it is easy to understand and cheap to implement. A simple cache is added that leverages the local search to reduce the tuning cost to a small fraction of the expected cost. We provide a theoretical upper limit on the maximum tuning expense given the average tree size of the population and show that this limit grows very conservatively as the average tree size increases. We show that Chameleon uses available genetic material more efficiently by exploring more actively than standard GP. We also demonstrate that Chameleon not only outperforms standard GP (on both training and test data) over a number of symbolic-regression-type problems, but does so while producing smaller individuals, and that it works harmoniously with two other well-known extensions to GP, namely linear scaling and a diversity-promoting tournament selection method.

  5. Critical Analysis of Different Methods to Retrieve Atmosphere Humidity Profiles from GNSS Radio Occultation Observations

    NASA Astrophysics Data System (ADS)

    Vespe, Francesco; Benedetto, Catia

    2013-04-01

The huge amount of GPS radio occultation (RO) observations currently available thanks to space missions like COSMIC, CHAMP, GRACE, TERRASAR-X, etc., has greatly encouraged research into new algorithms suitable for extracting humidity, temperature and pressure profiles of the atmosphere in an increasingly precise way. As concerns humidity profiles, in recent years two different approaches have been widely tested and applied: the "Simple" and the 1DVAR methods. The Simple methods essentially determine dry refractivity profiles from temperature analysis profiles and the hydrostatic equation. The dry refractivity is then subtracted from the RO refractivity to obtain the wet component. Finally, humidity is obtained from the wet refractivity. The 1DVAR approach combines RO observations with profiles given by background models, with both terms weighted by the inverse of the covariance matrix. The advantage of the "Simple" methods is that they are not affected by bias due to the background models. We have proposed in the past the BPV approach to retrieve humidity, which can be classified among the "Simple" methods. The BPV approach works with dry atmospheric CIRA-Q models which depend on latitude, DoY and height. The dry CIRA-Q refractivity profile is selected by estimating the involved parameters in a non-linear least-squares fashion, achieved by fitting RO observed bending angles through the stratosphere. The BPV, like all the other "Simple" methods, has the drawback of unphysically producing negative humidity values. We therefore propose to apply a modulated weighting of the fit residuals to minimize the effects of this problem. After a proper tuning of the approach, we plan to present the results of the validation.

  6. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
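The optimal forecast/observation blend described above is the Kalman update step. A scalar sketch (the paper's implementation is multivariate, over wave amplitudes and phases):

```python
def kalman_update(x_forecast, p_forecast, z_obs, r_obs):
    """Combine a model forecast with an observation, each weighted by the
    inverse of its error covariance (scalar case)."""
    gain = p_forecast / (p_forecast + r_obs)        # Kalman gain
    x_new = x_forecast + gain * (z_obs - x_forecast)
    p_new = (1.0 - gain) * p_forecast               # updated error covariance
    return x_new, p_new
```

With equal forecast and observation variances the update lands halfway between the two; a very noisy observation (large r_obs) barely moves the forecast.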

  7. Perceived smoking availability differentially affects mood and reaction time.

    PubMed

    Ross, Kathryn C; Juliano, Laura M

    2015-06-01

This between-subjects study explored the relationship between smoking availability and smoking motivation, and is the first study to include three smoking-availability time points. This allowed for an examination of an extended period of smoking unavailability and a test of the linearity of the relationships between smoking availability and smoking-motivation measures. Ninety 3-hour-abstinent smokers (mean ~15 cigarettes per day) were randomly assigned to one of three availability manipulations while being exposed to smoking stimuli (i.e., a pack of cigarettes): smoke in 20 min, smoke in 3 h, or smoke in 24 h. Participants completed pre- and post-manipulation measures of urge, positive affect and negative affect, and simple reaction time. The belief that smoking would next be available in 24 h resulted in a significant decrease in positive affect and an increase in negative affect relative to the 3-h and 20-min conditions. A lack-of-fit test suggested a linear relationship between smoking availability and affect. A quadratic model appeared to be a better fit for the relationship between smoking availability and simple reaction time, with participants in the 24-h and 20-min conditions showing a greater slowing of reaction time relative to the 3-h condition. There were no effects of the manipulations on self-reported urge, but baseline ceiling effects were noted. Future investigations that manipulate three or more periods of time before smoking is available will help to better elucidate the nature of the relationship between smoking availability and smoking motivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Multidisciplinary design optimization for sonic boom mitigation

    NASA Astrophysics Data System (ADS)

    Ozcer, Isik A.

Automated, parallelized, time-efficient surface definition, grid generation and flow simulation methods are developed for sharp and accurate sonic boom signal computation in three dimensions in the near and mid-field of an aircraft using Euler/full-potential unstructured/structured computational fluid dynamics. The full-potential mid-field sonic boom prediction code is an accurate and efficient solver featuring automated grid generation, grid adaptation and shock fitting, and parallel processing. This program quickly marches the solution using a single nonlinear equation for large distances that cannot be covered with Euler solvers due to large memory and long computational time requirements. The solver takes into account variations in temperature and pressure with altitude. The far-field signal prediction is handled using the classical linear Thomas Waveform Parameter Method, where the switching altitude from the nonlinear to the linear prediction is determined by convergence of the ground signal pressure impulse value. This altitude is determined as r/L ≈ 10 from the source for a simple lifting wing, and r/L ≈ 40 for a real complex aircraft. The unstructured grid adaptation and shock-fitting methodology developed for the near-field analysis employs a Hessian-based anisotropic grid adaptation based on error equidistribution. A special field scalar is formulated for use in the computation of the Hessian-based error metric, which significantly enhances the adaptation scheme for shocks. The entire cross-flow of a complex aircraft is resolved with high fidelity using only 500,000 grid nodes after only about 10 solution/adaptation cycles. Shock fitting is accomplished using Roe's Flux-Difference Splitting scheme, an approximate Riemann-type solver, and by proper alignment of the cell faces with respect to shock surfaces. Simple to complex real aircraft geometries are handled with no user interference required, making the simulation methods suitable tools for product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal while keeping the drag penalty within feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and the half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, and non-linear span-wise twist and camber distributions. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and the wings are optimized to demonstrate optimization for sonic boom mitigation for a real aircraft. The final results predict a 42% reduction in bow shock strength, 17% reduction in peak Δp, 22% reduction in pressure impulse, 10% reduction in footprint size, 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.

  9. The Type Ia Supernova Color-Magnitude Relation and Host Galaxy Dust: A Simple Hierarchical Bayesian Model

    NASA Astrophysics Data System (ADS)

    Mandel, Kaisey S.; Scolnic, Daniel M.; Shariff, Hikmatali; Foley, Ryan J.; Kirshner, Robert P.

    2017-06-01

Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B versus B − V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude versus apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a data set of SALT2 optical light curve fits of 248 nearby SNe Ia at z < 0.10. The conventional linear fit gives β_app ≈ 3. Our model finds β_int = 2.3 ± 0.3 and a distinct dust law of R_B = 3.8 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. Finally, we extend our model to examine the SN Ia luminosity-host mass dependence in terms of intrinsic and dust components.

  10. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak add-ins of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of the Excel method was its inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine, and the further analysis of electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
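The Monte-Carlo procedure translates directly outside Excel. A stdlib-Python sketch for a straight-line model (the paper's SOLVER workflow handles arbitrary non-linear functions; the virtual-data-set idea is the same):

```python
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def monte_carlo_ci(xs, ys, n_sets=200, level=0.95, seed=1):
    """Percentile confidence intervals for (a, b) from n_sets 'virtual'
    data sets, each resampled with Gaussian noise scaled to the residual
    standard deviation of the original fit."""
    rng = random.Random(seed)
    a0, b0 = fit_line(xs, ys)
    sd = statistics.stdev(y - (a0 + b0 * x) for x, y in zip(xs, ys))
    draws = [fit_line(xs, [a0 + b0 * x + rng.gauss(0.0, sd) for x in xs])
             for _ in range(n_sets)]
    lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
    def pct(values, q):
        v = sorted(values)
        return v[min(len(v) - 1, int(q * len(v)))]
    a_draws = [a for a, _ in draws]
    b_draws = [b for _, b in draws]
    return ((pct(a_draws, lo), pct(a_draws, hi)),
            (pct(b_draws, lo), pct(b_draws, hi)))
```

The interval endpoints are simple empirical percentiles of the refitted parameters, mirroring how the spreadsheet version summarizes its 200 SOLVER estimates.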

  11. Surface functionalized composite nanofibers for efficient removal of arsenic from aqueous solutions.

    PubMed

    Mohamed, Alaa; Osman, T A; Toprak, M S; Muhammed, M; Uheida, A

    2017-08-01

    A novel composite nanofiber was synthesized based on PAN-CNT/TiO2-NH2 nanofibers using an electrospinning technique followed by chemical modification of TiO2 NPs. The PAN-CNT/TiO2-NH2 nanofibers were characterized by XRD, FTIR, SEM, and TEM. The effects of various experimental parameters such as initial concentration, contact time, and solution pH on As removal were investigated. The maximum adsorption capacity at pH 2 for As(III) and As(V) is 251 mg/g and 249 mg/g, respectively, which is much higher than that of most reported adsorbents. Adsorption equilibrium was reached within 20 to 60 min as the initial solution concentration increased from 10 to 100 mg/L, and the data were fitted well using the linear and nonlinear pseudo-first- and second-order models. Isotherm data fitted well to the linear and nonlinear Langmuir, Freundlich, and Redlich-Peterson isotherm adsorption models. Desorption results showed that the adsorption capacity remains up to 70% after 5 uses. This work provides a simple and efficient method for removing arsenic from aqueous solution. Copyright © 2017 Elsevier Ltd. All rights reserved.
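
    The pseudo-second-order fit mentioned above can be sketched in a few lines using Ho's linearized form t/q = 1/(k·qe²) + t/qe. The parameter values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Generate noiseless uptake data from the pseudo-second-order model, then
# recover (qe, k) from the slope and intercept of the linearized fit.
qe_true, k_true = 250.0, 0.002                       # mg/g, g/(mg*min): assumed
t = np.array([5.0, 10.0, 20.0, 30.0, 60.0, 120.0])   # contact times, min
q = qe_true**2 * k_true * t / (1 + qe_true * k_true * t)  # uptake q(t)

slope, intercept = np.polyfit(t, t / q, 1)           # linear fit of t/q vs t
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(round(qe_fit, 1), round(k_fit, 4))             # → 250.0 0.002
```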

  12. Trajectories of eGFR decline over a four year period in an Indigenous Australian population at high risk of CKD-the eGFR follow up study.

    PubMed

    Barzi, Federica; Jones, Graham R D; Hughes, Jaquelyne T; Lawton, Paul D; Hoy, Wendy; O'Dea, Kerin; Jerums, George; MacIsaac, Richard J; Cass, Alan; Maple-Brown, Louise J

    2018-03-01

    Being able to estimate kidney decline accurately is particularly important in Indigenous Australians, a population at increased risk of developing chronic kidney disease and end stage kidney disease. The aim of this analysis was to explore the trend of decline in estimated glomerular filtration rate (eGFR) over a four year period using multiple local creatinine measures, compared with estimates derived using centrally-measured enzymatic creatinine and with estimates derived using only two local measures. The eGFR study comprised a cohort of over 600 Aboriginal Australian participants recruited from over twenty sites in urban, regional and remote Australia across five strata of health, diabetes and kidney function. Trajectories of eGFR were explored in 385 participants with at least three local creatinine records using graphical methods that compared the linear trends fitted using linear mixed models with non-linear trends fitted using fractional polynomial equations. Temporal changes of local creatinine were also characterized using group-based modelling. Analyses were stratified by eGFR (<60; 60-89; 90-119 and ≥120 mL/min/1.73 m²) and albuminuria categories (<3 mg/mmol; 3-30 mg/mmol; >30 mg/mmol). Mean age of the participants was 48 years, 64% were female and the median follow-up was 3 years. Decline of eGFR was accurately estimated using simple linear regression models, and locally measured creatinine was as good as centrally measured creatinine at predicting kidney decline in people with an eGFR <60 and an eGFR 60-90 mL/min/1.73 m² with albuminuria. Analyses showed that one baseline and one follow-up locally measured creatinine may be sufficient to estimate short term (up to four years) kidney function decline. The greatest yearly decline was estimated in those with eGFR 60-90 and macro-albuminuria: -6.21 (-8.20, -4.23) mL/min/1.73 m². Short term estimates of kidney function decline can be reliably derived using an easy to implement and simple to interpret linear mixed effect model. Locally measured creatinine did not differ from centrally measured creatinine, and thus is an accurate, cost-efficient and timely means of monitoring kidney function progression. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  13. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
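
    The idea of an optimum straight-line fit to a slightly curved calibration can be sketched numerically. This is an illustrative stand-in only (the coefficients and the quadratic curvature term are invented, not the paper's fourth-order detector analysis):

```python
import numpy as np

# A detection characteristic with a small higher-order term is fitted with
# the optimum (least-squares) straight line; the worst-case residual plays
# the role of the minimum static calibration error discussed above.
T = np.linspace(0.0, 300.0, 301)          # radiometric input scale (arbitrary)
v = T + 4e-7 * T**2                       # output with slight curvature (assumed)

a, b = np.polyfit(T, v, 1)                # optimum straight-line calibration
resid = v - (a * T + b)                   # remaining calibration error
print(np.max(np.abs(resid)) < np.max(np.abs(v - T)))
```

    The straight line absorbs most of the curvature: the residual of the optimum fit is several times smaller than the raw deviation from an identity calibration.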

  14. Advanced Statistics for Exotic Animal Practitioners.

    PubMed

    Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G

    2017-09-01

    Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Postglacial rebound with a non-Newtonian upper mantle and a Newtonian lower mantle rheology

    NASA Technical Reports Server (NTRS)

    Gasperini, Paolo; Yuen, David A.; Sabadini, Roberto

    1992-01-01

    A composite rheology is employed consisting of both linear and nonlinear creep mechanisms which are connected by a 'transition' stress. Background stress due to geodynamical processes is included. For models with a non-Newtonian upper mantle overlying a Newtonian lower mantle, the temporal responses of the displacements can reproduce those of Newtonian models. The average effective viscosity profile under the ice-load at the end of deglaciation turns out to be the crucial factor governing mantle relaxation. This can explain why simple Newtonian rheology has been successful in fitting the uplift data over formerly glaciated regions.

  16. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    PubMed

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

    The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare performances of SMA, linear mixed model (LMM), and unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of SMA vs. LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was not often robust and led to non-sensible results when the covariance structure for errors was misspecified. The results emphasized discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that the summary measure is a simple, safe and powerful approach in which the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze the linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
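
    A minimal sketch of the summary measure approach described above: each subject's repeated measurements are reduced to a least-squares slope, and group mean slopes are then compared with an ordinary two-sample t statistic. All data below are simulated and purely illustrative:

```python
import numpy as np

# Summary measure = per-subject regression slope; groups are then compared
# on their mean slopes instead of on the raw repeated measurements.
rng = np.random.default_rng(2)
times = np.arange(6.0)                       # measurement occasions

def subject_slopes(trend, n_subjects):
    slopes = []
    for _ in range(n_subjects):
        y = trend * times + rng.normal(0.0, 1.0, times.size)
        slopes.append(np.polyfit(times, y, 1)[0])   # the summary measure
    return np.array(slopes)

g1 = subject_slopes(0.5, 15)                 # group with a steeper trend
g2 = subject_slopes(0.1, 15)                 # group with a shallower trend

se = np.sqrt(g1.var(ddof=1) / g1.size + g2.var(ddof=1) / g2.size)
t_stat = (g1.mean() - g2.mean()) / se        # two-sample t on the slopes
print(t_stat > 0.0)
```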

  17. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvement that each newly introduced or originally existing variable brings to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting effect or significance testing. Simulation and classic time-series data experiments show that the proposed method is simple, reliable and applicable to practical engineering.

  18. A linear acoustic model for intake wave dynamics in IC engines

    NASA Astrophysics Data System (ADS)

    Harrison, M. F.; Stanev, P. T.

    2004-01-01

    In this paper, a linear acoustic model is described that has proven useful in obtaining a better understanding of the nature of acoustic wave dynamics in the intake system of an internal combustion (IC) engine. The model described has been developed alongside a set of measurements made on a Ricardo E6 single cylinder research engine. The simplified linear acoustic model reported here produces a calculation of the pressure time history in the port of an IC engine that agrees fairly well with measured data obtained on the engine fitted with a simple intake system. The model has proved useful in identifying the role of pipe resonance in the intake process and has led to the development of a simple hypothesis to explain the structure of the intake pressure time history: the early stages of the intake process are governed by the instantaneous values of the piston velocity and the open area under the valve. Thereafter, resonant wave action dominates the process. The depth of the early depression caused by the moving piston governs the intensity of the wave action that follows. A pressure ratio across the valve that is favourable to inflow is maintained and maximized when the open period of the valve is such as to allow at least, but no more than, one complete oscillation of the pressure at its resonant frequency to occur while the valve is open.

  19. TableViewer for Herschel Data Processing

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Schulz, B.

    2006-07-01

    The TableViewer utility is a GUI tool written in Java to support interactive data processing and analysis for the Herschel Space Observatory (Pilbratt et al. 2001). The idea was inherited from a prototype written in IDL (Schulz et al. 2005). It allows the user to graphically view and analyze tabular data organized in columns with equal numbers of rows. It can be run either as a standalone application, where data access is restricted to FITS (FITS 1999) files only, or from the Quick Look Analysis (QLA) or Interactive Analysis (IA) command line, from where objects are also accessible. The graphic display is very versatile, allowing plots in either linear or log scales. Zooming, panning, and changing data columns are performed rapidly using a group of navigation buttons. Selecting and de-selecting fields of data points controls the input to simple analysis tasks like building a statistics table, or generating power spectra. The binary data stored in a TableDataset, a Product, or in FITS files can also be displayed as tabular data, where values in individual cells can be modified. TableViewer provides several processing utilities which, besides calculation of statistics either for all channels or for selected channels, and calculation of power spectra, allow the user to convert/repair datasets by changing the unit name of data columns, and to modify data values in columns with a simple calculator tool. Interactively selected data can be separated out, and modified data sets can be saved to FITS files. The tool will be very helpful especially in the early phases of Herschel data analysis when quick access to the contents of data products is important. TableDataset and Product are Java classes defined in herschel.ia.dataset.

  20. Spectral reconstruction analysis for enhancing signal-to-noise in time-resolved spectroscopies

    NASA Astrophysics Data System (ADS)

    Wilhelm, Michael J.; Smith, Jonathan M.; Dai, Hai-Lung

    2015-09-01

    We demonstrate a new spectral analysis for the enhancement of the signal-to-noise ratio (SNR) in time-resolved spectroscopies. Unlike the simple linear average which produces a single representative spectrum with enhanced SNR, this Spectral Reconstruction analysis (SRa) improves the SNR (by a factor of ca. 0.6√n) for all n experimentally recorded time-resolved spectra. SRa operates by eliminating noise in the temporal domain, thereby attenuating noise in the spectral domain, as follows: Temporal profiles at each measured frequency are fit to a generic mathematical function that best represents the temporal evolution; spectra at each time are then reconstructed with data points from the fitted profiles. The SRa method is validated with simulated control spectral data sets. Finally, we apply SRa to two distinct experimentally measured sets of time-resolved IR emission spectra: (1) UV photolysis of carbonyl cyanide and (2) UV photolysis of vinyl cyanide.
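
    A hedged sketch of the SRa idea (not the authors' implementation): fit each frequency channel's temporal profile with a smooth model, here a single-exponential decay whose amplitude is solved by least squares and whose decay constant is found by grid search, then rebuild every time-slice spectrum from the fitted profiles. All data are simulated:

```python
import numpy as np

# Simulate time-resolved spectra, add noise, then denoise by fitting each
# channel's temporal profile and reconstructing from the fits.
rng = np.random.default_rng(3)
t = np.linspace(0.1, 5.0, 40)                  # time grid
tau = np.linspace(0.5, 2.0, 25)                # per-channel decay constants
clean = np.exp(-t[:, None] / tau[None, :])     # true time-resolved spectra
noisy = clean + rng.normal(0.0, 0.05, clean.shape)

recon = np.empty_like(noisy)
for j in range(noisy.shape[1]):
    y = noisy[:, j]
    best_r, best_fit = np.inf, None
    for tg in np.linspace(0.3, 3.0, 200):      # candidate decay constants
        m = np.exp(-t / tg)
        a = (m @ y) / (m @ m)                  # least-squares amplitude
        r = np.sum((y - a * m) ** 2)
        if r < best_r:
            best_r, best_fit = r, a * m
    recon[:, j] = best_fit

err_noisy = np.mean((noisy - clean) ** 2)      # ~ the raw noise variance
err_recon = np.mean((recon - clean) ** 2)
print(err_recon < err_noisy)                   # reconstruction suppresses noise
```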

  1. A population pharmacokinetic model of valproic acid in pediatric patients with epilepsy: a non-linear pharmacokinetic model based on protein-binding saturation.

    PubMed

    Ding, Junjie; Wang, Yi; Lin, Weiwei; Wang, Changlian; Zhao, Limei; Li, Xingang; Zhao, Zhigang; Miao, Liyan; Jiao, Zheng

    2015-03-01

    Valproic acid (VPA) follows a non-linear pharmacokinetic profile in terms of protein-binding saturation. The relationship between VPA clearance and total daily dose is a simple power function, which may partially explain the non-linearity of the pharmacokinetic profile; however, it may be confounded by the therapeutic drug monitoring effect. The aim of this study was to develop a population pharmacokinetic model for VPA based on protein-binding saturation in pediatric patients with epilepsy. A total of 1,107 VPA serum trough concentrations at steady state were collected from 902 epileptic pediatric patients aged from 3 weeks to 14 years at three hospitals. The population pharmacokinetic model was developed using NONMEM(®) software. The ability of three candidate models (the simple power exponent model, the dose-dependent maximum effect [DDE] model, and the protein-binding model) to describe the non-linear pharmacokinetic profile of VPA was investigated, and potential covariates were screened using a stepwise approach. Bootstrap, normalized prediction distribution errors and external evaluations from two independent studies were performed to determine the stability and predictive performance of the candidate models. The age-dependent exponent model described the effects of body weight and age on the clearance well. Co-medication with carbamazepine was identified as a significant covariate. The DDE model best suited the aim of this study, although there were no obvious differences in the predictive performances. The condition number was less than 500, and the precision of the parameter estimates was less than 30%, indicating stability and validity of the final model. The DDE model successfully described the non-linear pharmacokinetics of VPA. Furthermore, the proposed population pharmacokinetic model of VPA can be used to design rational dosage regimens to achieve desirable serum concentrations.

  2. Silicon Drift Detector response function for PIXE spectra fitting

    NASA Astrophysics Data System (ADS)

    Calzolai, G.; Tapinassi, S.; Chiari, M.; Giannoni, M.; Nava, S.; Pazzi, G.; Lucarelli, F.

    2018-02-01

    The correct determination of the X-ray peak areas in PIXE spectra by fitting with a computer program depends crucially on accurate parameterization of the detector peak response function. In the Guelph PIXE software package, GUPIXWin, one of the most widely used PIXE spectrum analysis codes, the response of a semiconductor detector to monochromatic X-ray radiation is described by a linear combination of several analytical functions: a Gaussian profile for the X-ray line itself, and additional tail contributions (exponential tails and step functions) on the low-energy side of the X-ray line to describe incomplete charge collection effects. The literature on the spectral response of silicon X-ray detectors for PIXE applications is rather scarce; in particular, data for Silicon Drift Detectors (SDD) and for a large range of X-ray energies are missing. Using a set of analytical functions, the SDD response functions were satisfactorily reproduced for the X-ray energy range 1-15 keV. The behaviour of the parameters involved in the SDD tailing functions with X-ray energy is described by simple polynomial functions, which permit an easy implementation in PIXE spectra fitting codes.

  3. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.

  4. Potential pitfalls when denoising resting state fMRI data using nuisance regression.

    PubMed

    Bright, Molly G; Tench, Christopher R; Murphy, Kevin

    2017-07-01

    In resting state fMRI, it is necessary to remove signal variance associated with noise sources, leaving cleaned fMRI time-series that more accurately reflect the underlying intrinsic brain fluctuations of interest. This is commonly achieved through nuisance regression, in which a noise model of head motion and physiological processes is fitted to the fMRI data in a General Linear Model, and the "cleaned" residuals of this fit are used in further analysis. We examine the statistical assumptions and requirements of the General Linear Model, and whether these are met during nuisance regression of resting state fMRI data. Using toy examples and real data we show how pre-whitening, temporal filtering and temporal shifting of regressors impact model fit. Based on our own observations, existing literature, and statistical theory, we make the following recommendations when employing nuisance regression: pre-whitening should be applied to achieve valid statistical inference of the noise model fit parameters; temporal filtering should be incorporated into the noise model to best account for changes in degrees of freedom; temporal shifting of regressors, although merited, should be achieved via optimisation and validation of a single temporal shift. We encourage all readers to make simple, practical changes to their fMRI denoising pipeline, and to regularly assess the appropriateness of the noise model used. By negotiating the potential pitfalls described in this paper, and by clearly reporting the details of nuisance regression in future manuscripts, we hope that the field will achieve more accurate and precise noise models for cleaning the resting state fMRI time-series. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
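
    The core nuisance-regression step can be sketched in a few lines of numpy: fit the nuisance regressors to a voxel time series in a General Linear Model and keep the residuals as the cleaned signal. This toy version deliberately omits the pre-whitening and temporal filtering that the paper recommends, and all data are simulated:

```python
import numpy as np

# Simulate a voxel as intrinsic signal plus motion-coupled noise, then
# regress out the motion regressors and keep the GLM residuals.
rng = np.random.default_rng(4)
n_t = 200
motion = rng.normal(size=(n_t, 6))               # 6 motion parameters (simulated)
signal = np.sin(np.linspace(0.0, 8 * np.pi, n_t))  # fluctuation of interest
voxel = signal + motion @ np.array([0.5, -0.3, 0.2, 0.1, -0.4, 0.25])

X = np.column_stack([np.ones(n_t), motion])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
cleaned = voxel - X @ beta                       # residuals of the GLM fit

# Residuals are orthogonal to every nuisance regressor by construction.
print(np.allclose(X.T @ cleaned, 0.0))
```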

  5. Are running speeds maximized with simple-spring stance mechanics?

    PubMed

    Clark, Kenneth P; Weyand, Peter G

    2014-09-15

    Are the fastest running speeds achieved using the simple-spring stance mechanics predicted by the classic spring-mass model? We hypothesized that a passive, linear-spring model would not account for the running mechanics that maximize ground force application and speed. We tested this hypothesis by comparing patterns of ground force application across athletic specialization (competitive sprinters vs. athlete nonsprinters, n = 7 each) and running speed (top speeds vs. slower ones). Vertical ground reaction forces at 5.0 and 7.0 m/s, and individual top speeds (n = 797 total footfalls) were acquired while subjects ran on a custom, high-speed force treadmill. The goodness of fit between measured vertical force vs. time waveform patterns and the patterns predicted by the spring-mass model were assessed using the R(2) statistic (where an R(2) of 1.00 = perfect fit). As hypothesized, the force application patterns of the competitive sprinters deviated significantly more from the simple-spring pattern than those of the athlete, nonsprinters across the three test speeds (R(2) <0.85 vs. R(2) ≥ 0.91, respectively), and deviated most at top speed (R(2) = 0.78 ± 0.02). Sprinters attained faster top speeds than nonsprinters (10.4 ± 0.3 vs. 8.7 ± 0.3 m/s) by applying greater vertical forces during the first half (2.65 ± 0.05 vs. 2.21 ± 0.05 body wt), but not the second half (1.71 ± 0.04 vs. 1.73 ± 0.04 body wt) of the stance phase. We conclude that a passive, simple-spring model has limited application to sprint running performance because the swiftest runners use an asymmetrical pattern of force application to maximize ground reaction forces and attain faster speeds. Copyright © 2014 the American Physiological Society.
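
    The spring-mass comparison above amounts to computing R² between a measured stance-phase force waveform and the half-sine force-time curve the simple-spring model predicts. A hedged sketch with synthetic waveforms (these are stand-ins, not the study's treadmill data):

```python
import numpy as np

# Half-sine prediction of the spring-mass model versus an asymmetric,
# front-loaded waveform like the sprinters' force traces described above.
t = np.linspace(0.0, 1.0, 100)             # normalized contact time
spring = np.sin(np.pi * t)                 # simple-spring (half-sine) prediction
asym = np.sin(np.pi * t) * (1.0 + 0.5 * (t < 0.5) - 0.25)  # front-loaded

def r_squared(measured, model):
    ss_res = np.sum((measured - model) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r_squared(spring, spring))           # 1.0: a perfect spring-mass fit
print(r_squared(asym, spring) < 0.95)      # asymmetry degrades the fit
```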

  6. Meteorological adjustment of yearly mean values for air pollutant concentration comparison

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.; Neustadter, H. E.

    1976-01-01

    Using multiple linear regression analysis, models which estimate mean concentrations of Total Suspended Particulate (TSP), sulfur dioxide, and nitrogen dioxide as a function of several meteorologic variables, two rough economic indicators, and a simple trend in time are studied. Meteorologic data were obtained and do not include inversion heights. The goodness of fit of the estimated models is partially reflected by the squared coefficient of multiple correlation which indicates that, at the various sampling stations, the models accounted for about 23 to 47 percent of the total variance of the observed TSP concentrations. If the resulting model equations are used in place of simple overall means of the observed concentrations, there is about a 20 percent improvement in either: (1) predicting mean concentrations for specified meteorological conditions; or (2) adjusting successive yearly averages to allow for comparisons devoid of meteorological effects. An application to source identification is presented using regression coefficients of wind velocity predictor variables.

  7. Non-equilibrium surface tension of the vapour-liquid interface of active Lennard-Jones particles

    NASA Astrophysics Data System (ADS)

    Paliwal, Siddharth; Prymidis, Vasileios; Filion, Laura; Dijkstra, Marjolein

    2017-08-01

    We study a three-dimensional system of self-propelled Brownian particles interacting via the Lennard-Jones potential. Using Brownian dynamics simulations in an elongated simulation box, we investigate the steady states of vapour-liquid phase coexistence of active Lennard-Jones particles with planar interfaces. We measure the normal and tangential components of the pressure tensor along the direction perpendicular to the interface and verify mechanical equilibrium of the two coexisting phases. In addition, we determine the non-equilibrium interfacial tension by integrating the difference of the normal and tangential components of the pressure tensor and show that the surface tension as a function of strength of particle attractions is well fitted by simple power laws. Finally, we measure the interfacial stiffness using capillary wave theory and the equipartition theorem and find a simple linear relation between surface tension and interfacial stiffness with a proportionality constant characterized by an effective temperature.

  8. Capacitive touch sensing : signal and image processing algorithms

    NASA Astrophysics Data System (ADS)

    Baharav, Zachi; Kakarala, Ramakrishna

    2011-03-01

    Capacitive touch sensors have been in use for many years, and recently gained center stage with the ubiquitous use in smart-phones. In this work we will analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem, and the reasons behind its popularity, we will formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: circular finger on a wire grid, and square finger on a square grid. The solutions give insight into the ambiguities of finding finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up-table, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.
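
    The parabolic (3-point) interpolation listed among these algorithms can be sketched in a few lines: fit a parabola through the strongest sensor reading and its two neighbours, and take the vertex as the touch coordinate. The sensor readings below are invented:

```python
import numpy as np

# Sub-electrode finger localization by parabolic peak interpolation.
readings = np.array([0.1, 0.8, 1.0, 0.7, 0.1])   # capacitance deltas per electrode
i = int(np.argmax(readings))
y0, y1, y2 = readings[i - 1], readings[i], readings[i + 1]

# Vertex offset of the parabola through (-1, y0), (0, y1), (1, y2).
offset = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
position = i + offset                            # in electrode-pitch units
print(round(position, 3))                        # → 1.9
```

    Because the left neighbour reads slightly higher than the right one, the interpolated touch lands just left of electrode 2.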

  9. Optical and biometric relationships of the isolated pig crystalline lens.

    PubMed

    Vilupuru, A S; Glasser, A

    2001-07-01

    To investigate the interrelationships between optical and biometric properties of the porcine crystalline lens, to compare these findings with similar relationships found for the human lens and to attempt to fit this data to a geometric model of the optical and biometric properties of the pig lens. Weight, focal length, spherical aberration, surface curvatures, thickness and diameters of 20 isolated pig lenses were measured and equivalent refractive index was calculated. These parameters were compared and used to geometrically model the pig lens. Linear relationships were identified between many of the lens biometric and optical properties. The existence of these relationships allowed a simple geometrical model of the pig lens to be calculated which offers predictions of the optical properties. The linear relationships found and the agreement observed between measured and modeled results suggest that the pig lens conforms to a predictable, preset developmental pattern and that the optical and biometric properties are predictably interrelated.

  10. Pseudo second order kinetics and pseudo isotherms for malachite green onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2006-08-25

    Pseudo second order kinetic expressions of Ho, Sobkowsk and Czerwinski, Blanchard et al. and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo second order models were the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo second order model but with different assumptions. The best fit of experimental data in Ho's pseudo second order expression by linear and non-linear regression methods showed that the Ho pseudo second order model was a better kinetic expression when compared to other pseudo second order kinetic expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from the Ho pseudo second order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
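
    As a companion to the linearized isotherm fits referred to above, here is a minimal sketch of the Langmuir linearization Ce/qe = Ce/qm + 1/(KL·qm); the parameters are invented for illustration, not taken from the paper:

```python
import numpy as np

# Generate noiseless Langmuir data, then recover (qm, KL) from the slope
# and intercept of the linearized Ce/qe versus Ce fit.
qm_true, KL_true = 200.0, 0.05                       # mg/g and L/mg: assumed
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])  # equilibrium conc., mg/L
qe = qm_true * KL_true * Ce / (1 + KL_true * Ce)      # equilibrium uptake

slope, intercept = np.polyfit(Ce, Ce / qe, 1)        # linear fit of Ce/qe vs Ce
qm_fit = 1.0 / slope
KL_fit = 1.0 / (intercept * qm_fit)
print(round(qm_fit, 1), round(KL_fit, 3))            # → 200.0 0.05
```

    On noisy data the linearized and nonlinear fits diverge, which is exactly the comparison the abstract's authors carry out.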

  11. A Kp-based model of auroral boundaries

    NASA Astrophysics Data System (ADS)

    Carbary, James F.

    2005-10-01

    The auroral oval can serve as both a representation and a prediction of space weather on a global scale, so a competent model of the oval as a function of a geomagnetic index could conveniently appraise space weather itself. A simple model of the auroral boundaries is constructed by binning several months of images from the Polar Ultraviolet Imager by Kp index. The pixel intensities are first averaged into magnetic latitude-magnetic local time (MLAT-MLT) bins, and intensity profiles are then derived for each Kp level at 1 hour intervals of MLT. After background correction, the boundary latitudes of each profile are determined at a threshold of 4 photons cm⁻² s⁻¹. The peak locations and peak intensities are also found. The boundary and peak locations vary linearly with Kp index, and the coefficients of the linear fits are tabulated for each MLT. As a general rule of thumb, the UV intensity peak shifts 1° in magnetic latitude for each increment in Kp. The fits are surprisingly good for Kp < 6 but begin to deteriorate at high Kp because of auroral boundary irregularities and poor statistics. The statistical model allows calculation of the auroral boundaries at most MLTs as a function of Kp and can serve as an approximation to the shape and extent of the statistical oval.

  12. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values.
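
    The root-finding recast f(q) = 0 can be sketched with a simplified stand-in for the paper's method: for an anomaly of the form g(x) = A/(x² + z²)^q with z assumed known, the normalized value g(N)/g(0) depends only on q, so the shape factor can be recovered by bisection. All numbers below are invented:

```python
import numpy as np

# Recover a sphere-like shape factor from a single normalized anomaly
# ratio by bisecting f(q) = 0.
z, q_true = 10.0, 1.5                      # depth and true shape factor (assumed)
N = 8.0                                    # profile offset of the N-value
ratio = ((N**2 + z**2) / z**2) ** -q_true  # normalized anomaly g(N)/g(0)

def f(q):
    return ((N**2 + z**2) / z**2) ** -q - ratio

lo, hi = 0.1, 3.0                          # bracket for the shape factor
for _ in range(60):                        # bisection on f(q) = 0
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

q_est = 0.5 * (lo + hi)
print(round(q_est, 6))                     # → 1.5
```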

  13. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce the gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values. PMID:25685472

  14. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
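
    The factorized Bayesian decoding step can be sketched compactly: if the on-edge and off-edge likelihoods factor over (approximately independent) filters, edge probability is a normalized product of per-filter likelihoods. The prior and likelihood values below are invented for illustration, not taken from the paper's fitted parametric models:

```python
# Edge probability at a location/orientation via Bayes's rule, assuming the
# joint on- and off-edge likelihoods factorize over independent filters.
# All numeric values here are hypothetical.
def edge_probability(prior, on_lh, off_lh):
    """P(edge | filter responses) with factorized per-filter likelihoods."""
    p_on = prior
    p_off = 1.0 - prior
    for lon, loff in zip(on_lh, off_lh):
        p_on *= lon
        p_off *= loff
    return p_on / (p_on + p_off)

# Three filters whose observed responses are each twice as likely under an
# edge as under a non-edge; a weak prior is overcome by the evidence.
p = edge_probability(prior=0.1, on_lh=[0.6, 0.6, 0.6], off_lh=[0.3, 0.3, 0.3])
```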

  15. Measuring the intangibles: a metrics for the economic complexity of countries and products.

    PubMed

    Cristelli, Matthieu; Gabrielli, Andrea; Tacchella, Andrea; Caldarelli, Guido; Pietronero, Luciano

    2013-01-01

    We investigate a recent methodology we have proposed to extract valuable information on the competitiveness of countries and complexity of products from trade data. Standard economic theories predict a high level of specialization of countries in specific industrial sectors. However, a direct analysis of the official databases of exported products by all countries shows that the actual situation is very different. Countries commonly considered as developed ones are extremely diversified, exporting a large variety of products from very simple to very complex. At the same time, countries generally considered as less developed export only the products also exported by the majority of countries. This situation calls for the introduction of a non-monetary and non-income-based measure of country economic complexity which uncovers the hidden potential for development and growth. The statistical approach we present here consists of coupled non-linear maps relating the competitiveness/fitness of countries to the complexity of their products. The fixed point of this transformation defines a metrics for the fitness of countries and the complexity of products. We argue that the key point to properly extract the economic information is the non-linearity of the map, which is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We present a detailed comparison of the results of this approach directly with those of the Method of Reflections by Hidalgo and Hausmann, showing the better performance of our method and its more solid and consistent economic and scientific foundation.

  16. Measuring the Intangibles: A Metrics for the Economic Complexity of Countries and Products

    PubMed Central

    Cristelli, Matthieu; Gabrielli, Andrea; Tacchella, Andrea; Caldarelli, Guido; Pietronero, Luciano

    2013-01-01

    We investigate a recent methodology we have proposed to extract valuable information on the competitiveness of countries and complexity of products from trade data. Standard economic theories predict a high level of specialization of countries in specific industrial sectors. However, a direct analysis of the official databases of exported products by all countries shows that the actual situation is very different. Countries commonly considered as developed ones are extremely diversified, exporting a large variety of products from very simple to very complex. At the same time, countries generally considered as less developed export only the products also exported by the majority of countries. This situation calls for the introduction of a non-monetary and non-income-based measure of country economic complexity which uncovers the hidden potential for development and growth. The statistical approach we present here consists of coupled non-linear maps relating the competitiveness/fitness of countries to the complexity of their products. The fixed point of this transformation defines a metrics for the fitness of countries and the complexity of products. We argue that the key point to properly extract the economic information is the non-linearity of the map, which is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We present a detailed comparison of the results of this approach directly with those of the Method of Reflections by Hidalgo and Hausmann, showing the better performance of our method and its more solid and consistent economic and scientific foundation. PMID:23940633

  17. Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution

    NASA Astrophysics Data System (ADS)

    Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.

    2017-09-01

    In this study, the Tweedie distribution was used to fit the monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether the distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals; however, the Poisson-gamma distribution is more useful for describing two important features of the rainfall pattern: the occurrences (dry months) and the amounts (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, the Tweedie Generalised Linear Model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. The QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of the stations in the west coast and midland than for those in the east coast of the Peninsula. This significant finding suggests that the best-fitting distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We have shown that data simulated using the Tweedie distribution have a frequency histogram fairly similar to that of the actual data. Both the mean number of rainfall events and the mean amount of rain for a month were estimated simultaneously in cases where the Poisson-gamma distribution fits the data reasonably well. Thus, this work complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
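
    The Poisson-gamma (compound Poisson) member of the Tweedie family can be simulated with the standard library alone: a Poisson number of rain events per month, each event adding a gamma-distributed amount. The parameter values here are illustrative, not values fitted to the Malaysian data:

```python
import math
import random

# Monthly rainfall as a Poisson-gamma (Tweedie) compound distribution.
# A month with zero events is a dry month; positive totals are wet months.
def poisson(lam, rng):
    """Knuth's multiplication algorithm for a Poisson draw (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def monthly_rainfall(lam, shape, scale, rng):
    events = poisson(lam, rng)
    return sum(rng.gammavariate(shape, scale) for _ in range(events))

rng = random.Random(42)
sample = [monthly_rainfall(lam=8.0, shape=2.0, scale=15.0, rng=rng) for _ in range(1000)]
mean = sum(sample) / len(sample)   # expected value is lam * shape * scale = 240
```

    This mirrors the abstract's point that one model captures both occurrence and amount: the event count and the per-event gamma amounts come from a single parameter set.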

  18. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
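
    One of the concepts listed above, leverage, has a closed form in simple linear regression: h_i = 1/n + (x_i - xbar)^2 / Sxx. A small sketch with made-up data, showing that an isolated x-value dominates the fit:

```python
# Leverage of each observation in simple linear regression:
# h_i = 1/n + (x_i - xbar)^2 / Sxx. High-leverage points pull the fitted
# line disproportionately. The x-values are invented for illustration.
def leverages(xs):
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    return [1.0 / n + (x - xbar) ** 2 / sxx for x in xs]

xs = [1.0, 2.0, 3.0, 4.0, 10.0]   # one isolated predictor value
h = leverages(xs)
# The leverages always sum to 2 (one per fitted coefficient: slope and
# intercept), and the isolated point x = 10 carries most of the total.
```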

  19. A three-dimensional spatiotemporal receptive field model explains responses of area MT neurons to naturalistic movies

    PubMed Central

    Nishimoto, Shinji; Gallant, Jack L.

    2012-01-01

    Area MT has been an important target for studies of motion processing. However, previous neurophysiological studies of MT have used simple stimuli that do not contain many of the motion signals that occur during natural vision. In this study we sought to determine whether views of area MT neurons developed using simple stimuli can account for MT responses under more naturalistic conditions. We recorded responses from macaque area MT neurons during stimulation with naturalistic movies. We then used a quantitative modeling framework to discover which specific mechanisms best predict neuronal responses under these challenging conditions. We find that the simplest model that accurately predicts responses of MT neurons consists of a bank of V1-like filters, each followed by a compressive nonlinearity, a divisive nonlinearity and linear pooling. Inspection of the fit models shows that the excitatory receptive fields of MT neurons tend to lie on a single plane within the three-dimensional spatiotemporal frequency domain, and suppressive receptive fields lie off this plane. However, most excitatory receptive fields form a partial ring in the plane and avoid low temporal frequencies. This receptive field organization ensures that most MT neurons are tuned for velocity but do not tend to respond to ambiguous static textures that are aligned with the direction of motion. In sum, MT responses to naturalistic movies are largely consistent with predictions based on simple stimuli. However, models fit using naturalistic stimuli reveal several novel properties of MT receptive fields that had not been shown in prior experiments. PMID:21994372

  20. Simultaneous determination of phenytoin, carbamazepine, and 10,11-carbamazepine epoxide in human plasma by high-performance liquid chromatography with ultraviolet detection.

    PubMed

    Bhatti, M M; Hanson, G D; Schultz, L

    1998-03-01

    The Bioanalytical Chemistry Department at the Madison facility of Covance Laboratories has developed and validated a simple and sensitive method for the simultaneous determination of phenytoin (PHT), carbamazepine (CBZ) and 10,11-carbamazepine epoxide (CBZ-E) in human plasma by high-performance liquid chromatography, with 10,11-dihydrocarbamazepine as the internal standard. Acetonitrile was added to plasma samples containing PHT, CBZ and CBZ-E to precipitate the plasma proteins. After centrifugation, the acetonitrile supernatant was transferred to a clean tube and evaporated under N2. The dried sample extract was reconstituted in 0.4 ml of mobile phase and injected for analysis by high-performance liquid chromatography. Separation was achieved on a Spherisorb ODS2 analytical column with a mobile phase of 18:18:70 acetonitrile:methanol:potassium phosphate buffer. Detection was at 210 nm using an ultraviolet detector. The mean retention times of CBZ-E, PHT and CBZ were 5.8, 9.9 and 11.8 min, respectively. Peak height ratios were fit by least-squares linear regression with 1/(concentration)² weighting. The method produces acceptable linearity, precision and accuracy to a minimum concentration of 0.050 micrograms ml⁻¹ in human plasma. It is also simple and convenient, with no observable matrix interferences.
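
    Weighted least squares with 1/(concentration)² weights, as used for the calibration line above, differs from ordinary least squares only in that every sum is weighted. A sketch with invented calibration pairs, not the published data:

```python
# Weighted least-squares calibration line with 1/(concentration)^2 weights,
# which keeps low-concentration points from being swamped when assay variance
# grows with concentration. The concentration/response pairs are hypothetical.
def wls_fit(xs, ys, ws):
    """Return (slope, intercept) minimizing sum of w_i * (y_i - a - b*x_i)^2."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = sxy / sxx
    return slope, my - slope * mx

conc = [0.05, 0.1, 0.5, 1.0, 5.0, 10.0]       # micrograms/ml
resp = [0.051, 0.099, 0.52, 0.98, 5.1, 9.9]   # peak-height ratios
w = [1.0 / c**2 for c in conc]
slope, intercept = wls_fit(conc, resp, w)      # near 1 and 0 for this data
```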

  1. Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.

  2. Health-related quality of life of Spanish children with cystic fibrosis.

    PubMed

    Groeneveld, Iris F; Sosa, Elena S; Pérez, Margarita; Fiuza-Luces, Carmen; Gonzalez-Saiz, Laura; Gallardo, Cristian; López-Mojares, Luis M; Ruiz, Jonatan R; Lucia, Alejandro

    2012-12-01

    To investigate (1) the contributions of sex, age, nutritional status- and physical-fitness-related variables on health-related quality of life (HRQOL) in Spanish children with cystic fibrosis, and (2) the agreement on HRQOL between children and their parents. In 28 children aged 6-17 years, body mass index percentile, percentage body fat, physical activity, pulmonary function, cardiorespiratory fitness, functional mobility, and dynamic muscle strength were determined using objective measures. HRQOL was measured using the revised version of the cystic fibrosis questionnaire. Simple and multiple linear regression analyses were performed to determine the variables associated with HRQOL. To assess the agreement on HRQOL between children and parents, intra-class correlation coefficients (ICCs) were calculated. Girls reported worse emotional functioning, a higher treatment burden, and more respiratory problems than boys. Greater functional mobility appeared associated with a less favourable body image and more eating disturbances. Agreement on HRQOL between children and parents was good to excellent, except for the domain of treatment burden. Sex and age were stronger predictors of HRQOL than nutritional status- or physical-fitness-related variables. Children reported a lower treatment burden than their parents perceived them to have.

  3. Hydration and conformational equilibria of simple hydrophobic and amphiphilic solutes.

    PubMed Central

    Ashbaugh, H S; Kaler, E W; Paulaitis, M E

    1998-01-01

    We consider whether the continuum model of hydration optimized to reproduce vacuum-to-water transfer free energies simultaneously describes the hydration free energy contributions to conformational equilibria of the same solutes in water. To this end, transfer and conformational free energies of idealized hydrophobic and amphiphilic solutes in water are calculated from explicit water simulations and compared to continuum model predictions. As benchmark hydrophobic solutes, we examine the hydration of linear alkanes from methane through hexane. Amphiphilic solutes were created by adding a charge of +/-1e to a terminal methyl group of butane. We find that phenomenological continuum parameters fit to transfer free energies are significantly different from those fit to conformational free energies of our model solutes. This difference is attributed to continuum model parameters that depend on solute conformation in water, and leads to effective values for the free energy/surface area coefficient and Born radii that best describe conformational equilibrium. In light of these results, we believe that continuum models of hydration optimized to fit transfer free energies do not accurately capture the balance between hydrophobic and electrostatic contributions that determines the solute conformational state in aqueous solution. PMID:9675177

  4. Optimization of isotherm models for pesticide sorption on biopolymer-nanoclay composite by error analysis.

    PubMed

    Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M

    2017-04-01

    A carboxymethyl cellulose-nano organoclay (nano-montmorillonite modified with 35-45 wt% dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was utilized for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to the Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested best fitting of the sorption data to the Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity Q0 (μg/g) was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the best fitting of the Langmuir and Freundlich models, and non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
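
    The contrast between linearized and non-linear fitting can be seen from the common Langmuir linearization C/q = 1/(Q0·K) + C/Q0, which turns the isotherm into a straight line in C. The sketch below uses noise-free synthetic data generated from assumed values Q0 = 2000 and K = 0.05, so the linearized fit recovers them exactly:

```python
# Langmuir isotherm q = Q0*K*C / (1 + K*C), fitted via the linearization
# C/q = 1/(Q0*K) + C/Q0. Data are synthetic and noise-free; Q0 and K are
# assumed values, not the paper's fitted parameters.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

Q0_true, K_true = 2000.0, 0.05
C = [5.0, 10.0, 25.0, 50.0, 100.0, 200.0]
q = [Q0_true * K_true * c / (1.0 + K_true * c) for c in C]

slope, intercept = linear_fit(C, [c / qi for c, qi in zip(C, q)])
Q0_est = 1.0 / slope           # recovers 2000
K_est = slope / intercept      # recovers 0.05
```

    With experimental noise the two approaches diverge: the reciprocal transform distorts the error structure, which is why the abstract's non-linear error analysis can prefer a different model than the linearized R² does.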

  5. Does linear separability really matter? Complex visual search is explained by simple search

    PubMed Central

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822

  6. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.

  7. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  8. Modified superposition: A simple time series approach to closed-loop manual controller identification

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.

    1986-01-01

    Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
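
    The Levinson-Durbin step mentioned above solves the Toeplitz normal equations recursively to obtain an autoregressive prewhitening filter. A textbook version (not the authors' implementation), checked on an AR(1) autocorrelation sequence:

```python
# Levinson-Durbin recursion: from autocorrelations r[0..p], compute the
# coefficients of an order-p AR model (the prewhitening filter) and the
# final prediction-error power.
def levinson_durbin(r, p):
    """Return (ar_coeffs, prediction_error) for an AR(p) model."""
    a = [0.0] * (p + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, p + 1):
        acc = sum(a[j] * r[k - j] for j in range(k))
        ref = -acc / err                     # reflection coefficient
        new_a = a[:]
        for j in range(1, k + 1):
            new_a[j] = a[j] + ref * a[k - j]
        a = new_a
        err *= (1.0 - ref * ref)
    return a, err

# Check on an AR(1) process x_t = 0.5*x_{t-1} + e_t, whose normalized
# autocorrelation is r[k] = 0.5**k; the recursion should return a[1] = -0.5.
r = [0.5**k for k in range(3)]
a, err = levinson_durbin(r, 1)
```

    Fitting an order-2 model to the same sequence returns a vanishing second coefficient, which is one way to pick the prewhitening order from data.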

  9. Comparative analysis of linear and non-linear method of estimating the sorption isotherm parameters for malachite green onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-08-21

    The experimental equilibrium data of malachite green onto activated carbon were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherms by linear and non-linear methods. A comparison between the linear and non-linear methods of estimating the isotherm parameters was discussed. The four different linearized forms of the Langmuir isotherm were also discussed. The results confirmed the non-linear method as a better way to obtain the isotherm parameters. The best-fitting isotherms were the Langmuir and Redlich-Peterson isotherms; Redlich-Peterson is a special case of Langmuir when the Redlich-Peterson isotherm constant g is unity.

  10. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

    We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within the linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated in all directions. The integration of orientation and QM linear response calculations together makes the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that QM calculation is only needed once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since ESP is directly fitted, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.

  11. Parameterizing sorption isotherms using a hybrid global-local fitting procedure.

    PubMed

    Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J

    2017-05-01

    Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. 
In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
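The staged fitting strategy described above can be sketched for a Freundlich isotherm, q = Kf·C^n. This is a minimal illustration on synthetic data, not the authors' code: a coarse parameter grid stands in for the particle swarm stage, SciPy's Powell method supplies the derivative-free local search, and the final Gauss-Marquardt-Levenberg polish is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic isotherm data (hypothetical): aqueous concentration C vs sorbed q,
# generated from a Freundlich isotherm with Kf = 1.8 and n = 0.6.
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q_obs = 1.8 * C ** 0.6

def wssr(p):
    """(Unweighted) sum of squared residuals for q = Kf * C**n."""
    Kf, n = p
    return float(np.sum((q_obs - Kf * C ** n) ** 2))

# Stage 1: coarse global search over a parameter grid (stand-in for PSO).
grid = [(Kf, n) for Kf in np.linspace(0.1, 5.0, 25)
                for n in np.linspace(0.1, 1.5, 25)]
p0 = min(grid, key=wssr)

# Stage 2: derivative-free local refinement with Powell's method.
res = minimize(wssr, p0, method="Powell")
Kf_fit, n_fit = res.x
```

On noiseless data the local stage recovers the generating parameters; on real isotherm data, restarting the local search from several grid points guards against the local minima the study reports.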

  12. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

Non-linear growth curves, or growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models with logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
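As a simplified, fixed-effects analogue of these models (ignoring the random effects that Mplus and NLMIXED estimate), a three-parameter logistic curve can be fit to a single synthetic trajectory:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, asymptote, rate, midpoint):
    """Three-parameter logistic growth curve."""
    return asymptote / (1.0 + np.exp(-rate * (t - midpoint)))

# Hypothetical noiseless achievement trajectory generated from known values.
t = np.linspace(0.0, 10.0, 21)
y = logistic(t, 100.0, 0.9, 5.0)

params, _ = curve_fit(logistic, t, y, p0=[80.0, 0.5, 4.0])
asymptote_fit, rate_fit, midpoint_fit = params
```

A Gompertz or Richards function can be substituted for `logistic` with the same calling pattern; only the parameter count and starting values change.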

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rice, T. Maurice; Robinson, Neil J.; Tsvelik, Alexei M.

Here, the high-temperature normal state of the unconventional cuprate superconductors has resistivity linear in temperature T, which persists to values well beyond the Mott-Ioffe-Regel upper bound. At low temperatures, within the pseudogap phase, the resistivity is instead quadratic in T, as would be expected from Fermi liquid theory. Developing an understanding of these normal phases of the cuprates is crucial to explain the unconventional superconductivity. We present a simple explanation for this behavior, in terms of the umklapp scattering of electrons. This fits within the general picture emerging from functional renormalization group calculations that spurred the Yang-Rice-Zhang ansatz: Umklapp scattering is at the heart of the behavior in the normal phase.

  14. Reconstructing the primordial spectrum of fluctuations of the universe from the observed nonlinear clustering of galaxies

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.; Matthews, Alex; Kumar, P.; Lu, Edward

    1991-01-01

It was discovered that the nonlinear evolution of the two-point correlation function in N-body experiments of galaxy clustering with Omega = 1 appears to be described to good approximation by a simple general formula. The underlying form of the formula is physically motivated, but its detailed representation is obtained empirically by fitting to N-body experiments. In this paper, the formula is presented along with an inverse formula which converts a final, nonlinear correlation function into the initial linear correlation function. The inverse formula is applied to observational data from the CfA, IRAS, and APM galaxy surveys to recover the initial spectrum of fluctuations of the universe, assuming Omega = 1.

  15. The ambient dose equivalent at flight altitudes: a fit to a large set of data using a Bayesian approach.

    PubMed

    Wissmann, F; Reginatto, M; Möller, T

    2010-09-01

    The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.

  16. Laser welding on trough panel: 3D body part

    NASA Astrophysics Data System (ADS)

    Shirai, Masato; Hisano, Hirohiko

    2003-03-01

Laser welding for automotive bodies was introduced, mainly by European car manufacturers, more than 10 years ago. Their purposes in introducing laser welding were mainly vehicle performance improvement and weight reduction, and laser welding was applied to limited portions where panel shapes are simple and welded flanges are easy to fit. Toyota has also applied laser welding to a 3-dimensional part, the trough panel, since 1999. Our purpose in introducing it was common use of equipment. The trough panel has a complex shape that differs between car types. In order to realize common use of welding equipment, we introduced to the trough welding process parts-locating equipment with unique, small and simple jigs for each car type, NC (Numerically Controlled) locators, and an air-cooled small laser head developed by ourselves. Laser welding replaced spot welding and was applied linearly, like stitches. The length of laser welding was determined by comparison with the static tensile strength and fatigue strength of spot welding.

  17. Kepler Observations of Rapid Optical Variability in the BL Lac Object W2R1926+42

    NASA Technical Reports Server (NTRS)

    Edelson, R.; Mushotzky, R.; Vaughn, S.; Scargle, J.; Gandhi, P.; Malkan, M.; Baumgartner, W.

    2013-01-01

    We present the first Kepler monitoring of a strongly variable BL Lac, W2R1926+42. The light curve covers 181 days with approx. 0.2% errors, 30 minute sampling and >90% duty cycle, showing numerous delta-I/I > 25% flares over timescales as short as a day. The flux distribution is highly skewed and non-Gaussian. The variability shows a strong rms-flux correlation with the clearest evidence to date for non-linearity in this relation. We introduce a method to measure periodograms from the discrete autocorrelation function, an approach that may be well-suited to a wide range of Kepler data. The periodogram is not consistent with a simple power-law, but shows a flattening at frequencies below 7 x 10^-5 Hz. Simple models of the power spectrum, such as a broken power law, do not produce acceptable fits, indicating that the Kepler blazar light curve requires more sophisticated mathematical and physical descriptions than currently in use.

  18. Parametrization of free ion levels of four isoelectronic 4f2 systems: Insights into configuration interaction parameters

    NASA Astrophysics Data System (ADS)

    Yeung, Yau Yuen; Tanner, Peter A.

    2013-12-01

    The experimental free ion 4f2 energy level data sets comprising 12 or 13 J-multiplets of La+, Ce2+, Pr3+ and Nd4+ have been fitted by a semiempirical atomic Hamiltonian comprising 8, 10, or 12 freely-varying parameters. The root mean square errors were 16.1, 1.3, 0.3 and 0.3 cm-1, respectively for fits with 10 parameters. The fitted inter-electronic repulsion and magnetic parameters vary linearly with ionic charge, i, but better linear fits are obtained with (4-i)2, although the reason is unclear at present. The two-body configuration interaction parameters α and β exhibit a linear relation with [ΔE(bc)]-1, where ΔE(bc) is the energy difference between the 4f2 barycentre and that of the interacting configuration, namely 4f6p for La+, Ce2+, and Pr3+, and 5p54f3 for Nd4+. The linear fit provides the rationale for the negative value of α for the case of La+, where the interacting configuration is located below 4f2.

  19. [Determination of morroniside concentration in beagle plasma and its pharmacokinetics by high performance liquid chromatography-tandem mass spectrometry].

    PubMed

    Xiong, Shan; Li, Jinglai; Zhu, Xiuqing; Wang, Xiaoying; Lü, Guiyuan; Zhang, Zhenqing

    2014-03-01

    A sensitive, simple and specific high performance liquid chromatography-electrospray ionization tandem mass spectrometry (LC-MS/MS) method was developed for the determination of morroniside in the plasma of beagles given intragastric (ig) doses of morroniside. The method employed paeoniflorin as the internal standard, and samples were extracted by simple protein precipitation. The separation was achieved using an Inertsil ODS-SP column (50 mm x 2.1 mm, 5 microm) with mobile phases of 1 mmol/L sodium formate aqueous solution and acetonitrile (gradient elution) at a flow rate of 0.4 mL/min. The detection was accomplished by a mass spectrometer using multiple reaction monitoring (MRM) in positive mode. Pharmacokinetic parameters were fitted with the software DAS 2.0. The methodological study showed a good linear relationship over 2-5 000 microg/L (r = 0.996 6), with a limit of quantification of 2 microg/L. The precision, accuracy, mean recoveries and matrix effects satisfied the requirements for biological sample measurement. The method was successfully applied to the pharmacokinetic study of morroniside in beagle plasma samples. The areas under the plasma concentration-time curves (AUC(0-infinity)) of morroniside after single ig administration of 5, 15 and 45 mg/kg doses were (1 631.20 +/- 238.50), (3 984.05 +/- 750.38) and (10 397.64 +/- 3 156.34) microg/L x h. The relationship between dose and AUC showed good linearity, so the pharmacokinetics of morroniside were proposed to be linear.

  20. Restoring method for missing data of spatial structural stress monitoring based on correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing segments in the monitoring data record will affect data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for missing stress data is proposed. Stress data from correlated measuring points over the 3 months of the season in which the data are missing are selected for fitting the correlation. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than 6 correlated points are used. The stress baseline value of each construction step should be calculated before interpolating missing data from the construction stage, in which case the average error is within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
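The single-point interpolation step can be sketched as follows, with invented stress histories for two correlated measuring points (the values at the "missing" indices are kept only as ground truth for checking):

```python
import numpy as np

# Invented stress histories (MPa) at two correlated measuring points;
# point B has a gap that is restored from point A.
stress_a = np.array([10.2, 11.0, 12.1, 13.5, 14.0, 15.2, 16.1, 17.0])
stress_b = np.array([20.5, 22.1, 24.0, 27.1, 28.0, 30.5, 32.2, 34.1])
missing = np.array([False, False, False, True, True, False, False, False])

# Apply the paper's threshold: use the pair only if r >= 0.9.
r = np.corrcoef(stress_a[~missing], stress_b[~missing])[0, 1]
assert r >= 0.9

# Fit b = slope*a + intercept on the observed pairs, then fill the gap.
slope, intercept = np.polyfit(stress_a[~missing], stress_b[~missing], 1)
stress_b_filled = np.where(missing, slope * stress_a + intercept, stress_b)
```

The multiple-regression variant simply stacks several correlated points as columns and fits by least squares; per the abstract, using more than about 6 correlated points brings little further gain.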

  1. Challenge from the simple: some caveats in linearization of the Boyle-van't Hoff and Arrhenius plots.

    PubMed

    Katkov, Igor I

    2008-10-01

    Some aspects of proper linearization of the Boyle-van't Hoff (BVH) relationship for calculation of the osmotically inactive volume v(b), and of the Arrhenius plot (AP) for the activation energy E(a), are discussed. It is shown that the commonly used determination of the slope and the intercept (v(b)), which are presumed to be independent of each other, is invalid if the initial intracellular molality m(0) is known. Instead, linear regression with only one independent parameter (v(b)), or the Least Square Method (LSM) with v(b) as the only fitting LSM parameter, must be applied. The slope can then be calculated from the BVH relationship as a function of v(b). If m(0) is unknown (for example, if cells are preloaded with trehalose, or electroporation has caused ion leakage, etc.), it is treated as a second independent statistical parameter to be found. In this (and only this) scenario, all three methods give the same results for v(b) and m(0). The AP can be linearized only for the water hydraulic conductivity (L(p)) and solute mobility (omega(s)), while the water and solute permeabilities P(w) ≡ L(p)RT and P(s) ≡ omega(s)RT cannot be linearized because they contain a pre-exponential factor (RT) that depends on the temperature T.
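The one-parameter fit advocated here has a closed form. Below is a small sketch with made-up volume data generated from v(b) = 0.2 and a known m(0); the normalized BVH model v = (1 - v(b))·(m(0)/m) + v(b) is assumed, and only v(b) is estimated:

```python
# Boyle-van't Hoff: v = (1 - vb) * (m0 / m) + vb, with m0 known.
# Closed-form one-parameter least squares for vb, as the abstract prescribes.
m0 = 0.3                                # known initial molality (made-up value)
m = [0.15, 0.3, 0.6, 0.9, 1.2]          # external osmolalities
v = [1.8, 1.0, 0.6, 0.467, 0.4]         # normalized equilibrium volumes (vb = 0.2)

# Residual r_i = v_i - x_i - vb*(1 - x_i), with x_i = m0/m_i.
# Setting d/dvb sum(r_i^2) = 0 gives:
#   vb = sum((v_i - x_i)*(1 - x_i)) / sum((1 - x_i)^2)
x = [m0 / mi for mi in m]
num = sum((vi - xi) * (1.0 - xi) for vi, xi in zip(v, x))
den = sum((1.0 - xi) ** 2 for xi in x)
vb = num / den
```

The slope of the linearized plot then follows from v(b) as (1 - vb)·m0, rather than being estimated as a second free parameter.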

  2. From Brown-Peterson to continual distractor via operation span: A SIMPLE account of complex span.

    PubMed

    Neath, Ian; VanWormer, Lisa A; Bireta, Tamra J; Surprenant, Aimée M

    2014-09-01

    Three memory tasks-Brown-Peterson, complex span, and continual distractor-all alternate presentation of a to-be-remembered item and a distractor activity, but each task is associated with a different memory system, short-term memory, working memory, and long-term memory, respectively. SIMPLE, a relative local distinctiveness model, has previously been fit to data from both the Brown-Peterson and continual distractor tasks; here we use the same version of the model to fit data from a complex span task. Despite the many differences between the tasks, including unpredictable list length, SIMPLE fit the data well. Because SIMPLE posits a single memory system, these results constitute yet another demonstration that performance on tasks originally thought to tap different memory systems can be explained without invoking multiple memory systems.

  3. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975

  4. Determination of the Interaction Position of Gamma Photons in Monolithic Scintillators Using Neural Network Fitting

    NASA Astrophysics Data System (ADS)

    Conde, P.; Iborra, A.; González, A. J.; Hernández, L.; Bellido, P.; Moliner, L.; Rigla, J. P.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Seimetz, M.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2016-02-01

    In Positron Emission Tomography (PET) detectors based on monolithic scintillators, the photon interaction position needs to be estimated from the light distribution (LD) on the photodetector pixels. Due to the finite size of the scintillator volume, the symmetry of the LD is truncated everywhere except at the crystal center. This effect produces a poor estimation of the interaction positions towards the edges, an especially critical situation when linear algorithms, such as Center of Gravity (CoG), are used. When all the crystal faces are painted black, except the one in contact with the photodetector, the LD can be assumed to follow the inverse square law, providing a simple theoretical model. Using this LD model, the interaction coordinates can be determined by fitting each event to a theoretical distribution. In that sense, the use of neural networks (NNs) has been shown to be an effective alternative to more traditional fitting techniques such as nonlinear least squares (LS). The multilayer perceptron is one type of NN which can model non-linear functions well and can be trained to generalize accurately when presented with new data. In this work we have shown the capability of NNs to approximate the LD and provide the interaction coordinates of γ-photons with two different photodetector setups. One experimental setup was based on analog Silicon Photomultipliers (SiPMs) and a charge division diode network, whereas the second setup was based on digital SiPMs (dSiPMs). In both experiments NNs minimized border effects. Average spatial resolutions of 1.9 ±0.2 mm and 1.7 ±0.2 mm for the entire crystal surface were obtained for the analog and dSiPM approaches, respectively.

  5. Spatially resolved regression analysis of pre-treatment FDG, FLT and Cu-ATSM PET from post-treatment FDG PET: an exploratory study

    PubMed Central

    Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert

    2012-01-01

    Purpose To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R2. Hypothesis testing of coefficients over the patient population was performed. Results Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost~0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost~FDGpre0.93, p<0.001). Univariate mixture model fits of FDGpre improved R2 from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748

  6. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions of genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hampers direct selection for this trait by breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with a high fit (R2 > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the mean (0.732 and 0.794) for NKE and TKW, respectively, indicates good reliability and prospects of success in the indirect selection of hybrids with high yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
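The notion of a direct effect, i.e. the partial regression coefficient of a standardized trait in the path diagram, can be illustrated with a toy least-squares sketch (synthetic standardized data, not the study's genotypic values; the coefficients 0.66 and 0.73 are borrowed from the abstract only to generate the data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# Synthetic standardized trait data: grain yield driven mostly by the
# number of kernels per ear (NKE) and thousand-kernel weight (TKW).
nke = rng.normal(size=n)
tkw = rng.normal(size=n)
grain_yield = 0.66 * nke + 0.73 * tkw + 0.05 * rng.normal(size=n)

# Direct effects = partial regression coefficients of the standardized traits.
X = np.column_stack([nke, tkw])
coef, *_ = np.linalg.lstsq(X, grain_yield, rcond=None)
nke_effect, tkw_effect = coef
```

In the actual study the predictors are correlated (e.g. the negative NKE-TKW path), which is exactly why partial coefficients, rather than simple correlations, are read as direct effects.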

  7. Health/Fitness Instructor's Handbook.

    ERIC Educational Resources Information Center

    Howley, Edward T.; Franks, B. Don

    This book identifies the components of physical fitness that are related to positive health as distinct from the simple performance of specific motor tasks. The positive health concept is expanded to further clarify the relationship of physical fitness to total fitness. The disciplinary knowledge base that is essential for fitness professionals is…

  8. Fitting and forecasting coupled dark energy in the non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, Santiago; Amendola, Luca; Pettorino, Valeria

    2016-01-01

    We consider cosmological models in which dark matter feels a fifth force mediated by the dark energy scalar field, also known as coupled dark energy. Our interest resides in estimating forecasts for future surveys like Euclid when we take into account non-linear effects, relying on new fitting functions that reproduce the non-linear matter power spectrum obtained from N-body simulations. We obtain fitting functions for models in which the dark matter-dark energy coupling is constant. Their validity is demonstrated for all available simulations in the redshift range z = 0-1.6 and wave modes below k = 1 h/Mpc. These fitting formulas can be used to test the predictions of the model in the non-linear regime without the need for additional computing-intensive N-body simulations. We then use these fitting functions to perform forecasts on the constraining power that future galaxy-redshift surveys like Euclid will have on the coupling parameter, using the Fisher matrix method for galaxy clustering (GC) and weak lensing (WL). We find that by using information in the non-linear power spectrum, and combining the GC and WL probes, we can constrain the dark matter-dark energy coupling constant squared, β^2, with precision smaller than 4% and all other cosmological parameters better than 1%, which is a considerable improvement of more than an order of magnitude compared to corresponding linear power spectrum forecasts with the same survey specifications.

  9. Determination of time zero from a charged particle detector

    DOEpatents

    Green, Jesse Andrew [Los Alamos, NM

    2011-03-15

    A method, system and computer program are used to determine a linear track that best fits the most likely or expected path of a charged particle passing through a charged particle detector having a plurality of drift cells. Hit signals from the charged particle detector are associated with a particular charged particle track. An initial estimate of time zero is made from these hit signals, and linear tracks are then fit to the drift radii implied by each candidate time-zero estimate. The linear track with the best fit is selected, and the errors in fit and tracking parameters are computed. By adopting this method and system, the large and expensive fast detectors otherwise needed to determine time zero in charged particle detectors can be avoided.
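A toy version of this time-zero scan might look as follows. The geometry, drift speed, and the use of the true track sign to resolve the left-right ambiguity are all simplifying assumptions for illustration, not the patented procedure: each candidate t0 converts hit times into drift radii, a line is fit, and the t0 whose linear track fits best is kept.

```python
import numpy as np

V_DRIFT = 0.05   # assumed drift speed (mm/ns)
T0_TRUE = 120.0  # assumed true time zero (ns), to be recovered

# Wires along x at y = 0; the true track y = a*x + b passes above some
# wires and below others, so a wrong t0 cannot be absorbed by the line fit.
x = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
a_true, b_true = 0.08, -2.0
r_true = np.abs(a_true * x + b_true)      # drift radii (mm)
t_hit = T0_TRUE + r_true / V_DRIFT        # measured hit times (ns)

def track_misfit(t0):
    """Sum of squared residuals of the best line for a candidate t0."""
    r = V_DRIFT * (t_hit - t0)                 # drift radii implied by t0
    signed = np.sign(a_true * x + b_true) * r  # left/right taken from truth (sketch only)
    A = np.vstack([x, np.ones_like(x)]).T
    resid = np.linalg.lstsq(A, signed, rcond=None)[1]
    return float(resid[0]) if resid.size else 0.0

# Scan candidate time zeros; keep the one whose linear track fits best.
candidates = np.linspace(100.0, 140.0, 401)
t0_best = float(min(candidates, key=track_misfit))
```

In a real detector the left-right assignment must itself be searched over, which is part of what makes the full track-fitting problem harder than this sketch.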

  10. Constraints of GRACE on the Ice Model and Mantle Rheology in Glacial Isostatic Adjustment Modeling in North-America

    NASA Astrophysics Data System (ADS)

    van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.

    2009-05-01

    GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best-fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and a non-linear rheology are added. This is useful because all the ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that non-linear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008, with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth, with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is found for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent (A) derived from a uniaxial stress experiment is varied among 3.3 x 10^-34/10^-35/10^-36 Pa^-3 s^-1, the Newtonian viscosity η is varied between 1 and 3 x 10^21 Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared to GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation.
It is found that a delay in glaciation is a better way to adjust ice models for composite rheology as it increases geoid rates and improves sea level fit at some sites.

  11. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    ABSTRACT The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536

  12. Symmetric log-domain diffeomorphic Registration: a demons-based approach.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2008-01-01

    Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.

  13. Non-Target Effect for Chromosome Aberrations in Human Lymphocytes and Fibroblasts After Exposure to Very Low Doses of High LET Radiation

    NASA Technical Reports Server (NTRS)

    Hada, Megumi; George, Kerry A.; Cucinotta, F. A.

    2011-01-01

    The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high-LET radiation exposure. Estimates of risks from low doses and low dose rates are often extrapolated from Japanese atomic bomb survivor data using either linear or linear-quadratic model fits. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (0.01-0.2 Gy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole-chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving >2 breaks in 2 or more chromosomes). The dose-response curves for doses above 0.1 Gy, where more than one ion traverses a cell, were linear. However, for doses less than 0.1 Gy, Si-28 ions showed no dose response, suggesting a non-targeted effect when fewer than one ion traversal per cell occurs. Additional findings for Fe-56 will be discussed.

  14. Quantification of endocrine disruptors and pesticides in water by gas chromatography-tandem mass spectrometry. Method validation using weighted linear regression schemes.

    PubMed

    Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P

    2010-10-22

    A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticides analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for analytical data, weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
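The effect of weighting can be sketched with numpy.polyfit. The calibration numbers below are invented, and note that polyfit squares the supplied weights internally, so passing w = 1/x produces the classical 1/x^2 weighting scheme:

```python
import numpy as np

# Invented calibration data spanning a wide range, with roughly constant
# *relative* error, i.e. larger absolute scatter at high concentration --
# the situation where the homoscedasticity assumption fails.
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
resp = np.array([2.1, 9.8, 20.4, 99.0, 204.0, 990.0])

# np.polyfit squares the supplied weights internally, so w = 1/x here
# gives the 1/x^2 weighting commonly used in bioanalytical calibration.
slope_w, intercept_w = np.polyfit(conc, resp, 1, w=1.0 / conc)
slope_o, intercept_o = np.polyfit(conc, resp, 1)   # unweighted, for contrast
```

The weighted line tracks the low-concentration standards far more closely than the unweighted one, which is the improvement in accuracy at the lower end of the calibration curve that the abstract describes.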

  15. Histological Grading of Hepatocellular Carcinomas with Intravoxel Incoherent Motion Diffusion-weighted Imaging: Inconsistent Results Depending on the Fitting Method.

    PubMed

    Ichikawa, Shintaro; Motosugi, Utaroh; Hernando, Diego; Morisaka, Hiroyuki; Enomoto, Nobuyuki; Matsuda, Masanori; Onishi, Hiroshi

    2018-04-10

    To compare the abilities of three intravoxel incoherent motion (IVIM) imaging approximation methods to discriminate the histological grade of hepatocellular carcinomas (HCCs). Fifty-eight patients (60 HCCs) underwent IVIM imaging with 11 b-values (0-1000 s/mm^2). Slow (D) and fast diffusion coefficients (D*) and the perfusion fraction (f) were calculated for the HCCs using the mean signal intensities in regions of interest drawn by two radiologists. Three approximation methods were used. First, all three parameters were obtained simultaneously using non-linear fitting (method A). Second, D was obtained using linear fitting (b = 500 and 1000), followed by non-linear fitting for D* and f (method B). Third, D was obtained by linear fitting, f was obtained using the regression line intersection and signals at b = 0, and non-linear fitting was used for D* (method C). A receiver operating characteristic analysis was performed to reveal the abilities of these methods to distinguish poorly-differentiated from well-to-moderately-differentiated HCCs. Inter-reader agreements were assessed using intraclass correlation coefficients (ICCs). The measurements of D, D*, and f in methods B and C (Az value, 0.658-0.881) had better discrimination abilities than did those in method A (Az value, 0.527-0.607). The ICCs of D and f were good to excellent (0.639-0.835) with all methods. The ICCs of D* were moderate with methods B (0.580) and C (0.463) and good with method A (0.705). The IVIM parameters may vary depending on the fitting methods, and therefore, further technical refinement may be needed.
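    The segmented fitting strategy of methods B and C can be illustrated on synthetic data. Everything below (b-values, parameter values, and a grid search standing in for the authors' non-linear fit of D*) is an assumption for illustration, not the study's implementation:

```python
import numpy as np

# Segmented ("method C"-style) IVIM fit on a noiseless synthetic signal:
# D from a log-linear fit at high b-values, f from the b=0 intercept of
# that line, D* by a simple grid search. Parameter values are illustrative.
b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 700, 850, 1000.0])
D_true, Dstar_true, f_true = 1.0e-3, 50e-3, 0.25   # mm^2/s units
S = (1 - f_true) * np.exp(-b * D_true) + f_true * np.exp(-b * (D_true + Dstar_true))

# Step 1: D from a linear fit of log(S) for b >= 500 (perfusion term has decayed)
hi = b >= 500
slope, intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
D = -slope

# Step 2: f from the intersection of the regression line with the b = 0 axis
f = 1 - np.exp(intercept) / S[0]

# Step 3: D* by grid search on the one remaining non-linear parameter
grid = np.linspace(1e-3, 100e-3, 2000)
def model(Dstar):
    return (1 - f) * np.exp(-b * D) + f * np.exp(-b * (D + Dstar))
resid = [np.sum((S - model(ds))**2) for ds in grid]
Dstar = grid[np.argmin(resid)]
print(D, f, Dstar)
```

    The appeal of the segmented approach is visible here: two of the three parameters come from a plain linear regression, leaving only one parameter for the unstable non-linear step.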

  16. The Radial Variation of the Solar Wind Temperature-Speed Relationship

    NASA Astrophysics Data System (ADS)

    Elliott, H. A.; McComas, D. J.

    2010-12-01

    Generally, the solar wind temperature (T) and speed (V) are well correlated, except in Interplanetary Coronal Mass Ejections, where this correlation breaks down. We have shown that at 1 AU the speed-temperature relationship is often well represented by a linear fit over a speed range spanning both the slow and fast wind. By examining all of the ACE and OMNI measurements, we found that when coronal holes are large the fast wind can have a different T-V relationship than the slow wind. The best example of this was in 2003, when there was a very large and long-lived outward-polarity coronal hole at low latitudes. The long-lived nature of the hole made it possible to clearly establish that large holes can have a different T-V relationship. We found it to be rare that holes are large enough and last long enough to provide enough data points to clearly demonstrate this effect. In this study we compare the 2003 coronal hole observations from ACE with the Ulysses polar coronal hole measurements. In an even earlier ACE study we found that both the compression and rarefaction curves are linear, but the compression curve is shifted to higher temperatures. In this presentation we use Helios, Ulysses, and ACE measurements to examine how the T-V relationship varies with distance. The dynamic evolution of the solar wind parameters is revealed when we first separate compressions and rarefactions and then determine the radial profiles of the solar wind parameters. We find that the T-V relationship varies with distance; in particular, beyond 3 AU the differences between compressions and rarefactions become quite important, and at such distances a simple linear fit does not represent the T-V distribution very well.

  17. The clustering of QSOs and the dark matter halos that host them

    NASA Astrophysics Data System (ADS)

    Zhao, Dong-Yao; Yan, Chang-Shuo; Lu, Youjun

    2013-10-01

    The spatial clustering of QSOs is an important measurable quantity which can be used to infer the properties of the dark matter halos that host them. We construct a simple QSO model to explain the linear bias of QSOs measured by recent observations and explore the properties of dark matter halos that host a QSO. We assume that major mergers of dark matter halos can lead to the triggering of QSO phenomena, and the evolution of luminosity for a QSO generally shows two accretion phases, i.e., initially having a constant Eddington ratio due to the self-regulation of the accretion process when supply is sufficient, and then declining in rate with time as a power law due to either diminished supply or long-term disk evolution. Using a Markov Chain Monte Carlo method, the model parameters are constrained by fitting the observationally determined QSO luminosity functions (LFs) in the hard X-ray and in the optical band simultaneously. Adopting the model parameters that best fit the QSO LFs, the linear bias of QSOs can be predicted and then compared with the observational measurements by accounting for various selection effects in different QSO surveys. We find that the latest measurements of the linear bias of QSOs from both the SDSS and BOSS QSO surveys can be well reproduced. The typical mass of SDSS QSOs at redshift 1.5 < z < 4.5 is ~(3-6) × 10^12 h^-1 M_solar and the typical mass of BOSS QSOs at z ~ 2.4 is ~2 × 10^12 h^-1 M_solar. For relatively faint QSOs, the mass distribution of their host dark matter halos is wider than that of bright QSOs because faint QSOs can be hosted in both big and smaller halos, while bright QSOs are only hosted in big halos, which partly explains the predicted weak dependence of the linear bias on QSO luminosity.

  18. Repair-dependent cell radiation survival and transformation: an integrated theory.

    PubMed

    Sutherland, John C

    2014-09-07

    The repair-dependent model of cell radiation survival is extended to include radiation-induced transformations. The probability of transformation is presumed to scale with the number of potentially lethal damages that are repaired in a surviving cell or the interactions of such damages. The theory predicts that at doses corresponding to high survival, the transformation frequency is the sum of simple polynomial functions of dose (linear, quadratic, etc.), essentially as described in widely used linear-quadratic expressions. At high doses, corresponding to low survival, the ratio of transformed to surviving cells asymptotically approaches an upper limit. The low-dose fundamental and high-dose plateau domains are separated by a downwardly concave transition region. Published transformation data for mammalian cells show the high-dose plateaus predicted by the repair-dependent model for both ultraviolet and ionizing radiation. For the neoplastic transformation experiments that were analyzed, the data can be fit with only the repair-dependent quadratic function. At low doses, the transformation frequency is strictly quadratic, but becomes sigmoidal over a wider range of doses. Inclusion of data from the transition region in a traditional linear-quadratic analysis of neoplastic transformation frequency data can exaggerate the magnitude of, or create the appearance of, a linear component. Quantitative analysis of survival and transformation data shows good agreement for ultraviolet radiation; the shapes of the transformation components can be predicted from survival data. For ionizing radiations, both neutrons and x-rays, survival data overestimate the transforming ability for low to moderate doses. The presumed cause of this difference is that, unlike UV photons, a single x-ray or neutron may generate more than one lethal damage in a cell, so the distribution of such damages in the population is not accurately described by Poisson statistics. However, the complete sigmoidal dose-response data for neoplastic transformations can be fit using the repair-dependent functions with all parameters determined only from transformation frequency data.

  19. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    PubMed

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  20. Five ab initio potential energy and dipole moment surfaces for hydrated NaCl and NaF. I. Two-body interactions.

    PubMed

    Wang, Yimin; Bowman, Joel M; Kamarchik, Eugene

    2016-03-21

    We report full-dimensional, ab initio-based potentials and dipole moment surfaces for NaCl, NaF, Na(+)H2O, F(-)H2O, and Cl(-)H2O. The NaCl and NaF potentials are diabatic ones that dissociate to ions. These are obtained using spline fits to CCSD(T)/aug-cc-pV5Z energies. In addition, non-linear least-squares fits using the Born-Mayer-Huggins potential are presented, providing accurate parameters based strictly on the current ab initio energies. The long-range behavior of the NaCl and NaF potentials is shown to go, as expected, accurately to the point-charge Coulomb interaction. The three ion-H2O potentials are permutationally invariant fits to roughly 20,000 coupled cluster CCSD(T) energies (awCVTZ basis for Na(+) and aVTZ basis for Cl(-) and F(-)), over a large range of distances and H2O intramolecular configurations. These potentials are switched accurately in the long range to the analytical ion-dipole interactions to improve computational efficiency. Dipole moment surfaces are fits to MP2 data; for the ion-ion cases, these are well described at intermediate and long range by the simple point-charge expression. The performance of these new fits is examined by direct comparison to additional ab initio energies and dipole moments along various cuts. Equilibrium structures, harmonic frequencies, and electronic dissociation energies are also reported and compared to direct ab initio results. These indicate the high fidelity of the new PESs.

  1. Ventilation/Perfusion Positron Emission Tomography—Based Assessment of Radiation Injury to Lung

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siva, Shankar, E-mail: shankar.siva@petermac.org; Sir Peter MacCallum Department of Oncology, University of Melbourne, Parkville; Hardcastle, Nicholas

    2015-10-01

    Purpose: To investigate ^68Ga-ventilation/perfusion (V/Q) positron emission tomography (PET)/computed tomography (CT) as a novel imaging modality for assessment of perfusion, ventilation, and lung density changes in the context of radiation therapy (RT). Methods and Materials: In a prospective clinical trial, 20 patients underwent 4-dimensional (4D)-V/Q PET/CT before, midway through, and 3 months after definitive lung RT. Eligible patients were prescribed 60 Gy in 30 fractions with or without concurrent chemotherapy. Functional images were registered to the RT planning 4D-CT, and isodose volumes were averaged into 10-Gy bins. Within each dose bin, relative loss in standardized uptake value (SUV) was recorded for ventilation and perfusion, and loss in air-filled fraction was recorded to assess RT-induced lung fibrosis. A dose-effect relationship was described using both linear and 2-parameter logistic fit models, and goodness of fit was assessed with the Akaike Information Criterion (AIC). Results: A total of 179 imaging datasets were available for analysis (1 scan was unrecoverable). An almost perfectly linear negative dose-response relationship was observed for perfusion and air-filled fraction (r^2=0.99, P<.01), with ventilation strongly negatively linear (r^2=0.95, P<.01). Logistic models did not provide a better fit as evaluated by AIC. Perfusion, ventilation, and the air-filled fraction decreased 0.75 ± 0.03%, 0.71 ± 0.06%, and 0.49 ± 0.02%/Gy, respectively. Within high-dose regions, higher baseline perfusion SUV was associated with a greater rate of loss. At 50 Gy and 60 Gy, the rate of loss was 1.35% (P=.07) and 1.73% (P=.05) per SUV, respectively. Of 8/20 patients with peritumoral reperfusion/reventilation during treatment, 7/8 did not sustain this effect after treatment. Conclusions: Radiation-induced regional lung functional deficits occur in a dose-dependent manner and can be estimated by simple linear models with 4D-V/Q PET/CT imaging. These findings may inform future studies of functional lung avoidance using V/Q PET/CT.
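    The linear-versus-logistic comparison by AIC can be sketched as follows. The 2-parameter logistic form, the noise level and the dose-loss numbers are illustrative assumptions, not the trial's data:

```python
import numpy as np

# Fit a linear and a 2-parameter logistic dose-response model to synthetic,
# illustrative data and compare them with AIC = n*ln(RSS/n) + 2k.
rng = np.random.default_rng(0)
dose = np.arange(5, 65, 10.0)                        # Gy, 10-Gy bin midpoints
loss = 0.75 * dose + rng.normal(0, 0.5, dose.size)   # ~0.75 %/Gy, near-linear

# Linear model (k = 2 parameters)
coef = np.polyfit(dose, loss, 1)
rss_lin = np.sum((loss - np.polyval(coef, dose))**2)

# 2-parameter logistic model L(d) = 100 / (1 + (d50/d)^g), fit by grid search
rss_log = np.inf
for d50 in np.linspace(20, 200, 200):
    for g in np.linspace(0.5, 4, 100):
        rss = np.sum((loss - 100.0 / (1 + (d50 / dose)**g))**2)
        rss_log = min(rss_log, rss)

n = dose.size
aic = lambda rss, k: n * np.log(rss / n) + 2 * k
print(aic(rss_lin, 2), aic(rss_log, 2))
```

    With both models carrying two parameters, AIC reduces to comparing residual sums of squares, and on near-linear data the linear model wins, mirroring the paper's finding.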

  2. Superpixel Based Factor Analysis and Target Transformation Method for Martian Minerals Detection

    NASA Astrophysics Data System (ADS)

    Wu, X.; Zhang, X.; Lin, H.

    2018-04-01

    Factor analysis and target transformation (FATT) is an effective method to test for the presence of a particular mineral on the Martian surface. It has been used with both thermal infrared (Thermal Emission Spectrometer, TES) and near-infrared (Compact Reconnaissance Imaging Spectrometer for Mars, CRISM) hyperspectral data. FATT derives a set of orthogonal eigenvectors from a mixed system and typically selects the first 10 eigenvectors for a least-squares fit to library mineral spectra. However, minerals present in only a limited number of pixels will be ignored because their spectral features are weak compared with the full-image signatures. Here, we propose a superpixel-based FATT method to detect mineral distributions on Mars. The simple linear iterative clustering (SLIC) algorithm is used to partition the CRISM image into multiple connected, spectrally homogeneous regions, enhancing weak signatures by increasing their proportion in the mixed system. A least-squares fitting is used in the target transformation and performed for each region iteratively. Finally, the distribution of specific minerals in the image is obtained, with a fitting residual below a threshold indicating presence and a larger residual indicating absence. We validate our method by identifying carbonates in a well-analysed CRISM image of Nili Fossae on Mars. Our experimental results indicate that the proposed method works well on both simulated and real data sets.
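    The core of the FATT step (eigenvectors derived from the mixed system, then a least-squares fit of a library spectrum) can be sketched with plain linear algebra. The synthetic Gaussian "spectra" below are illustrative assumptions, not CRISM data:

```python
import numpy as np

# Factor analysis + target transformation on synthetic spectra: derive
# eigenvectors from a mixed system by SVD, then least-squares fit a library
# spectrum with the first k of them. A small residual suggests presence.
rng = np.random.default_rng(1)
wav = np.linspace(1.0, 2.5, 200)
gauss = lambda c, s: np.exp(-(wav - c)**2 / (2 * s**2))
endmembers = np.stack([gauss(1.4, 0.05), gauss(1.9, 0.08), gauss(2.2, 0.04)])

# Mixed "image" spectra: random abundances of the endmembers plus noise
abund = rng.random((500, 3))
cube = abund @ endmembers + rng.normal(0, 0.01, (500, wav.size))

# Factor analysis: first k eigenvectors of the mixed system
k = 10
_, _, Vt = np.linalg.svd(cube, full_matrices=False)
basis = Vt[:k]                                   # (k, n_bands)

# Target transformation: least-squares fit of a library spectrum
target = gauss(1.9, 0.08)                        # present in the mixture
coef, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
resid_present = np.linalg.norm(basis.T @ coef - target)

absent = gauss(1.1, 0.05)                        # not present in the mixture
coef2, *_ = np.linalg.lstsq(basis.T, absent, rcond=None)
resid_absent = np.linalg.norm(basis.T @ absent_coef if False else basis.T @ coef2 - absent)
print(resid_present, resid_absent)
```

    The superpixel idea in the abstract amounts to running this fit per SLIC region instead of on the whole image, so that a mineral confined to a few pixels contributes a larger share of its region's eigenvectors.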

  3. Performance on a work-simulating firefighter test versus approved laboratory tests for firefighters and applicants.

    PubMed

    von Heimburg, Erna; Medbø, Jon Ingulf; Sandsund, Mariann; Reinertsen, Randi Eidsmo

    2013-01-01

    Firefighters must meet minimum physical demands. The Norwegian Labour Inspection Authority (NLIA) has approved a standardised treadmill walking test and 3 simple strength tests for smoke divers. The results of the Trondheim test were compared with those of the NLIA tests, taking into account possible effects of age, experience level and gender. Four groups of participants took part in the tests: 19 young experienced firefighters, 24 senior male firefighters, and inexperienced applicants, 12 male and 8 female. Oxygen uptake (VO2) at exhaustion rose linearly with the duration of the treadmill test. Time spent on the Trondheim test was closely related to performance time and peak VO2 on the treadmill test. Senior experienced firefighters did not perform better than equally fit young applicants. However, female applicants performed more poorly on the Trondheim test than on the treadmill test. Performance on the Trondheim test was not closely related to muscle strength beyond a minimum. Conclusion: Firefighters completing the Trondheim test in under 19 min meet the requirements of the NLIA treadmill test. The Trondheim test can be used as an alternative to the NLIA tests for testing aerobic fitness but not muscular strength. Women's results on the Trondheim test were poorer than their results on the NLIA treadmill test, probably because of their lower body mass.

  4. The Interrelationship between Promoter Strength, Gene Expression, and Growth Rate

    PubMed Central

    Klesmith, Justin R.; Detwiler, Emily E.; Tomek, Kyle J.; Whitehead, Timothy A.

    2014-01-01

    In exponentially growing bacteria, expression of heterologous protein impedes cellular growth rates. A quantitative understanding of the relationship between expression and growth rate will advance our ability to forward engineer bacteria, which is important for metabolic engineering and synthetic biology applications. Recent work described a scaling model based on optimal allocation of ribosomes for protein translation. This model quantitatively predicts a linear relationship between microbial growth rate and heterologous protein expression with no free parameters. With the aim of validating this model, we rigorously quantified the fitness cost of gene expression by using a library of synthetic constitutive promoters to drive expression of two separate proteins (eGFP and amiE) in different E. coli strains and growth media. In all cases, we demonstrate that the fitness cost is consistent with the previous findings. We expand upon the previous theory by introducing a simple promoter activity model to quantitatively predict how basal promoter strength relates to growth rate and protein expression. We then estimate the amount of protein expression needed to support high flux through a heterologous metabolic pathway and predict the sizable fitness cost associated with enzyme production. This work has broad implications across the applied biological sciences because it allows for prediction of the interplay between promoter strength, protein expression, and the resulting cost to microbial growth rates. PMID:25286161

  5. Inference on periodicity of circadian time series.

    PubMed

    Costa, Maria J; Finkenstädt, Bärbel; Roche, Véronique; Lévi, Francis; Gould, Peter D; Foreman, Julia; Halliday, Karen; Hall, Anthony; Rand, David A

    2013-09-01

    Estimation of the period length of time-course data from cyclical biological processes, such as those driven by the circadian pacemaker, is crucial for inferring the properties of the biological clock found in many living organisms. We propose a methodology for period estimation based on spectrum resampling (SR) techniques. Simulation studies show that SR is more accurate and more robust to non-sinusoidal and noisy cycles than a currently used routine based on Fourier approximations. In addition, a simple fit to the oscillations using linear least squares is available, together with a non-parametric test for detecting changes in period length which allows for period estimates with different variances, as frequently encountered in practice. The proposed methods are motivated by and applied to various data examples from chronobiology.
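    The "simple fit to the oscillations using linear least squares" is possible because, once a trial period is fixed, the amplitude and phase of a sinusoid (plus any polynomial trend) enter the model linearly through cos/sin basis functions. A hedged sketch on synthetic circadian-like data (all numbers illustrative):

```python
import numpy as np

# For a fixed trial period, a sinusoid plus linear trend is linear in its
# coefficients, so it can be fitted by ordinary least squares.
rng = np.random.default_rng(2)
t = np.arange(0, 96, 0.5)                       # hours, four days of data
period = 24.3
y = 2.0 + 0.01 * t + 1.5 * np.cos(2 * np.pi * t / period - 1.0) \
    + rng.normal(0, 0.2, t.size)

w = 2 * np.pi / period
X = np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
amp = np.hypot(beta[2], beta[3])                # fitted amplitude
phase = np.arctan2(beta[3], beta[2])            # fitted phase (radians)
print(amp, phase)
```

    Scanning this linear fit over a grid of trial periods and picking the period with the smallest residual is one common way to turn it into a period estimator.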

  6. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum-order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.

  7. Power spectrum analysis with least-squares fitting: amplitude bias and its elimination, with application to optical tweezers and atomic force microscope cantilevers.

    PubMed

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2010-07-01

    Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.

  8. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    ERIC Educational Resources Information Center

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  9. Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.

    DTIC Science & Technology

    1980-10-01

    [Scanned-report OCR; only fragments are recoverable: variable definitions for the linear curve fits (SECON, the real intercept of a linear curve fit, as from subroutine CURVE) and the flow chart of subroutine CALIB.]

  10. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    PubMed

    Brown, Angus M

    2006-04-01

    The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the difference between the sum of the squares of the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
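    The iterative least-squares idea behind SOLVER can be mimicked in a short script. Here a Gauss-Newton loop with a numerical Jacobian stands in for SOLVER's generalized reduced gradient method, and the double-Gaussian data are synthetic, not the optic-nerve recordings:

```python
import numpy as np

# Iteratively adjust the parameters of a sum of two Gaussians to minimise
# the sum of squared residuals (Gauss-Newton with numerical derivatives).
def model(t, p):
    a1, m1, s1, a2, m2, s2 = p
    return a1 * np.exp(-(t - m1)**2 / (2 * s1**2)) \
         + a2 * np.exp(-(t - m2)**2 / (2 * s2**2))

t = np.linspace(0, 10, 200)
p_true = np.array([1.0, 3.0, 0.5, 0.6, 6.0, 0.8])
rng = np.random.default_rng(3)
y = model(t, p_true) + rng.normal(0, 0.01, t.size)

p = np.array([0.8, 2.8, 0.6, 0.5, 6.3, 0.7])      # user-supplied initial guess
for _ in range(50):                               # Gauss-Newton iterations
    r = y - model(t, p)                           # residuals
    J = np.empty((t.size, p.size))                # numerical Jacobian
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = 1e-6
        J[:, j] = (model(t, p + dp) - model(t, p - dp)) / 2e-6
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    p = p + step
print(np.round(p, 3))
```

    As the paper notes for SOLVER, a sensible initial guess matters: the iteration converges to the nearest local minimum of the sum of squared residuals.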

  11. A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique.

    DTIC Science & Technology

    1980-09-01

    SACLANTCEN Memorandum SM-139, SACLANT ASW Research Centre: "A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique," by Ole F. Hastrup and Tuncay Akal, September 1980. [Remainder of the scanned record is unrecoverable OCR output.]

  12. Synchrotron speciation data for zero-valent iron nanoparticles

    EPA Pesticide Factsheets

    This data set encompasses a complete analysis of synchrotron speciation data for 5 iron nanoparticle samples (P1, P2, P3, S1, S2, and metallic iron), including linear combination fitting results (Table 6 and Figure 9) and ab initio extended X-ray absorption fine structure spectroscopy fitting (Figure 10 and Table 7). Table 6: Linear combination fitting of the XAS data for the 5 commercial nZVI/ZVI products tested. Species proportions are presented as percentages. Goodness of fit is indicated by the chi^2 value. Figure 9: Normalised Fe K-edge k3-weighted EXAFS of the 5 commercial nZVI/ZVI products tested. Dotted lines show the best 4-component linear combination fit of reference spectra. Figure 10: Fourier-transformed radial distribution functions (RDFs) of the five samples and an iron metal foil. The black lines in Fig. 10 represent the sample data and the red dotted curves represent the non-linear fitting results of the EXAFS data. Table 7: Coordination parameters of Fe in the samples. This dataset is associated with the following publication: Chekli, L., B. Bayatsarmadi, R. Sekine, B. Sarkar, A. Maoz Shen, K. Scheckel, W. Skinner, R. Naidu, H. Shon, E. Lombi, and E. Donner. Analytical Characterisation of Nanoscale Zero-Valent Iron: A Methodological Review. Analytica Chimica Acta 903: 13-35 (2016).

  13. On spurious detection of linear response and misuse of the fluctuation-dissipation theorem in finite time series

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg A.; Wormell, J. P.; Wouters, Jeroen

    2016-09-01

    Using a sensitive statistical test, we determine whether or not one can detect the breakdown of linear response given observations of deterministic dynamical systems. A goodness-of-fit statistic is developed for a linear statistical model of the observations, based on results for central limit theorems for deterministic dynamical systems, and used to detect linear response breakdown. We apply the method to discrete maps which do not obey linear response and show that successful detection of breakdown depends on the length of the time series, the magnitude of the perturbation and the choice of the observable. We find that in order to reliably reject the assumption of linear response for typical observables, sufficiently large data sets are needed. Even for simple systems such as the logistic map, one needs of the order of 10^6 observations to reliably detect the breakdown with a confidence level of 95%; if fewer observations are available, one may be falsely led to conclude that linear response theory is valid. The smaller the applied perturbation, the larger the amount of data required. For judiciously chosen observables the necessary amount of data can be drastically reduced, but this requires detailed a priori knowledge about the invariant measure, which is typically not available for complex dynamical systems. Furthermore, we explore the use of the fluctuation-dissipation theorem (FDT) in cases with limited data length or coarse-graining of observations. The FDT, if applied naively to a system without linear response, is shown to be very sensitive to the details of the sampling method, resulting in erroneous predictions of the response.

  14. The electrical response of turtle cones to flashes and steps of light.

    PubMed

    Baylor, D A; Hodgkin, A L; Lamb, T D

    1974-11-01

    1. The linear response of turtle cones to weak flashes or steps of light was usually well fitted by equations based on a chain of six or seven reactions with time constants varying over about a 6-fold range. 2. The temperature coefficient (Q10) of the reciprocal of the time to peak of the response to a flash was 1.8 (15-25 degrees C), corresponding to an activation energy of 10 kcal/mole. 3. Electrical measurements with one internal electrode and a balancing circuit gave the following results on red-sensitive cones of high resistance: resistance across cell surface in dark, 50-170 MΩ; time constant in dark, 4-6.5 msec. The effect of a bright light was to increase the resistance and time constant by 10-30%. 4. If the cell time constant, resting potential and maximum hyperpolarization are known, the fraction of ionic channels blocked by light at any instant can be calculated from the hyperpolarization and its rate of change. At times less than 50 msec the shape of this relation is consistent with the idea that the concentration of a blocking molecule which varies linearly with light intensity is in equilibrium with the fraction of ionic channels blocked. 5. The rising phase of the response to flashes and steps of light covering a 10^5-fold range of intensities is well fitted by a theory in which the essential assumptions are that (i) light starts a linear chain of reactions leading to the production of a substance which blocks ionic channels in the outer segment, (ii) an equilibrium between the blocking molecules and unblocked channels is established rapidly, and (iii) the electrical properties of the cell can be represented by a simple circuit with a time constant in the dark of about 6 msec. 6. Deviations from the simple theory which occur after 50 msec are attributed partly to a time-dependent desensitization mechanism and partly to a change in saturation potential resulting from a voltage-dependent change in conductance. 7. The existence of several components in the relaxation of the potential to its resting level can be explained by supposing that the 'substance' which blocks light-sensitive ionic channels is inactivated in a series of steps.
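    The linear reaction-chain model in point 1 has a simple closed-form impulse response: a chain of n identical first-order stages with time constant tau responds to a brief flash as r(t) ∝ (t/tau)^(n-1) e^(-t/tau), peaking at t = (n-1)tau. A minimal numerical check, where n and tau are illustrative values rather than fitted values from the paper:

```python
import numpy as np

# Impulse response of a linear chain of n identical first-order stages,
# an illustrative stand-in for the six- or seven-stage model above.
n, tau = 6, 20.0                      # number of stages, time constant in ms
t = np.linspace(0, 400, 4001)         # 0.1 ms grid
r = (t / tau)**(n - 1) * np.exp(-t / tau)
t_peak = t[np.argmax(r)]
print(t_peak)
```

    Setting dr/dt = 0 gives the peak time analytically as (n-1)*tau, which is 100 ms for these illustrative values; the Q10 of the reciprocal time-to-peak in point 2 then maps directly onto the temperature dependence of tau.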

  15. Comparison of linear and non-linear method in estimating the sorption isotherm parameters for safranin onto activated carbon.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-08-31

    A comparative analysis of the linear least-squares method and the non-linear method for estimating isotherm parameters was made using the experimental equilibrium data of safranin onto activated carbon at two different solution temperatures, 305 and 313 K. Equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fitted the experimental equilibrium data well. The results showed that the non-linear method could be a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
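    The two estimation routes can be contrasted on synthetic data: the linearised Langmuir plot (Ce/qe versus Ce) versus a direct non-linear least-squares fit, here implemented as a simple grid search. Parameter values and noise are illustrative assumptions, not the safranin data:

```python
import numpy as np

# Fit the Langmuir isotherm qe = qm*Ka*Ce/(1 + Ka*Ce) by (i) the linearised
# form Ce/qe = 1/(qm*Ka) + Ce/qm and (ii) direct non-linear least squares.
qm_true, Ka_true = 80.0, 0.05
Ce = np.array([5, 10, 20, 40, 80, 160.0])
rng = np.random.default_rng(4)
qe = qm_true * Ka_true * Ce / (1 + Ka_true * Ce) * (1 + rng.normal(0, 0.03, Ce.size))

# (i) linearised Langmuir: regress Ce/qe on Ce
slope, icpt = np.polyfit(Ce, Ce / qe, 1)
qm_lin, Ka_lin = 1 / slope, slope / icpt

# (ii) non-linear least squares by grid search on (qm, Ka)
best = (np.inf, None, None)
for qm in np.linspace(40, 120, 200):
    for Ka in np.linspace(0.01, 0.2, 200):
        rss = np.sum((qe - qm * Ka * Ce / (1 + Ka * Ce))**2)
        if rss < best[0]:
            best = (rss, qm, Ka)
_, qm_nl, Ka_nl = best
print(qm_lin, Ka_lin, qm_nl, Ka_nl)
```

    The linearisation distorts the error structure (dividing by the noisy qe), which is the paper's argument for preferring the non-linear fit.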

  16. Hydraulic geometry of the Platte River in south-central Nebraska

    USGS Publications Warehouse

    Eschner, T.R.

    1982-01-01

    At-a-station hydraulic geometry of the Platte River in south-central Nebraska is complex. The range of exponents of simple power-function relations is large, both between different reaches of the river and among different sections within a given reach. The at-a-station exponents plot in several fields of the b-f-m diagram, suggesting that morphologic and hydraulic changes with increasing discharge vary considerably. Systematic changes in the plotting positions of the exponents with time indicate that, in general, the width exponent has decreased, although trends are not readily apparent in the other exponents. Plots of the hydraulic-geometry relations indicate that simple power functions are not the proper model in all instances. For these sections, breaks in the slopes of the hydraulic-geometry relations serve to partition the data sets. Power functions fit separately to the partitioned data described the width-, depth-, and velocity-discharge relations more accurately than did a single power function. Plotting positions on b-f-m diagrams of the exponents from hydraulic-geometry relations of partitioned data sets indicate that much of the apparent variation in the plotting positions of single power functions arises because a single power function compromises both subsets of the partitioned data. For several sections, the shape of the channel primarily accounts for the better fit of two power functions to partitioned data than of a single power function over the entire range of data. These non-log-linear relations may have significance for channel maintenance. (USGS)
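
    At-a-station hydraulic geometry fits the power functions w = aQ^b, d = cQ^f, v = kQ^m, and continuity (Q = w·d·v) forces b + f + m = 1. A minimal sketch with assumed synthetic data, fitting each exponent by ordinary least squares in log-log space:

```python
import math

def loglog_fit(Q, Y):
    # OLS fit of Y = a * Q**b via the linear form log Y = log a + b log Q
    x = [math.log(q) for q in Q]
    y = [math.log(v) for v in Y]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b

# assumed synthetic at-a-station data obeying continuity Q = w * d * v
Q = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0]
width = [12.0 * q ** 0.45 for q in Q]
depth = [0.8 * q ** 0.35 for q in Q]
velocity = [q / (w * d) for q, w, d in zip(Q, width, depth)]

_, b = loglog_fit(Q, width)
_, f = loglog_fit(Q, depth)
_, m = loglog_fit(Q, velocity)
# for exact power-law data the exponents sum to 1
```

The partitioned-data approach in the abstract amounts to running `loglog_fit` separately on the points above and below a slope break in discharge.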

  17. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

  18. Probing primordial features with future galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballardini, M.; Fedeli, C.; Moscardini, L.

    2016-10-01

    We study the capability of future measurements of the galaxy clustering power spectrum to probe departures from a power-law spectrum for primordial fluctuations. Considering the information from the galaxy clustering power spectrum up to quasi-linear scales, i.e. k < 0.1 h Mpc^-1, we present forecasts for DESI, Euclid and SPHEREx in combination with CMB measurements. As examples of departures in the primordial power spectrum from a simple power law, we consider four Planck 2015 best fits motivated by inflationary models with different breaking of the slow-roll approximation. At present, these four representative models provide an improved fit to CMB temperature anisotropies, although not at a statistically significant level. As for other extensions in the matter content of the simplest ΛCDM model, the complementarity of the information in the resulting matter power spectrum expected from these galaxy surveys and in the primordial power spectrum from CMB anisotropies can be effective in constraining cosmological models. We find that the three galaxy surveys can add significant information to CMB to better constrain the extra parameters of the four models considered.

  19. Compositional Models of Glass/Melt Properties and their Use for Glass Formulation

    DOE PAGES

    Vienna, John D. (Richland, Washington, USA)

    2014-12-18

    Nuclear waste glasses must simultaneously meet a number of criteria related to their processability, product quality, and cost factors. The properties that must be controlled in glass formulation and waste vitrification plant operation tend to vary smoothly with composition, allowing glass property-composition models to be developed and used. Models have been fit to the key glass properties. The properties are transformed so that simple functions of composition (e.g., linear, polynomial, or component ratios) can be used as model forms. The model forms are fit to experimental data designed statistically to efficiently cover the composition space of interest. Examples of these models are found in the literature. The glass property-composition models, their uncertainty definitions, property constraints, and optimality criteria are combined to formulate optimal glass compositions, control composition in vitrification plants, and to qualify waste glasses for disposal. An overview of current glass property-composition modeling techniques is summarized in this paper along with an example of how those models are applied to glass formulation and product qualification at the planned Hanford high-level waste vitrification plant.

  20. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested, and no other choices with regard to the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
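
    Fitting a Morse-shaped fit-basis function V(r) = D·(1 - e^(-a(r - re)))² to a one-dimensional potential cut can be sketched as follows. The synthetic grid and parameter ranges are assumptions, and a simple grid search stands in for the non-linear conjugate gradient optimizer used in the paper:

```python
import math

def morse(r, D, a, re):
    # Morse potential: V(r) = D * (1 - exp(-a*(r - re)))**2
    return D * (1.0 - math.exp(-a * (r - re))) ** 2

# synthetic single-point grid along one coordinate (assumed values)
r_grid = [1.2 + 0.2 * i for i in range(10)]            # 1.2 .. 3.0
V_grid = [morse(r, 0.20, 1.2, 1.8) for r in r_grid]

# the non-linear parameter re is pinned at the grid minimum of the cut
re_est = r_grid[V_grid.index(min(V_grid))]

# remaining parameters by brute-force least squares over a grid
best = None
for i in range(101):
    D = 0.10 + 0.002 * i           # 0.10 .. 0.30
    for j in range(101):
        a = 0.80 + 0.008 * j       # 0.80 .. 1.60
        sse = sum((v - morse(r, D, a, re_est)) ** 2
                  for r, v in zip(r_grid, V_grid))
        if best is None or sse < best[0]:
            best = (sse, D, a)
```

Because the basis already has the shape of a single-minimum anharmonic cut, a handful of terms can reproduce the potential, which is the efficiency argument made in the abstract.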

  1. Laplacian versus topography in the solution of the linear gravimetric boundary value problem by means of successive approximations

    NASA Astrophysics Data System (ADS)

    Holota, Petr; Nesvadba, Otakar

    2017-04-01

    The aim of this paper is to discuss the solution of the linearized gravimetric boundary value problem by means of the method of successive approximations. We start with the relation between the geometry of the solution domain and the structure of Laplace's operator. As in other branches of engineering and mathematical physics, a transformation of coordinates is used that offers the possibility of trading boundary complexity for complexity of the coefficients of the partial differential equation governing the solution. Laplace's operator has a relatively simple structure in terms of ellipsoidal coordinates, which are frequently used in geodesy. However, the physical surface of the Earth substantially differs from an oblate ellipsoid of revolution, even if it is optimally fitted. Therefore, an alternative is discussed. A system of general curvilinear coordinates such that the physical surface of the Earth is embedded in the family of coordinate surfaces is used. Clearly, the structure of Laplace's operator is more complicated in this case. It was deduced by means of tensor calculus and in a sense it represents the topography of the physical surface of the Earth. Nevertheless, the construction of the respective Green's function is simpler if the solution domain is transformed. This enables the use of the classical Green's function method together with the method of successive approximations for the solution of the linear gravimetric boundary value problem expressed in terms of the new coordinates. The structure of the iteration steps is analyzed and, where useful, also modified by means of integration by parts. Comparison with other methods is discussed.

  2. A Pole-Zero Filter Cascade Provides Good Fits to Human Masking Data and to Basilar Membrane and Neural Data

    NASA Astrophysics Data System (ADS)

    Lyon, Richard F.

    2011-11-01

    A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
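
    The shifting property mentioned above is easy to verify numerically for a single damped resonance: moving a pole pair along the real-s axis only multiplies the impulse response by a real exponential, so its zero-crossing times stay fixed. A small sketch with assumed frequency and damping values:

```python
import math

def h(t, sigma, omega):
    # impulse response of a complex pole pair at s = -sigma +/- j*omega
    return math.exp(-sigma * t) * math.cos(omega * t)

omega = 2.0 * math.pi * 1000.0   # assumed 1 kHz resonance

def first_crossing(sigma):
    # first positive-to-negative zero crossing, scanned on a fine time grid
    dt = 1e-7
    prev = h(0.0, sigma, omega)
    for i in range(1, 20000):
        cur = h(i * dt, sigma, omega)
        if prev > 0 and cur <= 0:
            return i * dt
        prev = cur

tc1 = first_crossing(500.0)      # lightly damped pole
tc2 = first_crossing(5000.0)     # pole shifted in the real-s direction
```

The crossings coincide (at a quarter period, 0.25 ms here) because the exponential factor introduced by the shift is strictly positive and cannot change the sign of the response.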

  3. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    PubMed

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Experimental Investigation of Hysteretic Dynamic Capillarity Effect in Unsaturated Flow

    PubMed Central

    Zhuang, Luwen; Qin, Chao‐Zhong; de Waal, Arjen

    2017-01-01

    The difference between average pressures of two immiscible fluids is commonly assumed to be the same as macroscopic capillary pressure, which is considered to be a function of saturation only. However, under transient conditions, a dependence of this pressure difference on the time rate of saturation change has been observed by many researchers. This is commonly referred to as the dynamic capillarity effect. As a first‐order approximation, the dynamic term is assumed to be linearly dependent on the time rate of change of saturation, through a material coefficient denoted by τ. In this study, a series of laboratory experiments were carried out to quantify the dynamic capillarity effect in an unsaturated sandy soil. Primary, main, and scanning drainage experiments, under both static and dynamic conditions, were performed on a sandy soil in a small cell. The value of the dynamic capillarity coefficient τ was calculated from the air‐water pressure differences and average saturation values during static and dynamic drainage experiments. We found a dependence of τ on saturation, which showed a similar trend for all drainage conditions. However, at any given saturation, the value of τ for primary drainage was larger than the value for main drainage, and that was in turn larger than the value for scanning drainage. Each data set was fit with a simple log‐linear equation, with different values of the fitting parameters. This nonuniqueness of the relationship between τ and saturation and its possible causes are discussed. PMID:29398729

  5. Experimental Investigation of Hysteretic Dynamic Capillarity Effect in Unsaturated Flow

    NASA Astrophysics Data System (ADS)

    Zhuang, Luwen; Hassanizadeh, S. Majid; Qin, Chao-Zhong; de Waal, Arjen

    2017-11-01

    The difference between average pressures of two immiscible fluids is commonly assumed to be the same as macroscopic capillary pressure, which is considered to be a function of saturation only. However, under transient conditions, a dependence of this pressure difference on the time rate of saturation change has been observed by many researchers. This is commonly referred to as the dynamic capillarity effect. As a first-order approximation, the dynamic term is assumed to be linearly dependent on the time rate of change of saturation, through a material coefficient denoted by τ. In this study, a series of laboratory experiments were carried out to quantify the dynamic capillarity effect in an unsaturated sandy soil. Primary, main, and scanning drainage experiments, under both static and dynamic conditions, were performed on a sandy soil in a small cell. The value of the dynamic capillarity coefficient τ was calculated from the air-water pressure differences and average saturation values during static and dynamic drainage experiments. We found a dependence of τ on saturation, which showed a similar trend for all drainage conditions. However, at any given saturation, the value of τ for primary drainage was larger than the value for main drainage, and that was in turn larger than the value for scanning drainage. Each data set was fit with a simple log-linear equation, with different values of the fitting parameters. This nonuniqueness of the relationship between τ and saturation and its possible causes are discussed.

  6. A simple field test for the assessment of physical fitness.

    DOT National Transportation Integrated Search

    1963-04-01

    An essential factor in air safety is the physical and mental fitness of all personnel directly involved in operations of general, commercial, and military aviation. Standardization and classification of fitness, however, have not been established to ...

  7. Inclusion of fluorophores in cyclodextrins: a closer look at the fluorometric determination of association constants by linear and nonlinear fitting procedures

    NASA Astrophysics Data System (ADS)

    Hutterer, Rudi

    2018-01-01

    The author discusses methods for the fluorometric determination of affinity constants by linear and nonlinear fitting methods. This is outlined in particular for the interaction between cyclodextrins and several anesthetic drugs including benzocaine. Special emphasis is given to the limitations of certain fits, and the impact of such studies on enzyme-substrate interactions is demonstrated. Both the experimental part and the methods of analysis are well suited for students in an advanced lab.

  8. On Least Squares Fitting Nonlinear Submodels.

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.

    Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…

  9. A study of data analysis techniques for the multi-needle Langmuir probe

    NASA Astrophysics Data System (ADS)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.

    2018-06-01

    In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique seems to be better than measurements from incoherent scatter radar and in situ instruments, m-NLPs can be longer and can be cleaned during operation to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes are deployed.
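
    A sketch of the linear-fit idea: for a cylindrical probe in the electron saturation region, orbital-motion-limited (OML) theory predicts that the square of the collected current grows roughly linearly with bias voltage, so a linear fit of I² against V across the probe set yields a slope from which electron density follows. All numbers below are assumed, and the geometry constant is left symbolic:

```python
# OML-motivated linear fit for an m-NLP-style probe set: the square of the
# collected current is approximately linear in bias voltage, and electron
# density follows from the slope via a probe-geometry constant.
V = [2.5, 4.0, 5.5, 10.0]                          # assumed probe biases (V)
slope_true, intercept_true = 4.0e-12, 1.0e-12      # assumed OML parameters
I2 = [slope_true * v + intercept_true for v in V]  # synthetic currents squared

# ordinary least squares for the slope of I^2 vs V
n = len(V)
sx, sy = sum(V), sum(I2)
sxx = sum(v * v for v in V)
sxy = sum(vi * yi for vi, yi in zip(V, I2))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
# Ne would then follow as sqrt(slope) / C_geom, with C_geom set by the
# probe geometry (not evaluated here)
```

The non-linear alternative discussed in the paper instead fits the full probe current model to all samples, which is more robust when the OML assumptions are only approximately satisfied.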

  10. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  11. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  12. Evaluation of force-velocity and power-velocity relationship of arm muscles.

    PubMed

    Sreckovic, Sreten; Cuk, Ivan; Djuric, Sasa; Nedeljkovic, Aleksandar; Mirkov, Dragan; Jaric, Slobodan

    2015-08-01

    A number of recent studies have revealed an approximately linear force-velocity (F-V) and, consequently, a parabolic power-velocity (P-V) relationship of multi-joint tasks. However, the measurement characteristics of their parameters have been neglected, particularly those regarding arm muscles, which could be a problem for using the linear F-V model in both research and routine testing. Therefore, the aims of the present study were to evaluate the strength, shape, reliability, and concurrent validity of the F-V relationship of arm muscles. Twelve healthy participants performed maximum bench press throws against loads ranging from 20 to 70 % of their maximum strength, and linear regression model was applied on the obtained range of F and V data. One-repetition maximum bench press and medicine ball throw tests were also conducted. The observed individual F-V relationships were exceptionally strong (r = 0.96-0.99; all P < 0.05) and fairly linear, although it remains unresolved whether a polynomial fit could provide even stronger relationships. The reliability of parameters obtained from the linear F-V regressions proved to be mainly high (ICC > 0.80), while their concurrent validity regarding directly measured F, P, and V ranged from high (for maximum F) to medium-to-low (for maximum P and V). The findings add to the evidence that the linear F-V and, consequently, parabolic P-V models could be used to study the mechanical properties of muscular systems, as well as to design a relatively simple, reliable, and ecologically valid routine test of the muscle ability of force, power, and velocity production.
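
    The linear F-V and parabolic P-V models can be sketched with assumed numbers: fit F = F0 - kV by least squares, then read off the force and velocity intercepts and the power peak Pmax = F0·V0/4:

```python
# assumed synthetic bench-press throw data following F = F0 - k*V
V = [0.5, 1.0, 1.5, 2.0, 2.5]            # velocities (m/s), assumed
F = [900.0 - 300.0 * v for v in V]       # F0 = 900 N, slope k = 300 N*s/m

# ordinary least squares for the linear F-V relationship
n = len(V)
sx, sy = sum(V), sum(F)
sxx = sum(v * v for v in V)
sxy = sum(vi * fi for vi, fi in zip(V, F))
k = -(n * sxy - sx * sy) / (n * sxx - sx * sx)   # magnitude of the slope
F0 = (sy + k * sx) / n                           # force intercept (max F)
V0 = F0 / k                                      # velocity intercept (max V)
Pmax = F0 * V0 / 4.0                             # peak of P = F0*V - k*V**2
```

The parabola P(V) = F0·V - k·V² peaks at V0/2, which is why maximum power is attained at half the velocity intercept under the linear model.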

  13. Effects of combined linear and nonlinear periodic training on physical fitness and competition times in finswimmers.

    PubMed

    Yu, Kyung-Hun; Suk, Min-Hwa; Kang, Shin-Woo; Shin, Yun-A

    2014-10-01

    The purpose of this study was to investigate the effect of combined linear and nonlinear periodic training on physical fitness and competition times in finswimmers. The linear resistance training model (6 days/week) and nonlinear underwater training (4 days/week) were applied to 12 finswimmers (age, 16.08± 1.44 yr; career, 3.78± 1.90 yr) for 12 weeks. Body composition measures included weight, body mass index (BMI), percent fat, and fat-free mass. Physical fitness measures included trunk flexion forward, trunk extension backward, sargent jump, 1-repetition-maximum (1 RM) squat, 1 RM dead lift, knee extension, knee flexion, trunk extension, trunk flexion, and competition times. Body composition and physical fitness were improved after the 12-week periodic training program. Weight, BMI, and percent fat were significantly decreased, and trunk flexion forward, trunk extension backward, sargent jump, 1 RM squat, 1 RM dead lift, and knee extension (right) were significantly increased. The 50- and 100-m times significantly decreased in all 12 athletes. After 12 weeks of training, all finswimmers who participated in this study improved their times in a public competition. These data indicate that combined linear and nonlinear periodic training enhanced the physical fitness and competition times in finswimmers.

  14. Individual differences in long-range time representation.

    PubMed

    Agostino, Camila S; Caetano, Marcelo S; Balci, Fuat; Claessens, Peter M E; Zana, Yossi

    2017-04-01

    On the basis of experimental data, long-range time representation has been proposed to follow a highly compressed power function, which has been hypothesized to explain the time inconsistency found in financial discount rate preferences. The aim of this study was to evaluate how well linear and power function models explain empirical data from individual participants tested in different procedural settings. The line paradigm was used in five different procedural variations with 35 adult participants. Data aggregated over the participants showed that fitted linear functions explained more than 98% of the variance in all procedures. A linear regression fit also outperformed a power model fit for the aggregated data. An individual-participant-based analysis showed better fits of a linear model to the data of 14 participants; better fits of a power function with an exponent β > 1 to the data of 12 participants; and better fits of a power function with β < 1 to the data of the remaining nine participants. Of the 35 volunteers, the null hypothesis β = 1 was rejected for 20. The dispersion of the individual β values was approximated well by a normal distribution. These results suggest that, on average, humans perceive long-range time intervals not in a highly compressed, biased manner, but rather in a linear pattern. However, individuals differ considerably in their subjective time scales. This contribution sheds new light on the average and individual psychophysical functions of long-range time representation, and suggests that any attribution of deviation from exponential discount rates in intertemporal choice to the compressed nature of subjective time must entail the characterization of subjective time on an individual-participant basis.
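
    The per-participant model comparison can be sketched as follows: fit a linear model directly and a power model in log-log space, then compare residual sums of squares. The data below are synthetic, for one hypothetical compressive participant with β = 0.8:

```python
import math

def ols(x, y):
    # ordinary least squares; returns (intercept, slope)
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - slope * sx) / n, slope

def sse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat))

x = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]      # assumed objective durations
y = [2.0 * xi ** 0.8 for xi in x]          # compressive responses (beta < 1)

a_lin, b_lin = ols(x, y)                   # linear model y = a + b*x
la, beta = ols([math.log(v) for v in x],   # power model y = c * x**beta,
               [math.log(v) for v in y])   # fit as a line in log-log space
c = math.exp(la)

lin_err = sse(y, [a_lin + b_lin * v for v in x])
pow_err = sse(y, [c * v ** beta for v in x])
better = "power" if pow_err < lin_err else "linear"
```

Running this per participant, and testing whether the fitted β differs from 1, mirrors the classification into linear, compressive (β < 1), and expansive (β > 1) observers described in the abstract.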

  15. [Equilibrium sorption isotherm for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum].

    PubMed

    Yan, Chang-zhou; Zeng, A-yan; Jin, Xiang-can; Wang, Sheng-rui; Xu, Qiu-jin; Zhao, Jing-zhu

    2006-06-01

    Equilibrium sorption isotherms for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum were studied. Both linear and non-linear fitting were applied to describe the sorption isotherms, and their applicability was analyzed and compared. The results were: (1) The applicability of fitted equations cannot be compared solely by R² and χ² when equilibrium sorption models are used to quantify and contrast the performance of different biosorbents. Both linear and non-linear fitting can be applied to the different fitting equations in order to obtain credible results, and the equation that best accords with the experimental data can then be selected; (2) In this experiment, the Langmuir model is more suitable for describing the sorption isotherm of Cu2+ biosorption by H. verticillata and M. spicatum, and there is a greater difference between the experimental data and the values calculated with the Freundlich model, especially its linear form; (3) The content of crude cellulose in dry matter is one of the main factors affecting the biosorption capacity of a submerged aquatic plant, and the -OH and -CONH2 groups of polysaccharides on the cell wall may be the active centers of biosorption; (4) According to the coefficient qm of the linear form of the Langmuir model, the maximum sorption capacity for Cu2+ was found to be 21.55 mg/g and 10.80 mg/g for H. verticillata and M. spicatum, respectively. The maximum specific surface area of H. verticillata for binding Cu2+ was 3.23 m²/g, and it was 1.62 m²/g for M. spicatum.

  16. Correlations for determining thermodynamic properties of hydrogen-helium gas mixtures at temperatures from 7,000 to 35,000 K

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Gnoffo, P. A.; Graves, R. A., Jr.

    1976-01-01

    Simple relations for determining the enthalpy and temperature of hydrogen-helium gas mixtures were developed for hydrogen volumetric compositions from 1.0 to 0.7. These relations are expressed as a function of pressure and density and are valid for a range of temperatures from 7,000 to 35,000 K and pressures from 0.10 to 3.14 MPa. The proportionality constant and exponents in the correlation equations were determined for each gas composition by applying a linear least squares curve fit to a large number of thermodynamic calculations obtained from a detailed computer code. Although these simple relations yielded thermodynamic properties suitable for many engineering applications, their accuracy was improved significantly by evaluating the proportionality constants at postshock conditions and correlating these values as a function of the gas composition and the product of freestream velocity and shock angle. The resulting equations for the proportionality constants in terms of velocity and gas composition and the corresponding simple relations for enthalpy and temperature were incorporated into a flow field computational scheme. Agreement was good between the thermodynamic properties determined from these relations and those obtained from the detailed computer code. Thus, an appreciable savings in computer time was realized with no significant loss in accuracy.

  17. Quinolones and tetracyclines in aquaculture fish by a simple and rapid LC-MS/MS method.

    PubMed

    Guidi, Letícia Rocha; Santos, Flávio Alves; Ribeiro, Ana Cláudia S R; Fernandes, Christian; Silva, Luiza H M; Gloria, Maria Beatriz A

    2018-04-15

    The determination of antimicrobials in aquaculture fish is important to ensure food safety. Therefore, simple and fast multiresidue methods are needed. A liquid chromatography tandem mass spectrometry method was developed and validated for the quantification of 14 antimicrobials (quinolones and tetracyclines) in fish. Antimicrobials were extracted with trichloroacetic acid and chromatographic separation was achieved with a C18 column and gradient elution (water and acetonitrile). The method was validated (Decision 2002/657/EC) and it was fit for the purpose. Linearities were established in the matrix and the coefficients of determination were ≥0.98. The method was applied to Nile tilapia and rainbow trout (n = 29) and 14% of them contained enrofloxacin at levels above the limit of quantification (12.53-19.01 µg kg⁻¹) but below the maximum residue limit (100 µg kg⁻¹). Even though prohibited in Brazil and other countries, this antimicrobial reached fish. Measures are needed to ascertain the source of this compound to warrant human safety. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. HOLEGAGE 1.0 - Strain-Gauge Drilling Analysis Program

    NASA Technical Reports Server (NTRS)

    Hampton, Roy V.

    1992-01-01

    Interior stresses inferred from changes in surface strains as hole is drilled. Computes stresses using strain data from each drilled-hole depth layer. Planar stresses computed in three ways: least-squares fit for linear variation with depth, integral method to give incremental stress data for each layer, and/or linear fit to integral data. Written in FORTRAN 77.

  19. Carbon dioxide stripping in aquaculture -- part III: model verification

    USGS Publications Warehouse

    Colt, John; Watten, Barnaby; Pfeiffer, Tim

    2012-01-01

    Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas phase enrichment remains to be determined. The one-parameter linear regression model allowed C*CO2 to vary over the test, but it did not result in a better fit to the experimental data when compared to the ASCE or fixed C*CO2 assumptions.

  20. Synchronous fluorescence spectroscopic study of solvatochromic curcumin dye

    NASA Astrophysics Data System (ADS)

    Patra, Digambara; Barakat, Christelle

    2011-09-01

    Curcumin, the main yellow bioactive component of turmeric, has recently attracted attention from chemists due to its wide range of potential biological applications as an antioxidant, an anti-inflammatory, and an anti-carcinogenic agent. The molecule fluoresces weakly and is poorly soluble in water. In this detailed study of curcumin in thirteen different solvents, both the absorption and fluorescence spectra of curcumin were found to be broad; however, a narrower and simpler synchronous fluorescence spectrum of curcumin was obtained at Δλ = 10-20 nm. The Lippert-Mataga plot of curcumin in different solvents showed two sets of linearity, consistent with the plot of Stokes' shift vs. ET(30). When the Stokes' shift in wavenumber scale was replaced by the synchronous fluorescence maximum in nanometer scale, the solvent polarity dependency measured by λSFSmax vs. the Lippert-Mataga plot or ET(30) values showed trends similar to those measured via Stokes' shift for protic and aprotic solvents. A better linear correlation was found for λSFSmax vs. the π* scale of solvent polarity than for λabsmax, λemmax, or Stokes' shift measurements. Measuring the Stokes' shift requires both the absorption/excitation and the emission (fluorescence) spectra to compute the shift in wavenumber scale, whereas a single SFS scan provides information about solvent polarity quickly and simply, without this calculation. Curcumin decay properties in all the solvents could be fitted well to a double-exponential decay function.

  1. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiochromic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
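    The OLS-versus-WLS contrast at the heart of this record can be sketched with synthetic heteroscedastic calibration data. The dose-OD response, noise model, and weights below are illustrative assumptions, not the paper's measured film characteristics:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dose = np.linspace(0, 400, 41)            # cGy, the controlled variable
    od_true = 0.002 * dose                    # illustrative linear dose-OD response
    sigma = 0.002 + 0.0001 * dose             # heteroscedastic noise in OD
    od = od_true + rng.normal(0.0, sigma)

    # OLS inverse regression: regress dose directly on OD, ignoring the
    # fact that the OD noise grows with dose (heteroscedasticity).
    b_ols, a_ols = np.polyfit(od, dose, 1)

    # WLS inverse prediction: fit OD as a function of dose with weights
    # 1/sigma (np.polyfit applies w to the unsquared residuals), then
    # invert the fitted calibration curve to predict dose from OD.
    b_wls, a_wls = np.polyfit(dose, od, 1, w=1.0 / sigma)
    dose_pred = (od - a_wls) / b_wls
    ```

    The weighted fit downweights the noisy high-dose readings, which is what removes the prediction bias the unweighted inverse regression suffers from.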

  2. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and inflation of their variances. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed instead. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, hence making a comparison of the two methods on an inflation model of Turkey. The considered methods have been compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R² value, and the Robust Component Selection (RCS) statistic.

  3. Five ab initio potential energy and dipole moment surfaces for hydrated NaCl and NaF. I. Two-body interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yimin, E-mail: yimin.wang@emory.edu; Bowman, Joel M., E-mail: jmbowma@emory.edu; Kamarchik, Eugene, E-mail: eugene.kamarchik@gmail.com

    2016-03-21

    We report full-dimensional, ab initio-based potentials and dipole moment surfaces for NaCl, NaF, Na⁺H₂O, F⁻H₂O, and Cl⁻H₂O. The NaCl and NaF potentials are diabatic ones that dissociate to ions. These are obtained using spline fits to CCSD(T)/aug-cc-pV5Z energies. In addition, non-linear least-squares fits using the Born-Mayer-Huggins potential are presented, providing accurate parameters based strictly on the current ab initio energies. The long-range behavior of the NaCl and NaF potentials is shown to go, as expected, accurately to the point-charge Coulomb interaction. The three ion-H₂O potentials are permutationally invariant fits to roughly 20 000 coupled cluster CCSD(T) energies (awCVTZ basis for Na⁺ and aVTZ basis for Cl⁻ and F⁻), over a large range of distances and H₂O intramolecular configurations. These potentials are switched accurately in the long range to the analytical ion-dipole interactions, to improve computational efficiency. Dipole moment surfaces are fits to MP2 data; for the ion-ion cases, these are well described in the intermediate and long range by the simple point-charge expression. The performance of these new fits is examined by direct comparison to additional ab initio energies and dipole moments along various cuts. Equilibrium structures, harmonic frequencies, and electronic dissociation energies are also reported and compared to direct ab initio results. These indicate the high fidelity of the new PESs.

  4. A novel polarization demodulation method using polarization beam splitter (PBS) for dynamic pressure sensor

    NASA Astrophysics Data System (ADS)

    Su, Yang; Zhou, Hua; Wang, Yiming; Shen, Huiping

    2018-03-01

    In this paper we propose a new design to demodulate polarization properties induced by pressure using a PBS (polarization beam splitter), which differs from traditional polarimeters based on the 4-detector polarization measurement approach. The theoretical model is established with the Mueller matrix method, and experimental results confirm the validity of our analysis. A proportional, linear relationship is found between output signal and applied pressure. A maximum sensitivity of 0.092182 mV/mV is experimentally achieved and the frequency response exhibits a <0.14 dB variation across the measurement bandwidth. The sensitivity dependence on incident SOP (state of polarization) is investigated. The simple and all-fiber configuration, low cost, and high-speed potential make it promising for fiber-based dynamic pressure sensing.

  5. Computational prediction of the pKas of small peptides through Conceptual DFT descriptors

    NASA Astrophysics Data System (ADS)

    Frau, Juan; Hernández-Haro, Noemí; Glossman-Mitnik, Daniel

    2017-03-01

    The experimental pKa values of a group of simple amines have been plotted against several Conceptual DFT descriptors calculated by means of different density functionals, basis sets, and solvation schemes. It was found that the best fits are those that relate the pKa of the amines to the global hardness η through the MN12SX density functional in connection with the Def2TZVP basis set and the SMD solvation model, using water as the solvent. The parameterized equation resulting from the linear regression analysis has then been used to predict the pKa of small peptides of interest in the study of diabetes and Alzheimer's disease. The accuracy of the results is relatively good, with a MAD of 0.36 pKa units.
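    The regression step in this record, a linear relation pKa = m·η + b fitted over a training set and then used for prediction, can be sketched as follows. The (η, pKa) pairs below are invented stand-ins; the real descriptor values come from the MN12SX/Def2TZVP/SMD calculations described above:

    ```python
    import numpy as np

    # Hypothetical (global hardness, pKa) pairs standing in for the amine
    # training set; numbers are illustrative only.
    eta = np.array([0.210, 0.225, 0.238, 0.251, 0.263, 0.274])
    pka = np.array([10.6, 10.1, 9.7, 9.2, 8.8, 8.4])

    # Linear regression pKa = m*eta + b, then reuse the line for prediction.
    m, b = np.polyfit(eta, pka, 1)
    predict = lambda eta_new: m * eta_new + b

    # Mean absolute deviation of the fit over the training set.
    mad = float(np.mean(np.abs(predict(eta) - pka)))
    ```

    The fitted line would then be applied to the η values computed for the peptides of interest, exactly as a calibration curve.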

  6. Modeling the Geographic Consequence and Pattern of Dengue Fever Transmission in Thailand.

    PubMed

    Bekoe, Collins; Pansombut, Tatdow; Riyapan, Pakwan; Kakchapati, Sampurna; Phon-On, Aniruth

    2017-05-04

    Dengue fever is one of the infectious diseases that is still a public health problem in Thailand. This study considers in detail the geographic consequences and seasonal patterns of dengue fever transmission among the 76 provinces of Thailand from 2003 to 2015. A cross-sectional study. The data for the study were from the Department of Disease Control under the Bureau of Epidemiology, Thailand. The effects of quarter and location on the transmission of dengue were modeled using an alternative additive log-linear model. The model fitted well, as illustrated by the residual plots. The model also showed that dengue fever is high in the second quarter of every year, from May to August. There was evidence of an increase in the trend of dengue annually from 2003 to 2015. There was a difference in the distribution of dengue fever within and between provinces. The areas of high risk were the central and southern regions of Thailand. The log-linear model provided a simple medium for modeling dengue fever transmission. The results are very important for understanding the geographic distribution of dengue fever patterns.

  7. Techniques for estimating selected streamflow characteristics of rural unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Whitehead, Matthew T.

    2002-01-01

    This report provides equations for estimating mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and streamflow quartiles (the 25th-, 50th-, and 75th-percentile streamflows) as a function of selected basin characteristics for rural, unregulated streams in Ohio. The equations were developed from streamflow statistics and basin-characteristics data for as many as 219 active or discontinued streamflow-gaging stations on rural, unregulated streams in Ohio with 10 or more years of homogenous daily streamflow record. Streamflow statistics and basin-characteristics data for the 219 stations are presented in this report. Simple equations (based on drainage area only) and best-fit equations (based on drainage area and at least two other basin characteristics) were developed by means of ordinary least-squares regression techniques. Application of the best-fit equations generally involves quantification of basin characteristics that require or are facilitated by use of a geographic information system. In contrast, the simple equations can be used with information that can be obtained without use of a geographic information system; however, the simple equations have larger prediction errors than the best-fit equations and exhibit geographic biases for most streamflow statistics. The best-fit equations should be used instead of the simple equations whenever possible.

  8. Pseudo-second order models for the adsorption of safranin onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth

    2007-04-02

    Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second order models of Ho, of Sobkowsk and Czerwinski, of Blanchard et al., and of Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and the Ritchie pseudo-second order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho had similar ideas on the pseudo-second order model but with different assumptions. The best fit of experimental data to Ho's pseudo-second order expression by linear and non-linear regression methods showed that Ho's pseudo-second order model was a better kinetic expression when compared to other pseudo-second order kinetic expressions.
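    The linear-versus-non-linear fitting contrast for Ho's pseudo-second-order model, q_t = k·qe²·t/(1 + k·qe·t), can be sketched with synthetic data (parameter values and noise level are illustrative, not the safranin measurements). The linearized form is t/q = 1/(k·qe²) + t/qe, fitted by ordinary least squares; the non-linear route fits the rate equation directly:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ho_pso(t, qe, k):
        """Ho's pseudo-second-order model: q_t = k*qe**2*t / (1 + k*qe*t)."""
        return k * qe**2 * t / (1.0 + k * qe * t)

    rng = np.random.default_rng(2)
    t = np.linspace(5, 180, 20)                 # contact time, min
    qe_true, k_true = 40.0, 0.002               # illustrative mg/g, g/(mg*min)
    q = ho_pso(t, qe_true, k_true) * (1 + rng.normal(0, 0.02, t.size))

    # Linear method: t/q = 1/(k*qe**2) + t/qe, so slope = 1/qe and
    # intercept = 1/(k*qe**2).
    slope, intercept = np.polyfit(t, t / q, 1)
    qe_lin, k_lin = 1.0 / slope, slope**2 / intercept

    # Non-linear method: fit the rate equation directly.
    (qe_nl, k_nl), _ = curve_fit(ho_pso, t, q, p0=[30.0, 0.001])
    ```

    The linearization distorts the error structure of the data (each transformed point carries a different effective weight), which is the usual reason the non-linear fit yields better parameter estimates.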

  9. Stress state in turbopump bearing induced by shrink fitting

    NASA Technical Reports Server (NTRS)

    Sims, P.; Zee, R.

    1991-01-01

    The stress generated by shrink fitting in bearing-like geometries is studied. The feasibility of using strain gages to determine the strain induced by the shrink fitting process is demonstrated. Results from a ring with a uniform cross section confirm the validity of simple stress mechanics calculations for determining the stress state induced in this geometry by shrink fitting.

  10. Simple taper: Taper equations for the field forester

    Treesearch

    David R. Larsen

    2017-01-01

    "Simple taper" is a set of linear equations based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...

  11. Using Simple Linear Regression to Assess the Success of the Montreal Protocol in Reducing Atmospheric Chlorofluorocarbons

    ERIC Educational Resources Information Center

    Nelson, Dean

    2009-01-01

    Following the Guidelines for Assessment and Instruction in Statistics Education (GAISE) recommendation to use real data, an example is presented in which simple linear regression is used to evaluate the effect of the Montreal Protocol on atmospheric concentration of chlorofluorocarbons. This simple set of data, obtained from a public archive, can…

  12. The CAHPER Fitness-Performance Test Manual: For Boys and Girls 7 to 17 Years of Age.

    ERIC Educational Resources Information Center

    Canadian Association for Health, Physical Education, and Recreation, Ottawa (Ontario).

    Outlined in this manual is Canada's first National Test of Physical Fitness. Each test item is a valid and reliable measure of fitness, simple enough for any teacher not trained in fitness measurement to administer. Each of the six tests measures a different aspect of fitness: (1) the one-minute speed sit-up tests the strength and endurance of the…

  13. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  14. Analytical Studies on the Synchronization of a Network of Linearly-Coupled Simple Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Sivaganesh, G.; Arulgnanam, A.; Seethalakshmi, A. N.; Selvaraj, S.

    2018-05-01

    We present explicit generalized analytical solutions for a network of linearly-coupled simple chaotic systems. Analytical solutions are obtained for the normalized state equations of a network of linearly-coupled systems driven by a common chaotic drive system. The various hidden synchronization regions, such as complete, phase, and phase-lag synchronization, are identified in two-parameter bifurcation diagrams using the analytical results. The synchronization dynamics and their stability are studied using phase portraits and the master stability function, respectively. Further, experimental results for linearly-coupled simple chaotic systems are presented to confirm the analytical results. The synchronization dynamics of a network of chaotic systems studied analytically is reported for the first time.

  15. VRF ("Visual RobFit") — nuclear spectral analysis with non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Lasche, George; Coldwell, Robert; Metzger, Robert

    2017-09-01

    A new application (known as "VRF", or "Visual RobFit") for analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak search, the VRF analysis method forms a spectrum-wide shape for each nuclide at each of many automated iterations. At each iteration it also adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing, until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks, masked by larger overlapping peaks, that would otherwise be missed. The application and method are briefly described and two examples are presented.
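    One aspect of this approach, adjusting nuclide activities so that a sum of full-spectrum shapes reproduces the measured spectrum, can be illustrated in a heavily simplified form as a non-negative least-squares problem. The real application also iterates calibration, attenuation, peak-width, efficiency, and summing parameters; the templates and numbers below are toy values:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    channels = np.arange(512)

    def peak(center, width=3.0):
        """Gaussian full-energy peak on the channel axis (toy nuclide line)."""
        return np.exp(-0.5 * ((channels - center) / width) ** 2)

    # Toy full-spectrum templates for three hypothetical nuclides; real
    # templates would include continuum, escape peaks, summing, etc.
    templates = np.column_stack([
        peak(100) + 0.4 * peak(250),
        peak(105) + 0.7 * peak(380),   # overlaps the first nuclide's main peak
        peak(250) + 0.2 * peak(460),
    ])
    act_true = np.array([500.0, 120.0, 80.0])
    spectrum = templates @ act_true + rng.normal(0, 1.0, channels.size)

    # Solve for all activities simultaneously (non-negative least squares),
    # so overlapping shapes are disentangled jointly rather than peak by peak.
    act_fit, resid = nnls(templates, spectrum)
    ```

    Because all shapes are fitted at once, the overlapping 100/105-channel peaks are separated by the joint solve, which is the qualitative advantage the abstract claims over peak-search-first methods.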

  16. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175

  17. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  18. Projectile penetration into ballistic gelatin.

    PubMed

    Swain, M V; Kieser, D C; Shah, S; Kieser, J A

    2014-01-01

    Ballistic gelatin is frequently used as a model for soft biological tissues that experience projectile impact. In this paper we investigate the response of a number of gelatin materials to the penetration of spherical steel projectiles (7 to 11 mm diameter) with a range of lower impacting velocities (<120 m/s). The results of sphere penetration depth versus projectile velocity are found to be linear for all systems above a certain threshold velocity required for initiating penetration. The data for a specific material impacted with different diameter spheres were able to be condensed to a single curve when the penetration depth was normalised by the projectile diameter. When the results are compared with a number of predictive relationships available in the literature, it is found that over the range of projectiles and compositions used, the results fit a simple relationship that takes into account the projectile diameter, the threshold velocity for penetration into the gelatin and a value of the shear modulus of the gelatin estimated from the threshold velocity for penetration. The normalised depth is found to fit the elastic Froude number when this is modified to allow for a threshold impact velocity. The normalised penetration data are found to best fit this modified elastic Froude number with a slope of 1/2 instead of 1/3 as suggested by Akers and Belmonte (2006). Possible explanations for this difference are discussed. © 2013 Published by Elsevier Ltd.

  19. Correlation of Respirator Fit Measured on Human Subjects and a Static Advanced Headform

    PubMed Central

    Bergman, Michael S.; He, Xinjian; Joseph, Michael E.; Zhuang, Ziqing; Heimbuch, Brian K.; Shaffer, Ronald E.; Choe, Melanie; Wander, Joseph D.

    2015-01-01

    This study assessed the correlation of N95 filtering face-piece respirator (FFR) fit between a Static Advanced Headform (StAH) and 10 human test subjects. Quantitative fit evaluations were performed on test subjects who made three visits to the laboratory. On each visit, one fit evaluation was performed on eight different FFRs of various model/size variations. Additionally, subject breathing patterns were recorded. Each fit evaluation comprised three two-minute exercises: “Normal Breathing,” “Deep Breathing,” and again “Normal Breathing.” The overall test fit factors (FF) for human tests were recorded. The same respirator samples were later mounted on the StAH and the overall test manikin fit factors (MFF) were assessed utilizing the recorded human breathing patterns. Linear regression was performed on the mean log10-transformed FF and MFF values to assess the relationship between the values obtained from humans and the StAH. This is the first study to report a positive correlation of respirator fit between a headform and test subjects. The linear regression by respirator resulted in R² = 0.95, indicating a strong linear correlation between FF and MFF. For all respirators the geometric mean (GM) FF values were consistently higher than those of the GM MFF. For 50% of respirators, GM FF and GM MFF values were significantly different between humans and the StAH. For data grouped by subject/respirator combinations, the linear regression resulted in R² = 0.49. A weaker correlation (R² = 0.11) was found using only data paired by subject/respirator combination where both the test subject and StAH had passed a real-time leak check before performing the fit evaluation. For six respirators, the difference in passing rates between the StAH and humans was < 20%, while two respirators showed a difference of 29% and 43%. For data by test subject, GM FF and GM MFF values were significantly different for 40% of the subjects. Overall, the advanced headform system has potential for assessing fit for some N95 FFR model/sizes. PMID:25265037

  20. Elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation.

    PubMed

    Li, Yan; Deng, Jianxin; Zhou, Jun; Li, Xueen

    2016-11-01

    Corresponding to pre-puncture and post-puncture insertion, the elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation are investigated, respectively. Elastic mechanical properties in pre-puncture are investigated through pre-puncture needle insertion experiments using whole porcine brains. A linear polynomial and a second order polynomial are fitted to the average insertion force in pre-puncture, and the Young's modulus in pre-puncture is calculated from the slopes of the two fits. Viscoelastic mechanical properties of brain tissues in post-puncture insertion are investigated through indentation stress relaxation tests for six regions of interest along a planned trajectory. A linear viscoelastic model with a Prony series approximation is fitted to the average load trace of each region using the Boltzmann hereditary integral. Shear relaxation moduli of each region are calculated using the parameters of the Prony series approximation. The results show that, in pre-puncture insertion, needle force increases almost linearly with needle displacement. Both fits match the average insertion force well. The Young's moduli calculated from the slopes of the two fits are reliable for modeling the linear or nonlinear instantaneous elastic responses of brain tissues, respectively. In post-puncture insertion, both region and time significantly affect the viscoelastic behaviors. The six tested regions can be classified into three categories in stiffness. Shear relaxation moduli decay dramatically on short time scales, but equilibrium is never truly achieved. The regional and temporal viscoelastic mechanical properties in post-puncture insertion are valuable for guiding probe insertion into each region on the implanting trajectory.

  1. Application of a first impression triage in the Japan railway west disaster.

    PubMed

    Hashimoto, Atsunori; Ueda, Takahiro; Kuboyama, Kazutoshi; Yamada, Taihei; Terashima, Mariko; Miyawaki, Atsushi; Nakao, Atsunori; Kotani, Joji

    2013-01-01

    On April 25, 2005, a Japanese express train derailed into a building, resulting in 107 deaths and 549 injuries. We used "First Impression Triage (FIT)", our new triage strategy based on general inspection and palpation without counting pulse/respiratory rates, and determined the feasibility of FIT in the chaotic situation of treating a large number of injured people in a brief time period. The subjects included 39 patients who required hospitalization among 113 victims transferred to our hospital. After initial assessment with FIT by an emergency physician, patients were retrospectively reassessed with the preexisting modified Simple Triage and Rapid Treatment (START) methodology, based on Injury Severity Score, probability of survival, and ICU stay. FIT resulted in a shorter waiting time for triage. FIT designations comprised 11 red (immediate) and 28 yellow (delayed), while START assigned six to red and 32 to yellow. There were no statistical differences between FIT and START in the accuracy rate calculated by means of probability of survival and ICU stay. The overall validity and reliability of FIT determined by outcome assessment were similar to those of START. FIT would be a simple and accurate technique to quickly triage a large number of patients.

  2. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program, specifically for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression performed best of all the models considered because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. Overall, a simple linear regression performed better than the more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
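    The baseline estimator in this comparison, a simple linear regression of an aggregated annual index on time, can be sketched with a synthetic series (the trend, noise level, and years are illustrative, not the Upper Mississippi data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(2003, 2016)        # 13 annual surveys
    x = years - years[0]
    true_trend = -0.8                    # illustrative decline per year

    # Aggregated data: each year's value is already a site average, so
    # sampling and process variation are confounded in one noise term.
    abundance = 50.0 + true_trend * x + rng.normal(0.0, 1.5, x.size)

    # Simple linear regression of the aggregated index on time,
    # with the usual standard error of the slope.
    trend, intercept = np.polyfit(x, abundance, 1)
    resid = abundance - (intercept + trend * x)
    se = float(np.sqrt(resid @ resid / (x.size - 2)
                       / np.sum((x - x.mean()) ** 2)))
    ```

    The state-space alternatives add latent-state machinery to split the two variance sources; the point of the abstract is that, on aggregated data, this plain regression recovered the trend more reliably.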

  3. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals

    PubMed Central

    Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A.; Bonomi, Alberto G.; Moore, Jonathan P.; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45 second self-test, which can be conducted anywhere. A criterion validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized by height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery, normalized by height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹ and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included. PMID:27959935

  4. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals.

    PubMed

    Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A; Bonomi, Alberto G; Moore, Jonathan P; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion-validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized by height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery, normalized by height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹ and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included.
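    The recovery-phase feature this abstract describes (the intercept of a linear fit to recovery heart rate, normalized by height squared) can be sketched in a few lines of numpy. The HR samples and the 1.75 m height below are invented purely for illustration:

```python
import numpy as np

def recovery_intercept_feature(t, hr, height_m):
    """Intercept of a linear fit to recovery-phase heart rate,
    normalized by height squared (illustrative sketch only)."""
    slope, intercept = np.polyfit(t, hr, 1)
    return intercept / height_m ** 2

# hypothetical recovery-phase samples: time after exercise (s), HR (bpm)
t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
hr = np.array([160, 152, 145, 139, 134, 130, 127], dtype=float)
feature = recovery_intercept_feature(t, hr, 1.75)
```

    The intercept estimates HR at the start of recovery from the whole recovery trend, which is more robust than taking the single first sample.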

  5. Definitions: Health, Fitness, and Physical Activity.

    ERIC Educational Resources Information Center

    Corbin, Charles B.; Pangrazi, Robert P.; Franks, B. Don

    2000-01-01

    This paper defines a variety of fitness components, using a simple multidimensional hierarchical model that is consistent with recent definitions in the literature. It groups the definitions into two broad categories: product and process. Products refer to states of being such as physical fitness, health, and wellness. They are commonly referred…

  6. Health-Related Fitness and Young Children.

    ERIC Educational Resources Information Center

    Gabbard, Carl; LeBlanc, Betty

    Because research indicates that American youth have become fatter since the 1960s, the development of fitness among young children should not be left to chance. Simple games, rhythms, and dance are not sufficient to ensure fitness, for, during regular free play, children very seldom experience physical activity of enough intensity…

  7. Understanding the relationship between duration of untreated psychosis and outcomes: A statistical perspective.

    PubMed

    Hannigan, Ailish; Bargary, Norma; Kinsella, Anthony; Clarke, Mary

    2017-06-14

    Although the relationships between duration of untreated psychosis (DUP) and outcomes are often assumed to be linear, few studies have explored the functional form of these relationships. The aim of this study is to demonstrate the potential of recent advances in curve fitting approaches (splines) to explore the form of the relationship between DUP and global assessment of functioning (GAF). Curve fitting approaches were used in models to predict change in GAF at long-term follow-up using DUP for a sample of 83 individuals with schizophrenia. The form of the relationship between DUP and GAF was non-linear. Accounting for non-linearity increased the percentage of variance in GAF explained by the model, resulting in better prediction and understanding of the relationship. The relationship between DUP and outcomes may be complex and model fit may be improved by accounting for the form of the relationship. This should be routinely assessed and new statistical approaches for non-linear relationships exploited, if appropriate. © 2017 John Wiley & Sons Australia, Ltd.
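    The abstract's core point, that allowing a non-linear form can raise explained variance over a straight line, can be illustrated with synthetic data. The sketch below uses a cubic polynomial as a crude stand-in for the paper's spline approach; the DUP/GAF numbers are entirely made up:

```python
import numpy as np

rng = np.random.default_rng(0)
dup = rng.uniform(1, 60, 83)                          # months untreated (synthetic)
gaf = 70 - 15 * np.log(dup) + rng.normal(0, 4, 83)    # non-linear decline plus noise

def r_squared(y, yhat):
    """Fraction of variance explained by fitted values yhat."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# straight-line fit vs a flexible (cubic) fit of the same data
r2_linear = r_squared(gaf, np.polyval(np.polyfit(dup, gaf, 1), dup))
r2_cubic = r_squared(gaf, np.polyval(np.polyfit(dup, gaf, 3), dup))
```

    When the true relationship is curved, the flexible fit recovers noticeably more of the variance, which is exactly the pattern the study reports for DUP and GAF.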

  8. On the transmit field inhomogeneity correction of relaxation‐compensated amide and NOE CEST effects at 7 T

    PubMed Central

    Windschuh, Johannes; Siero, Jeroen C.W.; Zaiss, Moritz; Luijten, Peter R.; Klomp, Dennis W.J.; Hoogduin, Hans

    2017-01-01

    High field MRI is beneficial for chemical exchange saturation transfer (CEST) in terms of high SNR, CNR, and chemical shift dispersion. These advantages may, however, be counter-balanced by the increased transmit field inhomogeneity normally associated with high field MRI. The relatively high sensitivity of the CEST contrast to B1 inhomogeneity necessitates the development of correction methods, which is essential for the clinical translation of CEST. In this work, two B1 correction algorithms for the most studied CEST effects, amide-CEST and nuclear Overhauser enhancement (NOE), were analyzed. Both methods rely on fitting the multi-pool Bloch-McConnell equations to the densely sampled CEST spectra. In the first method, the correction is achieved by using a linear B1 correction of the calculated amide and NOE CEST effects. The second method uses the Bloch-McConnell fit parameters and the desired B1 amplitude to recalculate the CEST spectra, followed by the calculation of B1-corrected amide and NOE CEST effects. Both algorithms were systematically studied in Bloch-McConnell equations and in human data, and compared with the earlier proposed ideal interpolation-based B1 correction method. In the low B1 regime of 0.15–0.50 μT (average power), a simple linear model was sufficient to mitigate B1 inhomogeneity effects on a par with the interpolation B1 correction, as demonstrated by a reduced correlation of the CEST contrast with B1 in both the simulations and the experiments. PMID:28111824
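    In the low-B1 regime the abstract describes, a linear correction amounts to fitting the CEST effect against B1 and shifting each measurement from the delivered to the intended amplitude. The B1 levels and effect sizes below are invented for illustration and do not come from the paper:

```python
import numpy as np

# hypothetical amide-CEST effect measured at three B1 levels in one voxel
b1 = np.array([0.15, 0.30, 0.50])          # average-power B1 in uT
effect = np.array([0.010, 0.021, 0.035])   # relaxation-compensated CEST effect

# low-B1 regime: effect is approximately linear in B1
slope, intercept = np.polyfit(b1, effect, 1)

def b1_corrected(measured, actual_b1, nominal_b1=0.50):
    """Shift a measurement from the delivered B1 to the intended B1."""
    return measured + slope * (nominal_b1 - actual_b1)

corrected = b1_corrected(0.028, actual_b1=0.40)
```

    The same idea, applied voxel-wise with a measured B1 map, is what makes the correction simple compared with refitting the full Bloch-McConnell model.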

  9. The use of nomograms in LDR-HDR prostate brachytherapy.

    PubMed

    Pujades, Ma Carmen; Camacho, Cristina; Perez-Calatayud, Jose; Richart, José; Gimeno, Jose; Lliso, Françoise; Carmona, Vicente; Ballester, Facundo; Crispín, Vicente; Rodríguez, Silvia; Tormo, Alejandro

    2011-09-01

    The common use of nomograms in Low Dose Rate (LDR) permanent prostate brachytherapy (BT) allows estimation of the number of seeds required for an implant. Independent dosimetry verification is recommended for each clinical dosimetry in BT. Nomograms can also be useful for dose calculation quality assurance, and they can be adapted to High Dose Rate (HDR). This work establishes nomograms for LDR and HDR prostate-BT implants, applied to three institutions that use different implant techniques. Patients treated from 2010 through April 2011 were considered for this study. This sample was chosen to be representative of the latest implant techniques and to ensure consistency in the planning. A sufficient number of cases was included for each BT modality, prescription dose, and institutional workflow. The specific nomograms were built using the correlation between the prostate volume and characteristic parameters of each BT modality, such as the source Air Kerma Strength, the number of implanted seeds in LDR, or the total radiation time in HDR. For each institution and BT modality, nomograms normalized to the prescribed dose were obtained and fitted to a linear function. The fit parameters show good agreement between the data and the linear model. It should be noted that these linear function parameters differ between institutions, indicating that each centre should construct its own nomograms. Nomograms for LDR and HDR prostate brachytherapy are simple quality assurance tools, specific to each institution. Nevertheless, their use should be complementary to the necessary independent verification.

  10. The use of nomograms in LDR-HDR prostate brachytherapy

    PubMed Central

    Camacho, Cristina; Perez-Calatayud, Jose; Richart, José; Gimeno, Jose; Lliso, Françoise; Carmona, Vicente; Ballester, Facundo; Crispín, Vicente; Rodríguez, Silvia; Tormo, Alejandro

    2011-01-01

    Purpose The common use of nomograms in Low Dose Rate (LDR) permanent prostate brachytherapy (BT) allows estimation of the number of seeds required for an implant. Independent dosimetry verification is recommended for each clinical dosimetry in BT. Nomograms can also be useful for dose calculation quality assurance, and they can be adapted to High Dose Rate (HDR). This work establishes nomograms for LDR and HDR prostate-BT implants, applied to three institutions that use different implant techniques. Material and methods Patients treated from 2010 through April 2011 were considered for this study. This sample was chosen to be representative of the latest implant techniques and to ensure consistency in the planning. A sufficient number of cases was included for each BT modality, prescription dose, and institutional workflow. The specific nomograms were built using the correlation between the prostate volume and characteristic parameters of each BT modality, such as the source Air Kerma Strength, the number of implanted seeds in LDR, or the total radiation time in HDR. Results For each institution and BT modality, nomograms normalized to the prescribed dose were obtained and fitted to a linear function. The fit parameters show good agreement between the data and the linear model. It should be noted that these linear function parameters differ between institutions, indicating that each centre should construct its own nomograms. Conclusions Nomograms for LDR and HDR prostate brachytherapy are simple quality assurance tools, specific to each institution. Nevertheless, their use should be complementary to the necessary independent verification. PMID:23346120
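    A nomogram of the kind described here is, at its core, a linear fit of an implant parameter against prostate volume, built per institution from past cases. A minimal sketch with made-up (volume, total source strength) pairs:

```python
import numpy as np

# hypothetical past implants: prostate volume (cm^3) vs total air-kerma strength (U)
volume = np.array([22, 28, 31, 35, 40, 44, 50], dtype=float)
strength = np.array([310, 365, 392, 430, 478, 515, 570], dtype=float)

# institution-specific nomogram: linear fit, normalized to the prescription dose
slope, intercept = np.polyfit(volume, strength, 1)

def nomogram(v):
    """Predicted total source strength for a prostate of volume v (cm^3)."""
    return slope * v + intercept

predicted = nomogram(33.0)
```

    Used as an independent check, a planned implant that falls far from its institution's nomogram line flags the plan for closer review.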

  11. Using Linear and Non-Linear Temporal Adjustments to Align Multiple Phenology Curves, Making Vegetation Status and Health Directly Comparable

    NASA Astrophysics Data System (ADS)

    Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.

    2017-12-01

    National-scale polar analysis of MODIS NDVI allows quantification of the degree of seasonality expressed by local vegetation, and also selects the optimal start/end of a local "phenological year" that is empirically customized for the vegetation growing at each location. Interannual differences in the timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in timing of the other remaining milestones. Going beyond a simple linear translation, time can be "rubber-sheeted," compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be "rubber-sheeted" to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to "rubber sheeting" to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows, at every time and every location, how many days the adjusted phenology is ahead of or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help to quantify vegetation impacts from frost, drought, wildfire, insects and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
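    The "rubber-sheeting" idea maps to a piecewise-linear time warp pinned at matching milestones, which `np.interp` handles directly. The Gaussian-shaped NDVI curves and milestone days below are synthetic stand-ins:

```python
import numpy as np

# reference phenology (NDVI-like) over a 365-day phenological year
days = np.arange(365, dtype=float)
reference = np.exp(-((days - 180) / 60.0) ** 2)

# second curve: same shape but running ~15 days late
observed = np.exp(-((days - 195) / 60.0) ** 2)

# control points pin matching milestones (e.g. green-up, peak, senescence)
ref_milestones = np.array([0.0, 120.0, 180.0, 240.0, 364.0])
obs_milestones = np.array([0.0, 135.0, 195.0, 255.0, 364.0])

# "rubber-sheet": evaluate the observed curve on piecewise-linearly warped time
warped_time = np.interp(days, ref_milestones, obs_milestones)
adjusted = np.interp(warped_time, days, observed)

# warping signature: days ahead (+) or behind (-) the reference at each day
warp_signature = warped_time - days
```

    After warping, the adjusted curve peaks on the reference peak day, and the signature records the 15-day lag explicitly rather than hiding it in a residual.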

  12. Temporal and radial variation of the solar wind temperature-speed relationship

    NASA Astrophysics Data System (ADS)

    Elliott, H. A.; Henney, C. J.; McComas, D. J.; Smith, C. W.; Vasquez, B. J.

    2012-09-01

    The solar wind temperature (T) and speed (V) are generally well correlated at ~1 AU, except in Interplanetary Coronal Mass Ejections, where this correlation breaks down. We perform a comprehensive analysis of both the temporal and radial variation in the temperature-speed (T-V) relationship of the non-transient wind, and our analysis provides insight into both the causes of the T-V relationship and the sources of the temperature variability. Often at 1 AU the speed-temperature relationship is well represented by a single linear fit over a speed range spanning both the slow and fast wind. However, at times the fast wind from coronal holes can have a different T-V relationship than the slow wind. A good example of this was in 2003, when there was a very large and long-lived outward-magnetic-polarity coronal hole at low latitudes that emitted wind with speeds as fast as a polar coronal hole. The long-lived nature of the hole made it possible to clearly distinguish that some holes can have a different T-V relationship. In an earlier ACE study, we found that both the compression and rarefaction T-V curves are linear, but the compression curve is shifted to higher temperatures. By separating compressions and rarefactions prior to determining the radial profiles of the solar wind parameters, the importance of dynamic interactions on the radial evolution of the solar wind parameters is revealed. Although the T-V relationship at 1 AU is often well described by a single linear curve, we find that the T-V relationship continually evolves with distance. Beyond ~2.5 AU the differences between the compressions and rarefactions are quite significant and affect the shape of the overall T-V distribution to the point that a simple linear fit no longer describes the distribution well. Since additional heating of the ambient solar wind outside of interaction regions can be associated with Alfvénic fluctuations and the turbulent energy cascade, we also estimate the heating rate radial profile from the solar wind speed and temperature measurements.
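    The compression/rarefaction result (two linear T-V curves with the same slope but the compression intercept shifted upward) can be reproduced on synthetic data; all numbers and units below are arbitrary, chosen only to mimic the described pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
v = rng.uniform(300, 700, n)          # solar wind speed (km/s)
comp = rng.random(n) < 0.5            # flag: parcel inside a compression region

# synthetic temperature: linear in V, with compressions hotter at a given speed
t = 20 * v - 3000 + np.where(comp, 1500, 0) + rng.normal(0, 400, n)

fit_all = np.polyfit(v, t, 1)         # single linear fit over all wind
fit_comp = np.polyfit(v[comp], t[comp], 1)
fit_rare = np.polyfit(v[~comp], t[~comp], 1)

# the compression curve sits at higher T for a given V
offset = fit_comp[1] - fit_rare[1]
```

    Separating the two populations before fitting recovers the intercept offset that a single pooled fit averages away, which is the motivation for splitting compressions from rarefactions before building radial profiles.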

  13. A SIGNIFICANCE TEST FOR THE LASSO

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ₁ penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties, adaptivity and shrinkage, and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
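    The classical nested-model test the abstract contrasts against can be sketched directly (the covariance test itself requires the full lasso path and is not reproduced here). The data below are synthetic, with unit noise variance by construction:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only the first predictor is truly active

def rss(Xs):
    """Residual sum of squares of the least-squares fit of y on columns Xs."""
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return float(np.sum((y - Xs @ beta) ** 2))

# drop in RSS from adding a *fixed* (not adaptively chosen) second predictor;
# under the null it is sigma^2 * chi^2_1, and sigma^2 = 1 here
drop = rss(X[:, :1]) - rss(X[:, :2])
p_value = math.erfc(math.sqrt(drop / 2.0))   # chi^2_1 survival function
```

    The paper's point is that if the added column were instead the one the lasso *chose* to enter next, this χ₁² calibration would be anti-conservative, and the covariance test's Exp(1) reference distribution is the appropriate replacement.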

  14. SU-F-BRD-16: Relative Biological Effectiveness of Double-Strand Break Induction for Modeling Cell Survival in Pristine Proton Beams of Different Dose-Averaged Linear Energy Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX

    2015-06-15

    Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94 – 19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
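    The "traditional linear quadratic survival curve fitting" mentioned here reduces to a regression with no intercept, since ln S(D) = -(αD + βD²) and S(0) = 1. A minimal sketch with made-up survival fractions:

```python
import numpy as np

# made-up survival fractions at graded doses, resembling an LQ response
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
surv = np.array([1.0, 0.80, 0.55, 0.22, 0.065, 0.015])

# linear-quadratic model: S(D) = exp(-(alpha*D + beta*D**2)),
# so -log(S) is linear in the regressors D and D^2 with no intercept
A = np.column_stack([dose, dose ** 2])
(alpha, beta), *_ = np.linalg.lstsq(A, -np.log(surv), rcond=None)
```

    Repeating such a fit per LETd column of the 96-well plate is what yields the alpha(LETd) and beta(LETd) trends the study reports.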

  15. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  16. The Dynamic Characteristic and Hysteresis Effect of an Air Spring

    NASA Astrophysics Data System (ADS)

    Löcken, F.; Welsch, M.

    2015-02-01

    In many applications of vibration technology, especially in chassis design, air springs present a common alternative to steel spring concepts. A design-independent and therefore universal approach is presented to describe the dynamic characteristic of such springs. Differential and constitutive equations based on energy balances of the enclosed volume and the mountings are given to describe the nonlinear and dynamic characteristics. All parameters can therefore be estimated directly from physical and geometrical properties, without parameter fitting. The numerically solved equations match measurements of a passenger car air spring very well. In a second step, a simplification of this model leads to a purely mechanical equation. While in principle the same parameters are used, only an empirical correction of the effective heat transfer coefficient is needed to handle some simplifications on this topic. Finally, a linearization of this equation leads to an analogous mechanical model that can be assembled from two common spring elements and one dashpot element in a specific arrangement. This transfer into "mechanical language" enables a system description with a simple force-displacement law and a consideration of the non-obvious hysteresis and stiffness increase of an air spring from a mechanical point of view.

  17. Partitioning degrees of freedom in hierarchical and other richly-parameterized models.

    PubMed

    Cui, Yue; Hodges, James S; Kong, Xiaoxiao; Carlin, Bradley P

    2010-02-01

    Hodges & Sargent (2001) developed a measure of a hierarchical model's complexity, degrees of freedom (DF), that is consistent with definitions for scatterplot smoothers, interpretable in terms of simple models, and that enables control of a fit's complexity by means of a prior distribution on complexity. DF describes complexity of the whole fitted model but in general it is unclear how to allocate DF to individual effects. We give a new definition of DF for arbitrary normal-error linear hierarchical models, consistent with Hodges & Sargent's, that naturally partitions the n observations into DF for individual effects and for error. The new conception of an effect's DF is the ratio of the effect's modeled variance matrix to the total variance matrix. This gives a way to describe the sizes of different parts of a model (e.g., spatial clustering vs. heterogeneity), to place DF-based priors on smoothing parameters, and to describe how a smoothed effect competes with other effects. It also avoids difficulties with the most common definition of DF for residuals. We conclude by comparing DF to the effective number of parameters pD of Spiegelhalter et al. (2002). Technical appendices and a dataset are available online as supplemental materials.
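    A simpler cousin of the variance-ratio DF defined here is the familiar trace-based effective degrees of freedom of a linear smoother, which likewise partitions the n observations into DF for the fit and DF for error. A ridge-regression sketch (synthetic design matrix, assumed penalty λ = 5):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 50, 10, 5.0
X = rng.normal(size=(n, p))

# smoother ("hat") matrix of ridge regression: yhat = S @ y
S = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

df_fit = np.trace(S)      # DF absorbed by the smoothed effect
df_resid = n - df_fit     # DF left over for error
```

    Shrinkage makes `df_fit` strictly less than the p = 10 columns, and the two pieces always sum to n, which is the bookkeeping property the paper generalizes to effect-by-effect partitions in hierarchical models.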

  18. Rapid and reliable QuEChERS-based LC-MS/MS method for determination of acrylamide in potato chips and roasted coffee

    NASA Astrophysics Data System (ADS)

    Stefanović, S.; Đorđevic, V.; Jelušić, V.

    2017-09-01

    The aim of this paper is to verify the performance characteristics and fitness for purpose of a rapid and simple QuEChERS-based LC-MS/MS method for determination of acrylamide in potato chips and coffee. LC-MS/MS is by far the most suitable analytical technique for acrylamide measurements given its inherent sensitivity and selectivity, as well as its capability of analyzing the underivatized molecule. Acrylamide in roasted coffee and potato chips was extracted with a water:acetonitrile mixture using NaCl and MgSO4. Cleanup was carried out with MgSO4 and PSA. The results obtained were satisfactory. Recoveries were in the range of 85-112%, interlaboratory reproducibility (CV) was 5.8-7.6%, and linearity (R²) was in the range of 0.995-0.999. The LoQ was 35 μg kg⁻¹ for coffee and 20 μg kg⁻¹ for potato chips. Performance characteristics of the method are compliant with the criteria for analytical method validation. The presented method for quantitative determination of acrylamide in roasted coffee and potato chips is fit for the purposes of self-control in the food industry as well as regulatory controls carried out by governmental agencies.

  19. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, Richard P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end‐members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end‐members, an extension of the mathematics of mixing models is presented that assesses the “fit” of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end‐members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end‐members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
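    The "rank of the data set" diagnostic described here can be sketched with an SVD: data generated by mixing m end-members lie (after centering) in an (m-1)-dimensional subspace, so the residual from projecting onto that subspace should drop to the noise level. Everything below is synthetic, with three invented end-members and four solutes:

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic stream chemistry: 3 end-members (rows) x 4 solutes (columns)
end_members = np.array([[1.0, 0.2, 5.0, 0.1],
                        [0.3, 1.5, 1.0, 0.8],
                        [2.0, 0.5, 0.2, 2.5]])
w = rng.dirichlet(np.ones(3), size=200)          # conservative mixing fractions
C = w @ end_members + rng.normal(0, 0.01, (200, 4))

Xc = C - C.mean(axis=0)                          # center the samples
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

def relative_residual(k):
    """Relative lack of fit after projecting onto a k-D mixing subspace."""
    proj = Xc @ Vt[:k].T @ Vt[:k]
    return np.linalg.norm(Xc - proj) / np.linalg.norm(Xc)

rre = [relative_residual(k) for k in (1, 2, 3)]
```

    The sharp drop between k = 1 and k = 2 identifies the approximate rank (three end-members, two mixing dimensions) without ever naming the end-members, which is the point of the diagnostic.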

  20. First L-Band Interferometric Observations of a Young Stellar Object: Probing the Circumstellar Environment of MWC 419

    NASA Astrophysics Data System (ADS)

    Ragland, S.; Akeson, R. L.; Armandroff, T.; Colavita, M. M.; Danchi, W. C.; Hillenbrand, L. A.; Millan-Gabet, R.; Ridgway, S. T.; Traub, W. A.; Vasisht, G.; Wizinowich, P. L.

    2009-09-01

    We present spatially resolved K- and L-band spectra (at spectral resolution R = 230 and R = 60, respectively) of MWC 419, a Herbig Ae/Be star. The data were obtained simultaneously with a new configuration of the 85 m baseline Keck Interferometer. Our observations are sensitive to the radial distribution of temperature in the inner region of the disk of MWC 419. We fit the visibility data with both simple geometric and more physical disk models. The geometric models (uniform disk and Gaussian) show that the apparent size increases linearly with wavelength in the 2-4 μm wavelength region, suggesting that the disk is extended with a temperature gradient. A model having a power-law temperature gradient with radius simultaneously fits our interferometric measurements and the spectral energy distribution data from the literature. The slope of the power law is close to that expected from an optically thick disk. Our spectrally dispersed interferometric measurements include the Br γ emission line. The measured disk size at and around Br γ suggests that emitting hydrogen gas is located inside (or within the inner regions) of the dust disk.

  1. Viscosity and Structure of CaO-SiO2-P2O5-FetO System with Varying P2O5 and FeO Content

    NASA Astrophysics Data System (ADS)

    Diao, Jiang; Gu, Pan; Liu, De-Man; Jiang, Lu; Wang, Cong; Xie, Bing

    2017-10-01

    A rotary viscometer and Raman spectroscopy were employed to measure the viscosity and structural information of the CaO-SiO2-P2O5-FetO system at 1673 K. The experimental data have been compared with the calculated results using different viscosity models. It shows that the National Physical Laboratory (NPL) and Pal models fit the CaO-SiO2-P2O5-FetO system better. With the P2O5 content increasing from 5% to 14%, the viscosity increases from 0.12 Pa s to 0.27 Pa s. With the FeO content increasing from 30% to 40%, the viscosity decreases from 0.21 Pa s to 0.12 Pa s. Increasing FeO content makes the complicated molten melts become simpler, while increasing P2O5 content complicates the molten melts. The linear relation between viscosity and the structure parameter Q(Si + P) was obtained by regression analysis. The viscosities calculated using the optimized NPL and Pal models are almost identical to the fitted values.

  2. Using Stocking or Harvesting to Reverse Period-Doubling Bifurcations in Discrete Population Models

    Treesearch

    James F. Selgrade

    1998-01-01

    This study considers a general class of 2-dimensional, discrete population models where each per capita transition function (fitness) depends on a linear combination of the densities of the interacting populations. The fitness functions are either monotone decreasing functions (pioneer fitnesses) or one-humped functions (climax fitnesses). Four sets of necessary...

  3. Reversing Period-Doubling Bifurcations in Models of Population Interactions Using Constant Stocking or Harvesting

    Treesearch

    James F. Selgrade; James H. Roberds

    1998-01-01

    This study considers a general class of two-dimensional, discrete population models where each per capita transition function (fitness) depends on a linear combination of the densities of the interacting populations. The fitness functions are either monotone decreasing functions (pioneer fitnesses) or one-humped functions (climax fitnesses). Conditions are derived...

  4. Feasibility of Using Linearly Polarized Rotating Birdcage Transmitters and Close-Fitting Receive Arrays in MRI to Reduce SAR in the Vicinity of Deep Brain Stimulation Implants

    PubMed Central

    Golestanirad, Laleh; Keil, Boris; Angelone, Leonardo M.; Bonmassar, Giorgio; Mareyam, Azma; Wald, Lawrence L.

    2016-01-01

    Purpose MRI of patients with deep brain stimulation (DBS) implants is strictly limited due to safety concerns, including high levels of local specific absorption rate (SAR) of radiofrequency (RF) fields near the implant and related RF-induced heating. This study demonstrates the feasibility of using a rotating linearly polarized birdcage transmitter and a 32-channel close-fit receive array to significantly reduce local SAR in MRI of DBS patients. Methods Electromagnetic simulations and phantom experiments were performed with generic DBS lead geometries and implantation paths. The technique was based on mechanically rotating a linear birdcage transmitter to align its zero electric-field region with the implant while using a close-fit receive array to significantly increase signal to noise ratio of the images. Results It was found that the zero electric-field region of the transmitter is thick enough at 1.5 Tesla to encompass DBS lead trajectories with wire segments that were up to 30 degrees out of plane, as well as leads with looped segments. Moreover, SAR reduction was not sensitive to tissue properties, and insertion of a close-fit 32-channel receive array did not degrade the SAR reduction performance. Conclusion The ensemble of rotating linear birdcage and 32-channel close-fit receive array introduces a promising technology for future improvement of imaging in patients with DBS implants. PMID:27059266

  5. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
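    Among the models compared here, Wood's curve y = a·t^b·exp(c·t) has the convenient property that it linearizes under a log transform, so a quick fit needs only ordinary least squares (the paper's actual fits use non-linear mixed models, which this sketch does not reproduce). The FPR data below are generated from an assumed Wood curve purely to demonstrate recovery of the parameters:

```python
import numpy as np

# synthetic FPR-like lactation data generated from a Wood curve
t = np.arange(1, 11, dtype=float)              # test month in milk
fpr = 1.45 * t ** -0.25 * np.exp(0.045 * t)    # dips, then rises again

# Wood's curve y = a * t^b * exp(c*t) linearizes as
#   log y = log a + b*log t + c*t
A = np.column_stack([np.ones_like(t), np.log(t), t])
coef, *_ = np.linalg.lstsq(A, np.log(fpr), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

t_min = -b / c    # predicted test month at which FPR is minimal
```

    The closed-form minimum at t = -b/c is the "test time at which daily FPR was minimum" feature the abstract says most models under-predicted.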

  6. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
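    The computational saving described above can be illustrated with a linear, one-term exponential kernel, for which the convolution stress can be advanced from a single stored history variable; the paper's method extends this idea to strain-dependent (non-linear) kernels. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

g, tau, dt = 1.0, 0.5, 0.001
t = np.arange(0, 2, dt)
eps = np.minimum(t / 0.1, 1.0)                # ramp-and-hold strain history
deps = np.diff(eps, prepend=0.0)

# Reference: full convolution sum (stores and revisits the whole history)
sigma_full = np.array([np.sum(g * np.exp(-(ti - t[:i+1]) / tau) * deps[:i+1])
                       for i, ti in enumerate(t)])

# Recursive update: one state variable h carries the entire load history
h = 0.0
sigma_rec = np.empty_like(t)
for i in range(t.size):
    h = h * np.exp(-dt / tau) + g * deps[i]
    sigma_rec[i] = h

err = np.max(np.abs(sigma_full - sigma_rec))  # the two agree to round-off
```

    The recursive form costs O(1) per time step instead of O(n), which is the tractability gain the abstract refers to.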

  7. Genetic programming based quantitative structure-retention relationships for the prediction of Kovats retention indices.

    PubMed

    Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S

    2015-11-13

    The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in QSRR development in gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure-based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that, given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study-I) and adamantane derivatives (case study-II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess excellent KRI prediction accuracy and generalization capability. Specifically, the best-performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
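    The core idea, replicating a known periodic V-Phi curve from its Fourier coefficients so it can be evaluated cheaply inside a circuit simulation, can be sketched as follows; the "measured" curve here is a toy model rather than data from a real SQUID:

```python
import numpy as np

N = 64
phi = np.arange(N) / N                        # flux in units of Phi_0
v_meas = 1.0 + 0.8*np.cos(2*np.pi*phi) + 0.2*np.cos(4*np.pi*phi)

coeffs = np.fft.rfft(v_meas) / N              # one-sided Fourier coefficients

def v_model(x, coeffs):
    """Evaluate the Fourier-series replica at arbitrary flux x (Phi_0 units)."""
    k = np.arange(coeffs.size)
    scale = np.where(k == 0, 1.0, 2.0)        # restore folded negative frequencies
    return np.sum(scale * np.real(coeffs * np.exp(2j*np.pi*k*x)), axis=-1)

x_test = 0.31
err = abs(v_model(x_test, coeffs)
          - (1.0 + 0.8*np.cos(2*np.pi*x_test) + 0.2*np.cos(4*np.pi*x_test)))
```

    A table of such coefficients is portable into SPICE-like simulators, which is the replication-not-prediction point the abstract makes.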

  9. Pattern formation in individual-based systems with time-varying parameters

    NASA Astrophysics Data System (ADS)

    Ashcroft, Peter; Galla, Tobias

    2013-12-01

    We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.

  10. Oscillations of end loaded cantilever beams

    NASA Astrophysics Data System (ADS)

    Macho-Stadler, E.; Elejalde-García, M. J.; Llanos-Vázquez, R.

    2015-09-01

    This article presents several simple experiments based on changing transverse vibration frequencies in a cantilever beam, when acted on by an external attached mass load at the free end. By using a mechanical wave driver, available in introductory undergraduate laboratories, we provide various experimental results for end-loaded cantilever beams that fit a linear equation reasonably well. The behaviour of the cantilever beam’s weak-damping resonance response is studied for the case of metal resonance strips. As the mass load increases, a more pronounced decrease occurs in the fundamental frequency of beam vibration. It is important to note that cantilever construction is often used in architectural design and engineering construction projects, but the current analysis also predicts the influence of mass load on the sound generated by musical free reeds with boundary conditions similar to a cantilever beam.
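    The linear relationship mentioned above can be illustrated with the standard end-loaded cantilever result f = (1/2π)√(k/(M + a·m_b)): plotting 1/f² against the attached mass M gives a straight line with slope (2π)²/k. The stiffness and effective beam mass below are assumed values, not the paper's measurements:

```python
import numpy as np

k, a_mb = 50.0, 0.005                         # N/m, effective beam mass (kg), assumed
M = np.linspace(0.0, 0.05, 8)                 # attached end masses (kg)
f = np.sqrt(k / (M + a_mb)) / (2*np.pi)       # fundamental frequency (Hz)

# 1/f**2 is linear in M; the slope recovers the stiffness
slope, intercept = np.polyfit(M, 1.0 / f**2, 1)
k_fit = (2*np.pi)**2 / slope
```

    With real lab data the same straight-line fit yields k, and the intercept estimates the beam's effective mass contribution.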

  11. Determining the Pressure Shift of Helium I Lines Using White Dwarf Stars

    NASA Astrophysics Data System (ADS)

    Camarota, Lawrence

    This dissertation explores the non-Doppler shifting of helium lines under the high-pressure conditions of a white dwarf photosphere. In particular, this dissertation seeks to mathematically quantify the shift in a way that is simple to reproduce and account for in future studies without requiring prior knowledge of the star's bulk properties (mass, radius, temperature, etc.). Two main methods will be used in this analysis. First, the spectral line will be quantified with a continuous wavelet transformation, and the components will be used in a χ²-minimizing linear regression to predict the shift. Second, the position of the lines will be calculated using a best-fit Lévy-alpha line function. These techniques stand in contrast to traditional methods of quantifying the center of often broad spectral lines, which usually assume symmetry of the line profiles.

  12. Calibrating White Dwarf Asteroseismic Fitting Techniques

    NASA Astrophysics Data System (ADS)

    Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.

    2017-03-01

    The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the period spectrum with those of theoretical model grids to learn the inner structure of that star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from the uncertainties in the constitutive physics, and of the fitting techniques. In this work, we will discuss results of numerical experiments where we used different independently calculated model grids (white dwarf cooling models WDEC and fully evolutionary LPCODE-PUL) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure, so we can assess how well our models and fitting techniques are able to recover the interior structure, as well as the stellar parameters.

  13. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.

  14. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully assessed for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed in the intercept under the non-ideal linearity condition. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, obtained with both fluorimetric methods, were compared for both linearity conditions after parametric OLS treatment and after robust LMS and IRLS treatment. Copyright © 2018 John Wiley & Sons, Ltd.
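    A generic IRLS sketch with Huber weights (not necessarily the authors' exact weighting scheme) shows how re-weighting tames a gross outlier that would distort an ordinary least-squares calibration line:

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
y = 2.0 * x + 1.0                              # ideal calibration line
y[6] += 10.0                                   # gross outlier at x = 7

X = np.column_stack([np.ones_like(x), x])

def irls(X, y, c=1.345, n_iter=50):
    """Iteratively re-weighted least squares with Huber weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        mad = np.median(np.abs(r - np.median(r)))
        s = mad / 0.6745 if mad > 0 else 1.0             # robust scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_irls = irls(X, y)                            # slope pulled back toward 2.0
```

    The down-weighting of large residuals is what lets the robust fits in the study report better regression parameters than OLS under non-ideal linearity.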

  15. Morse Code, Scrabble, and the Alphabet

    ERIC Educational Resources Information Center

    Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss

    2004-01-01

    In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…
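    The simple linear regression at the heart of such an activity reduces to a few lines; the data below are invented for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares slope and intercept for y = b0 + b1*x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()

resid = y - (b0 + b1 * x)                      # residuals for diagnostics
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
```

    Plotting `resid` against `x` is the standard classroom check of the regression assumptions the activity extends to.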

  16. The Type Ia Supernova Color-Magnitude Relation and Host Galaxy Dust: A Simple Hierarchical Bayesian Model

    NASA Astrophysics Data System (ADS)

    Mandel, Kaisey; Scolnic, Daniel; Shariff, Hikmatali; Foley, Ryan; Kirshner, Robert

    2017-01-01

    Inferring peak optical absolute magnitudes of Type Ia supernovae (SN Ia) from distance-independent measures such as their light curve shapes and colors underpins the evidence for cosmic acceleration. SN Ia with broader, slower declining optical light curves are more luminous (“broader-brighter”) and those with redder colors are dimmer. But the “redder-dimmer” color-luminosity relation widely used in cosmological SN Ia analyses confounds its two separate physical origins. An intrinsic correlation arises from the physics of exploding white dwarfs, while interstellar dust in the host galaxy also makes SN Ia appear dimmer and redder. Conventional SN Ia cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (MB vs. B-V) slope βint differs from the host galaxy dust law RB, this convolution results in a specific curve of mean extinguished absolute magnitude vs. apparent color. The derivative of this curve smoothly transitions from βint in the blue tail to RB in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope βapp between βint and RB. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a dataset of SALT2 optical light curve fits of 277 nearby SN Ia at z < 0.10. The conventional linear fit obtains βapp ≈ 3. Our model finds βint = 2.2 ± 0.3 and a distinct dust law RB = 3.7 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ~0.10 mag in the tails of the apparent color distribution. This research is supported by NSF grants AST-156854, AST-1211196, and NASA grant NNX15AJ55G.

  17. Simple Harmonics Motion experiment based on LabVIEW interface for Arduino

    NASA Astrophysics Data System (ADS)

    Tong-on, Anusorn; Saphet, Parinya; Thepnurat, Meechai

    2017-09-01

    In this work, we developed an affordable, modern physics laboratory apparatus. The ultrasonic sensor is used to measure the position of a mass attached to a spring as a function of time. The data acquisition system and control device were developed based on the LabVIEW interface for Arduino UNO R3. The experiment was designed to explain wave propagation, which is modeled by simple harmonic motion. The simple harmonic system (mass and spring) was observed, and the motion was characterized by curve fitting to the wave equation in Mathematica. We found that the spring constants provided by Hooke’s law and the wave-equation fit are 9.9402 and 9.1706 N/m, respectively.
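    The curve-fitting step can be sketched by fitting the simple-harmonic solution x(t) = A·cos(ωt + φ) + C to synthetic position data and recovering the spring constant from k = mω²; the mass and stiffness below are assumed values, not the paper's apparatus:

```python
import numpy as np
from scipy.optimize import curve_fit

def shm(t, A, w, p, C):
    """Simple-harmonic position: x(t) = A*cos(w*t + p) + C."""
    return A * np.cos(w * t + p) + C

m, k_true = 0.1, 9.9                           # kg, N/m (assumed)
w_true = np.sqrt(k_true / m)                   # rad/s
t = np.linspace(0, 2, 200)
rng = np.random.default_rng(1)
x = shm(t, 0.05, w_true, 0.3, 0.2) + rng.normal(0, 0.001, t.size)

(A, w, p, C), _ = curve_fit(shm, t, x, p0=[0.04, 9.5, 0.0, 0.2])
k_fit = m * w**2                               # spring constant from the fit
```

    The same fit applied to ultrasonic-sensor data gives the wave-equation estimate of k that the abstract compares against Hooke's law.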

  18. LADES: a software for constructing and analyzing longitudinal designs in biomedical research.

    PubMed

    Vázquez-Alcocer, Alan; Garzón-Cortes, Daniel Ladislao; Sánchez-Casas, Rosa María

    2014-01-01

    One of the most important steps in biomedical longitudinal studies is choosing a good experimental design that can provide high accuracy in the analysis of results with a minimum sample size. Several methods for constructing efficient longitudinal designs have been developed based on power analysis and the statistical model used for analyzing the final results. However, development of this technology is not available to practitioners through user-friendly software. In this paper we introduce LADES (Longitudinal Analysis and Design of Experiments Software) as an alternative and easy-to-use tool for conducting longitudinal analysis and constructing efficient longitudinal designs. LADES incorporates methods for creating cost-efficient longitudinal designs, unequal longitudinal designs, and simple longitudinal designs. In addition, LADES includes different methods for analyzing longitudinal data such as linear mixed models, generalized estimating equations, among others. A study of European eels is reanalyzed in order to show LADES capabilities. Three treatments contained in three aquariums with five eels each were analyzed. Data were collected from 0 up to the 12th week post treatment for all the eels (complete design). The response under evaluation is sperm volume. A linear mixed model was fitted to the results using LADES. The complete design had a power of 88.7% using 15 eels. With LADES we propose the use of an unequal design with only 14 eels and 89.5% efficiency. LADES was developed as a powerful and simple tool to promote the use of statistical methods for analyzing and creating longitudinal experiments in biomedical research.

  19. Synchronous fluorescence spectroscopic study of solvatochromic curcumin dye.

    PubMed

    Patra, Digambara; Barakat, Christelle

    2011-09-01

    Curcumin, the main yellow bioactive component of turmeric, has recently attracted attention from chemists due to its wide range of potential biological applications as an antioxidant, an anti-inflammatory, and an anti-carcinogenic agent. The molecule fluoresces weakly and is poorly soluble in water. In this detailed study of curcumin in thirteen different solvents, both the absorption and fluorescence spectra of curcumin were found to be broad; however, a narrower and simpler synchronous fluorescence spectrum of curcumin was obtained at Δλ=10-20 nm. The Lippert-Mataga plot of curcumin in different solvents showed two sets of linearity, which is consistent with the plot of Stokes' shift vs. ET30. When the Stokes' shift in wavenumber scale was replaced by the synchronous fluorescence maximum in nanometer scale, the solvent polarity dependency measured by λSFSmax vs. the Lippert-Mataga plot or ET30 values showed trends similar to those measured via the Stokes' shift for protic and aprotic solvents. A better linear correlation of λSFSmax vs. the π* scale of solvent polarity was found compared with λabsmax, λemmax, or Stokes' shift measurements. Stokes' shift measurement requires both the absorption/excitation and the emission (fluorescence) spectra to compute the shift in wavenumber scale, whereas a single SFS scan provides information about solvent polarity quickly and simply, avoiding this calculation. Curcumin decay properties in all the solvents could be fitted well to a double-exponential decay function. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Fitting aerodynamic forces in the Laplace domain: An application of a nonlinear nongradient technique to multilevel constrained optimization

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.; Adams, W. M., Jr.

    1984-01-01

    A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
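    The "linear methods where practical" point can be illustrated with a Roger-type rational function approximation (one common form of such fits, assumed here for illustration): with the lag roots held fixed by the outer, non-linear level, the remaining coefficients enter linearly and follow from least squares on tabular oscillatory data:

```python
import numpy as np

k = np.linspace(0.05, 1.0, 20)                 # reduced frequencies
s = 1j * k
# Synthetic tabular data generated from a known rational function
Q_tab = 0.5 + 1.2*s + 0.3*s**2 + 0.8*s/(s + 0.2) + 0.4*s/(s + 0.6)

lags = np.array([0.2, 0.6])                    # lag roots fixed by the outer level
cols = [np.ones_like(s), s, s**2] + [s/(s + b) for b in lags]
A = np.column_stack(cols)

# Stack real and imaginary parts so the inner problem is a real LS fit
Ar = np.vstack([A.real, A.imag])
Qr = np.concatenate([Q_tab.real, Q_tab.imag])
coef, *_ = np.linalg.lstsq(Ar, Qr, rcond=None)
```

    The outer, non-gradient optimizer then perturbs the lag roots and re-solves this linear inner problem, which is the multilevel split the abstract describes.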

  1. Fitting neuron models to spike trains.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  2. A new approach to correct the QT interval for changes in heart rate using a nonparametric regression model in beagle dogs.

    PubMed

    Watanabe, Hiroyuki; Miyazaki, Hiroyasu

    2006-01-01

    Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or mask the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (Loess Smoother) to adjust the QT interval for differences in heart rate, with improved fit over a wide range of heart rates. A total of 240 sets of (QT, RR) observations collected from each of 8 conscious and non-treated beagle dogs were used for the investigation. The fit of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction models) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Although the parametric models did not fit well, the nonparametric regression model improved the fit at both fast and slow heart rates. The nonparametric regression model is the more flexible method compared with the parametric method. The mathematical fit for linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
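    A loess-style smoother can be written compactly as a locally weighted linear regression; the sketch below uses plain NumPy and synthetic QT-RR data rather than the study's telemetry records:

```python
import numpy as np

def loess_point(x0, x, y, frac=0.3):
    """Local linear fit at x0 using the nearest frac of the data."""
    n = int(np.ceil(frac * x.size))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:n]
    w = (1 - (d[idx] / d[idx].max())**3)**3    # tricube weights
    X = np.column_stack([np.ones(n), x[idx] - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y[idx]))
    return beta[0]                             # fitted value at x0

rng = np.random.default_rng(2)
rr = np.sort(rng.uniform(0.3, 1.2, 120))       # RR intervals (s), synthetic
qt = 0.25 * rr**0.4 + rng.normal(0, 0.003, rr.size)   # curved QT-RR relation

fit = np.array([loess_point(x0, rr, qt) for x0 in rr])
rmse = np.sqrt(np.mean((fit - 0.25 * rr**0.4)**2))
```

    Because each local fit adapts to the data, the smoother tracks the curvature at both fast (short RR) and slow (long RR) heart rates where a single straight line cannot.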

  3. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
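    The Arrhenius step common to all three fitting methods can be shown with a worked example: fit ln(k) against 1/T at accelerated temperatures, then extrapolate the rate, and hence the shelf-life, to 25 °C. The rate parameters and the 0.5% specification limit are invented for illustration:

```python
import numpy as np

R = 8.314                                      # J/(mol K)
Ea, lnA = 100e3, 32.7                          # assumed activation energy, prefactor
T_acc = np.array([323.15, 333.15, 343.15])     # 50, 60, 70 C accelerated conditions
k_acc = np.exp(lnA - Ea / (R * T_acc))         # degradant growth rate, %/day

# Linear form of the Arrhenius equation: ln(k) = lnA - (Ea/R)*(1/T)
slope, intercept = np.polyfit(1.0 / T_acc, np.log(k_acc), 1)
Ea_fit = -slope * R

k_25 = np.exp(intercept + slope / 298.15)      # extrapolated rate at 25 C
shelf_life_days = 0.5 / k_25                   # days to reach a 0.5% limit
```

    The isoconversion variant the paper favors applies the same straight-line fit to the times at which each condition reaches the specification limit, rather than to point-by-point rates.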

  4. Continuous relaxation and retardation spectrum method for viscoelastic characterization of asphalt concrete

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.

    2012-08-01

    This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from the complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is directly obtained from the master curves which are readily available from the standard characterization tests of linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can be easily used for modeling viscoelastic behavior. In this research, asphalt concrete specimens have been tested for linearly viscoelastic characterization. The linearly viscoelastic test data have been used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched very well with the approximate solutions. It is observed that the shape of the spectra is dependent on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix-design process. Prony-series coefficients can be easily obtained from the continuous spectra and used in numerical analysis such as finite element analysis.
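    The closing remark, that Prony-series coefficients follow easily once relaxation times are fixed, reflects the fact that E(t) = E∞ + Σᵢ gᵢ·exp(-t/τᵢ) is linear in the coefficients. A sketch with a synthetic target generated from a known three-term spectrum:

```python
import numpy as np

t = np.logspace(-3, 3, 200)                    # time decades
taus = np.array([1e-2, 1e0, 1e2])              # fixed relaxation times
g_true = np.array([40.0, 30.0, 20.0])          # "known" spectrum strengths (MPa)
E_inf = 10.0
E_target = E_inf + np.sum(g_true[:, None] * np.exp(-t[None, :] / taus[:, None]),
                          axis=0)

# With taus fixed, the Prony coefficients come from plain linear least squares
A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
coef, *_ = np.linalg.lstsq(A, E_target, rcond=None)
E_inf_fit, g_fit = coef[0], coef[1:]
```

    In practice the target curve would be sampled from the continuous spectrum or master curve rather than constructed from a known series, but the linear-algebra step is identical.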

  5. Synchronous-digitization for Video Rate Polarization Modulated Beam Scanning Second Harmonic Generation Microscopy.

    PubMed

    Sullivan, Shane Z; DeWalt, Emma L; Schmitt, Paul D; Muir, Ryan M; Simpson, Garth J

    2015-03-09

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling have enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting contracts down each set of 10 images to a set of 5 parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer, with a simple sine-wave at 1/10th the period of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed for micro-crystals of naproxen.

  7. The Evolution of El Nino-Precipitation Relationships from Satellites and Gauges

    NASA Technical Reports Server (NTRS)

    Curtis, Scott; Adler, Robert F.; Starr, David OC (Technical Monitor)

    2002-01-01

    This study uses a twenty-three year (1979-2001) satellite-gauge merged community data set to further describe the relationship between El Nino Southern Oscillation (ENSO) and precipitation. The globally complete precipitation fields reveal coherent bands of anomalies that extend from the tropics to the polar regions. Also, ENSO-precipitation relationships were analyzed during the six strongest El Ninos from 1979 to 2001. Seasons of evolution, Pre-onset, Onset, Peak, Decay, and Post-decay, were identified based on the strength of the El Nino. Then two simple and independent models, first order harmonic and linear, were fit to the monthly time series of normalized precipitation anomalies for each grid block. The sinusoidal model represents a three-phase evolution of precipitation, either dry-wet-dry or wet-dry-wet. This model is also highly correlated with the evolution of sea surface temperatures in the equatorial Pacific. The linear model represents a two-phase evolution of precipitation, either dry-wet or wet-dry. These models combine to account for over 50% of the precipitation variability for over half the globe during El Nino. Most regions, especially away from the Equator, favor the linear model. Areas that show the largest trend from dry to wet are southeastern Australia, eastern Indian Ocean, southern Japan, and off the coast of Peru. The northern tropical Pacific and Southeast Asia show the opposite trend.
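
The two competing models described above can be sketched with ordinary least squares on an illustrative synthetic series (not the merged data set): a linear fit for two-phase (dry-wet) evolution and a first-order harmonic fit for three-phase (dry-wet-dry) evolution, compared by variance explained.

```python
import numpy as np

# Sketch (not the paper's exact procedure): fit a linear model and a
# first-order harmonic to a monthly anomaly series and compare R^2.
n = 24                                   # assumed 24-month evolution window
t = np.arange(n)
rng = np.random.default_rng(1)
y = 0.8 * (t - t.mean()) / n + 0.05 * rng.standard_normal(n)   # dry-to-wet trend

def r_squared(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Linear model: two-phase (dry-wet or wet-dry) evolution
A_lin = np.column_stack([np.ones(n), t])
yhat_lin = A_lin @ np.linalg.lstsq(A_lin, y, rcond=None)[0]

# First-order harmonic: three-phase (dry-wet-dry or wet-dry-wet) evolution
w = 2 * np.pi / n
A_har = np.column_stack([np.ones(n), np.cos(w * t), np.sin(w * t)])
yhat_har = A_har @ np.linalg.lstsq(A_har, y, rcond=None)[0]

r2_lin, r2_har = r_squared(y, yhat_lin), r_squared(y, yhat_har)
```

For this trending series the linear model wins, mirroring the finding that most regions away from the Equator favor the linear model.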

  8. Characterization of perpendicular STT-MRAM by spin torque ferromagnetic resonance

    NASA Astrophysics Data System (ADS)

    Sha, Chengcen; Yang, Liu; Lee, Han Kyu; Barsukov, Igor; Zhang, Jieyi; Krivorotov, Ilya

    We describe a method for simple quantitative measurement of the magnetic anisotropy and Gilbert damping of the MTJ free layer in individual perpendicular STT-MRAM devices by spin torque ferromagnetic resonance (ST-FMR) with magnetic field modulation. We first show the dependence of ST-FMR spectra of an STT-MRAM element on out-of-plane magnetic field. In these spectra, resonances arising from excitation of the quasi-uniform and higher order spin wave eigenmodes of the free layer, as well as the acoustic mode of the synthetic antiferromagnet (SAF), are clearly seen. The quasi-uniform mode frequency at zero field gives the magnetic anisotropy field of the free layer. We then show that the dependence of the quasi-uniform mode linewidth on frequency is linear over a range of frequencies but deviates from linearity in the low and high frequency regimes. Comparison to ST-FMR spectra reveals that the high frequency line broadening is linked to the SAF mode softening near the SAF spin flop transition at 5 kG. In the low field regime, the SAF mode frequency approaches that of the quasi-uniform mode, and resonant coupling of the modes leads to the line broadening. A linear fit to the linewidth data outside of the high and low field regimes gives the Gilbert damping parameter of the free layer. This work was supported by the Samsung Global MRAM Innovation Program.
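
The final fitting step can be sketched as follows, assuming (as is standard for a saturated perpendicular film, though not spelled out in the abstract) that the half-linewidth in the linear regime follows df = df0 + alpha * f, so the slope of a straight-line fit gives the Gilbert damping; the numbers below are made up.

```python
import numpy as np

# Sketch: extract Gilbert damping from synthetic linewidth-vs-frequency data
# restricted to the linear regime. alpha_true and df0_true are illustrative.
alpha_true, df0_true = 0.005, 0.05            # damping; inhomogeneous term (GHz)
f = np.linspace(4.0, 12.0, 9)                 # frequencies in the linear regime (GHz)
rng = np.random.default_rng(2)
linewidth = df0_true + alpha_true * f + 1e-4 * rng.standard_normal(f.size)

alpha_fit, df0_fit = np.polyfit(f, linewidth, 1)   # slope -> damping
```

Excluding the low- and high-field points before fitting, as the abstract describes, keeps the mode-coupling and SAF-softening broadening from biasing the slope.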

  9. Calibrating the Decline Rate - Peak Luminosity Relation for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Rust, Bert W.; Pruzhinskaya, Maria V.; Thijsse, Barend J.

    2015-08-01

    The correlation between peak luminosity and rate of decline in luminosity for Type I supernovae was first studied by B. W. Rust [Ph.D. thesis, Univ. of Illinois (1974) ORNL-4953] and Yu. P. Pskovskii [Sov. Astron., 21 (1977) 675] in the 1970s. Their work was little-noted until Phillips rediscovered the correlation in 1993 [ApJ, 413 (1993) L105] and attempted to derive a calibration relation using a difference quotient approximation Δm15(B) to the decline rate after peak luminosity Mmax(B). Numerical differentiation of data containing measuring errors is a notoriously unstable calculation, but Δm15(B) remains the parameter of choice for most calibration methods developed since 1993. To succeed, it should be computed from good functional fits to the lightcurves, but most workers never exhibit their fits. In the few instances where they have, the fits are not very good. Some of the 9 supernovae in the Phillips study required extinction corrections in their estimates of Mmax(B), and so were not appropriate for establishing a calibration relation. Although the relative uncertainties in his Δm15(B) estimates were comparable to those in his Mmax(B) estimates, he nevertheless used simple linear regression of the latter on the former, rather than major-axis regression (total least squares), which would have been more appropriate. Here we determine some new calibration relations using a sample of nearby "pure" supernovae suggested by M. V. Pruzhinskaya [Astron. Lett., 37 (2011) 663]. Their parent galaxies are all in the NED collection, with good distance estimates obtained by several different methods. We fit each lightcurve with an optimal regression spline obtained by B. J. Thijsse's spline2 [Comp. in Sci. & Eng., 10 (2008) 49]. The fits, which explain more than 99% of the variance in each case, are better than anything heretofore obtained by stretching "template" lightcurves or fitting combinations of standard lightcurves. We use the fits to compute estimates of Δm15(B) and some other calibration parameters suggested by Pskovskii [Sov. Astron., 28 (1984) 858] and compare their utility for cosmological testing.
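
The statistical point about regression choice can be demonstrated on hypothetical data (not the supernova sample): when both variables carry comparable errors, ordinary least squares of y on x attenuates the slope, while major-axis regression (total least squares, here via the first principal component) treats the variables symmetrically.

```python
import numpy as np

# Illustration: OLS slope attenuation vs. major-axis (TLS) regression when
# both variables have comparable measurement errors. Data are synthetic.
rng = np.random.default_rng(3)
n, true_slope = 200, 1.0
x_true = rng.uniform(0, 5, n)
x = x_true + 0.5 * rng.standard_normal(n)     # error in the "predictor" too
y = true_slope * x_true + 0.5 * rng.standard_normal(n)

slope_ols = np.polyfit(x, y, 1)[0]            # regress y on x: biased low

X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
slope_ma = vt[0, 1] / vt[0, 0]                # direction of largest variance
```

Major-axis regression is unbiased here because the two error variances are equal; with unequal variances a weighted variant would be needed.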

  10. Genetic Obesity Risk and Attenuation Effect of Physical Fitness in Mexican-Mestizo Population: a Case-Control Study.

    PubMed

    Costa-Urrutia, Paula; Abud, Carolina; Franco-Trecu, Valentina; Colistro, Valentina; Rodríguez-Arellano, Martha Eunice; Vázquez-Pérez, Joel; Granados, Julio; Seelaender, Marilia

    2017-05-01

    We analyzed commonly reported European and Asian obesity-related gene variants in a Mexican-Mestizo population through each single nucleotide polymorphism (SNP) and a genetic risk score (GRS) based on 23 selected SNPs. Study subjects were physically active Mexican-Mestizo adults (n = 608) with body mass index (BMI) values from 18 to 55 kg/m2. For each SNP and for the GRS, logistic models were performed to test for simple SNP associations with BMI, fat mass percentage (FMP), and waist circumference (WC), and for the interaction with VO2max and muscular endurance (ME). To further examine the SNP-by- and GRS-by-physical-fitness interactions, generalized linear models were performed. Obesity risk was significantly associated with 6 SNPs (ADRB2 rs1042713, APOB rs512535, PPARA rs1800206, TNFA rs361525, TRHR rs7832552 and rs16892496) after adjustment for gender, age, ancestry, VO2max, and ME. ME attenuated the influence of APOB rs512535 and TNFA rs361525 on obesity risk in FMP. WC was significantly associated with the GRS. Both ME and VO2max attenuated the GRS effect on WC. We report associations for 6 out of 23 SNPs and for the GRS, which confer obesity risk, a novel finding for a physically active Mexican-Mestizo population. The results also highlight the importance of including physical fitness variables in obesity genetic risk studies, with special regard to intervention purposes. © 2017 John Wiley & Sons Ltd/University College London.
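
As a minimal sketch of how a genetic risk score is typically built (the paper's 23-SNP GRS may be constructed differently, e.g., with per-SNP weights), an unweighted GRS is simply the count of risk alleles summed across SNPs; genotypes below are random placeholders.

```python
import numpy as np

# Unweighted genetic risk score: risk-allele counts (0, 1, or 2 per SNP)
# summed across SNPs, one score per subject. Genotypes are simulated.
rng = np.random.default_rng(4)
n_subjects, n_snps = 608, 23
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))   # allele counts

grs = genotypes.sum(axis=1)                  # score per subject, range 0..46
grs_z = (grs - grs.mean()) / grs.std()       # standardized for regression use
```

The standardized score can then enter a logistic or generalized linear model alongside fitness covariates such as VO2max and ME.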

  11. A school-based physical activity promotion intervention in children: rationale and study protocol for the PREVIENE Project.

    PubMed

    Tercedor, Pablo; Villa-González, Emilio; Ávila-García, Manuel; Díaz-Piedra, Carolina; Martínez-Baena, Alejandro; Soriano-Maldonado, Alberto; Pérez-López, Isaac José; García-Rodríguez, Inmaculada; Mandic, Sandra; Palomares-Cuadros, Juan; Segura-Jiménez, Víctor; Huertas-Delgado, Francisco Javier

    2017-09-26

    The lack of physical activity and increasing time spent in sedentary behaviours during childhood place importance on developing low cost, easy-to-implement school-based interventions to increase physical activity among children. The PREVIENE Project will evaluate the effectiveness of five innovative, simple, and feasible interventions (active commuting to/from school, active Physical Education lessons, active school recess, sleep health promotion, and an integrated program incorporating all 4 interventions) to improve physical activity, fitness, anthropometry, sleep health, academic achievement, and health-related quality of life in primary school children. A total of 300 children (grade 3; 8-9 years of age) from six schools in Granada (Spain) will be enrolled in one of the 8-week interventions (one intervention per school; 50 children per school) or a control group (no intervention school; 50 children). Outcomes will include physical activity (measured by accelerometry), physical fitness (assessed using the ALPHA fitness battery), and anthropometry (height, weight and waist circumference). Furthermore, they will include sleep health (measured by accelerometers, a sleep diary, and sleep health questionnaires), academic achievement (grades from the official school records), and health-related quality of life (child and parental questionnaires). To assess the effectiveness of the different interventions on objectively measured physical activity and the other outcomes, the generalized linear model will be used. The PREVIENE Project will provide the information about the effectiveness and implementation of different school-based interventions for physical activity promotion in primary school children.

  12. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.

  13. MEASUREMENT OF WIND SPEED FROM COOLING LAKE THERMAL IMAGERY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, A; Robert Kurzeja, R; Eliel Villa-Aleman, E

    2009-01-20

    The Savannah River National Laboratory (SRNL) collected thermal imagery and ground truth data at two commercial power plant cooling lakes to investigate the applicability of laboratory empirical correlations between surface heat flux and wind speed, and statistics derived from thermal imagery. SRNL demonstrated in a previous paper [1] that a linear relationship exists between the standard deviation of image temperature and surface heat flux. In this paper, SRNL will show that the skewness of the temperature distribution derived from cooling lake thermal images correlates with instantaneous wind speed measured at the same location. SRNL collected thermal imagery, surface meteorology and water temperatures from helicopters and boats at the Comanche Peak and H. B. Robinson nuclear power plant cooling lakes. SRNL found that decreasing skewness correlated with increasing wind speed, as was the case for the laboratory experiments. Simple linear and orthogonal regression models both explained about 50% of the variance in the skewness - wind speed plots. A nonlinear (logistic) regression model produced a better fit to the data, apparently because the thermal convection and resulting skewness are related to wind speed in a highly nonlinear way in nearly calm and in windy conditions.
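
The logistic regression step can be sketched as below, using a hypothetical functional form (skewness saturating in calm and windy conditions) and illustrative parameters, not SRNL's actual model or data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch: model image-temperature skewness as a logistic
# function of wind speed that saturates at both extremes.
def logistic(u, lo, hi, u0, s):
    return lo + (hi - lo) / (1.0 + np.exp((u - u0) / s))

rng = np.random.default_rng(5)
u = np.linspace(0.5, 8.0, 40)                    # wind speed (m/s)
skew_true = logistic(u, -0.5, 1.0, 3.0, 1.0)     # skewness falls with wind
skew = skew_true + 0.05 * rng.standard_normal(u.size)

popt, _ = curve_fit(logistic, u, skew, p0=[-1.0, 1.5, 4.0, 1.5])
```

Unlike the straight-line fits, the saturating form can capture the weak sensitivity of skewness to wind speed in nearly calm and in windy conditions.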

  14. Neuromorphic computing with nanoscale spintronic oscillators

    PubMed Central

    Torrejon, Jacob; Riou, Mathieu; Araujo, Flavio Abreu; Tsunegi, Sumito; Khalsa, Guru; Querlioz, Damien; Bortolotti, Paolo; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Stiles, M. D.; Grollier, Julie

    2017-01-01

    Neurons in the brain behave as non-linear oscillators, which develop rhythmic activity and interact to process information [1]. Taking inspiration from this behavior to realize high density, low power neuromorphic computing will require huge numbers of nanoscale non-linear oscillators. Indeed, a simple estimation indicates that, in order to fit a hundred million oscillators organized in a two-dimensional array inside a chip the size of a thumb, their lateral dimensions must be smaller than one micrometer. However, despite multiple theoretical proposals [2-5], and several candidates such as memristive [6] or superconducting [7] oscillators, there is no proof of concept today of neuromorphic computing with nano-oscillators. Indeed, nanoscale devices tend to be noisy and to lack the stability required to process data in a reliable way. Here, we show experimentally that a nanoscale spintronic oscillator [8,9] can achieve spoken digit recognition with accuracies similar to state of the art neural networks. We pinpoint the regime of magnetization dynamics leading to highest performance. These results, combined with the exceptional ability of these spintronic oscillators to interact together, their long lifetime, and low energy consumption, open the path to fast, parallel, on-chip computation based on networks of oscillators. PMID:28748930
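
The density estimate in the abstract can be checked directly, taking "chip the size of a thumb" as roughly a 1 cm x 1 cm array (my reading of the figure of speech):

```python
# Packing 1e8 oscillators into a 1 cm x 1 cm two-dimensional array leaves
# each oscillator a footprint of 1 um x 1 um, hence sub-micrometer devices.
chip_area_um2 = (1e4) ** 2        # 1 cm = 1e4 um, so area = 1e8 um^2
n_oscillators = 1e8
footprint_um2 = chip_area_um2 / n_oscillators
side_um = footprint_um2 ** 0.5    # lateral dimension per oscillator
```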

  15. A policy-capturing study of the simultaneous effects of fit with jobs, groups, and organizations.

    PubMed

    Kristof-Brown, Amy L; Jansen, Karen J; Colbert, Amy E

    2002-10-01

    The authors report an experimental policy-capturing study that examines the simultaneous impact of person-job (PJ), person-group (PG), and person-organization (PO) fit on work satisfaction. Using hierarchical linear modeling, the authors determined that all 3 types of fit had important, independent effects on satisfaction. Work experience explained systematic differences in how participants weighted each type of fit. Multiple interactions also showed participants used complex strategies for combining fit cues.

  16. A simple smoothness indicator for the WENO scheme with adaptive order

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-01-01

    The fifth order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable with the sum of those for the three third order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much higher than on a uniform mesh. To overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.
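
For context, the three classical Jiang-Shu smoothness indicators for the third order substencils of a fifth order WENO reconstruction are sketched below (the paper's new indicator for the fifth order stencil is not reproduced here):

```python
# Classical Jiang-Shu smoothness indicators for the three third-order
# substencils on a uniform mesh. Large values flag a non-smooth stencil.
def smoothness_indicators(fm2, fm1, f0, fp1, fp2):
    b0 = 13/12 * (fm2 - 2*fm1 + f0)**2 + 1/4 * (fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12 * (fm1 - 2*f0 + fp1)**2 + 1/4 * (fm1 - fp1)**2
    b2 = 13/12 * (f0 - 2*fp1 + fp2)**2 + 1/4 * (3*f0 - 4*fp1 + fp2)**2
    return b0, b1, b2

# On smooth (here linear) data all three indicators are small and equal ...
smooth = smoothness_indicators(1.0, 1.1, 1.2, 1.3, 1.4)
# ... while a jump inflates the indicators of the stencils that cross it,
# leaving the substencil entirely on one side of the jump smooth.
shock = smoothness_indicators(1.0, 1.0, 1.0, 5.0, 5.0)
```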

  17. The linear sizes tolerances and fits system modernization

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses the pressing problem of ensuring product quality when tolerancing component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the coordinating linear sizes that determine the location of part elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real part elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of coordinating linear sizes in all accuracy classes, it is sufficient to select from the standardized tolerance system a single tolerance interval with symmetrical deviations: Js for internal elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The remaining intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: the limit deviation corresponding to the maximum-material limit of the element, namely the lower deviation EI for internal elements (holes) and the upper deviation es for external elements (shafts). It is the maximum-material sizes that take part in the mating of shafts and holes and determine the type of fit.

  18. Stress analysis method for clearance-fit joints with bearing-bypass loads

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1989-01-01

    Within a multi-fastener joint, fastener holes may be subjected to the combined effects of bearing loads and loads that bypass the hole to be reacted elsewhere in the joint. The analysis of a joint subjected to such combined bearing and bypass loads is complicated by the usual clearance between the hole and the fastener. A simple analysis method for such clearance-fit joints subjected to bearing-bypass loading has been developed in the present study. It uses an inverse formulation with a linear elastic finite-element analysis. Conditions along the bolt-hole contact arc are specified by displacement constraint equations. The present method is simple to apply and can be implemented with most general purpose finite-element programs since it does not use complicated iterative-incremental procedures. The method was used to study the effects of bearing-bypass loading on bolt-hole contact angles and local stresses. In this study, a rigid, frictionless bolt was used with a plate having the properties of a quasi-isotropic graphite/epoxy laminate. Results showed that the contact angle as well as the peak stresses around the hole and their locations were strongly influenced by the ratio of bearing and bypass loads. For single contact, tension and compression bearing-bypass loading had opposite effects on the contact angle. For some compressive bearing-bypass loads, the hole tended to close on the fastener leading to dual contact. It was shown that dual contact reduces the stress concentration at the fastener and would, therefore, increase joint strength in compression. The results illustrate the general importance of accounting for bolt-hole clearance and contact to accurately compute local bolt-hole stresses for combined bearing and bypass loading.

  19. ACCELERATED FITTING OF STELLAR SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting, Yuan-Sen; Conroy, Charlie; Rix, Hans-Walter

    2016-07-20

    Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars' labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach, Convex Hull Adaptive Tessellation (chat), which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of "gradient spectra" that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15-30 labels simultaneously.
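
The linear-expansion idea can be sketched in a few lines: a synthetic spectrum near a grid point is approximated as the spectrum at that point plus the label offsets times the "gradient spectra". The arrays below are toy data, not chat's library.

```python
import numpy as np

# First-order expansion of a spectrum around a label-space grid point using
# gradient spectra (flux derivatives with respect to each label).
n_pix, n_labels = 1000, 8
rng = np.random.default_rng(6)
f0 = rng.uniform(0.5, 1.0, n_pix)                     # spectrum at grid point
grad = 0.01 * rng.standard_normal((n_labels, n_pix))  # gradient spectra
dlabel = rng.uniform(-0.5, 0.5, n_labels)             # offsets from grid point

f_approx = f0 + dlabel @ grad                         # approximate spectrum
```

One matrix-vector product per evaluation is what lets the computational effort grow only linearly with the number of labels fit simultaneously.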

  20. Model-Free CUSUM Methods for Person Fit

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Shi, Min

    2009-01-01

    This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
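
A generic CUSUM of log-likelihood ratios, the fundamental statistic named above, can be sketched as follows; the probabilities, response strings, and threshold are illustrative, not the article's values.

```python
import math

# CUSUM person-fit sketch: accumulate the log-likelihood ratio of each scored
# response under an aberrant-response probability p1 versus a fitting
# probability p0, and flag when the running maximum exceeds a threshold h.
def cusum_flag(responses, p0, p1, h):
    c, c_max = 0.0, 0.0
    for x in responses:                          # x = 1 correct, 0 incorrect
        llr = math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        c = max(0.0, c + llr)                    # reflecting barrier at zero
        c_max = max(c_max, c)
    return c_max > h

fitting = [1, 1, 0, 1, 1, 0, 1, 1]               # consistent with p0 = 0.7
aberrant = [0, 0, 0, 0, 0, 0, 0, 0]              # consistent with p1 = 0.2
flag_fitting = cusum_flag(fitting, 0.7, 0.2, 2.0)
flag_aberrant = cusum_flag(aberrant, 0.7, 0.2, 2.0)
```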

  1. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
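
The local straight-line fit near short-circuit can be sketched as an ordinary least-squares regression whose intercept estimates Isc and whose covariance gives the (statistical) fit uncertainty discussed above; the cell parameters and noise level below are made up, and the sketch does not include the Bayesian window selection.

```python
import numpy as np

# Straight-line fit of current vs. voltage over a window near V = 0;
# the intercept is the Isc estimate and cov[1,1] its variance.
rng = np.random.default_rng(7)
v = np.linspace(-0.05, 0.15, 21)                 # voltage window near V = 0 (V)
isc_true, slope_true = 8.0, -0.4                 # A, A/V (illustrative)
i = isc_true + slope_true * v + 2e-4 * rng.standard_normal(v.size)

coef, cov = np.polyfit(v, i, 1, cov=True)
isc_est = np.polyval(coef, 0.0)                  # intercept = Isc estimate
isc_sigma = np.sqrt(cov[1, 1])                   # its standard uncertainty
```

As the abstract cautions, widening the window shrinks isc_sigma but can inflate the unmodeled discrepancy between the straight line and the true I-V curve.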

  2. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  3. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
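
The three quantities reviewed in the tutorial can be computed on toy data in a few lines: Pearson's r on the raw values, Spearman's rho as Pearson's r on the ranks, and a simple linear regression of the outcome on the predictor.

```python
import numpy as np

# Pearson r, Spearman rho (rank-transformed Pearson), and OLS on toy data.
rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.standard_normal(50)

pearson_r = np.corrcoef(x, y)[0, 1]

def ranks(a):
    return np.argsort(np.argsort(a))             # rank transform (no ties here)

spearman_rho = np.corrcoef(ranks(x), ranks(y))[0, 1]

slope, intercept = np.polyfit(x, y, 1)           # simple linear regression
```

For monotone but nonlinear relationships Spearman's rho stays high while Pearson's r drops, which is the distinction the tutorial draws between the two coefficients.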

  4. Semivariogram modeling by weighted least squares

    USGS Publications Warehouse

    Jian, X.; Olea, R.A.; Yu, Y.-S.

    1996-01-01

    Permissible semivariogram models are fundamental for geostatistical estimation and simulation of attributes having a continuous spatiotemporal variation. The usual practice is to fit those models manually to experimental semivariograms. Fitting by weighted least squares produces comparable results to fitting manually in less time, systematically, and provides an Akaike information criterion for the proper comparison of alternative models. We illustrate the application of a computer program with examples showing the fitting of simple and nested models. Copyright © 1996 Elsevier Science Ltd.
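
A minimal sketch of weighted-least-squares semivariogram fitting (not the paper's program): for each candidate range the spherical model is linear in nugget and partial sill, so a WLS solve gives those two, and the range minimizing the weighted residual is kept. Lags, experimental values, and pair-count weights are toy data.

```python
import numpy as np

# WLS fit of a spherical semivariogram: grid over the range parameter,
# linear WLS for (nugget, partial sill) at each candidate range.
def spherical_basis(h, a):
    r = np.minimum(h / a, 1.0)
    return 1.5 * r - 0.5 * r**3                  # 0..1 shape, flat beyond range a

h = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
gamma_exp = 0.2 + 0.8 * spherical_basis(h, 3.0)  # nugget 0.2, sill 0.8, range 3
w = np.array([200, 180, 160, 140, 120, 100, 80, 60], float)  # pairs per lag

best = None
for a in np.linspace(1.0, 5.0, 81):              # candidate ranges
    A = np.column_stack([np.ones_like(h), spherical_basis(h, a)])
    Aw, yw = A * np.sqrt(w)[:, None], gamma_exp * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    sse = np.sum(w * (gamma_exp - A @ coef) ** 2)
    if best is None or sse < best[0]:
        best = (sse, a, coef)

sse_best, range_best, (nugget, psill) = best
```

Weighting by the number of pairs per lag, one common choice, downweights poorly estimated long-range lags.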

  5. Teaching the Concept of Breakdown Point in Simple Linear Regression.

    ERIC Educational Resources Information Center

    Chan, Wai-Sum

    2001-01-01

    Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
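
The phenomenon is easy to demonstrate numerically: moving a single data point arbitrarily far drags the ordinary least-squares slope with it, which is the sense in which the OLS breakdown point is 1/n.

```python
import numpy as np

# One extreme outlier out of ten points is enough to wreck the OLS slope.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0                                # perfectly linear data

slope_clean = np.polyfit(x, y, 1)[0]             # recovers the true slope 2.0

y_bad = y.copy()
y_bad[-1] += 1000.0                              # corrupt a single point
slope_bad = np.polyfit(x, y_bad, 1)[0]           # slope is dragged far from 2.0
```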

  6. Analysis technique for controlling system wavefront error with active/adaptive optics

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  7. Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.

    PubMed

    Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z

    2017-03-01

    A nonparametric model of smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant energy constraint. A linear model did not fit the data, but a second-order model fit the data accurately, so higher-order models were not required. Results showed that smooth muscle force response is not linearly related to the stimulation power.

  8. Linear analysis of auto-organization in Hebbian neural networks.

    PubMed

    Carlos Letelier, J; Mpodozis, J

    1995-01-01

    The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.

  9. The mechanism of ΔT variation in coupled heat transfer and phase transformation for elastocaloric materials and its application in materials characterization

    NASA Astrophysics Data System (ADS)

    Qian, Suxin; Yuan, Lifen; Yu, Jianlin; Yan, Gang

    2017-11-01

    Elastocaloric cooling serves as a promising, environmentally friendly candidate with substantial energy saving potential as the next generation cooling technology for air-conditioning, refrigeration, and electronic cooling applications. The temperature change (ΔT) of elastocaloric materials is a direct measure of their elastocaloric effect, which scales proportionally with the device cooling performance based on this phenomenon. Here, the underlying physical relationship between the measured ΔT and the adiabatic temperature span ΔTad is revealed by theoretical investigation of the simplified energy equation describing the coupled simultaneous heat transfer and phase transformation processes. The relation shows that ΔT depends on a simple and symmetric non-linear function, which requires the introduction of an important dimensionless number Φ, defined as the ratio between convective heat transfer energy and the variation of internal energy of the material. The theory was supported by more than 100 data points from the open literature for four different material compositions. Based on the theory, a data sampling and reduction technique was proposed to assist future material characterization studies. Instead of approaching ΔTad by applying an ultrafast strain rate, as done previously, the proposed prediction of ΔTad is based on non-linear least squares fitting of the measured ΔT dataset at different strain rates within a moderate range. Numerical case studies indicated that the uncertainty associated with the proposed method is within ±1 K if the sampled data satisfied two conditions. In addition, the heat transfer coefficient can be estimated as a by-product of the least squares fitting method proposed in this study.

  10. Statistical Modeling of Fire Occurrence Using Data from the Tōhoku, Japan Earthquake and Tsunami.

    PubMed

    Anderson, Dana; Davidson, Rachel A; Himoto, Keisuke; Scawthorn, Charles

    2016-02-01

    In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models: generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided the identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and the relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region that are within one or two of the observed totals; prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but it nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates. © 2015 Society for Risk Analysis.
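
    As a minimal sketch of the GLM side of this comparison, a Poisson regression with a log link can be fitted by iteratively reweighted least squares (IRLS) in a few lines. The covariate and coefficients here are synthetic stand-ins, not the article's ignition data:

```python
import numpy as np

# Minimal Poisson GLM (log link) fitted by IRLS: a bare-bones stand-in for
# the kind of ignition-count model the article fits. Data are synthetic.
rng = np.random.default_rng(1)
n = 500
intensity = rng.uniform(4.0, 9.0, n)          # e.g., instrumental intensity
X = np.column_stack([np.ones(n), intensity])  # design matrix with intercept
beta_true = np.array([-3.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true))        # simulated ignition counts

beta = np.zeros(2)
for _ in range(25):                           # IRLS iterations
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu                   # working response
    W = mu                                    # working weights
    beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))

print("estimated coefficients:", beta)
```

    GAMs and BRTs relax the linear predictor used here, which is why they can absorb the ignition/ground-motion nonlinearity the article describes.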

  11. Inference of gene regulatory networks from genome-wide knockout fitness data

    PubMed Central

    Wang, Liming; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.

    2013-01-01

    Motivation: Genome-wide fitness is an emerging type of high-throughput biological data generated for individual organisms by creating libraries of knockouts, subjecting them to broad ranges of environmental conditions, and measuring the resulting clone-specific fitnesses. Since fitness is an organism-scale measure of gene regulatory network behaviour, it may offer certain advantages when insights into such phenotypical and functional features are of primary interest over individual gene expression. Previous works have shown that genome-wide fitness data can be used to uncover novel gene regulatory interactions, when compared with results of more conventional gene expression analysis. Yet, to date, few algorithms have been proposed for systematically using genome-wide mutant fitness data for gene regulatory network inference. Results: In this article, we describe a model and propose an inference algorithm for using fitness data from knockout libraries to identify underlying gene regulatory networks. Unlike most prior methods, the presented approach captures not only the structural, but also the dynamical and non-linear nature of the biomolecular systems involved. A state-space model with a non-linear basis is used to describe gene regulatory networks dynamically. Network structure is then elucidated by estimating the unknown model parameters. An unscented Kalman filter is used to cope with the non-linearities introduced in the model, which also enables the algorithm to run in on-line mode for practical use. Here, we demonstrate that the algorithm provides satisfying results for both synthetic data and empirical measurements of the GAL network in the yeast Saccharomyces cerevisiae and the TyrR-LiuR network in the bacterium Shewanella oneidensis.
Availability: MATLAB code and datasets are available for download at http://www.duke.edu/∼lw174/Fitness.zip and http://genomics.lbl.gov/supplemental/fitness-bioinf/. Contact: wangx@ee.columbia.edu or mssamoilov@lbl.gov. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23271269

  12. Gauge choice in conformal gravity

    NASA Astrophysics Data System (ADS)

    Sultana, Joseph; Kazanas, Demosthenes

    2017-04-01

    In a recent paper, K. Horne examined the effect of a conformally coupled scalar field (referred to as the Higgs field) on the Mannheim-Kazanas metric gμν, i.e. the static spherically symmetric metric within the context of conformal gravity, and studied its effect on the rotation curves of galaxies. He showed that for a Higgs field of the form S(r) = S0a/(r + a), where a is a radial length-scale, the equivalent Higgs-frame Mannheim-Kazanas metric g̃μν = Ω²gμν, with Ω = S(r)/S0, lacks the linear γr term, which has been employed in the fitting of the galactic rotation curves without the need to invoke dark matter. In this brief note, we point out that the representation of the Mannheim-Kazanas metric in a gauge where it lacks the linear term has already been presented by others, including Mannheim and Kazanas themselves, without the need to introduce a conformally coupled Higgs field. Furthermore, Horne argues that the absence of the linear term resolves the issue of light bending in the wrong direction, i.e. away from the gravitating mass, if γr > 0 in the Mannheim-Kazanas metric, a condition necessary to resolve the galactic dynamics in the absence of dark matter. In this case, we also point out that the elimination of the linear term is not even required, because the sign of the γr term in the metric can easily be reversed by a simple gauge transformation, and also that the effects of this term are indeed too small to be observed.

  13. Gauge Choice in Conformal Gravity

    NASA Technical Reports Server (NTRS)

    Sultana, Joseph; Kazanas, Demosthenes

    2017-01-01

    In a recent paper, K. Horne examined the effect of a conformally coupled scalar field (referred to as the Higgs field) on the Mannheim-Kazanas metric, i.e. the static spherically symmetric metric within the context of conformal gravity, and studied its effect on the rotation curves of galaxies. He showed that for a Higgs field of the form S(r) = S0a/(r + a), where a is a radial length-scale, the equivalent Higgs-frame Mannheim-Kazanas metric, rescaled by Omega(sup 2) with Omega = S(r)/S0, lacks the linear gamma r term, which has been employed in the fitting of the galactic rotation curves without the need to invoke dark matter. In this brief note, we point out that the representation of the Mannheim-Kazanas metric in a gauge where it lacks the linear term has already been presented by others, including Mannheim and Kazanas themselves, without the need to introduce a conformally coupled Higgs field. Furthermore, Horne argues that the absence of the linear term resolves the issue of light bending in the wrong direction, i.e. away from the gravitating mass, if gamma r is greater than 0 in the Mannheim-Kazanas metric, a condition necessary to resolve the galactic dynamics in the absence of dark matter. In this case, we also point out that the elimination of the linear term is not even required, because the sign of the gamma r term in the metric can easily be reversed by a simple gauge transformation, and also that the effects of this term are indeed too small to be observed.

  14. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    PubMed Central

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine-tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length; in theory, the linear trend test is only effective in detecting such simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all switching events observed between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice: first, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, the complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
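
    The contrast between the two tests can be illustrated on a toy 2 x 4 table of read counts: a "complex" switch redistributes reads among sites without changing the average site position, so an independence test flags it while a trend-style comparison of mean positions does not. The counts and scores below are invented for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy 2 x 4 table of read counts at four APA sites (proximal -> distal)
# in two samples, constructed so the distributions differ but the
# average site position (a proxy for mean 3'-UTR length) is unchanged.
counts = np.array([[100,  50,  50, 100],    # sample A
                   [ 50, 100, 100,  50]])   # sample B
scores = np.array([1.0, 2.0, 3.0, 4.0])     # ordinal site positions

# Independence test: sensitive to any distributional change.
chi2, p_indep, dof, expected = chi2_contingency(counts)

# Trend-style summary: mean site position per sample; it only reacts
# to a net shortening or lengthening of the 3'-UTR.
mean_pos = counts @ scores / counts.sum(axis=1)
print(f"chi2 = {chi2:.1f}, p(independence) = {p_indep:.2e}")
print(f"mean positions: A = {mean_pos[0]:.2f}, B = {mean_pos[1]:.2f}")
```

    The independence test rejects decisively while the mean positions are identical, which is exactly the "complex pattern" the abstract describes.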

  15. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
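
    The downward bias described here can be demonstrated with a small simulation: when neurons from the same animal share an animal-level effect, a naive standard error that treats all neurons as independent is smaller than one computed from per-animal summaries. All numbers below are illustrative:

```python
import numpy as np

# Simulated Sholl-style data: 8 animals, 10 neurons each. Neurons from
# the same animal share a random animal effect, so the effective sample
# size is closer to 8 than to 80.
rng = np.random.default_rng(7)
n_animals, n_neurons = 8, 10
animal_effect = rng.normal(0.0, 2.0, n_animals)            # intra-class variation
y = animal_effect[:, None] + rng.normal(0.0, 1.0, (n_animals, n_neurons))

# Naive SE: treats all 80 neurons as independent observations.
se_naive = y.std(ddof=1) / np.sqrt(y.size)

# Cluster-aware SE: analyze one summary (mean) per animal.
animal_means = y.mean(axis=1)
se_cluster = animal_means.std(ddof=1) / np.sqrt(n_animals)

print(f"naive SE = {se_naive:.3f}, cluster-aware SE = {se_cluster:.3f}")
```

    A mixed effects model generalizes the per-animal-summary idea while still using every neuron; the simulation just shows why the naive SE is too small.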

  16. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods; however, few of these have been applied in LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background correction simulation experiment indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) after background correction, compared with polynomial fitting, Lorentz fitting, and the model-free method. All of these background correction methods yield larger SBR values than before correction (the SBR before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
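
    A minimal sketch of the spline-interpolation idea, assuming a synthetic spectrum with a smooth baseline, two narrow lines, and hand-picked peak-free anchor wavelengths (none of these values come from the study):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic LIBS-like spectrum: two narrow emission lines on a smooth,
# slowly varying continuous background, plus noise.
rng = np.random.default_rng(6)
x = np.linspace(400.0, 500.0, 1001)                       # wavelength, nm
background = 50.0 + 0.3 * (x - 400.0)
peaks = (200.0 * np.exp(-((x - 430.0) / 0.5) ** 2)
         + 150.0 * np.exp(-((x - 470.0) / 0.5) ** 2))
spectrum = background + peaks + rng.normal(0.0, 1.0, x.size)

# Estimate the background with a cubic spline through anchor points
# chosen in peak-free regions, then subtract it.
anchors = np.array([400.0, 410.0, 420.0, 445.0, 460.0, 485.0, 500.0])
idx = np.searchsorted(x, anchors)
baseline = CubicSpline(x[idx], spectrum[idx])(x)
corrected = spectrum - baseline

sbr_before = spectrum.max() / np.median(spectrum)
sbr_after = corrected.max() / np.median(np.abs(corrected))
print(f"SBR before = {sbr_before:.1f}, after = {sbr_after:.1f}")
```

    The anchor points must avoid emission lines; automating that choice is the detection part of the method described above.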

  17. An approach to predict the shape-memory behavior of amorphous polymers from Dynamic Mechanical Analysis (DMA) data

    NASA Astrophysics Data System (ADS)

    Kuki, Ákos; Czifrák, Katalin; Karger-Kocsis, József; Zsuga, Miklós; Kéki, Sándor

    2015-02-01

    The prediction of shape-memory behavior is essential for the design of smart materials for different applications. This paper proposes a simple and quick method for predicting the shape-memory behavior of amorphous shape-memory polymers (SMPs) on the basis of a single dynamic mechanical analysis (DMA) temperature sweep at constant frequency. All the parameters of the constitutive equations for linear viscoelasticity are obtained by fitting the DMA curves. The temperature dependence of the time-temperature superposition shift factor (aT) is expressed by the Williams-Landel-Ferry (WLF) model near and above the glass transition temperature (Tg), and by the Arrhenius law below Tg. The constants of the WLF and Arrhenius equations can also be determined. The results of our calculations agree satisfactorily with the experimental free-recovery curves from shape-memory tests.
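
    The piecewise aT description can be written down directly; the WLF constants, Tg, and activation energy below are generic illustrative values, not those fitted from DMA data in the paper:

```python
import numpy as np

# Shift factor a_T: WLF above Tg, Arrhenius below. Constants are generic
# textbook-style values, not the paper's fitted ones.
C1, C2 = 17.44, 51.6           # "universal" WLF constants
Tg = 330.0                     # K, illustrative
Ea = 200e3                     # J/mol, illustrative activation energy
R = 8.314                      # J/(mol K)

def log10_aT(T):
    if T >= Tg:
        # WLF, referenced to Tg: log10 a_T = -C1 (T - Tg) / (C2 + T - Tg)
        return -C1 * (T - Tg) / (C2 + (T - Tg))
    # Arrhenius below Tg, also referenced to Tg (continuous at Tg)
    return (Ea / (np.log(10.0) * R)) * (1.0 / T - 1.0 / Tg)

print("log10 a_T at Tg      :", log10_aT(Tg))
print("log10 a_T at Tg+20 K :", round(log10_aT(Tg + 20.0), 2))
```

    Both branches are zero at the reference temperature Tg, so the relaxation times shorten (aT < 1) above Tg and lengthen (aT > 1) below it.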

  18. Scattering of 42-MeV alpha particles from Cu-65

    NASA Technical Reports Server (NTRS)

    Stewart, W. M.; Seth, K. K.

    1972-01-01

    The extended particle-core coupling model was used to predict the properties of the low-lying levels of Cu-65. A 42-MeV alpha particle cyclotron beam was used for the experiment. The experiment included magnetic analysis of the incident beam and particle detection by lithium-drifted silicon semiconductors. Angular distributions were measured from 10 to 50 degrees in the center-of-mass system. Data were reduced by fitting the peaks with a skewed Gaussian function using a least-squares computer program with a linear background search. The energy calibration of each system was done with a pulser, and the excitation energies are accurate to ±25 keV. The simple weak-coupling model cannot account for the experimentally observed quantities of the low-lying levels of Cu-65. The extended particle-core calculation showed that the coupling is not weak and that considerable configuration mixing of the low-lying states results.

  19. Constraining heating processes in the solar wind with kinetic properties of heavy ions

    NASA Astrophysics Data System (ADS)

    Kasper, J. C.; Tracy, P.; Zurbuchen, T.; Raines, J. M.; Gilbert, J. A.; Shearer, P.

    2016-12-01

    Heavy ion components (A > 4 amu) in collisionally young solar wind plasma show a clear, stable dependence of temperature on mass, probably reflecting the conditions in the solar corona. Using results from the Solar Wind Ion Composition Spectrometer (SWICS) onboard the Advanced Composition Explorer (ACE), we find that the heavy ion temperatures are well organized by a simple linear fit of the form Ti/Tp = (1.35 ± 0.02) mi/mp. Most importantly, we find that current model predictions based on turbulent transport and kinetic dissipation are in agreement with the observed nonthermal heating in intermediate collisional-age plasma for m/q < 3.5 amu/e, but are not in quantitative or qualitative agreement with the lowest collisional-age results. These dependencies provide new constraints on the physics of ion heating in multispecies plasma, along with predictions to be tested by the upcoming Solar Probe Plus and Solar Orbiter missions to the near-Sun environment.
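
    Fitting a one-parameter line through the origin, as in the Ti/Tp = (1.35 ± 0.02) mi/mp relation quoted above, reduces to a single closed-form slope estimate. The mass and temperature ratios below are synthetic, not SWICS measurements:

```python
import numpy as np

# One-parameter fit through the origin: T_i/T_p = A * (m_i/m_p).
# Mass ratios are roughly He..Fe; temperature ratios are simulated.
rng = np.random.default_rng(3)
mass_ratio = np.array([4.0, 12.0, 14.0, 16.0, 20.0, 56.0])
A_true = 1.35
temp_ratio = A_true * mass_ratio * (1.0 + rng.normal(0, 0.02, mass_ratio.size))

# Least-squares slope with no intercept: A = sum(x*y) / sum(x*x)
A_hat = (mass_ratio @ temp_ratio) / (mass_ratio @ mass_ratio)
print(f"fitted slope A = {A_hat:.3f}")
```

    Forcing the fit through the origin encodes the physical expectation that an ion with proton mass has the proton temperature (up to the factor A).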

  20. Detection of Scopolamine Hydrobromide via Surface-enhanced Raman Spectroscopy.

    PubMed

    Bao, Lin; Sha, Xuan-Yu; Zhao, Hang; Han, Si-Qin-Gao-Wa; Hasi, Wu-Li-Ji

    2017-01-01

    Surface-enhanced Raman spectroscopy (SERS) was used to measure scopolamine hydrobromide. First, the Raman characteristic peaks of scopolamine hydrobromide were assigned and the characteristic peaks were determined. Potassium iodide was found to be the optimal aggregation agent in a comparative experimental study. Finally, the SERS spectrum of scopolamine hydrobromide was detected in aqueous solution, and semi-quantitative analysis and the recovery rate were determined via linear fitting. The detection limit of scopolamine hydrobromide in aqueous solution was 0.5 μg/mL. From 0 to 10 μg/mL, the calibration curve for the intensity of the Raman characteristic peak of scopolamine hydrobromide at 1002 cm⁻¹ is y = 4017.76 + 642.47x. The correlation coefficient was R² = 0.983, the recovery was 98.5 - 109.7%, and the relative standard deviation (RSD) was about 5.5%. This method is fast, accurate, non-destructive, and simple for the detection of scopolamine hydrobromide.
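
    The reported calibration line can be inverted directly to turn a measured 1002 cm⁻¹ peak intensity into a concentration estimate (valid only inside the calibrated 0-10 μg/mL range):

```python
# Inverting the reported SERS calibration y = 4017.76 + 642.47 x to
# estimate concentration x (ug/mL) from peak intensity y.
intercept, slope = 4017.76, 642.47

def concentration(intensity):
    return (intensity - intercept) / slope

# Round-trip check at a hypothetical 2.5 ug/mL sample:
y = intercept + slope * 2.5
print(f"recovered concentration: {concentration(y):.2f} ug/mL")  # -> 2.50
```
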

  1. Thrust Measurements in Ballistic Pendulum Ablative Laser Propulsion Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazolin, H.; Rodrigues, N. A. S.; Minucci, M. A. S.

    This paper describes a setup for thrust measurement in ablative laser propulsion experiments, based on a simple ballistic pendulum associated with an imaging system, which is being assembled at IEAv. A light aluminium pendulum holding samples is placed inside a 100-liter vacuum chamber with two optical windows: the first (ZnSe) for the laser beam and the second (fused quartz) for pendulum visualization. A TEA-CO2 laser beam is focused onto the samples, producing ablation and transferring linear momentum to the pendulum as a whole. A CCD video camera captures the oscillatory movement of the pendulum, and its trajectory is obtained by image processing. By fitting the trajectory of the pendulum to a damped sinusoidal curve, it is possible to obtain the amplitude of the movement, which is directly related to the momentum transferred to the sample.
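
    Fitting a pendulum trajectory to a damped sinusoid, as described above, is a standard non-linear least squares problem. The trajectory below is simulated, and the initial frequency guess is seeded from an FFT of the record:

```python
import numpy as np
from scipy.optimize import curve_fit

# Damped sinusoid model for the pendulum angle (units arbitrary).
def damped_sine(t, A, tau, f, phi):
    return A * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 400)                         # seconds
y = damped_sine(t, 12.0, 8.0, 0.7, 0.3) + rng.normal(0.0, 0.2, t.size)

# Seed the frequency guess from the dominant FFT bin of the record,
# which keeps the non-linear fit from landing in a wrong-cycle minimum.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
f0 = freqs[np.argmax(np.abs(np.fft.rfft(y - y.mean())))]

popt, _ = curve_fit(damped_sine, t, y, p0=[y.max(), 5.0, f0, 0.0])
A_fit, tau_fit = popt[0], popt[1]
print(f"fitted amplitude = {A_fit:.2f}, decay time = {tau_fit:.2f} s")
```

    The fitted amplitude A is the quantity the abstract relates to the momentum transferred to the sample.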

  2. Tunneling of heat: Beyond linear response regime

    NASA Astrophysics Data System (ADS)

    Walczak, Kamil; Saroka, David

    2018-02-01

    We examine nanoscale processes of heat (energy) transfer as carried by electrons tunneling via potential barriers and molecular interconnects between two heat reservoirs (thermal baths). For that purpose, we use Landauer-type formulas to calculate the thermal conductance and the quadratic correction to the heat flux flowing via quantum systems. As input, we implement analytical expressions for transmission functions related to simple potential barriers and atomic bridges. Our results are discussed with respect to the energy of tunneling electrons, temperature, the presence of resonant states, and the specific parameters characterizing the potential barriers as well as the heat carriers. The simplicity of the semi-analytical models developed here makes it possible to fit experimental data and extract crucial information about the values of the model parameters. Further investigations are expected for more realistic transmission functions, while time-dependent aspects of nanoscale heat transfer may be addressed by using the concept of wave packets scattered on potential barriers and point-like defects within regular (periodic) nanostructures.

  3. Assessment of ALEGRA Computation for Magnetostatic Configurations

    DOE PAGES

    Grinfeld, Michael; Niederhaus, John Henry; Porwitzky, Andrew

    2016-03-01

    A closed-form solution is described here for the equilibrium configurations of the magnetic field in a simple heterogeneous domain. This problem and its solution are used for a rigorous assessment of the accuracy of the ALEGRA code in the quasistatic limit. By the equilibrium configuration we understand the static condition, or the stationary states without macroscopic current. The analysis includes a quite general class of 2D solutions for which a linear isotropic metallic matrix is placed inside a stationary magnetic field approaching a constant value Hi° at infinity. The evolution of the magnetic fields inside and outside the inclusion, and the parameters for which the quasi-static approach provides self-consistent results, are also explored. Lastly, it is demonstrated that under spatial mesh refinement, ALEGRA converges to the analytic solution for the interior of the inclusion at the expected rate, for both body-fitted and regular rectangular meshes.

  4. Local magnetic fields, uplift, gravity, and dilational strain changes in Southern California ( USA).

    USGS Publications Warehouse

    Johnston, M.J.S.

    1986-01-01

    Measurements of the regional magnetic field near the San Andreas fault at Cajon, Palmdale, and Tejon are strongly correlated with changes in gravity, areal strain, and uplift in these regions during the period 1977-1984. Because the inferred relationships between these parameters are in approximate agreement with those obtained from simple deformation models, the preferred explanation appeals to short-term strain episodes independently detected in each data set. Transfer functions from magnetic to strain, gravity, and uplift perturbations, obtained by least-squares linear fits to the data, are -0.98 nT/ppm, -0.03 nT/μGal, and 9.1 nT/m, respectively. Tectonomagnetic model calculations underestimate the observed changes and those reported previously for dam loading and volcano-magnetic observations. A less likely alternative explanation of the observed data appeals to a common source of meteorologically generated crustal or instrumental noise in the strain, gravity, magnetic, and uplift data. -from Author

  5. Spectral analysis based on fast Fourier transformation (FFT) of surveillance data: the case of scarlet fever in China.

    PubMed

    Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X

    2014-03-01

    Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations for time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamic of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validities and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
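
    The core of such a spectral analysis, locating the dominant cycle of an incidence series with the FFT, can be sketched on a synthetic monthly series with an annual cycle (the data are not the scarlet fever counts from the study):

```python
import numpy as np

# Synthetic monthly incidence series: 7 years of counts with an annual
# cycle plus noise. The FFT picks out the dominant period.
rng = np.random.default_rng(11)
n_months = 84
t = np.arange(n_months)
series = 100 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, n_months)

# Remove the mean so the zero-frequency bin does not dominate.
spec = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(n_months, d=1.0)        # cycles per month
dominant_period = 1.0 / freqs[np.argmax(spec)]
print(f"dominant period = {dominant_period:.1f} months")  # -> 12.0 months
```

    Using a whole number of cycles (84 = 7 x 12 months) puts the annual frequency exactly on an FFT bin, which avoids spectral leakage in this toy case.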

  6. Mutation-selection equilibrium in games with multiple strategies.

    PubMed

    Antal, Tibor; Traulsen, Arne; Ohtsuki, Hisashi; Tarnita, Corina E; Nowak, Martin A

    2009-06-21

    In evolutionary games the fitness of individuals is not constant but depends on the relative abundance of the various strategies in the population. Here we study general games among n strategies in populations of large but finite size. We explore stochastic evolutionary dynamics under weak selection, but for any mutation rate. We analyze the frequency-dependent Moran process in well-mixed populations, but almost identical results are found for the Wright-Fisher and Pairwise Comparison processes. Surprisingly simple conditions specify whether a strategy is more abundant on average than 1/n, or than another strategy, in the mutation-selection equilibrium. We find one condition that holds for low mutation rate and another condition that holds for high mutation rate. A linear combination of these two conditions holds for any mutation rate. Our results allow a complete characterization of n×n games in the limit of weak selection.

  7. Reference-free fatigue crack detection using nonlinear ultrasonic modulation under various temperature and loading conditions

    NASA Astrophysics Data System (ADS)

    Lim, Hyung Jin; Sohn, Hoon; DeSimio, Martin P.; Brown, Kevin

    2014-04-01

    This study presents a reference-free fatigue crack detection technique using nonlinear ultrasonic modulation. When low-frequency (LF) and high-frequency (HF) inputs generated by two surface-mounted lead zirconate titanate (PZT) transducers are applied to a structure, the presence of a fatigue crack can provide a mechanism for nonlinear ultrasonic modulation and create spectral sidebands around the frequency of the HF signal. The crack-induced spectral sidebands are isolated using a combination of linear response subtraction (LRS), synchronous demodulation (SD), and continuous wavelet transform (CWT) filtering. Then, a sequential outlier analysis is performed on the extracted sidebands to identify the crack presence without reference to any baseline data obtained from the intact condition of the structure. Finally, the robustness of the proposed technique is demonstrated using actual test data obtained from simple aluminum plate and complex aircraft fitting-lug specimens under varying temperature and loading conditions.

  8. Shock loading predictions from application of indicial theory to shock-turbulence interactions

    NASA Technical Reports Server (NTRS)

    Keefe, Laurence R.; Nixon, David

    1991-01-01

    A sequence of steps that permits prediction of some of the characteristics of the pressure field beneath a fluctuating shock wave from knowledge of the oncoming turbulent boundary layer is presented. The theory first predicts the power spectrum and pdf of the position and velocity of the shock wave, which are then used to obtain the shock frequency distribution, and the pdf of the pressure field, as a function of position within the interaction region. To test the validity of the crucial assumption of linearity, the indicial response of a normal shock is calculated from numerical simulation. This indicial response, after being fit by a simple relaxation model, is used to predict the shock position and velocity spectra, along with the shock passage frequency distribution. The low frequency portion of the shock spectra, where most of the energy is concentrated, is satisfactorily predicted by this method.

  9. Porous three-dimensional graphene foam/Prussian blue composite for efficient removal of radioactive 137Cs

    PubMed Central

    Jang, Sung-Chan; Haldorai, Yuvaraj; Lee, Go-Woon; Hwang, Seung-Kyu; Han, Young-Kyu; Roh, Changhyun; Huh, Yun Suk

    2015-01-01

    In this study, a simple one-step hydrothermal reaction is developed to prepare a composite of Prussian blue (PB) and reduced graphene oxide foam (RGOF) for efficient removal of radioactive cesium (137Cs) from contaminated water. Scanning electron microscopy and transmission electron microscopy show that cubic PB nanoparticles are decorated on the RGO surface. Owing to the combined benefits of RGOF and PB, the composite shows excellent removal efficiency (99.5%) of 137Cs from the contaminated water. The maximum adsorption capacity is calculated to be 18.67 mg/g. The adsorption isotherm is well fitted by the Langmuir model, with a linear regression correlation value of 0.97. This type of composite is believed to hold great promise for the clean-up of 137Cs from contaminated water around nuclear plants and/or after nuclear accidents. PMID:26670798
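
    A Langmuir fit of the kind reported above can be reproduced with non-linear least squares; the equilibrium concentrations and uptakes below are synthetic, with q_max chosen near the reported 18.67 mg/g:

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e).
def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Synthetic sorption data (mg/L vs mg/g); K_L = 0.4 L/mg is made up.
rng = np.random.default_rng(2)
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
qe = langmuir(Ce, 18.67, 0.4) + rng.normal(0.0, 0.2, Ce.size)

(qmax_hat, KL_hat), _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.1])
print(f"q_max = {qmax_hat:.2f} mg/g, K_L = {KL_hat:.3f} L/mg")
```

    The fitted q_max is the plateau uptake, i.e. the "maximum adsorption capacity" quoted in the abstract.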

  10. Theoretical Calculation of the Power Spectra of the Rolling and Yawing Moments on a Wing in Random Turbulence

    NASA Technical Reports Server (NTRS)

    Eggleston, John M; Diederich, Franklin W

    1957-01-01

    The correlation functions and power spectra of the rolling and yawing moments on an airplane wing due to the three components of continuous random turbulence are calculated. The rolling moments due to the longitudinal (horizontal) and normal (vertical) components depend on the spanwise distributions of instantaneous gust intensity, which are taken into account by using the inherent symmetry properties of isotropic turbulence. The results consist of expressions for the correlation functions or spectra of the rolling moment in terms of the point correlation functions of the two components of turbulence. Specific numerical calculations are made for a pair of correlation functions given by simple analytic expressions that fit available experimental data quite well. Calculations are made for four lift distributions. Comparison is made with the results of previous analyses that assumed random turbulence along the flight path and linear variations of gust velocity across the span.

  11. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. The independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if those parameter estimates are obtained by a least-squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
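
    The bias mechanism described here, attenuation of a regression slope when the independent variable is measured with error, is easy to demonstrate by simulation (slope, variances, and sample size below are arbitrary):

```python
import numpy as np

# Attenuation bias: regressing runoff on rainfall measured with error
# shrinks the slope toward zero by roughly var(x) / (var(x) + var(err)).
rng = np.random.default_rng(4)
n = 5000
rain_true = rng.normal(50.0, 10.0, n)                 # true precipitation
runoff = 2.0 * rain_true + rng.normal(0.0, 5.0, n)    # true slope = 2
rain_obs = rain_true + rng.normal(0.0, 10.0, n)       # measurement error

slope_obs = np.cov(rain_obs, runoff)[0, 1] / np.var(rain_obs, ddof=1)
print(f"slope with noisy input = {slope_obs:.2f} (true slope 2.0)")
# Expected attenuation here: 2 * 100 / (100 + 100) = 1.0
```

    With equal signal and error variances the fitted slope is cut roughly in half, which is the inflation/bias effect the abstract analyzes for precipitation-runoff models.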

  12. Rhenium-osmium concentration and isotope systematics in group IIAB iron meteorites

    USGS Publications Warehouse

    Morgan, J.W.; Horan, M.F.; Walker, R.J.; Grossman, J.N.

    1995-01-01

    Rhenium and osmium abundances, and osmium isotopic compositions were measured by negative thermal ionization mass spectrometry in thirty samples, including replicates, of five IIA and eight IIB iron meteorites. Log plots of Os vs. Re abundances for IIA and IIB irons describe straight lines that approximately converge on Lombard, which has the lowest Re and Os abundances and highest 187Re/188Os measured in a IIA iron to date. The linear IIA trend may be exactly reproduced by fractional crystallization, but is not well fitted using variable partition coefficients. The IIB iron trend, however, cannot be entirely explained by simple fractional crystallization. One explanation is that small amounts of Re and Os were added to the asteroid core during the final stages of crystallization. Another possibility is that diffusional enrichment of Os may have occurred in samples most depleted in Re and Os. -from Authors

  13. Surgery for left ventricular aneurysm: early and late survival after simple linear repair and endoventricular patch plasty.

    PubMed

    Lundblad, Runar; Abdelnoor, Michel; Svennevig, Jan Ludvig

    2004-09-01

    Simple linear resection and endoventricular patch plasty are alternative techniques to repair postinfarction left ventricular aneurysm. The aim of the study was to compare these 2 methods with regard to early mortality and long-term survival. We retrospectively reviewed 159 patients undergoing operations between 1989 and 2003. The epidemiologic design was of an exposed (simple linear repair, n = 74) versus nonexposed (endoventricular patch plasty, n = 85) cohort with 2 endpoints: early mortality and long-term survival. The crude effect of aneurysm repair technique versus endpoint was estimated by odds ratio, rate ratio, or relative risk and their 95% confidence intervals. Stratification analysis by using the Mantel-Haenszel method was done to quantify confounders and pinpoint effect modifiers. Adjustment for multiconfounders was performed by using logistic regression and Cox regression analysis. Survival curves were analyzed with the Breslow test and the log-rank test. Early mortality was 8.2% for all patients, 13.5% after linear repair and 3.5% after endoventricular patch plasty. When adjusted for multiconfounders, the risk of early mortality was significantly higher after simple linear repair than after endoventricular patch plasty (odds ratio, 4.4; 95% confidence interval, 1.1-17.8). Mean follow-up was 5.8 +/- 3.8 years (range, 0-14.0 years). Overall 5-year cumulative survival was 78%, 70.1% after linear repair and 91.4% after endoventricular patch plasty. The risk of total mortality was significantly higher after linear repair than after endoventricular patch plasty when controlled for multiconfounders (relative risk, 4.5; 95% confidence interval, 2.0-9.7). Linear repair dominated early in the series and patch plasty dominated later, giving a possible learning-curve bias in favor of patch plasty that could not be adjusted for in the regression analysis. Postinfarction left ventricular aneurysm can be repaired with satisfactory early and late results. 
Surgical risk was lower and long-term survival was higher after endoventricular patch plasty than simple linear repair. Differences in outcome should be interpreted with care because of the retrospective study design and the chronology of the 2 repair methods.

  14. Association between aerobic fitness and cerebrovascular function with neurocognitive functions in healthy, young adults.

    PubMed

    Hwang, Jungyun; Kim, Kiyoung; Brothers, R Matthew; Castelli, Darla M; Gonzalez-Lima, F

    2018-05-01

    Studies of the effects of physical activity on cognition suggest that aerobic fitness can improve cognitive abilities. However, the physiological mechanisms for the cognitive benefit of aerobic fitness are less well understood. We examined the association between aerobic fitness and cerebrovascular function with neurocognitive functions in healthy, young adults. Participants aged 18-29 years underwent measurements of cerebral vasomotor reactivity (CVMR) in response to rebreathing-induced hypercapnia, maximal oxygen uptake (VO2max) during cycle ergometry to voluntary exhaustion, and simple- and complex-neurocognitive assessments at rest. Ten subjects were identified as having low-aerobic fitness (LF < 15th fitness percentile), and twelve subjects were identified as having high-aerobic fitness (HF > 80th fitness percentile). There were no LF versus HF group differences in cerebrovascular hemodynamics during the baseline condition. Changes in middle cerebral artery blood velocity and CVMR during hypercapnia were elevated more in the HF than the LF group. Compared to the LF, the HF performed better on a complex-cognitive task assessing fluid reasoning, but not on simple attentional abilities. Statistical modeling showed that measures of VO2max, CVMR, and fluid reasoning were positively inter-correlated. The relationship between VO2max and fluid reasoning, however, did not appear to be reliably mediated by CVMR. In conclusion, a high capacity for maximal oxygen uptake among healthy, young adults was associated with greater CVMR and better fluid reasoning, implying that high-aerobic fitness may promote cerebrovascular and cognitive functioning abilities.

  15. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ɛ), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
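
    For reference, the classical computation the algorithm accelerates — returning the optimal parameters in classical form plus a fit-quality figure — is a standard least-squares solve. This sketch uses synthetic data; the condition number κ of the design matrix falls out of the same decomposition.

```python
import numpy as np

rng = np.random.default_rng(7)
N, d = 200, 3                       # data set size, number of adjustable parameters

A = rng.normal(size=(N, d))         # dense (nonsparse) design matrix
x_true = np.array([1.5, -2.0, 0.5])
b = A @ x_true + rng.normal(scale=0.1, size=N)

# Optimal parameters in classical form, as the algorithm outputs them.
x_fit, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

kappa = sv[0] / sv[-1]              # condition number of the design matrix
quality = 1 - residuals[0] / np.sum((b - b.mean())**2)  # R^2-style fit quality
```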

  16. Mixture Model for Determination of Shock Equation of State

    DTIC Science & Technology

    2012-07-25

    not considered in this paper. III. COMPARISON WITH EXPERIMENTAL DATA A. Two-constituent composites 1. Uranium-rhodium composite Uranium-rhodium (U...sound speed, Co, and S were determined from a linear least-squares fit to the available data [22] as shown in Figs. 1(a) and 1(b) for uranium and rhodium ...overpredicts the experimental data, with an average deviation, dUs/Us, of 0.05, shown in Fig. 2(b). The linear fits for uranium and rhodium are shown for

  17. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  18. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

    2014-08-01

    Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition speed and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and a cylinder is the most relevant geometry for a pin-hole relation as an assembly feature to construct a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization of the non-linear function fitting the three geometries. The results show that, with this combination, a higher quality of fit, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an ‘incomplete point cloud’, a situation where the point cloud does not cover a complete feature (e.g. only half of the total part surface), is also investigated. Finally, a case study of fitting a hemisphere is presented.
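
    The paper's chaos-optimization initialization is not reproduced here, but a common alternative way to seed an LM-style circle fit is an algebraic (Kåsa) fit, which reduces the non-linear problem to a single linear least-squares solve (synthetic, noise-free points):

```python
import numpy as np

# Sample points on a circle of centre (1.0, 2.0), radius 3.0 (synthetic data).
t = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
x = 1.0 + 3.0 * np.cos(t)
y = 2.0 + 3.0 * np.sin(t)

# Kasa fit: x^2 + y^2 = 2*a*x + 2*b*y + c is linear in (a, b, c),
# where (a, b) is the centre and c = r^2 - a^2 - b^2.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
rhs = x**2 + y**2
(a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
r = np.sqrt(c + a**2 + b**2)      # centre (a, b), radius r
```

    The result can be handed to an iterative non-linear solver as its initial point guess; with noisy data the algebraic solution is biased but usually well inside the LM basin of convergence.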

  19. Modeling of boldine alkaloid adsorption onto pure and propyl-sulfonic acid-modified mesoporous silicas. A comparative study.

    PubMed

    Geszke-Moritz, Małgorzata; Moritz, Michał

    2016-12-01

    The present study deals with the adsorption of boldine onto pure and propyl-sulfonic acid-functionalized SBA-15, SBA-16 and mesocellular foam (MCF) materials. Siliceous adsorbents were characterized by nitrogen sorption analysis, transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier-transform infrared (FT-IR) spectroscopy and thermogravimetric analysis. The equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Redlich-Peterson, and Temkin isotherms. Moreover, the Dubinin-Radushkevich and Dubinin-Astakhov isotherm models based on the Polanyi adsorption potential were employed. The latter was calculated using two alternative formulas: the solubility-normalized S-model and the empirical C-model. In order to find the best-fit isotherm, both linear regression and nonlinear fitting analysis were carried out. The Dubinin-Astakhov (S-model) isotherm revealed the best fit to the experimental points for adsorption of boldine onto pure mesoporous materials using both linear and nonlinear fitting analysis. Meanwhile, the process of boldine sorption onto modified silicas was best described by the Langmuir and Temkin isotherms using linear regression and nonlinear fitting analysis, respectively. The values of adsorption energy (below 8 kJ/mol) indicate the physical nature of boldine adsorption onto unmodified silicas, whereas ionic interactions seem to be the main force of alkaloid adsorption onto functionalized sorbents (energy of adsorption above 8 kJ/mol). Copyright © 2016 Elsevier B.V. All rights reserved.
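
    As a sketch of the linear-regression route to isotherm parameters (with assumed, illustrative constants rather than the boldine data), the linearised Langmuir form Ce/qe = Ce/qmax + 1/(KL·qmax) turns the fit into a straight line:

```python
import numpy as np

# Synthetic equilibrium data from a Langmuir isotherm
# qe = qmax * KL * Ce / (1 + KL * Ce), with assumed qmax, KL.
qmax_true, KL_true = 120.0, 0.05                       # mg/g, L/mg (illustrative)
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # mg/L
qe = qmax_true * KL_true * Ce / (1 + KL_true * Ce)

# Linearised Langmuir: regress Ce/qe on Ce.
# slope = 1/qmax, intercept = 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit = 1.0 / slope
KL_fit = slope / intercept
```

    With real, noisy data the linearised and nonlinear fits weight errors differently, which is exactly why the abstract compares both routes.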

  20. SU-F-T-130: [18F]-FDG Uptake Dose Response in Lung Correlates Linearly with Proton Therapy Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, D; Titt, U; Mirkovic, D

    2016-06-15

    Purpose: Analysis of clinical outcomes in lung cancer patients treated with protons using 18F-FDG uptake in lung as a measure of dose response. Methods: A test case lung cancer patient was selected in an unbiased way. The test patient’s treatment planning and post-treatment positron emission tomography (PET) were collected from the picture archiving and communication system at the UT M.D. Anderson Cancer Center. The average computerized tomography scan was registered with the post-treatment PET/CT through both rigid and deformable registrations for a selected region of interest (ROI) via VelocityAI imaging informatics software. For the voxels in the ROI, a system that extracts the Standard Uptake Value (SUV) from PET was developed, and the corresponding relative biological effectiveness (RBE) weighted (both variable and constant) dose was computed using Monte Carlo (MC) methods. The treatment planning system (TPS) dose was also obtained. Using histogram analysis, the voxel-average normalized SUV vs. 3 different doses was obtained and a linear regression fit was performed. Results: From the registration process, there were some regions that showed significant artifacts near the diaphragm and heart region, which yielded poor r-squared values when the linear regression fit was performed on normalized SUV vs. dose. Excluding these values, the TPS fit yielded a mean r-squared value of 0.79 (range 0.61–0.95), the constant RBE fit yielded 0.79 (range 0.52–0.94), and the variable RBE fit yielded 0.80 (range 0.52–0.94). Conclusion: A system that extracts SUV from PET to correlate between normalized SUV and various dose calculations was developed. A linear relation between normalized SUV and all three different doses was found.

  1. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
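
    The practical difference between the two scales can be seen by comparing how many samples each grid places in the low-frequency decades, where much of the dispersion of biological tissue lives (endpoints and point count below are illustrative):

```python
import numpy as np

f_lo, f_hi, n = 1e6, 20e9, 101      # 1 MHz to 20 GHz, 101 points (illustrative)

f_linear = np.linspace(f_lo, f_hi, n)
f_log = np.logspace(np.log10(f_lo), np.log10(f_hi), n)

# Points available below 1 GHz on each grid:
n_lin_low = int(np.sum(f_linear < 1e9))
n_log_low = int(np.sum(f_log < 1e9))
```

    The linear grid spends almost all of its points in the top decade, so a least-squares Cole-Cole fit on it is dominated by high-frequency residuals; the logarithmic grid distributes points evenly per decade.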

  2. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  3. Assessment of Poisson, probit and linear models for genetic analysis of presence and number of black spots in Corriedale sheep.

    PubMed

    Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D

    2011-04-01

    Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted slightly better to the data than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.
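
    Why the Poisson model needs a residual term for overdispersion can be sketched by simulating counts whose variance exceeds the Poisson mean — here via a gamma-mixed Poisson, an assumption chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Plain Poisson counts: variance equals the mean.
lam = 2.0
poisson_counts = rng.poisson(lam, n)

# Overdispersed counts: a gamma-distributed rate per animal
# (e.g. an unobserved environmental effect) keeps the mean at lam
# but inflates the variance to lam + lam**2 / shape.
rates = rng.gamma(shape=2.0, scale=lam / 2.0, size=n)
overdispersed = rng.poisson(rates)
```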

  4. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-05-13

    Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.

  5. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.

  6. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. The method has been successfully demonstrated on the NSLS-II storage ring.

  7. An Inquiry-Based Linear Algebra Class

    ERIC Educational Resources Information Center

    Wang, Haohao; Posey, Lisa

    2011-01-01

    Linear algebra is a standard undergraduate mathematics course. This paper presents an overview of the design and implementation of an inquiry-based teaching material for the linear algebra course which emphasizes discovery learning, analytical thinking and individual creativity. The inquiry-based teaching material is designed to fit the needs of a…

  8. High-reliability release mechanism

    NASA Technical Reports Server (NTRS)

    Paradise, J. J.

    1971-01-01

    Release mechanism employing simple clevis fitting in combination with two pin-pullers achieves high reliability degree through active mechanical redundancy. Mechanism releases solar arrays. It is simple and inexpensive and performs effectively. It adapts to other release-system applications with variety of pin-puller devices.

  9. Anthropometric measures as fitness indicators in primary school children: The Health Oriented Pedagogical Project (HOPP).

    PubMed

    Mamen, Asgeir; Fredriksen, Per Morten

    2018-05-01

    As children's fitness continues to decline, frequent and systematic monitoring of fitness is important. Easy-to-use and low-cost methods with acceptable accuracy are essential in screening situations. This study aimed to investigate how the measurements of body mass index (BMI), waist circumference (WC) and waist-to-height ratio (WHtR) relate to selected measurements of fitness in children. A total of 1731 children from grades 1 to 6 were selected who had a complete set of height, body mass, running performance, handgrip strength and muscle mass measurements. A composite fitness score was established from the sum of sex- and age-specific z-scores for the variables running performance, handgrip strength and muscle mass. This fitness z-score was compared to z-scores and quartiles of BMI, WC and WHtR using analysis of variance, linear regression and receiver operator characteristic analysis. The regression analysis showed that z-scores for BMI, WC and WHtR were all linearly related to the composite fitness score, with WHtR having the highest R² at 0.80. The correct classification of fit and unfit was relatively high for all three measurements. WHtR had the best prediction of fitness of the three, with an area under the curve of 0.92 (p < 0.001). BMI, WC and WHtR were all found to be feasible measurements, but WHtR had a higher precision in its classification into fit and unfit in this population.
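
    A composite z-score of the kind used here can be sketched as follows (synthetic measurements for a single sex/age stratum; units and the unfit threshold are illustrative, not the HOPP protocol):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative measurements for one sex/age stratum:
run = rng.normal(1000.0, 150.0, 200)    # running performance (m)
grip = rng.normal(15.0, 3.0, 200)       # handgrip strength (kg)
muscle = rng.normal(18.0, 2.0, 200)     # muscle mass (kg)

def z(x):
    """Stratum-specific z-score."""
    return (x - x.mean()) / x.std(ddof=0)

# Composite fitness score: sum of z-scores of the three variables.
fitness = z(run) + z(grip) + z(muscle)

# Screening threshold (illustrative): flag the bottom quartile as "unfit".
unfit = fitness < np.quantile(fitness, 0.25)
```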

  10. SU-G-TeP1-02: Analytical Stopping Power and Range Parameterization for Therapeutic Energy Intervals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donahue, W; Newhauser, W; Mary Bird Perkins Cancer Center, Baton Rouge, LA

    Purpose: To develop a simple, analytic parameterization of stopping power and range, which covers a wide energy interval and is applicable to many species of projectile ions and target materials, with less than 15% disagreement in linear stopping power and 1 mm in range. Methods: The new parameterization was required to be analytically integrable from stopping power to range, and continuous across the range interval of 1 µm to 50 cm. The model parameters were determined from stopping power and range data for hydrogen, carbon, iron, and uranium ions incident on water, carbon, aluminum, lead and copper. Stopping power and range data were taken from SRIM. A stochastic minimization algorithm was used to find model parameters, with 10 data points per energy decade. Additionally, fitting was performed with 2 and 26 data points per energy decade to test the model’s robustness to sparse data. Results: Six free parameters were sufficient to cover the therapeutic energy range for each projectile ion species (e.g. 1 keV – 300 MeV for protons). The model agrees with stopping power and range data well, with less than 9% relative stopping power difference and 0.5 mm difference in range. As few as 4 bins per decade were required to achieve comparable fitting results to the full data set. Conclusion: This study successfully demonstrated that a simple analytic function can be used to fit the entire energy interval of therapeutic ion beams of hydrogen and heavier elements. Advantages of this model were the small number (6) of free parameters, and that the model calculates both stopping power and range. Applications of this model include GPU-based dose calculation algorithms and Monte Carlo simulations, where available memory is limited. This work was supported in part by a research agreement between United States Naval Academy and Louisiana State University: Contract No N00189-13-P-0786. 
In addition, this work was accepted for presentation at the American Nuclear Society Annual Meeting 2016.

  11. Fitting a Point Cloud to a 3d Polyhedral Surface

    NASA Astrophysics Data System (ADS)

    Popov, E. V.; Rotkov, S. I.

    2017-05-01

    The ability to measure parameters of large-scale objects in a contactless fashion has a tremendous potential in a number of industrial applications. However, this problem is usually associated with an ambiguous task to compare two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and Stretched grid method (SGM) to substitute a non-linear problem solution with several linear steps. The squared distance (SD) is a general criterion to control the process of convergence of a set of points to a target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the geometry remote measurement of large-scale objects in a contactless fashion.
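
    The PCA step of such a pipeline — aligning an arbitrarily oriented point cloud with its principal axes before any fitting — can be sketched with a 2-D synthetic cloud (the SGM stage is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic elongated point cloud, rotated and translated: a stand-in
# for a scan expressed in an arbitrary coordinate system.
pts = rng.normal(size=(500, 2)) * np.array([5.0, 0.5])   # long axis along x
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
cloud = pts @ R.T + np.array([10.0, -3.0])

# PCA: centre the cloud, take the principal axes from the SVD,
# and project onto them so the axes coincide with x and y.
centred = cloud - cloud.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
aligned = centred @ Vt.T

cov = np.cov(aligned.T)   # diagonal after alignment, largest variance first
```

    After this linear step the two data sets share a common frame, so the remaining comparison against the target surface can proceed with the squared-distance criterion.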

  12. Separation of detector non-linearity issues and multiple ionization satellites in alpha-particle PIXE

    NASA Astrophysics Data System (ADS)

    Campbell, John L.; Ganly, Brianna; Heirwegh, Christopher M.; Maxwell, John A.

    2018-01-01

    Multiple ionization satellites are prominent features in X-ray spectra induced by MeV energy alpha particles. It follows that the accuracy of PIXE analysis using alpha particles can be improved if these features are explicitly incorporated in the peak model description when fitting the spectra with GUPIX or other codes for least-squares fitting PIXE spectra and extracting element concentrations. A method for this incorporation is described and is tested using spectra recorded on Mars by the Curiosity rover's alpha particle X-ray spectrometer. These spectra are induced by both PIXE and X-ray fluorescence, resulting in a spectral energy range from ∼1 to ∼25 keV. This range is valuable in determining the energy-channel calibration, which departs from linearity at low X-ray energies. It makes it possible to separate the effects of the satellites from an instrumental non-linearity component. The quality of least-squares spectrum fits is significantly improved, raising the level of confidence in analytical results from alpha-induced PIXE.

  13. Symmetric co-movement between Malaysia and Japan stock markets

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2017-04-01

    The copula approach is a flexible tool known to capture linear, nonlinear, symmetric and asymmetric dependence between two or more random variables. It is often used as a co-movement measure between stock market returns. The information obtained from copulas, such as the level of association of financial markets during normal, bullish and bearish market phases, is useful for investment strategies and risk management. However, studies of co-movement between the Malaysia and Japan markets are limited, especially using copulas. Hence, we aim to investigate the dependence structure between the Malaysia and Japan capital markets for the period spanning from 2000 to 2012. In this study, we showed that the bivariate normal distribution is not suitable to represent the dependence between the Malaysia and Japan markets. Instead, the Gaussian (normal) copula was found to be a good fit to represent the dependence. From our findings, it can be concluded that simple distribution fitting such as the bivariate normal distribution does not suit financial time series data, whose characteristics are often leptokurtic. The nature of the data is treated by ARMA-GARCH with heavy-tail distributions, and these can be associated with copula functions. Regarding the dependence structure between the Malaysia and Japan markets, the findings suggest that both markets co-move concurrently during normal periods.

  14. Methodical fitting for mathematical models of rubber-like materials

    NASA Astrophysics Data System (ADS)

    Destrade, Michel; Saccomandi, Giuseppe; Sgura, Ivonne

    2017-02-01

    A great variety of models can describe the nonlinear response of rubber to uniaxial tension. Yet an in-depth understanding of the successive stages of large extension is still lacking. We show that the response can be broken down into three stages, which we delineate by relying on a simple formatting of the data, the so-called Mooney plot transform. First, the small-to-moderate regime, where the polymeric chains unfold easily and the Mooney plot is almost linear. Second, the strain-hardening regime, where blobs of bundled chains unfold to stiffen the response in correspondence to the `upturn' of the Mooney plot. Third, the limiting-chain regime, with a sharp stiffening occurring as the chains extend towards their limit. We provide strain-energy functions with terms accounting for each stage that (i) give an accurate local and then global fitting of the data; (ii) are consistent with weak nonlinear elasticity theory and (iii) can be interpreted in the framework of statistical mechanics. We apply our method to Treloar's classical experimental data and also to some more recent data. Our method not only provides models that describe the experimental data with a very low quantitative relative error, but also shows that the theory of nonlinear elasticity is much more robust than it seemed at first sight.
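
    The Mooney plot transform mentioned above can be illustrated with synthetic Mooney-Rivlin data (assumed constants, not Treloar's measurements): plotting the reduced stress against 1/λ turns the uniaxial response into a straight line whose slope and intercept recover the material constants.

```python
import numpy as np

# Uniaxial nominal stress for a Mooney-Rivlin solid (illustrative constants):
# sigma = 2 * (C1 + C2 / lam) * (lam - lam**-2)
C1, C2 = 0.2, 0.05                   # MPa, assumed values
lam = np.linspace(1.2, 6.0, 40)      # stretch ratio
sigma = 2 * (C1 + C2 / lam) * (lam - lam**-2)

# Mooney plot transform: reduced stress f* = sigma / (lam - lam**-2)
# versus 1/lam is a straight line with slope 2*C2 and intercept 2*C1.
reduced = sigma / (lam - lam**-2)
slope, intercept = np.polyfit(1.0 / lam, reduced, 1)
```

    Departures of real data from this straight line are precisely what mark the strain-hardening and limiting-chain stages the abstract describes.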

  15. Mining The Sdss-moc Database For Main-belt Asteroid Solar Phase Behavior.

    NASA Astrophysics Data System (ADS)

    Truong, Thien-Tin; Hicks, M. D.

    2010-10-01

    The 4th Release of the Sloan Digital Sky Survey Moving Object Catalog (SDSS-MOC) contains 471569 moving object detections from 519 observing runs obtained up to March 2007. Of these, 220101 observations were linked with 104449 known small bodies, with 2150 asteroids sampled at least 10 times. It is our goal to mine this database in order to extract solar phase curve information for a large number of main-belt asteroids of different dynamical and taxonomic classes. We found that a simple linear phase curve fit allowed us to reject data contaminated by intrinsic rotational lightcurves and other effects. As expected, a running mean of solar phase coefficient is strongly correlated with orbital elements, with the inner main-belt dominated by bright S-type asteroids and transitioning to darker C and D-type asteroids with steeper solar phase slopes. We shall fit the empirical H-G model to our 2150 multi-sampled asteroids and correlate these parameters with spectral type derived from the SDSS colors and position within the asteroid belt. Our data should also allow us to constrain solar phase reddening for a variety of taxonomic classes. We shall discuss errors induced by the standard "g=0.15" assumption made in absolute magnitude determination, which may slightly affect number-size distribution models.

  16. Analysis of clinically important factors on the performance of advanced hydraulic, microprocessor-controlled exo-prosthetic knee joints based on 899 trial fittings

    PubMed Central

    Hahn, Andreas; Lang, Michael; Stuckart, Claudia

    2016-01-01

    The objective of this work is to evaluate whether clinically important factors may predict an individual's capability to utilize the functional benefits provided by an advanced hydraulic, microprocessor-controlled exo-prosthetic knee component. This retrospective cross-sectional cohort analysis investigated the data of above-knee amputees captured during routine trial fittings. Prosthetists rated the performance indicators showing the functional benefits of the advanced maneuvering capabilities of the device. Subjects were asked to rate their perception. Simple and multiple linear and logistic regressions were applied. Data from 899 subjects with demographics typical for the population were evaluated. Ability to vary gait speed, perform toileting, and ascend stairs were identified as the most sensitive performance predictors. Prior C-Leg users showed benefits during advanced maneuvering. Variables showed plausible and meaningful effects but could not claim predictive power. Mobility grade showed the largest effect but also failed to be predictive. Clinical parameters such as etiology, age, mobility grade, and others analyzed here do not suffice to predict individual potential. Daily walking distance may pose a threshold value and be part of a predictive instrument. Decisions based solely on single parameters such as mobility grade rating or walking distance seem to be questionable. PMID:27828871

  17. Analysis of clinically important factors on the performance of advanced hydraulic, microprocessor-controlled exo-prosthetic knee joints based on 899 trial fittings.

    PubMed

    Hahn, Andreas; Lang, Michael; Stuckart, Claudia

    2016-11-01

    The objective of this work is to evaluate whether clinically important factors may predict an individual's capability to utilize the functional benefits provided by an advanced hydraulic, microprocessor-controlled exo-prosthetic knee component. This retrospective cross-sectional cohort analysis investigated the data of above-knee amputees captured during routine trial fittings. Prosthetists rated the performance indicators showing the functional benefits of the advanced maneuvering capabilities of the device. Subjects were asked to rate their perception. Simple and multiple linear and logistic regressions were applied. Data from 899 subjects with demographics typical for the population were evaluated. Ability to vary gait speed, perform toileting, and ascend stairs were identified as the most sensitive performance predictors. Prior C-Leg users showed benefits during advanced maneuvering. Variables showed plausible and meaningful effects but could not claim predictive power. Mobility grade showed the largest effect but also failed to be predictive. Clinical parameters such as etiology, age, mobility grade, and others analyzed here do not suffice to predict individual potential. Daily walking distance may pose a threshold value and be part of a predictive instrument. Decisions based solely on single parameters such as mobility grade rating or walking distance seem to be questionable.

  18. The European Southern Observatory-MIDAS table file system

    NASA Technical Reports Server (NTRS)

    Peron, M.; Grosbol, P.

    1992-01-01

    The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is also to provide an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least-squares methods and user-defined functions are available. Finally, statistical hypothesis tests and multivariate methods can also operate on tables.

  19. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925

  20. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have a potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result provides confirmation that the constructed mathematical model satisfactorily agrees with the time-series datasets of seven metabolite concentrations.

  1. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.

  2. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.

  3. Action Centered Contextual Bandits.

    PubMed

    Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan

    2017-12-01

    Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.

  4. The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.

    PubMed

    Nevill, Alan M; Cooke, Carlton B

    2017-05-01

    This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg·min) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg·min) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg·min) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values nor age. This will probably explain the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg·min) using plausible, biologically sound, and interpretable models.
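    The linear-versus-allometric comparison can be illustrated with a toy fit: data generated from an exact power law are fitted both additively and in log-log space. This is a schematic of the model comparison only; the exponent, sample values, and two-parameter allometric form are assumptions, not the published models (which also include age and body-composition terms).

```python
import math

def ols(xs, ys):
    """Least-squares line y = a + b*x; returns (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic outcome data following an exact power law y = 2 * x**0.75
xs = [40.0, 55.0, 70.0, 85.0, 100.0]
ys = [2.0 * x ** 0.75 for x in xs]

# Additive (linear) model: y = a + b*x
_, _, r2_lin = ols(xs, ys)
# Allometric model: y = a * x**b, fitted as log y = log a + b*log x
_, b_allo, r2_allo = ols([math.log(x) for x in xs],
                         [math.log(y) for y in ys])
```

    On these data the log-log (allometric) fit recovers the exponent exactly and attains a higher R² than the additive line, mirroring the goodness-of-fit criterion used in the abstract.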

  5. Predictability of a Coupled Model of ENSO Using Singular Vector Analysis: Optimal Growth and Forecast Skill.

    NASA Astrophysics Data System (ADS)

    Xue, Yan

    The optimal growth and its relationship with the forecast skill of the Zebiak and Cane model are studied using a simple statistical model best fit to the original nonlinear model and local linear tangent models about idealized climatic states (the mean background and ENSO cycles in a long model run), and the actual forecast states, including two sets of runs using two different initialization procedures. The seasonally varying Markov model best fit to a suite of 3-year forecasts in a reduced EOF space (18 EOFs) fits the original nonlinear model reasonably well and has comparable or better forecast skill. The initial error growth in a linear evolution operator A is governed by the eigenvalues of A^{T}A, and the square roots of eigenvalues and eigenvectors of A^{T}A are named singular values and singular vectors. One dominant growing singular vector is found, and the optimal 6 month growth rate is largest for a (boreal) spring start and smallest for a fall start. Most of the variation in the optimal growth rate of the two forecasts is seasonal, attributable to the seasonal variations in the mean background, except that in the cold events it is substantially suppressed. It is found that the mean background (zero anomaly) is the most unstable state, and the "forecast IC states" are more unstable than the "coupled model states". One dominant growing singular vector is found, characterized by north-south and east -west dipoles, convergent winds on the equator in the eastern Pacific and a deepened thermocline in the whole equatorial belt. This singular vector is insensitive to initial time and optimization time, but its final pattern is a strong function of initial states. The ENSO system is inherently unpredictable for the dominant singular vector can amplify 5-fold to 24-fold in 6 months and evolve into the large scales characteristic of ENSO. 
However, the inherent ENSO predictability is only a secondary factor; the mismatches between the model and data are the primary factor controlling the current forecast skill.
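    The optimal growth described here is set by the largest singular value of the tangent-linear propagator A, obtained from the eigenvalues of A^T A. A minimal pure-Python sketch for a 2x2 case (the matrix is an illustrative non-normal operator, not the Zebiak-Cane model):

```python
import math

def singular_values_2x2(A):
    """Singular values of a 2x2 linear propagator A: the square roots of the
    eigenvalues of the symmetric matrix M = A^T A."""
    (a, b), (c, d) = A
    p = a * a + c * c          # M[0][0]
    q = a * b + c * d          # M[0][1] = M[1][0]
    r = b * b + d * d          # M[1][1]
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return math.sqrt(lam1), math.sqrt(max(lam2, 0.0))

# A non-normal propagator: both eigenvalues equal 1, yet some initial
# errors are strongly amplified in a single step.
A = [[1.0, 5.0],
     [0.0, 1.0]]
s_max, s_min = singular_values_2x2(A)
```

    Although the example matrix has both eigenvalues equal to 1, its largest singular value exceeds 5, so a suitably chosen initial error can grow more than fivefold in one step; this transient, non-normal growth is exactly what the singular-vector analysis quantifies.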

  6. A Combined SRTM Digital Elevation Model for Zanjan State of Iran Based on the Corrective Surface Idea

    NASA Astrophysics Data System (ADS)

    Kiamehr, Ramin

    2016-04-01

    A one-arc-second high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used by geoscientists across many disciplines and applications. They are essential input in the geoid computation procedure, e.g., to determine the topographic, downward-continuation (DWC), and atmospheric corrections. They are also used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface and is subject to errors, the most important of which may come from bias in the height datum. Moreover, DEM accuracy is usually published only in a global sense, so it is important to estimate the accuracy in the area of interest before use. One of the best ways to obtain a reasonable indication of DEM accuracy is to compare its heights with precise national GPS/levelling data, determining the Root-Mean-Square (RMS) of the fit between the DEM and levelling heights. The errors in the DEM can be approximated by different kinds of functions in order to fit the DEM to a set of GPS/levelling data using least-squares adjustment. In the current study, several models, ranging from a simple linear regression to a seven-parameter similarity transformation, are used in the fitting procedure. The seven-parameter model gives the best fit, with minimum standard deviation, for all selected DEMs in the study area. Based on 35 precise GPS/levelling points, we obtain an RMS of 5.5 m for the seven-parameter fit to the SRTM DEM. A corrective surface is generated from the transformation parameters and added to the original SRTM model. The fit of the combined model is then assessed again with independent GPS/levelling data. The result shows a large improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
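    The simplest member of the model family mentioned above, a corrective surface fitted by linear regression to the DEM-minus-GPS/levelling differences, can be sketched as follows. The control-point coordinates are fabricated for illustration, and the paper's preferred seven-parameter similarity transformation adds scale and rotation terms beyond this plane.

```python
def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] + [rhs] for row, rhs in zip(M, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def fit_corrective_plane(pts):
    """Least-squares plane dh = a + b*phi + c*lam fitted to DEM-minus-
    GPS/levelling height differences at control points
    pts = [(phi, lam, dh), ...], via the normal equations."""
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for phi, lam, dh in pts:
        row = (1.0, phi, lam)
        for i in range(3):
            v[i] += row[i] * dh
            for j in range(3):
                M[i][j] += row[i] * row[j]
    return solve3(M, v)
```

    The fitted plane is then evaluated at every DEM cell and added to the original heights to form the combined model.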

  7. Improving the Depth-Time Fit of Holocene Climate Proxy Measures by Increasing Coherence with a Reference Time-Series

    NASA Astrophysics Data System (ADS)

    Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.

    2007-12-01

    An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least-squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster presents a new method of tuning a time series when only a modest number of 14C dates are available. The method uses multitaper spectral estimation, specifically a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving and serving to confirm the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.

  8. A Simple Model for Fine Structure Transitions in Alkali-Metal Noble-Gas Collisions

    DTIC Science & Technology

    2015-03-01

    (List-of-figures excerpt) Figures 33-35: Effect of scaling the VRG(R) radial coupling fit parameter, V0, for KHe, KNe, and KAr; for RbHe, RbNe, and RbAr; and for CsHe, CsNe, and CsAr.

  9. The Exoplanet Simple Orbit Fitting Toolbox (ExoSOFT): An Open-source Tool for Efficient Fitting of Astrometric and Radial Velocity Data

    NASA Astrophysics Data System (ADS)

    Mede, Kyle; Brandt, Timothy D.

    2017-03-01

    We present the Exoplanet Simple Orbit Fitting Toolbox (ExoSOFT), a new, open-source suite to fit the orbital elements of planetary or stellar-mass companions to any combination of radial velocity and astrometric data. To explore the parameter space of Keplerian models, ExoSOFT may be operated with its own multistage sampling approach or interfaced with third-party tools such as emcee. In addition, ExoSOFT is packaged with a collection of post-processing tools to analyze and summarize the results. Although only a few systems have been observed with both radial velocity and direct imaging techniques, this number will increase, thanks to upcoming spacecraft and ground-based surveys. Providing both forms of data enables simultaneous fitting that can help break degeneracies in the orbital elements that arise when only one data type is available. The dynamical mass estimates this approach can produce are important when investigating the formation mechanisms and subsequent evolution of substellar companions. ExoSOFT was verified through fitting to artificial data and was implemented using the Python and Cython programming languages; it is available for public download at https://github.com/kylemede/ExoSOFT under GNU General Public License v3.

  10. SU-G-201-15: Nomogram as an Efficient Dosimetric Verification Tool in HDR Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, J; Todor, D

    Purpose: Nomogram as a simple QA tool for HDR prostate brachytherapy treatment planning has been developed and validated clinically. Reproducibility including patient-to-patient and physician-to-physician variability was assessed. Methods: The study was performed on HDR prostate implants from physician A (n=34) and B (n=15) using different implant techniques and planning methodologies. A nomogram was implemented as an independent QA of computer-based treatment planning before plan execution. Normalized implant strength (total air kerma strength Sk*t in cGy·cm² divided by prescribed dose in cGy) was plotted as a function of PTV volume and total V100. A quadratic equation was used to fit the data, with R² denoting the model predictive power. Results: All plans showed good target coverage while OARs met the dose constraint guidelines. Vastly different implant and planning styles were reflected in the conformity index (entire dose matrix V100/PTV volume, physician A implants: 1.27±0.14, physician B: 1.47±0.17) and the PTV V150/PTV volume ratio (physician A: 0.34±0.09, physician B: 0.24±0.07). The quadratic model provided a better fit for the curved relationship between normalized implant strength and total V100 (or PTV volume) than a simple linear function. Unlike the normalized implant strength versus PTV volume nomogram, which differed between physicians, a unique quadratic nomogram (Sk*t)/D = −0.0008V² + 0.0542V + 1.1185 (R² = 0.9977) described the dependence of normalized implant strength on total V100 over all the patients from both physicians despite two different implant and planning philosophies. The normalized implant strength versus total V100 model also generated fewer deviant points distorting the smoothed curve, with a significantly higher correlation. Conclusion: A simple and universal, Excel-based nomogram was created as an independent calculation tool for HDR prostate brachytherapy. Unlike similar attempts, our nomogram is insensitive to implant style and does not rely on reproducing dose calculations using the TG-43 formalism, thus making it a truly independent check.
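    The nomogram reduces to a one-variable quadratic least-squares fit. In the pure-Python sketch below, the sample volumes are invented for illustration, and the synthetic strengths are generated from the quadratic reported in the abstract so the fit can be checked against known coefficients.

```python
def fit_quadratic(xs, ys):
    """Least-squares quadratic y = c0 + c1*x + c2*x**2 via the 3x3 normal
    equations, solved by Gaussian elimination with partial pivoting."""
    S = [sum(x ** k for x in xs) for k in range(5)]               # power sums
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[i + j] for j in range(3)] + [T[i]] for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
    c = [0.0] * 3
    for i in (2, 1, 0):
        c[i] = (A[i][3] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c  # [c0, c1, c2]

# Synthetic points generated from the abstract's published nomogram
# (Sk*t)/D = 1.1185 + 0.0542*V - 0.0008*V**2 (illustrative volumes, cm^3)
vols = [20.0, 35.0, 50.0, 65.0, 80.0]
strength = [1.1185 + 0.0542 * v - 0.0008 * v * v for v in vols]
c0, c1, c2 = fit_quadratic(vols, strength)
```

    Recovering the generating coefficients from the synthetic points is a quick self-check of the normal-equations solver before applying it to clinical data.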

  11. Quantitative analysis of Ni2+/Ni3+ in Li[NixMnyCoz]O2 cathode materials: Non-linear least-squares fitting of XPS spectra

    NASA Astrophysics Data System (ADS)

    Fu, Zewei; Hu, Juntao; Hu, Wenlong; Yang, Shiyu; Luo, Yunfeng

    2018-05-01

    Quantitative analysis of Ni2+/Ni3+ using X-ray photoelectron spectroscopy (XPS) is important for evaluating the crystal structure and electrochemical performance of lithium-nickel-cobalt-manganese oxide (Li[NixMnyCoz]O2, NMC). However, quantitative analysis based on Gaussian/Lorentzian (G/L) peak fitting suffers from the challenges of reproducibility and effectiveness. In this study, the Ni2+ and Ni3+ standard samples and a series of NMC samples with different Ni doping levels were synthesized. The Ni2+/Ni3+ ratios in NMC were quantitatively analyzed by non-linear least-squares fitting (NLLSF). Two Ni 2p overall spectra of synthesized Li[Ni0.33Mn0.33Co0.33]O2 (NMC111) and bulk LiNiO2 were used as the Ni2+ and Ni3+ reference standards. Compared to G/L peak fitting, the fitting parameters required no adjustment, meaning that the spectral fitting process was free from operator dependence and the reproducibility was improved. Comparison of residual standard deviation (STD) showed that the fitting quality of NLLSF was superior to that of G/L peak fitting. Overall, these findings confirmed the reproducibility and effectiveness of the NLLSF method in XPS quantitative analysis of the Ni2+/Ni3+ ratio in Li[NixMnyCoz]O2 cathode materials.
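    The core of NLLSF, expressing a measured Ni 2p spectrum as a least-squares combination of the two reference standards (NMC111 for Ni2+, LiNiO2 for Ni3+), can be sketched with a closed-form two-parameter solve. The spectra below are short illustrative vectors, not real XPS data.

```python
def unmix_two(ref_a, ref_b, measured):
    """Least-squares weights (wa, wb) such that
    measured ~= wa*ref_a + wb*ref_b, from the 2x2 normal equations
    of a two-column design matrix."""
    saa = sum(a * a for a in ref_a)
    sbb = sum(b * b for b in ref_b)
    sab = sum(a * b for a, b in zip(ref_a, ref_b))
    say = sum(a * y for a, y in zip(ref_a, measured))
    sby = sum(b * y for b, y in zip(ref_b, measured))
    det = saa * sbb - sab * sab
    wa = (say * sbb - sby * sab) / det
    wb = (sby * saa - say * sab) / det
    return wa, wb
```

    The Ni2+ fraction is then wa / (wa + wb); in a real analysis a background subtraction and energy alignment step would precede the fit.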

  12. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    NASA Astrophysics Data System (ADS)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi analytical holonomic homogenization approach for the non-linear analysis of masonry walls in-plane loaded is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how homogenized stress-strain behavior can be evaluated semi-analytically.

  13. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
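    A fixed-effects skeleton of the broken-line linear (BLL) ascending model can be written as a profile fit: for each candidate breakpoint, the remaining two parameters are ordinary least squares. This sketch omits the article's mixed-model random effects and heteroskedastic variances; the G:F values and grid are synthetic, with the breakpoint placed at 16.5 to echo the reported estimate.

```python
def bll_fit(xs, ys, grid):
    """Broken-line linear (ascending) fit y = a + b*max(c - x, 0): a linear
    rise up to breakpoint c (b < 0 on the transformed predictor), plateau a
    beyond. The breakpoint is profiled over a grid; slope and plateau are
    closed-form least squares at each candidate."""
    best = None
    for c in grid:
        zs = [max(c - x, 0.0) for x in xs]
        n = len(xs)
        mz, my = sum(zs) / n, sum(ys) / n
        szz = sum((z - mz) ** 2 for z in zs)
        if szz == 0.0:
            continue  # breakpoint below all data: slope not identifiable
        b = sum((z - mz) * (y - my) for z, y in zip(zs, ys)) / szz
        a = my - b * mz
        sse = sum((y - (a + b * z)) ** 2 for z, y in zip(zs, ys))
        if best is None or sse < best[0]:
            best = (sse, a, b, c)
    return best[1:]  # (plateau, slope, breakpoint)

# Synthetic G:F data rising to a plateau at a SID Trp:Lys ratio of 16.5%
xs = [14.0, 15.0, 16.0, 16.5, 17.0, 18.0]
ys = [0.70 - 0.02 * max(16.5 - x, 0.0) for x in xs]
plateau, slope, bp = bll_fit(xs, ys, [15.5, 16.0, 16.5, 17.0, 17.5])
```

    In practice the breakpoint grid would be refined around the minimum-SSE candidate, and inference on the breakpoint would come from the mixed-model machinery described in the article rather than from this profile alone.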

  14. Temperature dependence of elastic and strength properties of T300/5208 graphite-epoxy

    NASA Technical Reports Server (NTRS)

    Milkovich, S. M.; Herakovich, C. T.

    1984-01-01

    Experimental results are presented for the elastic and strength properties of T300/5208 graphite-epoxy at room temperature, 116K (-250 F), and 394K (+250 F). Results are presented for unidirectional 0, 90, and 45 degree laminates, and + or - 30, + or - 45, and + or - 60 degree angle-ply laminates. The stress-strain behavior of the 0 and 90 degree laminates is essentially linear at all three temperatures, and that of all other laminates is linear at 116K. A second-order curve provides the best fit for the temperature dependence of the elastic modulus of all laminates and for the principal shear modulus. Poisson's ratio appears to vary linearly with temperature. All moduli decrease with increasing temperature except for E1, which exhibits a small increase. The strength temperature dependence is also quadratic for all laminates except the 0 degree laminate, which exhibits linear temperature dependence. In many cases the temperature dependence of properties is nearly linear.

  15. Differential gene expression detection and sample classification using penalized linear regression models.

    PubMed

    Wu, Baolin

    2006-02-15

    Differential gene expression detection and sample classification using microarray data have received much research interest recently. Owing to the large number of genes p and small number of samples n (p >> n), microarray data analysis poses big challenges for statistical analysis. An obvious problem owing to the 'large p small n' is over-fitting. Just by chance, we are likely to find some non-differentially expressed genes that can classify the samples very well. The idea of shrinkage is to regularize the model parameters to reduce the effects of noise and produce reliable inferences. Shrinkage has been successfully applied in microarray data analysis. The SAM statistics proposed by Tusher et al. and the 'nearest shrunken centroid' proposed by Tibshirani et al. are ad hoc shrinkage methods. Both methods are simple, intuitive and prove to be useful in empirical studies. Recently Wu proposed the penalized t/F-statistics with shrinkage by formally using the L1 penalized linear regression models for two-class microarray data, showing good performance. In this paper we systematically discuss the use of penalized regression models for analyzing microarray data. We generalize the two-class penalized t/F-statistics proposed by Wu to multi-class microarray data. We formally derive the ad hoc shrunken centroid used by Tibshirani et al. using the L1 penalized regression models. We also show that the penalized linear regression models provide a rigorous and unified statistical framework for sample classification and differential gene expression detection.
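The soft-thresholding idea behind the shrunken centroid can be illustrated in a few lines. This is a sketch of the L1-shrinkage mechanism only, on a hypothetical toy data set, not the exact SAM or penalized t/F procedure (per-gene standard errors and cross-validated threshold selection are omitted):

```python
import numpy as np

def soft_threshold(d, delta):
    # L1 (lasso-type) shrinkage: move each component toward zero by delta
    return np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)

rng = np.random.default_rng(1)
# toy data (hypothetical): 100 genes x 20 samples, two classes; only gene 0 differs
X = rng.normal(0.0, 1.0, (100, 20))
y = np.array([0] * 10 + [1] * 10)
X[0, y == 1] += 4.0

overall = X.mean(axis=1)
d = X[:, y == 1].mean(axis=1) - overall          # class-1 centroid deviations
d_shrunk = soft_threshold(d, delta=1.0)          # shrink noisy deviations to zero

n_kept = int((d_shrunk != 0).sum())
print("genes surviving shrinkage:", n_kept)
```

After shrinkage, the many small deviations that arise "just by chance" are zeroed out, leaving only the genuinely differential genes to drive classification.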

  16. Linear and Non-Linear Visual Feature Learning in Rat and Humans

    PubMed Central

    Bossens, Christophe; Op de Beeck, Hans P.

    2016-01-01

    The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201

  17. Does childhood motor skill proficiency predict adolescent fitness?

    PubMed

    Barnett, Lisa M; Van Beurden, Eric; Morgan, Philip J; Brooks, Lyndon O; Beard, John R

    2008-12-01

    To determine whether childhood fundamental motor skill proficiency predicts subsequent adolescent cardiorespiratory fitness. In 2000, children's proficiency in a battery of skills was assessed as part of an elementary school-based intervention. Participants were followed up during 2006/2007 as part of the Physical Activity and Skills Study, and cardiorespiratory fitness was measured using the Multistage Fitness Test. Linear regression was used to examine the relationship between childhood fundamental motor skill proficiency and adolescent cardiorespiratory fitness, controlling for gender. Composite object control (kick, catch, throw) and locomotor skill (hop, side gallop, vertical jump) scores were constructed for analysis. A separate linear regression examined the ability of the sprint run to predict cardiorespiratory fitness. Of the 928 original intervention participants, 481 were in 28 schools, 276 (57%) of whom were assessed. Two hundred and forty-four students (88.4%) completed the fitness test. One hundred and twenty-seven were females (52.1%), 60.1% of whom were in grade 10 and 39.0% were in grade 11. As children, almost all 244 completed each motor assessment, except for the sprint run (n = 154, 55.8%). The mean composite skill score in 2000 was 17.7 (SD 5.1). In 2006/2007, the mean number of laps on the Multistage Fitness Test was 50.5 (SD 24.4). Object control proficiency in childhood, adjusting for gender (P = 0.000), was associated with adolescent cardiorespiratory fitness (P = 0.012), accounting for 26% of fitness variation. Children with good object control skills are more likely to become fit adolescents. Fundamental motor skill development in childhood may be an important component of interventions aiming to promote long-term fitness.

  18. Comparison of the A-Cc curve fitting methods in determining maximum ribulose 1,5-bisphosphate carboxylase/oxygenase carboxylation rate, potential light saturated electron transport rate and leaf dark respiration.

    PubMed

    Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei

    2009-02-01

    A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2-chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimating the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimations of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of the inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirement, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm with the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
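The grid-search-plus-non-linear-refinement workflow recommended above can be sketched on a simplified Rubisco-limited (FvCB-type) expression. All parameter values, CO2 levels, and the fixed compensation point below are hypothetical, and the full model with Jmax, TPU and gm is not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def a_cc(cc, vcmax, km, rd):
    # Rubisco-limited net assimilation (FvCB-type form); Gamma* fixed for simplicity
    gamma_star = 40.0   # CO2 compensation point, umol/mol (assumed value)
    return vcmax * (cc - gamma_star) / (cc + km) - rd

cc = np.array([50, 100, 150, 250, 400, 600, 900, 1300], float)
rng = np.random.default_rng(2)
a_obs = a_cc(cc, 60.0, 700.0, 1.5) + rng.normal(0.0, 0.1, cc.size)

# grid search over starting values, followed by non-linear refinement
best = None
for v0 in (20.0, 60.0, 100.0):
    for k0 in (300.0, 700.0, 1100.0):
        try:
            popt, _ = curve_fit(a_cc, cc, a_obs, p0=[v0, k0, 1.0])
        except RuntimeError:
            continue                     # this start did not converge; try the next
        sse = ((a_cc(cc, *popt) - a_obs) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, popt)

vcmax_hat, km_hat, rd_hat = best[1]
print("Vcmax=%.1f Km=%.0f Rd=%.2f" % (vcmax_hat, km_hat, rd_hat))
```

Scanning starting values and keeping the fit with the smallest residual sum of squares guards against convergence to poor local optima, which is the practical motivation for the combined approach.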

  19. Bayesian inference of selection in a heterogeneous environment from genetic time-series data.

    PubMed

    Gompert, Zachariah

    2016-01-01

    Evolutionary geneticists have sought to characterize the causes and molecular targets of selection in natural populations for many years. Although this research programme has been somewhat successful, most statistical methods employed were designed to detect consistent, weak to moderate selection. In contrast, phenotypic studies in nature show that selection varies in time and that individual bouts of selection can be strong. Measurements of the genomic consequences of such fluctuating selection could help test and refine hypotheses concerning the causes of ecological specialization and the maintenance of genetic variation in populations. Herein, I proposed a Bayesian nonhomogeneous hidden Markov model to estimate effective population sizes and quantify variable selection in heterogeneous environments from genetic time-series data. The model is described and then evaluated using a series of simulated data, including cases where selection occurs on a trait with a simple or polygenic molecular basis. The proposed method accurately distinguished neutral loci from non-neutral loci under strong selection, but not from those under weak selection. Selection coefficients were accurately estimated when selection was constant or when the fitness values of genotypes varied linearly with the environment, but these estimates were less accurate when fitness was polygenic or the relationship between the environment and the fitness of genotypes was nonlinear. Past studies of temporal evolutionary dynamics in laboratory populations have been remarkably successful. The proposed method makes similar analyses of genetic time-series data from natural populations more feasible and thereby could help answer fundamental questions about the causes and consequences of evolution in the wild. © 2015 John Wiley & Sons Ltd.

  20. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.

  1. Comparative evaluation of human heat stress indices on selected hospital admissions in Sydney, Australia.

    PubMed

    Goldie, James; Alexander, Lisa; Lewis, Sophie C; Sherwood, Steven

    2017-08-01

    To find appropriate regression model specifications for counts of the daily hospital admissions of a Sydney cohort and determine which human heat stress indices best improve the models' fit. We built parent models of eight daily counts of admission records using weather station observations, census population estimates and public holiday data. We added heat stress indices; models with lower Akaike Information Criterion scores were judged a better fit. Five of the eight parent models demonstrated adequate fit. Daily maximum Simplified Wet Bulb Globe Temperature (sWBGT) consistently improved fit more than most other indices; temperature and heatwave indices also modelled some health outcomes well. Humidity and heat-humidity indices better fit counts of patients who died following admission. Maximum sWBGT is an ideal measure of heat stress for these types of Sydney hospital admissions. Simple temperature indices are a good fallback where a narrower range of conditions is investigated. Implications for public health: This study confirms the importance of selecting appropriate heat stress indices for modelling. Epidemiologists projecting Sydney hospital admissions should use maximum sWBGT as a common measure of heat stress. Health organisations interested in short-range forecasting may prefer simple temperature indices. © 2017 The Authors.

  2. The relationship between compressive strength and flexural strength of pavement geopolymer grouting material

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Han, X. X.; Ge, J.; Wang, C. H.

    2018-01-01

    To determine the relationship between compressive strength and flexural strength of pavement geopolymer grouting material, 20 groups of geopolymer grouting materials were prepared, and their compressive strength and flexural strength were determined by mechanical properties testing. After excluding abnormal values identified through boxplots, the results show that the compressive strength test results were normal, but there were two mild outliers in the 7-day flexural strength test. The compressive strength and flexural strength were fitted in SPSS, and six regression models were obtained. The relationship between compressive strength and flexural strength was best expressed by the cubic curve model, with a correlation coefficient of 0.842.

  3. Prediction of optimum sorption isotherm: comparison of linear and non-linear method.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-11-11

    Equilibrium parameters for sorption of Bismarck brown onto rice husk were estimated by the linear least-squares method and a trial-and-error non-linear method using the Freundlich, Langmuir and Redlich-Peterson isotherms. A comparison between the linear and non-linear methods of estimating the isotherm parameters was reported. The best-fitting isotherms were the Langmuir and Redlich-Peterson equations. The results show that the non-linear method could be a better way to obtain the parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson constant g is unity.
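The linear-versus-non-linear contrast at issue here can be made concrete with the Langmuir isotherm. The sketch below uses synthetic data (all concentrations and constants are hypothetical) and compares the popular Langmuir-1 linearization, Ce/qe = Ce/qm + 1/(qm*KL), against a direct non-linear least-squares fit of the untransformed equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    # Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)
    return qm * kl * ce / (1.0 + kl * ce)

ce = np.array([5, 10, 20, 40, 80, 160], float)     # hypothetical equilibrium conc.
rng = np.random.default_rng(3)
qe = langmuir(ce, 80.0, 0.05) * (1.0 + rng.normal(0.0, 0.02, ce.size))

# linear method (Langmuir-1): regress Ce/qe on Ce, then back-transform
slope, intercept = np.polyfit(ce, ce / qe, 1)
qm_lin, kl_lin = 1.0 / slope, slope / intercept

# non-linear method: least squares on the untransformed equation
(qm_nl, kl_nl), _ = curve_fit(langmuir, ce, qe, p0=[50.0, 0.01])
print("linear: qm=%.1f  non-linear: qm=%.1f" % (qm_lin, qm_nl))
```

The linearization distorts the error structure (dividing by the noisy qe), which is why non-linear fitting of the original equation is generally preferred, as the abstract concludes.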

  4. Equilibrium, kinetics and process design of acid yellow 132 adsorption onto red pine sawdust.

    PubMed

    Can, Mustafa

    2015-01-01

    Linear and non-linear regression procedures have been applied to the Langmuir, Freundlich, Tempkin, Dubinin-Radushkevich, and Redlich-Peterson isotherms for adsorption of acid yellow 132 (AY132) dye onto red pine (Pinus resinosa) sawdust. The effects of parameters such as particle size, stirring rate, contact time, dye concentration, adsorbent dose, pH, and temperature were investigated, and the interaction was characterized by Fourier transform infrared spectroscopy and field emission scanning electron microscopy. The Langmuir isotherm, fitted by the non-linear method, was found to describe the equilibrium data best. The maximum monolayer adsorption capacity was found to be 79.5 mg/g. The calculated thermodynamic results suggested that AY132 adsorption onto red pine sawdust was an exothermic, spontaneous physisorption process. Kinetics was analyzed with four different kinetic equations using non-linear regression analysis. The pseudo-second-order equation provided the best fit to the experimental data.

  5. Visualisation of Lines of Best Fit

    ERIC Educational Resources Information Center

    Rudziewicz, Michael; Bossé, Michael J.; Marland, Eric S.; Rhoads, Gregory S.

    2017-01-01

    Humans possess a remarkable ability to recognise both simple patterns such as shapes and handwriting and very complex patterns such as faces and landscapes. To investigate one small aspect of human pattern recognition, in this study participants position lines of "best fit" to two-dimensional scatter plots of data. The study investigates…

  6. Model Diagnostics for Bayesian Networks

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of works on assessing fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess fit of simple Bayesian networks. A…

  7. Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2006-04-01

    We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.

  8. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy.

    PubMed

    Gurka, Matthew J; Kuperminc, Michelle N; Busby, Marjorie G; Bennis, Jacey A; Grossberg, Richard I; Houlihan, Christine M; Stevenson, Richard D; Henderson, Richard C

    2010-02-01

    To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I-V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. Slaughter's equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat -9.6/100 [SD 6.2]; 95% confidence interval [CI] -11.0 to -8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI -1.0 to 1.3) than existing equations. A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP.

  9. Development and validation of a predictive equation for lean body mass in children and adolescents.

    PubMed

    Foster, Bethany J; Platt, Robert W; Zemel, Babette S

    2012-05-01

    Lean body mass (LBM) is not easy to measure directly in the field or clinical setting. Equations to predict LBM from simple anthropometric measures, which account for the differing contributions of fat and lean to body weight at different ages and levels of adiposity, would be useful to both human biologists and clinicians. To develop and validate equations to predict LBM in children and adolescents across the entire range of the adiposity spectrum. Dual energy X-ray absorptiometry was used to measure LBM in 836 healthy children (437 females) and linear regression was used to develop sex-specific equations to estimate LBM from height, weight, age, body mass index (BMI) for age z-score and population ancestry. Equations were validated using bootstrapping methods and in a local independent sample of 332 children and in national data collected by NHANES. The mean difference between measured and predicted LBM was -0.12% (95% limits of agreement -11.3% to 8.5%) for males and -0.14% (-11.9% to 10.9%) for females. Equations performed equally well across the entire adiposity spectrum, as estimated by BMI z-score. Validation indicated no over-fitting. LBM was predicted within 5% of measured LBM in the validation sample. The equations estimate LBM accurately from simple anthropometric measures.

  10. Simultaneous Determination of Benzene and Toluene in Pesticide Emulsifiable Concentrate by Headspace GC-MS

    PubMed Central

    Jiang, Hua; Yang, Jing; Fan, Li; Li, Fengmin; Huang, Qiliang

    2013-01-01

    The toxic inert ingredients in pesticide formulations are strictly regulated in many countries. In this paper, a simple and efficient headspace-gas chromatography-mass spectrometry (HS-GC-MS) method using fluorobenzene as an internal standard (IS) for rapid simultaneous determination of benzene and toluene in pesticide emulsifiable concentrate (EC) was established. The headspace and GC-MS conditions were investigated and developed. A nonpolar fused silica Rtx-5 capillary column (30 m × 0.20 mm i.d. and 0.25 μm film thickness) with temperature programming was used. Under optimized headspace conditions, equilibration temperature of 120°C, equilibration time of 5 min, and sample size of 50 μL, the regression of the peak area ratios of benzene and toluene to IS on the concentrations of analytes fitted a linear relationship well at the concentration levels ranging from 3.2 g/L to 16.0 g/L. Standard additions of benzene and toluene to different blank matrix solutions lead to recoveries of 100.1%–109.5% with a relative standard deviation (RSD) of 0.3%–8.1%. The method presented here stands out as simple and easily applicable, which provides a way for the determination of toxic volatile adjuvant in liquid pesticide formulations. PMID:23607048

  11. A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging

    PubMed Central

    Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

    2014-01-01

    Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
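The weighted linear least squares (WLLS) step used in standard DKI post-processing can be sketched for a single diffusion direction. The b-values, tissue parameters, and noise level below are synthetic, and the paper's subtraction-based correction from multiple independent acquisitions is not reproduced; only the log-linear WLLS fit is shown:

```python
import numpy as np

# DKI signal model (one direction): ln S(b) = ln S0 - b*D + (b^2 * D^2 * K) / 6
b = np.array([0, 500, 1000, 1500, 2000, 2500], float)   # s/mm^2 (assumed protocol)
d_true, k_true, s0 = 1.0e-3, 1.2, 1000.0
rng = np.random.default_rng(4)
s_noisy = s0 * np.exp(-b * d_true + (b ** 2) * (d_true ** 2) * k_true / 6.0) \
          + rng.normal(0.0, 2.0, b.size)

# design matrix for the log-linear model: coefficients [ln S0, D, D^2 * K]
A = np.column_stack([np.ones_like(b), -b, b ** 2 / 6.0])
w = s_noisy                                  # WLLS weights proportional to signal
coef, *_ = np.linalg.lstsq(A * w[:, None], np.log(s_noisy) * w, rcond=None)
d_est = coef[1]
k_est = coef[2] / d_est ** 2
print("D=%.2e mm^2/s  K=%.2f" % (d_est, k_est))
```

Weighting each log-domain equation by the signal amplitude compensates for the noise amplification that taking the logarithm causes at heavily diffusion-weighted (low-signal) points.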

  12. Direct identification of predator-prey dynamics in gyrokinetic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Sumire, E-mail: sumire.kobayashi@lpp.polytechnique.fr; Gürcan, Özgür D; Diamond, Patrick H.

    2015-09-15

    The interaction between spontaneously formed zonal flows and small-scale turbulence in nonlinear gyrokinetic simulations is explored in a shearless closed field line geometry. It is found that when clear limit cycle oscillations prevail, the observed turbulent dynamics can be quantitatively captured by a simple Lotka-Volterra type predator-prey model. Fitting the time traces of full gyrokinetic simulations by such a reduced model allows extraction of the model coefficients. Scanning physical plasma parameters, such as collisionality and density gradient, it was observed that the effective growth rates of turbulence (i.e., the prey) remain roughly constant, in spite of the higher and varying level of primary mode linear growth rates. The effective growth rate that was extracted corresponds roughly to the zonal-flow-modified primary mode growth rate. It was also observed that the effective damping of zonal flows (i.e., the predator) in the parameter range where clear predator-prey dynamics is observed (i.e., near marginal stability) agrees with the collisional damping expected in these simulations. This implies that the Kelvin-Helmholtz-like instability may be negligible in this range. The results imply that when the tertiary instability plays a role, the dynamics becomes more complex than a simple Lotka-Volterra predator-prey model.
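A generic Lotka-Volterra predator-prey system of the kind fitted to these time traces can be simulated directly; the coefficients below are arbitrary illustrative values, not parameters extracted from gyrokinetic simulations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey: turbulence intensity N (prey), zonal flow E (predator)
def lv(t, y, growth, coupling, conversion, damping):
    n, e = y
    return [growth * n - coupling * n * e,        # prey: grows, suppressed by predator
            conversion * n * e - damping * e]     # predator: fed by prey, damped

sol = solve_ivp(lv, (0.0, 50.0), [1.0, 0.5],
                args=(1.0, 0.8, 0.5, 0.6), dense_output=True, rtol=1e-8)
n, e = sol.sol(np.linspace(0.0, 50.0, 2000))
print("prey oscillates between %.2f and %.2f" % (n.min(), n.max()))
```

Fitting such a model to simulation time traces (e.g., by least squares on the trajectories) yields the effective growth, coupling, and damping coefficients discussed in the abstract.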

  13. Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant

    NASA Astrophysics Data System (ADS)

    Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens

    2018-02-01

    Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
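The idea of rescaling all quoted uncertainties by a common factor can be illustrated for the simplest case, a constant (weighted mean) model. The snippet uses the standard frequentist Birge-ratio correction as an analogue of the rescaling described, with made-up data values and uncertainties:

```python
import numpy as np

# hypothetical measured values and their quoted standard uncertainties
x = np.array([6.62606957e-34, 6.62607015e-34, 6.62607004e-34])
u = np.array([2.9e-41, 1.3e-41, 8.1e-42])

w = 1.0 / u ** 2                       # weighted least squares for a constant model
mean = np.sum(w * x) / np.sum(w)
u_mean = np.sqrt(1.0 / np.sum(w))

# Birge ratio: sqrt(chi^2 per degree of freedom) of the fit
chi2 = np.sum(w * (x - mean) ** 2)
birge = np.sqrt(chi2 / (len(x) - 1))
if birge > 1.0:
    u_mean *= birge                    # rescale the result's uncertainty by the
                                       # common factor when data are inconsistent
print("weighted mean uncertainty inflated by factor %.2f" % max(birge, 1.0))
```

This is the "simple correction of the uncertainty obtained for the estimated model" that allows common weighted least-squares software to remain usable when data and model are inconsistent.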

  14. Maternal heterozygosity and progeny fitness association in an inbred Scots pine population.

    PubMed

    Abrahamsson, S; Ahlinder, J; Waldmann, P; García-Gil, M R

    2013-03-01

    Associations between heterozygosity and fitness traits have typically been investigated in populations characterized by low levels of inbreeding. We investigated the associations between standardized multilocus heterozygosity (stMLH) in mother trees (obtained from 12 nuclear microsatellite markers) and five fitness traits measured in progenies from an inbred Scots pine population. The traits studied were proportion of sound seed, mean seed weight, germination rate, mean family height of one-year-old seedlings under greenhouse conditions (GH) and mean family height of three-year-old seedlings under field conditions (FH). The relatively high average inbreeding coefficient (F) in the population under study corresponds to a mixture of trees with different levels of co-ancestry, potentially resulting from a recent bottleneck. We used both frequentist and Bayesian methods of polynomial regression to investigate the presence of linear and non-linear relations between stMLH and each of the fitness traits. No significant associations were found for any of the traits except for GH, which displayed a negative linear relationship with stMLH. A negative heterozygosity-fitness correlation (HFC) for GH could potentially be explained by the effect of heterosis caused by mating of two inbred mother trees (Lippman and Zamir 2006), or by outbreeding depression in the most heterozygous trees and its negative impact on the fitness of the progeny, while their simultaneous action is also possible (Lynch, 1991). However, since this effect was not detected for FH, we cannot rule out that the greenhouse conditions introduced artificial effects that disappear under more realistic field conditions.

  15. Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?

    NASA Astrophysics Data System (ADS)

    Gasbarro, Andrew

    2018-03-01

    In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternate EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has a χ2/d.o.f. = 0.5 compared to χ2/d.o.f. = 29.6 from fitting next-to-leading order chiral perturbation theory. When the 0++ (σ) mass is included in the fit, χ2/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.

  16. Isotherm investigation for the sorption of fluoride onto Bio-F: comparison of linear and non-linear regression method

    NASA Astrophysics Data System (ADS)

    Yadav, Manish; Singh, Nitin Kumar

    2017-12-01

    A comparison of the linear and non-linear regression methods in selecting the optimum isotherm among the three most commonly used adsorption isotherms (Langmuir, Freundlich, and Redlich-Peterson) was made for experimental data of fluoride (F) sorption onto Bio-F at a solution temperature of 30 ± 1 °C. The coefficient of determination (r2) was used to select the best theoretical isotherm among the investigated ones. A total of four linearized Langmuir equations were discussed, of which the most popular forms, Langmuir-1 and Langmuir-2, showed higher coefficients of determination (0.976 and 0.989) than the other linearized Langmuir equations. The Freundlich and Redlich-Peterson isotherms showed a better fit to the experimental data with the linear least-squares method, while with the non-linear method the Redlich-Peterson isotherm equation showed the best fit to the tested data set. The present study showed that the non-linear method could be a better way to obtain the isotherm parameters and identify the most suitable isotherm. The Redlich-Peterson isotherm was found to be the best representative (r2 = 0.999) for this sorption system. It is also observed that the values of β are not close to unity, which means the isotherms approach the Freundlich rather than the Langmuir form.

  17. A simple prescription for simulating and characterizing gravitational arcs

    NASA Astrophysics Data System (ADS)

    Furlanetto, C.; Santiago, B. X.; Makler, M.; de Bom, C.; Brandt, C. H.; Neto, A. F.; Ferreira, P. C.; da Costa, L. N.; Maia, M. A. G.

    2013-01-01

    Simple models of gravitational arcs are crucial for simulating large samples of these objects with full control of the input parameters. These models also provide approximate and automated estimates of the shape and structure of the arcs, which are necessary for detecting and characterizing these objects on massive wide-area imaging surveys. We here present and explore the ArcEllipse, a simple prescription for creating objects with a shape similar to gravitational arcs. We also present PaintArcs, which is a code that couples this geometrical form with a brightness distribution and adds the resulting object to images. Finally, we introduce ArcFitting, which is a tool that fits ArcEllipses to images of real gravitational arcs. We validate this fitting technique using simulated arcs and apply it to CFHTLS and HST images of tangential arcs around clusters of galaxies. Our simple ArcEllipse model for the arc, associated to a Sérsic profile for the source, recovers the total signal in real images typically within 10%-30%. The ArcEllipse+Sérsic models also automatically recover visual estimates of length-to-width ratios of real arcs. Residual maps between data and model images reveal the incidence of arc substructure. They may thus be used as a diagnostic for arcs formed by the merging of multiple images. The incidence of these substructures is the main factor that prevents ArcEllipse models from accurately describing real lensed systems.

  18. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
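
    The core workflow the authors benchmark — fitting a mechanistic model to noisy case counts with MCMC — can be illustrated with a deliberately minimal toy: deterministic exponential growth observed through Poisson reporting noise, with a random-walk Metropolis sampler over the growth rate. This is not the authors' discrete-state epidemic model, nor any of the platforms they compare (JAGS, NIMBLE, Stan); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "mechanistic" model: I_t = I0 * exp(r * t), observed through Poisson noise.
t = np.arange(20)
r_true, I0 = 0.2, 5.0
y = rng.poisson(I0 * np.exp(r_true * t))

def log_lik(r):
    lam = I0 * np.exp(r * t)
    return float(np.sum(y * np.log(lam) - lam))   # Poisson log-likelihood (constant dropped)

# Random-walk Metropolis over the growth rate r, flat prior
r_cur, ll_cur = 0.1, log_lik(0.1)
chain = []
for _ in range(5000):
    r_prop = r_cur + rng.normal(0.0, 0.02)
    ll_prop = log_lik(r_prop)
    if np.log(rng.random()) < ll_prop - ll_cur:   # Metropolis accept/reject
        r_cur, ll_cur = r_prop, ll_prop
    chain.append(r_cur)
post = np.array(chain[1000:])                     # drop burn-in
```

    The posterior mean of `post` recovers the simulated growth rate; the paper's comparisons concern how such samplers scale once latent process noise and discrete states are added.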

  19. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…

  20. Multi-Mode Analysis of Dual Ridged Waveguide Systems for Material Characterization

    DTIC Science & Technology

    2015-09-17

    characterization is the process of determining the dielectric, magnetic, and magnetoelectric properties of a material. For simple (i.e., linear ...field expressions in terms of elementary functions (sines, cosines, exponentials and Bessel functions) and corresponding propagation constants of the...with material parameters ε0 and µ0. • The MUT is simple (linear, isotropic, homogeneous), and the sample has a uniform thickness. • The waveguide

  1. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  2. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    PubMed

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

    Thanks to a reasonable cost and a simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new algorithm for extracting peaks from raw spectra. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model consisting of a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to processing low-resolution spectra with a prominent baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method's derivation and discusses some of its interpretations. The algorithm is then described in pseudo-code form, where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure of baseline removal followed by peak extraction. Finally, some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment was conducted on real spectra: a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, was observed and validated by an extended statistical analysis.

  3. Estimation of Thalamocortical and Intracortical Network Models from Joint Thalamic Single-Electrode and Cortical Laminar-Electrode Recordings in the Rat Barrel System

    PubMed Central

    Blomquist, Patrick; Devor, Anna; Indahl, Ulf G.; Ulbert, Istvan; Einevoll, Gaute T.; Dale, Anders M.

    2009-01-01

    A new method is presented for extraction of population firing-rate models for both thalamocortical and intracortical signal transfer based on stimulus-evoked data from simultaneous thalamic single-electrode and cortical recordings using linear (laminar) multielectrodes in the rat barrel system. Time-dependent population firing rates for granular (layer 4), supragranular (layer 2/3), and infragranular (layer 5) populations in a barrel column and the thalamic population in the homologous barreloid are extracted from the high-frequency portion (multi-unit activity; MUA) of the recorded extracellular signals. These extracted firing rates are in turn used to identify population firing-rate models formulated as integral equations with exponentially decaying coupling kernels, allowing for straightforward transformation to the more common firing-rate formulation in terms of differential equations. Optimal model structures and model parameters are identified by minimizing the deviation between model firing rates and the experimentally extracted population firing rates. For the thalamocortical transfer, the experimental data favor a model with fast feedforward excitation from thalamus to the layer-4 laminar population combined with a slower inhibitory process due to feedforward and/or recurrent connections and mixed linear-parabolic activation functions. The extracted firing rates of the various cortical laminar populations are found to exhibit strong temporal correlations for the present experimental paradigm, and simple feedforward population firing-rate models combined with linear or mixed linear-parabolic activation function are found to provide excellent fits to the data. The identified thalamocortical and intracortical network models are thus found to be qualitatively very different. While the thalamocortical circuit is optimally stimulated by rapid changes in the thalamic firing rate, the intracortical circuits are low-pass and respond most strongly to slowly varying inputs from the cortical layer-4 population. PMID:19325875

  4. Rapid detection of microbial cell abundance in aquatic systems

    DOE PAGES

    Rocha, Andrea M.; Yuan, Quan; Close, Dan M.; ...

    2016-06-01

    The detection and quantification of naturally occurring microbial cellular densities is an essential component of environmental systems monitoring. While there are a number of commonly utilized approaches for monitoring microbial abundance, capacitance-based biosensors represent a promising approach because of their low-cost, label-free detection of microbial cells, but they are not as well characterized as more traditional methods. Here, we investigate the applicability of enhanced alternating current electrokinetics (ACEK) capacitive sensing as a new application for rapidly detecting and quantifying microbial cellular densities in cultured and environmentally sourced aquatic samples. ACEK capacitive sensor performance was evaluated using two distinct and dynamic systems: the Great Australian Bight and groundwater from the Oak Ridge Reservation in Oak Ridge, TN. Results demonstrate that ACEK capacitance-based sensing can accurately determine microbial cell counts throughout cellular concentrations typically encountered in naturally occurring microbial communities (10³–10⁶ cells/mL). A linear relationship was observed between cellular density and capacitance change, allowing a simple linear curve-fitting equation to be used for determining microbial abundances in unknown samples. As a result, this work provides a foundation for understanding the limits of capacitance-based sensing in natural environmental samples and supports future efforts focusing on evaluating the robustness of ACEK capacitance-based sensing within aquatic environments.
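
    The "simple linear curve fitting equation" step amounts to an ordinary calibration line. The sketch below uses entirely invented readings (capacitance change versus log cell density); the slope, intercept, and densities are illustrative, not the paper's data.

```python
import numpy as np

# Invented calibration points: capacitance change (%) vs. log10(cells/mL).
log_density = np.array([3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0])
delta_C = np.array([0.8, 1.9, 3.1, 4.2, 5.0, 6.1, 7.2])

slope, intercept = np.polyfit(log_density, delta_C, 1)   # simple linear fit

def predict_density(delta_C_new):
    """Invert the calibration line to estimate cells/mL from a new reading."""
    return 10.0 ** ((delta_C_new - intercept) / slope)
```

    Fitting in log-density space keeps the calibration linear across the 10³–10⁶ cells/mL range the abstract describes.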

  5. Rapid detection of microbial cell abundance in aquatic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocha, Andrea M.; Yuan, Quan; Close, Dan M.

    The detection and quantification of naturally occurring microbial cellular densities is an essential component of environmental systems monitoring. While there are a number of commonly utilized approaches for monitoring microbial abundance, capacitance-based biosensors represent a promising approach because of their low-cost, label-free detection of microbial cells, but they are not as well characterized as more traditional methods. Here, we investigate the applicability of enhanced alternating current electrokinetics (ACEK) capacitive sensing as a new application for rapidly detecting and quantifying microbial cellular densities in cultured and environmentally sourced aquatic samples. ACEK capacitive sensor performance was evaluated using two distinct and dynamic systems: the Great Australian Bight and groundwater from the Oak Ridge Reservation in Oak Ridge, TN. Results demonstrate that ACEK capacitance-based sensing can accurately determine microbial cell counts throughout cellular concentrations typically encountered in naturally occurring microbial communities (10³–10⁶ cells/mL). A linear relationship was observed between cellular density and capacitance change, allowing a simple linear curve-fitting equation to be used for determining microbial abundances in unknown samples. As a result, this work provides a foundation for understanding the limits of capacitance-based sensing in natural environmental samples and supports future efforts focusing on evaluating the robustness of ACEK capacitance-based sensing within aquatic environments.

  6. Dust in a compact, cold, high-velocity cloud: A new approach to removing foreground emission

    NASA Astrophysics Data System (ADS)

    Lenz, D.; Flöer, L.; Kerp, J.

    2016-02-01

    Context. Because isolated high-velocity clouds (HVCs) are found at great distances from the Galactic radiation field and because they have subsolar metallicities, there have been no detections of dust in these structures. A key problem in this search is the removal of foreground dust emission. Aims: Using the Effelsberg-Bonn H I Survey and the Planck far-infrared data, we investigate a bright, cold, and clumpy HVC. This cloud apparently undergoes an interaction with the ambient medium and thus has great potential to form dust. Methods: To remove the local foreground dust emission we used a regularised, generalised linear model and we show the advantages of this approach with respect to other methods. To estimate the dust emissivity of the HVC, we set up a simple Bayesian model with mildly informative priors to perform the line fit instead of an ordinary linear least-squares approach. Results: We find that the foreground can be modelled accurately and robustly with our approach and is limited mostly by the cosmic infrared background. Despite this improvement, we did not detect any significant dust emission from this promising HVC. The 3σ-equivalent upper limit to the dust emissivity is an order of magnitude below the typical values for the Galactic interstellar medium.
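
    The Methods mention a simple Bayesian line fit with mildly informative priors in place of ordinary least squares. When the noise level is treated as known and the priors are Gaussian, the posterior is available in closed form. The prior width, noise level, and data below are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated straight-line data, y = a + b*x + noise (all numbers invented).
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)

X = np.column_stack([np.ones_like(x), x])
sigma = 0.3                              # measurement noise, assumed known here
prior_prec = np.eye(2) / 10.0 ** 2       # mildly informative N(0, 10^2) priors

# Conjugate Gaussian posterior over [intercept, slope]:
# Sigma_post = (prior_prec + X^T X / sigma^2)^-1,  mu_post = Sigma_post X^T y / sigma^2
post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma ** 2)
post_mean = post_cov @ (X.T @ y / sigma ** 2)
```

    The weak prior barely moves the estimate away from least squares here; its benefit, as in the paper's emissivity fit, is a stable, fully probabilistic upper limit when the slope is consistent with zero.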

  7. Experimental design and data analysis of Ago-RIP-Seq experiments for the identification of microRNA targets.

    PubMed

    Tichy, Diana; Pickl, Julia Maria Anna; Benner, Axel; Sültmann, Holger

    2017-03-31

    The identification of microRNA (miRNA) target genes is crucial for understanding miRNA function. Many methods for genome-wide miRNA target identification have been developed in recent years; however, they have several limitations, including dependence on low-confidence prediction programs and artificial miRNA manipulations. Ago-RNA immunoprecipitation combined with high-throughput sequencing (Ago-RIP-Seq) is a promising alternative. However, appropriate statistical data analysis algorithms that take into account the experimental design and the inherent noise of such experiments are largely lacking. Here, we investigate the experimental design for Ago-RIP-Seq and examine biostatistical methods to identify de novo miRNA target genes. The statistical approaches considered are either based on a negative binomial model fit to the read count data or applied to transformed data using a normal distribution-based generalized linear model. We compare them in a real-data simulation study using plasmode data sets and evaluate the suitability of the approaches to detect true miRNA targets by sensitivity and false discovery rates. Our results suggest that simple approaches like linear regression models on (appropriately) transformed read count data are preferable. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. A simple method for the computation of first neighbour frequencies of DNAs from CD spectra

    PubMed Central

    Marck, Christian; Guschlbauer, Wilhelm

    1978-01-01

    A procedure for the computation of the first neighbour frequencies of DNAs is presented. This procedure is based on the first neighbour approximation of Gray and Tinoco. We show that knowledge of all ten elementary CD signals attached to the ten double-stranded first neighbour configurations is not necessary. One can obtain the ten frequencies of an unknown DNA with the use of eight elementary CD signals corresponding to eight linearly independent polymer sequences. These signals can be extracted very simply from any eight or more CD spectra of double-stranded DNAs of known frequencies. The ten frequencies of a DNA are then obtained by least-squares fit of its CD spectrum with these elementary signals. One advantage of this procedure is that it does not necessitate linear programming; it can be used with CD data digitized at a large number of wavelengths, thus permitting an accurate resolution of the CD spectra. In favorable cases, the ten frequencies of a DNA (not used as input data) can be determined with an average absolute error < 2%. We have also observed that certain satellite DNAs, those of Drosophila virilis and Callinectes sapidus, have CD spectra compatible with those of DNAs of quasi-random sequence; these satellite DNAs should therefore also adopt the B-form in solution. PMID:673843
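
    The least-squares decomposition step can be sketched as follows. Gaussian bands stand in for the elementary CD signals, and the recovered weights play the role of the first-neighbour frequencies; everything here is illustrative, not the paper's basis set.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy analogue: recover mixing weights of an unknown spectrum by least-squares
# fit against elementary basis signals (Gaussian bands, purely for illustration).
wl = np.linspace(220.0, 320.0, 200)                        # wavelength grid, nm
centers = (240.0, 260.0, 280.0, 300.0)
basis = np.column_stack([np.exp(-((wl - c) ** 2) / (2 * 15.0 ** 2)) for c in centers])

true_w = np.array([0.5, 0.2, 0.2, 0.1])
spectrum = basis @ true_w + rng.normal(0.0, 0.002, wl.size)  # noisy "measured" CD

# Ordinary least squares against the basis signals
weights, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
```

    As in the paper, using many wavelengths makes the least-squares problem strongly overdetermined, which is what stabilizes the recovered weights.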

  9. Optimization of linear and branched alkane interactions with water to simulate hydrophobic hydration

    NASA Astrophysics Data System (ADS)

    Ashbaugh, Henry S.; Liu, Lixin; Surampudi, Lalitanand N.

    2011-08-01

    Previous studies of simple gas hydration have demonstrated that the accuracy of molecular simulations at capturing the thermodynamic signatures of hydrophobic hydration is linked both to the fidelity of the water model at replicating the experimental liquid density at ambient pressure and an accounting of polarization interactions between the solute and water. We extend those studies to examine alkane hydration using the transferable potentials for phase equilibria united-atom model for linear and branched alkanes, developed to reproduce alkane phase behavior, and the TIP4P/2005 model for water, which provides one of the best descriptions of liquid water for the available fixed-point charge models. Alkane site/water oxygen Lennard-Jones cross interactions were optimized to reproduce the experimental alkane hydration free energies over a range of temperatures. The optimized model reproduces the hydration free energies of the fitted alkanes with a root mean square difference between simulation and experiment of 0.06 kcal/mol over a wide temperature range, compared to 0.44 kcal/mol for the parent model. The optimized model accurately reproduces the temperature dependence of hydrophobic hydration, as characterized by the hydration enthalpies, entropies, and heat capacities, as well as the pressure response, as characterized by partial molar volumes.

  10. Modeling of inactivation of surface borne microorganisms occurring on seeds by cold atmospheric plasma (CAP)

    NASA Astrophysics Data System (ADS)

    Mitra, Anindita; Li, Y.-F.; Shimizu, T.; Klämpfl, Tobias; Zimmermann, J. L.; Morfill, G. E.

    2012-10-01

    Cold Atmospheric Plasma (CAP) is a fast, low-cost, simple, easy-to-handle technology for biological applications. Our group has developed a number of different CAP devices using microwave technology and surface micro discharge (SMD) technology. In this study, FlatPlaSter2.0 is used for microbial inactivation at different time intervals (0.5 to 5 min). There is a continuous demand for deactivation of microorganisms associated with raw foods/seeds without losing their properties. This research focuses on the kinetics of CAP-induced microbial inactivation of naturally growing surface microorganisms on seeds. The data were assessed with log-linear and non-log-linear models for survivor curves as a function of time. The Weibull model showed the best fitting performance on the data; no shoulder or tail was observed. The models are assessed in terms of the number of log-cycle reductions rather than classical D-values, with statistical measurements. The viability of seeds was not affected for CAP treatment times up to 3 min with our device. The optimum result was observed at 1 min, with the germination percentage increasing from 60.83% to 89.16% compared to the control. This result suggests the advantage and promising role of CAP in the food industry.
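
    A Weibull survivor-curve fit of the kind described can be sketched with scipy, using the common Mafart parameterization log10(N_t/N_0) = -(t/δ)^p. The treatment times, parameter values, and noise level are invented; this is not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

# Synthetic log10 survivor ratios at several CAP treatment times (all invented).
t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])    # minutes
delta_true, p_true = 1.2, 0.8
log_surv = -(t / delta_true) ** p_true + rng.normal(0.0, 0.05, t.size)

def weibull(t, delta, p):
    # Mafart form: log10(N_t / N_0) = -(t / delta)**p
    return -(t / delta) ** p

# Bounds keep delta positive so the fractional power stays defined.
(delta_fit, p_fit), _ = curve_fit(weibull, t, log_surv, p0=(1.0, 1.0),
                                  bounds=([0.01, 0.1], [10.0, 5.0]))
```

    A fitted p < 1 corresponds to the concave-up survivor curves (no shoulder) reported in the abstract; p = 1 recovers the log-linear model.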

  11. Adiposity as a full mediator of the influence of cardiorespiratory fitness and inflammation in schoolchildren: The FUPRECOL Study.

    PubMed

    Garcia-Hermoso, A; Agostinis-Sobrinho, C; Mota, J; Santos, R M; Correa-Bautista, J E; Ramírez-Vélez, R

    2017-06-01

    Studies in the paediatric population have shown inconsistent associations between cardiorespiratory fitness and inflammation independently of adiposity. The purpose of this study was (i) to analyse the combined association of cardiorespiratory fitness and adiposity with high-sensitivity C-reactive protein (hs-CRP), and (ii) to determine whether adiposity acts as a mediator of the association between cardiorespiratory fitness and hs-CRP in children and adolescents. This cross-sectional study included 935 (54.7% girls) healthy children and adolescents from Bogotá, Colombia. The 20 m shuttle run test was used to estimate cardiorespiratory fitness. We assessed the following adiposity parameters: body mass index, waist circumference, fat mass index, and the sum of subscapular and triceps skinfold thicknesses. High-sensitivity assays were used to obtain hs-CRP. Linear regression models were fitted, and mediation analyses examined whether the association between cardiorespiratory fitness and hs-CRP was mediated by each of the adiposity parameters according to the Baron and Kenny procedure. Lower levels of hs-CRP were associated with the most favourable schoolchildren profiles (high cardiorespiratory fitness + low adiposity) (p for trend <0.001 for all four adiposity parameters), compared with unfit and overweight (low cardiorespiratory fitness + high adiposity) counterparts. Linear regression models suggest a full mediation of adiposity on the association between cardiorespiratory fitness and hs-CRP levels. Our findings seem to emphasize the importance of obesity prevention in childhood, suggesting that having high levels of cardiorespiratory fitness may not counteract the negative consequences ascribed to adiposity on hs-CRP. Copyright © 2017 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.

  12. Non-linearity of the collagen triple helix in solution and implications for collagen function.

    PubMed

    Walker, Kenneth T; Nan, Ruodan; Wright, David W; Gor, Jayesh; Bishop, Anthony C; Makhatadze, George I; Brodsky, Barbara; Perkins, Stephen J

    2017-06-16

    Collagen adopts a characteristic supercoiled triple helical conformation which requires a repeating (Xaa-Yaa-Gly)n sequence. Despite the abundance of collagen, a combined experimental and atomistic modelling approach has not so far quantitated the degree of flexibility seen experimentally in the solution structures of collagen triple helices. To address this question, we report an experimental study on the flexibility of varying lengths of collagen triple helical peptides, composed of six, eight, ten and twelve repeats of the most stable Pro-Hyp-Gly (POG) units. In addition, one unblocked peptide, (POG)10-unblocked, was compared with the blocked (POG)10 as a control for the significance of end effects. Complementary analytical ultracentrifugation and synchrotron small angle X-ray scattering data showed that the conformations of the longer triple helical peptides were not well explained by a linear structure derived from crystallography. To interpret these data, molecular dynamics simulations were used to generate 50 000 physically realistic collagen structures for each of the helices. These structures were fitted against their respective scattering data to reveal the best-fitting structures from this large ensemble of possible helix structures. This curve fitting confirmed that a small degree of non-linearity exists in these best-fit triple helices, with the degree of bending approximated as 4-17° from linearity. Our results open the way for further studies of other collagen triple helices with different sequences and stabilities in order to clarify the role of molecular rigidity and flexibility in collagen extracellular and immune function and disease. © 2017 The Author(s).

  13. Linear and Poisson models for genetic evaluation of tick resistance in cross-bred Hereford x Nellore cattle.

    PubMed

    Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G

    2013-12-01

    Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by deviance information criterion (DIC) and residual mean square, indicated fit superiority of MPOI model. The predictive ability of the models was compared by validation test in independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between models MLOG and MPOI. Poisson model can be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.
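
    The contrast between the MLIN/MLOG and MPOI models comes down to fitting a Poisson regression with a log link rather than a Gaussian model on (transformed) counts. Below is a minimal sketch of such a fit via iteratively reweighted least squares (IRLS) on simulated counts with a single invented covariate; it is not the paper's animal model, which also carries random additive genetic effects.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated tick counts driven by one covariate (all values invented).
n = 500
age = rng.uniform(1.0, 3.0, n)                   # illustrative covariate
ticks = rng.poisson(np.exp(0.3 + 0.5 * age))     # true coefficients: 0.3, 0.5

X = np.column_stack([np.ones(n), age])
beta = np.zeros(2)
for _ in range(25):                              # IRLS for the Poisson log link
    mu = np.exp(X @ beta)                        # current fitted means
    z = X @ beta + (ticks - mu) / mu             # working response
    # Weighted least squares with Poisson working weights W = mu
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
```

    Modelling the counts directly avoids the log(count + constant) transformation the MLOG approach needs to handle zero tick counts.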

  14. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
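
    The nested-loop-plus-pruning idea can be sketched directly: scan examples in random order, keep the top-n outliers ranked by distance to their k-th nearest neighbour, and abandon any example whose running k-NN distance estimate drops below the current cutoff. The data, k, and n below are invented, and this is a simplified reading of the scheme, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

# 500 inliers plus one planted outlier; an example's outlier score is its
# distance to its k-th nearest neighbour.
X = np.vstack([rng.normal(0, 1, (500, 3)), [[8.0, 8.0, 8.0]]])
order = rng.permutation(len(X))         # random scan order makes pruning effective
X = X[order]
planted = int(np.where(order == 500)[0][0])   # new position of the outlier

k, n_out = 5, 3
cutoff = 0.0                            # weakest score in the current top-n
scores = {}
for i in range(len(X)):
    d = np.full(k, np.inf)              # k smallest distances found so far for X[i]
    pruned = False
    for j in range(len(X)):
        if i == j:
            continue
        dij = float(np.linalg.norm(X[i] - X[j]))
        if dij < d.max():
            d[d.argmax()] = dij
        if d.max() < cutoff:            # can no longer reach the top-n: stop early
            pruned = True
            break
    if not pruned:
        scores[i] = float(d.max())      # distance to the k-th nearest neighbour
        if len(scores) > n_out:
            del scores[min(scores, key=scores.get)]
            cutoff = min(scores.values())

top = max(scores, key=scores.get)
```

    Because the running k-NN distance only decreases as more neighbours are seen, most inliers are pruned after a handful of comparisons, which is the source of the near-linear average-case behaviour the abstract describes.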

  15. Foreground Bias from Parametric Models of Far-IR Dust Emission

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Fixsen, D. J.

    2016-01-01

    We use simple toy models of far-IR dust emission to estimate the accuracy to which the polarization of the cosmic microwave background can be recovered using multi-frequency fits, if the parametric form chosen for the fitted dust model differs from the actual dust emission. Commonly used approximations to the far-IR dust spectrum yield CMB residuals comparable to or larger than the sensitivities expected for the next generation of CMB missions, despite fitting the combined CMB plus foreground emission to a precision of 0.1 percent or better. The Rayleigh-Jeans approximation to the dust spectrum biases the fitted dust spectral index by Δβ_d = 0.2 and the inflationary B-mode amplitude by Δr = 0.03. Fitting the dust to a modified blackbody at a single temperature biases the best-fit CMB by Δr > 0.003 if the true dust spectrum contains multiple temperature components. A 13-parameter model fitting two temperature components reduces this bias by an order of magnitude if the true dust spectrum is in fact a simple superposition of emission at different temperatures, but fails at the level Δr = 0.006 for dust whose spectral index varies with frequency. Restricting the observing frequencies to a narrow region near the foreground minimum reduces these biases for some dust spectra but can increase the bias for others. Data at THz frequencies surrounding the peak of the dust emission can mitigate these biases while providing a direct determination of the dust temperature profile.

  16. Model-free estimation of the psychometric function

    PubMed Central

    Żychaluk, Kamila; Foster, David H.

    2009-01-01

    A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
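
    Local linear fitting amounts to a weighted straight-line fit at each evaluation point, with kernel weights centred there. In the sketch below the bandwidth is fixed by hand rather than chosen by the paper's cross-validation procedure, and the logistic "true" function and trial counts are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated psychometric data: proportion correct at each stimulus level.
levels = np.linspace(-3.0, 3.0, 13)
n_trials = 50
p_true = 1.0 / (1.0 + np.exp(-2.0 * levels))        # invented underlying function
p_hat = rng.binomial(n_trials, p_true) / n_trials

def local_linear(x0, x, y, h):
    """Fitted value at x0 from a Gaussian-kernel weighted straight-line fit."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights, bandwidth h
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                   # intercept = estimate at x0

fit_at_0 = local_linear(0.0, levels, p_hat, h=0.8)
```

    No parametric shape is assumed; the only tuning choice is the bandwidth h, which is exactly why its selection is the critical step the abstract highlights.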

  17. Anticancer Agents Based on a New Class of Transition- State Analog Inhibitors for Serine and Cysteine Proteases

    DTIC Science & Technology

    1999-08-01

    electrostatic repulsion between the heteroatom and the ketone. Swain and Lupton have constructed a modified Hammett equation (eq 2) in which they...determined by nonlinear fit to the Michaelis-Menten equation for competitive inhibition using simple weighting. Competitive inhibition was confirmed... equation for competitive inhibition using simple weighting. Competitive inhibition was confirmed by Lineweaver-Burk analysis using simple

  18. Quantification of Liver Proton-Density Fat Fraction in a 7.1 Tesla Preclinical MR System: Impact of the Fitting Technique

    PubMed Central

    Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP

    2016-01-01

    Purpose To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude fitting (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results The PDFFs determined with both reconstructions correlated very strongly (r=0.91). However, a small mean bias between reconstructions (3.9%; CI 2.7%-5.1%) demonstrated divergent results. For both reconstructions, there was linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion Liver fat quantification using the PDFF derived from MRI performed at 7.1 Tesla is feasible. PDFF has strong correlations with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806

  19. Diversity of gastrointestinal helminths in Dall's sheep and the negative association of the abomasal nematode, Marshallagia marshalli, with fitness indicators

    USDA-ARS?s Scientific Manuscript database

    Gastrointestinal helminths can have a detrimental effect on the fitness of wild ungulates. Arctic and Subarctic ecosystems are ideal for the study of host-parasite interactions due to the comparatively simple ecological interactions and limited confounding factors. We used a unique dataset collected...

  20. Fitness and Nutrition Activity Book for Grades 4-6.

    ERIC Educational Resources Information Center

    Ohio State Dept. of Health, Columbus.

    This activity book is designed to supplement health lessons on nutrition and physical fitness for fourth, fifth, and sixth grade students. Some of the activities are quite simple and require very little instruction and direction, while others are more difficult and require careful explanation prior to completion. The level of difficulty of the…

  1. EVALUATING EFFECTS OF LOW QUALITY HABITATS ON REGIONAL GROWTH IN PEROMYSCUS LEUCOPUS: INSIGHTS FROM FIELD-PARAMETERIZED SPATIAL MATRIX MODELS.

    EPA Science Inventory

    Due to complex population dynamics and source-sink metapopulation processes, animal fitness sometimes varies across landscapes in ways that cannot be deduced from simple density patterns. In this study, we examine spatial patterns in fitness using a combination of intensive fiel...

  2. Measuring Slit Width and Separation in a Diffraction Experiment

    ERIC Educational Resources Information Center

    Gan, K. K.; Law, A. T.

    2009-01-01

    We present a procedure for measuring slit width and separation in single- and double-slit diffraction experiments. Intensity spectra of diffracted laser light are measured with an optical sensor (PIN diode). Slit widths and separations are extracted by fitting to the measured spectra. We present a simple fitting procedure to account for the…

  3. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
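
    The polynomial least-squares idea above can be sketched with NumPy's built-in Chebyshev class. This uses the general continuous Chebyshev basis rather than Lear's discrete orthonormal construction, and the data, degree, and noise level below are illustrative assumptions:

```python
import numpy as np

# Uniformly spaced samples of a smooth function with additive noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 101)
y = np.sin(2.0 * x) + 0.01 * rng.standard_normal(x.size)

# Least-squares fit in the Chebyshev basis; the basis is well conditioned,
# so higher-order fits stay numerically stable compared with raw monomials.
cheb_fit = np.polynomial.Chebyshev.fit(x, y, deg=7)

# Residual RMS should sit near the injected noise level (0.01).
rms = float(np.sqrt(np.mean((y - cheb_fit(x)) ** 2)))
```

    The same fit with a raw monomial basis would produce an increasingly ill-conditioned normal matrix as the degree grows, which is the roundoff advantage the abstract refers to.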

  4. Assessing the impact of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping

    2014-05-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centers for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effects of the different weighting structures adopted while estimating the linear fits to the NTAL displacements.
Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.

  5. Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage

    NASA Astrophysics Data System (ADS)

    Cepowski, Tomasz

    2017-06-01

    The paper presents mathematical relationships that allow us to forecast the estimated main engine power of new container ships, based on data concerning vessels built in 2005-2015. The presented approximations allow us to estimate the engine power based on the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application for estimation of container ship engine power needed in preliminary parametric design of the ship. It follows from the above that the use of multiple linear regression to predict the main engine power of a container ship brings more accurate solutions than simple linear regression.
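
    As a rough illustration of the comparison, with synthetic ship data standing in for the 2005-2015 fleet data (which are not reproduced here), both the simple and the multivariate regression can be fitted by ordinary least squares:

```python
import numpy as np

# Hypothetical data: length between perpendiculars Lpp [m], container
# capacity TEU, and main engine power P [kW], generated from an assumed
# two-variable linear relation plus noise (not the paper's real ship data).
rng = np.random.default_rng(1)
lpp = rng.uniform(150.0, 350.0, 60)
teu = rng.uniform(1000.0, 15000.0, 60)
power = 50.0 * lpp + 3.0 * teu + 2000.0 + rng.normal(0.0, 500.0, 60)

def fit(design, y):
    """Ordinary least squares; returns coefficients and residual RMS."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    rms = float(np.sqrt(np.mean((design @ coef - y) ** 2)))
    return coef, rms

ones = np.ones_like(lpp)
_, rms_simple = fit(np.column_stack([ones, lpp]), power)      # P = a + b*Lpp
_, rms_multi = fit(np.column_stack([ones, lpp, teu]), power)  # P = a + b*Lpp + c*TEU
```

    On data of this form, the two-predictor model leaves only the noise as residual, while the single-predictor model also absorbs the unexplained TEU effect, mirroring the paper's conclusion that multiple regression is the more accurate estimator.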

  6. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    PubMed

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
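
    A minimal sketch of Levenberg-Marquardt for a single Gaussian peak, assuming a hand-rolled implementation rather than the authors' code (no prior, no model defect, and identity damping instead of a scaled diagonal):

```python
import numpy as np

def gauss(x, p):
    """Single Gaussian peak with amplitude a, center mu, width sig."""
    a, mu, sig = p
    return a * np.exp(-0.5 * ((x - mu) / sig) ** 2)

def jacobian(x, p):
    """Analytic partial derivatives of gauss() w.r.t. (a, mu, sig)."""
    a, mu, sig = p
    g = np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return np.column_stack([
        g,                                 # d/da
        a * g * (x - mu) / sig**2,         # d/dmu
        a * g * (x - mu) ** 2 / sig**3,    # d/dsig
    ])

def levenberg_marquardt(x, y, p0, n_iter=50, lam=1e-3):
    """Minimize sum((y - gauss(x, p))**2) with adaptive identity damping."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - gauss(x, p)
        J = jacobian(x, p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        if np.sum((y - gauss(x, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                     # reject: move toward gradient descent
    return p

# Synthetic peak with additive noise; start from a deliberately poor guess.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
y = gauss(x, (5.0, 4.0, 0.8)) + rng.normal(0.0, 0.05, x.size)
fitted = levenberg_marquardt(x, y, p0=(3.0, 5.0, 1.0))
```

    A prior on the parameters, as advocated in the abstract, would enter as extra residual terms rather than by freezing parameters at their nominal values.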

  7. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  8. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  9. Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference.

    PubMed

    Park, Hyoung-Jun; Song, Minho

    2008-10-29

    The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method.
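
    The calibration step might be sketched as follows, with an invented nonlinear sweep and an illustrative ITU grid standing in for the actual instrument parameters:

```python
import numpy as np

# Illustrative ITU grid: ten equidistant passband wavelengths (nm).
itu_wavelengths = 1550.0 + 0.8 * np.arange(10)

# Invented nonlinear wavelength sweep: the arrival time of each ITU peak in
# the scan is quadratic in the peak index, mimicking nonlinear tuning.
t_peaks = 0.1 * np.arange(10) + 0.004 * np.arange(10) ** 2

# Cubic calibration polynomial mapping scan time to wavelength.
coeffs = np.polyfit(t_peaks, itu_wavelengths, deg=3)

# Residual of the calibration at the reference peaks (should be tiny).
calib_err = float(np.max(np.abs(np.polyval(coeffs, t_peaks) - itu_wavelengths)))

# An FBG reflection peak detected at t = 0.55 in the same scan is assigned
# a wavelength by evaluating the calibration polynomial.
fbg_wavelength = float(np.polyval(coeffs, 0.55))
```

    Because the ITU reference peaks and the FBG reflections share the same scan, the polynomial absorbs the nonlinear tuning regardless of scan range or frequency, which is the accuracy argument made above.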

  10. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
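
    The Pareto-dominance test itself is simple to state in code; the sketch below assumes each candidate input set has been scored with one error per calibration target (illustrative numbers, not the paper's models):

```python
import numpy as np

def pareto_frontier(errors):
    """Indices of non-dominated rows of an (n_sets, n_targets) error array.

    Lower error is better. A row is dominated if some other row is <= on
    every target and strictly < on at least one.
    """
    e = np.asarray(errors, dtype=float)
    frontier = []
    for i in range(e.shape[0]):
        others = np.delete(e, i, axis=0)
        dominated = bool(np.any(
            np.all(others <= e[i], axis=1) & np.any(others < e[i], axis=1)
        ))
        if not dominated:
            frontier.append(i)
    return frontier

# Four candidate input sets scored against two calibration targets.
errs = [[1.0, 5.0],   # on the frontier: best on target 1
        [2.0, 2.0],   # on the frontier: good compromise
        [5.0, 1.0],   # on the frontier: best on target 2
        [3.0, 3.0]]   # dominated by [2.0, 2.0]
front = pareto_frontier(errs)
```

    Note that no weights appear anywhere: the frontier keeps every defensible trade-off between targets, which is exactly how the approach sidesteps the weighted-sum GOF choices discussed above.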

  11. Paper-cutting operations using scissors in Drury's law tasks.

    PubMed

    Yamanaka, Shota; Miyashita, Homei

    2018-05-01

    Human performance modeling is a core topic in ergonomics. In addition to deriving models, it is important to verify the kinds of tasks that can be modeled. Drury's law is promising for path tracking tasks such as navigating a path with pens or driving a car. We conducted an experiment based on the observation that paper-cutting tasks using scissors resemble such tasks. The results showed that cutting arc-like paths (1/4 of a circle) showed an excellent fit with Drury's law (R² > 0.98), whereas cutting linear paths showed a worse fit (R² > 0.87). Since linear paths yielded better fits when path amplitudes were divided (R² > 0.99 for all amplitudes), we discuss the characteristics of paper-cutting operations using scissors. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Linear Combination Fitting (LCF)-XANES analysis of As speciation in selected mine-impacted materials

    EPA Pesticide Factsheets

    This table provides sample identification labels and classification of sample type (tailings, calcinated, grey slime). For each sample, total arsenic and iron concentrations determined by acid digestion and ICP analysis are provided along with arsenic in-vitro bioaccessibility (As IVBA) values to estimate arsenic risk. Lastly, the table provides linear combination fitting results from synchrotron XANES analysis showing the distribution of arsenic speciation phases present in each sample along with fitting error (R-factor). This dataset is associated with the following publication: Ollson, C., E. Smith, K. Scheckel, A. Betts, and A. Juhasz. Assessment of arsenic speciation and bioaccessibility in mine-impacted materials. Diana Aga, Wonyong Choi, Andrew Daugulis, Gianluca Li Puma, Gerasimos Lyberatos, and Joo Hwa Tay JOURNAL OF HAZARDOUS MATERIALS. Elsevier Science Ltd, New York, NY, USA, 313: 130-137, (2016).

  13. Assessing Competencies Needed to Engage With Digital Health Services: Development of the eHealth Literacy Assessment Toolkit.

    PubMed

    Karnoe, Astrid; Furstrand, Dorthe; Christensen, Karl Bang; Norgaard, Ole; Kayser, Lars

    2018-05-10

    To achieve full potential in user-oriented eHealth projects, we need to ensure a match between the eHealth technology and the user's eHealth literacy, described as knowledge and skills. However, there is a lack of multifaceted eHealth literacy assessment tools suitable for screening purposes. The objective of our study was to develop and validate an eHealth literacy assessment toolkit (eHLA) that assesses individuals' health literacy and digital literacy using a mix of existing and newly developed scales. From 2011 to 2015, scales were continuously tested and developed in an iterative process, which led to 7 tools being included in the validation study. The eHLA validation version consisted of 4 health-related tools (tool 1: "functional health literacy," tool 2: "health literacy self-assessment," tool 3: "familiarity with health and health care," and tool 4: "knowledge of health and disease") and 3 digitally-related tools (tool 5: "technology familiarity," tool 6: "technology confidence," and tool 7: "incentives for engaging with technology") that were tested in 475 respondents from a general population sample and an outpatient clinic. Statistical analyses examined floor and ceiling effects, interitem correlations, item-total correlations, and Cronbach coefficient alpha (CCA). Rasch models (RM) examined the fit of data. Tools were reduced in items to secure robust tools fit for screening purposes. Reductions were made based on psychometrics, face validity, and content validity. Tool 1 was not reduced in items; it consequently consists of 10 items. The overall fit to the RM was acceptable (Anderson conditional likelihood ratio, CLR=10.8; df=9; P=.29), and CCA was .67. Tool 2 was reduced from 20 to 9 items. The overall fit to a log-linear RM was acceptable (Anderson CLR=78.4, df=45, P=.002), and CCA was .85. Tool 3 was reduced from 23 to 5 items. The final version showed excellent fit to a log-linear RM (Anderson CLR=47.7, df=40, P=.19), and CCA was .90. 
Tool 4 was reduced from 12 to 6 items. The fit to a log-linear RM was acceptable (Anderson CLR=42.1, df=18, P=.001), and CCA was .59. Tool 5 was reduced from 20 to 6 items. The fit to the RM was acceptable (Anderson CLR=30.3, df=17, P=.02), and CCA was .94. Tool 6 was reduced from 5 to 4 items. The fit to a log-linear RM taking local dependency (LD) into account was acceptable (Anderson CLR=26.1, df=21, P=.20), and CCA was .91. Tool 7 was reduced from 6 to 4 items. The fit to a log-linear RM taking LD and differential item functioning into account was acceptable (Anderson CLR=23.0, df=29, P=.78), and CCA was .90. The eHLA consists of 7 short, robust scales that assess individual's knowledge and skills related to digital literacy and health literacy. ©Astrid Karnoe, Dorthe Furstrand, Karl Bang Christensen, Ole Norgaard, Lars Kayser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.05.2018.

  14. Testing the dose-response specification in epidemiology: public health and policy consequences for lead.

    PubMed

    Rothenberg, Stephen J; Rothenberg, Jesse C

    2005-09-01

    Statistical evaluation of the dose-response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose-response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear-linear dose response) and natural-log-transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose-response relationship. We found that a log-linear lead-IQ relationship was a significantly better fit than was a linear-linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead-IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 microg/dL to 2.0 microg/dL) was 2.2 times (319 billion dollars) that calculated using a linear-linear dose-response function (149 billion dollars). The Centers for Disease Control and Prevention action limit of 10 microg/dL for children fails to protect against most damage and economic cost attributable to lead exposure.
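
    The linear versus log-linear comparison can be illustrated on synthetic data generated from a logarithmic relation (hypothetical numbers, not the pooled IQ data); the log-linear design should then achieve the smaller residual sum of squares:

```python
import numpy as np

# Synthetic illustration: the outcome declines with the logarithm of blood
# lead, so a log-linear regressor should fit better than a linear one.
rng = np.random.default_rng(3)
pb = rng.uniform(1.0, 30.0, 300)                     # blood lead, ug/dL
iq = 100.0 - 5.0 * np.log(pb) + rng.normal(0.0, 2.0, 300)

def rss(design, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.sum((y - design @ coef) ** 2))

ones = np.ones_like(pb)
rss_linear = rss(np.column_stack([ones, pb]), iq)          # IQ = a + b*Pb
rss_loglin = rss(np.column_stack([ones, np.log(pb)]), iq)  # IQ = a + b*ln(Pb)
```

    In practice the two specifications would be compared with a formal test rather than raw RSS, but the sketch shows why the functional form matters: a log-linear curve implies the steepest losses at the lowest exposures, which drives the benefit estimates discussed above.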

  15. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    ERIC Educational Resources Information Center

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  16. Universal fitting formulae for baryon oscillation surveys

    NASA Astrophysics Data System (ADS)

    Blake, Chris; Parkinson, David; Bassett, Bruce; Glazebrook, Karl; Kunz, Martin; Nichol, Robert C.

    2006-01-01

    The next generation of galaxy surveys will attempt to measure the baryon oscillations in the clustering power spectrum with high accuracy. These oscillations encode a preferred scale which may be used as a standard ruler to constrain cosmological parameters and dark energy models. In this paper we present simple analytical fitting formulae for the accuracy with which the preferred scale may be determined in the tangential and radial directions by future spectroscopic and photometric galaxy redshift surveys. We express these accuracies as a function of survey parameters such as the central redshift, volume, galaxy number density and (where applicable) photometric redshift error. These fitting formulae should greatly increase the efficiency of optimizing future surveys, which requires analysis of a potentially vast number of survey configurations and cosmological models. The formulae are calibrated using a grid of Monte Carlo simulations, which are analysed by dividing out the overall shape of the power spectrum before fitting a simple decaying sinusoid to the oscillations. The fitting formulae reproduce the simulation results with a fractional scatter of 7 per cent (10 per cent) in the tangential (radial) directions over a wide range of input parameters. We also indicate how sparse-sampling strategies may enhance the effective survey area if the sampling scale is much smaller than the projected baryon oscillation scale.

  17. Qualified Fitness and Exercise as Professionals and Exercise Prescription: Evolution of the PAR-Q and Canadian Aerobic Fitness Test.

    PubMed

    Shephard, Roy J

    2015-04-01

    Traditional approaches to exercise prescription have included a preliminary medical screening followed by exercise tests of varying sophistication. To maximize population involvement, qualified fitness and exercise professionals (QFEPs) have used a self-administered screening questionnaire (the Physical Activity Readiness Questionnaire, PAR-Q) and a simple measure of aerobic performance (the Canadian Aerobic Fitness Test, CAFT). However, problems have arisen in applying the original protocol to those with chronic disease. Recent developments have addressed these issues. Evolution of the PAR-Q and CAFT protocol is reviewed from their origins in 1974 to the current electronic decision tree model of exercise screening and prescription. About a fifth of apparently healthy adults responded positively to the original PAR-Q instrument, thus requiring an often unwarranted referral to a physician. Minor changes of wording did not overcome this problem. However, a consensus process has now developed an electronic decision tree for stratification of exercise risk not only for healthy individuals, but also for those with various types of chronic disease. The new approach to clearance greatly reduces physician referrals and extends the role of QFEPs. The availability of effective screening and simple fitness testing should contribute to the goal of maximizing physical activity in the entire population.

  18. Study of the anticorrelations between ozone and UV-B radiation using linear and exponential fits in Southern Brazil

    NASA Astrophysics Data System (ADS)

    Guarnieri, R.; Padilha, L.; Guarnieri, F.; Echer, E.; Makita, K.; Pinheiro, D.; Schuch, A.; Boeira, L.; Schuch, N.

    Ultraviolet radiation type B (UV-B, 280-315 nm) is well known for its damage to life on Earth, including the possibility of causing skin cancer in humans. However, atmospheric ozone has absorption bands in this spectral region, reducing the radiation's incidence on Earth's surface. The ozone amount is therefore one of the parameters, besides clouds, aerosols, solar zenith angle, altitude, and albedo, that determine the UV-B intensity reaching the Earth's surface. The total ozone column, in Dobson Units, determined by the TOMS spectrometer on board a NASA satellite, and UV-B radiation measurements obtained by a UV-B radiometer model MS-210W (Eko Instruments) were correlated. The measurements were obtained at the Observatório Espacial do Sul - Instituto Nacional de Pesquisas Espaciais (OES/CRSPE/INPE-MCT), coordinates Lat. 29.44°S, Long. 53.82°W. The correlations used UV-B measurements at fixed solar zenith angles, and only clear-sky days from July 1999 to December 2001 were selected. Moreover, the mathematical behavior of the correlation at different angles was observed, and correlation coefficients were determined by linear and first-order exponential fits. Both fits yielded high correlation coefficients, and the difference between the linear and exponential fits can be considered small.

  19. Conceptualization of the Sexual Response Models in Men: Are there Differences Between Sexually Functional and Dysfunctional Men?

    PubMed

    Connaughton, Catherine; McCabe, Marita; Karantzas, Gery

    2016-03-01

    Research to validate models of sexual response empirically in men with and without sexual dysfunction (MSD), as currently defined, is limited. To explore the extent to which the traditional linear or the Basson circular model best represents male sexual response for men with MSD and sexually functional men. In total, 573 men completed an online questionnaire to assess sexual function and aspects of the models of sexual response. In total, 42.2% of men (242) were sexually functional, and 57.8% (331) had at least one MSD. Models were built and tested using bootstrapping and structural equation modeling. Fit of models for men with and without MSD. The linear model and the initial circular model were a poor fit for men with and without MSD. A modified version of the circular model demonstrated adequate fit for the two groups and showed important interactions between psychological factors and sexual response for men with and without MSD. Male sexual response was not represented by the linear model for men with or without MSD, excluding possible healthy responsive desire. The circular model provided a better fit for the two groups of men but demonstrated that the relations between psychological factors and phases of sexual response were different for men with and without MSD as currently defined. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  20. Dynamical properties of maps fitted to data in the noise-free limit

    PubMed Central

    Lindström, Torsten

    2013-01-01

    We argue that any attempt to classify dynamical properties from nonlinear finite time-series data requires a mechanistic model fitting the data better than piecewise linear models according to standard model selection criteria. Such a procedure seems necessary but still not sufficient. PMID:23768079

  1. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  2. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  3. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  4. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  5. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  6. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  7. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  8. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  9. Simple, Flexible, Trigonometric Taper Equations

    Treesearch

    Charles E. Thomas; Bernard R. Parresol

    1991-01-01

    There have been numerous approaches to modeling stem form in recent decades. The majority have concentrated on the simpler coniferous bole form and have become increasingly complex mathematical expressions. Use of trigonometric equations provides a simple expression of taper that is flexible enough to fit both coniferous and hard-wood bole forms. As an illustration, we...

  10. Commande optimale minimisant la consommation d'energie d'un drone utilise comme relai de communication

    NASA Astrophysics Data System (ADS)

    Mechirgui, Monia

    The purpose of this project is to implement an optimal control regulator, specifically the linear quadratic regulator, to control the position of an unmanned aerial vehicle known as a quadrotor. This type of UAV has a symmetrical and simple structure, so its control is relatively easy compared with conventional helicopters. Optimal control can be shown to be well suited to reconciling tracking performance with energy consumption. In practice the linearity requirements are not met, but elaborations of the linear quadratic regulator have been used in many nonlinear applications with good results. The linear quadratic controller used in this thesis is presented in two forms: a simple form and one adapted to the state of charge of the battery. Starting from the traditional structure of the linear quadratic regulator, we introduce a new criterion that depends on the state of charge of the battery, in order to optimize energy consumption. This controller is intended to track and maintain the desired trajectory during several maneuvers while minimizing energy consumption. Both the simple and the adapted linear quadratic controllers are implemented in Simulink in discrete time. The model simulates the dynamics and control of a quadrotor. Performance and stability of the system are analyzed with several tests, from simple hover to complex closed-loop trajectories.

  11. Stage Costumes for Girls.

    ERIC Educational Resources Information Center

    Greenhowe, Jean

    This book contains full instructions for making 14 costumes for girls to fit any sizes up to about 147 cm (4 feet 10 inches) in height. All the garments can be made to fit any child's individual measurements without the need of complicated pattern pieces. Simple basic shapes such as rectangles and circles are used for the patterns and the only…

  12. How Transformational Leadership Influences Work Engagement Among Nurses: Does Person-Job Fit Matter?

    PubMed

    Enwereuzor, Ibeawuchi K; Ugwu, Leonard I; Eze, Onyinyechi A

    2018-03-01

    The current study examines whether person-job fit moderates the relationship between transformational leadership and work engagement. Data were collected using cross-sectional design from 224 (15 male and 209 female) hospital nurses. Participants completed measures of transformational leadership, person-job fit, and work engagement. Moderated multiple regression results showed that transformational leadership had a significant positive predictive relationship with work engagement, and person-job fit had a significant positive predictive relationship with work engagement. Simple slope analysis showed that person-job fit moderated the relationship between transformational leadership and work engagement such that transformational leadership was more positively related to work engagement for nurses with high person-job fit compared with those with low person-job fit. Thus, all the hypotheses were confirmed. The findings were discussed, and suggestions for future research were offered.
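
    The simple-slope analysis reported above evaluates the conditional effect of the predictor at representative values of the moderator. A minimal sketch, with hypothetical standardized coefficients (b1 for leadership, b3 for the interaction; these are not the study's estimates):

```python
def simple_slope(b1, b3, moderator_value):
    """Conditional effect of the predictor in a moderated regression
    y = b0 + b1*x + b2*m + b3*x*m + e, at a given moderator value."""
    return b1 + b3 * moderator_value

# Hypothetical standardized estimates, for illustration only.
b1, b3 = 0.30, 0.15
slope_high = simple_slope(b1, b3, +1.0)  # at +1 SD of person-job fit
slope_low = simple_slope(b1, b3, -1.0)   # at -1 SD of person-job fit
```

    A larger conditional slope at high fit than at low fit (here 0.45 versus 0.15) is the moderation pattern the study reports.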

  13. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  14. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  15. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  16. Modelling the isometric force response to multiple pulse stimuli in locust skeletal muscle.

    PubMed

    Wilson, Emma; Rustighi, Emiliano; Mace, Brian R; Newland, Philip L

    2011-02-01

    An improved model of locust skeletal muscle will inform on the general behaviour of invertebrate and mammalian muscle, with the eventual aim of improving biomedical models of human muscles, embracing prosthetic construction and muscle therapy. In this article, the isometric response of the locust hind leg extensor muscle to input pulse trains is investigated. Experimental data were collected by stimulating the muscle directly and measuring the force at the tibia. The responses to constant-frequency stimulus trains of various frequencies and numbers of pulses were decomposed into the response to each individual stimulus. Each individual pulse response was then fitted to a model, it being assumed that the response to each pulse could be approximated as a linear impulse response; no assumptions were made about the model order. When the interpulse frequency (IPF) was low and the number of pulses in the train small, a second-order model provided a good fit to each pulse. For moderate IPF or for long pulse trains a linear third-order model provided a better fit to the response to each pulse. The fit using a second-order model deteriorated with increasing IPF. When the input comprised higher IPFs with a large number of pulses, the assumption that the response was linear could not be confirmed. A generalised model is also presented. This model is second order and contains two nonlinear terms. The model is able to capture the force response to a range of inputs, including cases where the input comprised higher-frequency pulse trains and the assumption of quasi-linear behaviour could not be confirmed.
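
    The decomposition described above treats the train response as a superposition of single-pulse impulse responses. A minimal sketch under that quasi-linearity assumption, using a critically damped second-order impulse response with hypothetical amplitude and time-constant values (not the study's fitted parameters):

```python
import math

def pulse_response(t, amp=1.0, tau=0.05):
    """Critically damped second-order impulse response:
    h(t) = amp * (t/tau) * exp(1 - t/tau), zero for t < 0.
    Normalized so the peak value is `amp`, reached at t = tau."""
    if t < 0:
        return 0.0
    return amp * (t / tau) * math.exp(1.0 - t / tau)

def train_response(t, pulse_times):
    """Quasi-linear superposition of single-pulse responses."""
    return sum(pulse_response(t - tp) for tp in pulse_times)

# Five pulses at 20 Hz (50 ms inter-pulse interval).
pulses = [i * 0.05 for i in range(5)]
```

    Summation of overlapping pulse responses reproduces the force build-up seen in low-IPF trains; the paper's point is that this linear picture breaks down at high IPF.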

  17. Improving Kepler Pipeline Sensitivity with Pixel Response Function Photometry.

    NASA Astrophysics Data System (ADS)

    Morris, Robert L.; Bryson, Steve; Jenkins, Jon Michael; Smith, Jeffrey C

    2014-06-01

    We present the results of our investigation into the feasibility and expected benefits of implementing PRF-fitting photometry in the Kepler Science Processing Pipeline. The Kepler Pixel Response Function (PRF) describes the expected system response to a point source at infinity and includes the effects of the optical point spread function, the CCD detector responsivity function, and spacecraft pointing jitter. Planet detection in the Kepler pipeline is currently based on simple aperture photometry (SAP), which is most effective when applied to uncrowded bright stars. Its effectiveness diminishes rapidly as target brightness decreases relative to the effects of noise sources such as detector electronics, background stars, and image motion. In contrast, PRF photometry is based on fitting an explicit model of image formation to the data and naturally accounts for image motion and contributions of background stars. The key to obtaining high-quality photometry from PRF fitting is a high-quality model of the system's PRF, while the key to efficiently processing the large number of Kepler targets is an accurate catalog and accurate mapping of celestial coordinates onto the focal plane. If the CCD coordinates of stellar centroids are known a priori then the problem of PRF fitting becomes linear. A model of the Kepler PRF was constructed at the time of spacecraft commissioning by fitting piecewise polynomial surfaces to data from dithered full frame images. While this model accurately captured the initial state of the system, the PRF has evolved dynamically since then and has been seen to deviate significantly from the initial (static) model. We construct a dynamic PRF model which is then used to recover photometry for all targets of interest. Both simulation tests and results from Kepler flight data demonstrate the effectiveness of our approach. Kepler was selected as the 10th mission of the Discovery Program. 
Funding for this mission is provided by NASA’s Science Mission Directorate.

  18. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  19. Bayesian parameter estimation for stochastic models of biological cell migration

    NASA Astrophysics Data System (ADS)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
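
    The position-based estimation advocated above can be illustrated for 1-D Brownian motion with drift, where the drift and diffusion coefficient follow directly from the increments of a single trajectory. A minimal sketch (the paper's full Bayesian treatment, including fractional Brownian motion, is not reproduced here):

```python
import random

def simulate_path(n_steps, dt, drift, diff, seed=0):
    """1-D Brownian motion with drift: dx = v*dt + sqrt(2*D*dt)*xi."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += drift * dt + (2.0 * diff * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def estimate_drift_diffusion(path, dt):
    """Estimate drift v and diffusion D directly from the increments
    of a single path, rather than from the mean square displacement."""
    incs = [b - a for a, b in zip(path, path[1:])]
    n = len(incs)
    v = sum(incs) / (n * dt)
    var = sum((dx - v * dt) ** 2 for dx in incs) / n
    return v, var / (2.0 * dt)

path = simulate_path(20000, 0.01, drift=1.0, diff=0.5)
v_hat, d_hat = estimate_drift_diffusion(path, 0.01)
```

    With a long enough path the increment-based estimates recover the simulation parameters to within sampling error, without ever forming an MSD curve.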

  20. A fiber orientation-adapted integration scheme for computing the hyperelastic Tucker average for short fiber reinforced composites

    NASA Astrophysics Data System (ADS)

    Goldberg, Niels; Ospald, Felix; Schneider, Matti

    2017-10-01

    In this article we introduce a fiber orientation-adapted integration scheme for Tucker's orientation averaging procedure applied to non-linear material laws, based on angular central Gaussian fiber orientation distributions. This method is stable w.r.t. fiber orientations degenerating into planar states and enables the construction of orthotropic hyperelastic energies for truly orthotropic fiber orientation states. We establish a reference scenario for fitting the Tucker average of a transversely isotropic hyperelastic energy, corresponding to a uni-directional fiber orientation, to microstructural simulations, obtained by FFT-based computational homogenization of neo-Hookean constituents. We carefully discuss ideas for accelerating the identification process, leading to a tremendous speed-up compared to a naive approach. The resulting hyperelastic material map turns out to be surprisingly accurate, simple to integrate in commercial finite element codes and fast in its execution. We demonstrate the capabilities of the extracted model by a finite element analysis of a fiber reinforced chain link.

  1. A novel multiple headspace extraction gas chromatographic method for measuring the diffusion coefficient of methanol in water and in olive oil.

    PubMed

    Zhang, Chun-Yun; Chai, Xin-Sheng

    2015-03-13

    A novel method for the determination of the diffusion coefficient (D) of methanol in water and olive oil has been developed. Based on multiple headspace extraction gas chromatography (MHE-GC), the methanol released from the liquid sample of interest in a closed sample vial was determined in a stepwise fashion. A theoretical model was derived to establish the relationship between the diffusion coefficient and the GC signals from MHE-GC measurements. The results showed that the present method has excellent precision (RSD < 1%) in the linear fitting procedure and good accuracy for the diffusion coefficients of methanol in both water and olive oil, when compared with data reported in the literature. The present method is simple and practical and can be a valuable tool for the determination of the diffusion coefficient of volatile analyte(s) into food simulants from food and beverage packaging material, both in research studies and in actual applications. Copyright © 2015 Elsevier B.V. All rights reserved.
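
    In multiple headspace extraction the successive peak areas decay geometrically, so ln(A_i) is linear in the extraction step and the slope of that fit is the quantity carried into downstream models. A minimal sketch of the linear-fitting step only; the paper's specific diffusion-coefficient model is not reproduced:

```python
import math

def mhe_linear_fit(peak_areas):
    """Least-squares fit of ln(A_i) versus extraction step i.
    Returns (slope, intercept); exp(slope) is the per-step retention
    factor, and the total analyte area is A_1 / (1 - exp(slope))."""
    xs = list(range(1, len(peak_areas) + 1))
    ys = [math.log(a) for a in peak_areas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx
```

    For perfectly geometric areas the fitted slope recovers the decay constant exactly, which is why the reported RSD of the linear fit can be so small.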

  2. Analysis of hop-derived terpenoids in beer and evaluation of their behavior using the stir bar-sorptive extraction method with GC-MS.

    PubMed

    Kishimoto, Toru; Wanikawa, Akira; Kagami, Noboru; Kawatsura, Katsuyuki

    2005-06-15

    Hop aroma components, which mainly comprise terpenoids, contribute to the character of beers. However, pretreatments are necessary before analyzing these components because of their trace levels and complicated matrixes. Here, the stir bar-sorptive extraction (SBSE) method was used to detect and quantify many terpenoids simultaneously from small samples. This simple technique showed low coefficients of variation, high accuracy, and low detection limits. An investigation of the behavior of terpenoids identified two distinct patterns of decreasing concentration during wort boiling. The first, which was seen in myrcene and linalool, involved a rapid decrease that was best fitted by a quadratic curve. The second, which was observed in beta-eudesmol, humulene, humulene epoxide I, beta-farnesene, caryophyllene, and geraniol, involved a gentle linear decrease. Conversely, the concentration of beta-damascenone increased after boiling. As the aroma composition depended on the hop variety, we also examined the relationship between terpenoid content and sensory analysis in beer.

  3. Experimental study and finite element analysis based on equivalent load method for laser ultrasonic measurement of elastic constants.

    PubMed

    Zhan, Yu; Liu, Changsheng; Zhang, Fengpeng; Qiu, Zhaoguo

    2016-07-01

    The laser ultrasonic generation of Rayleigh surface wave and longitudinal wave in an elastic plate is studied by experiment and finite element method. In order to eliminate the measurement error and the time delay of the experimental system, the linear fitting method of experimental data is applied. The finite element analysis software ABAQUS is used to simulate the propagation of Rayleigh surface wave and longitudinal wave caused by laser excitation on a sheet metal sample surface. The equivalent load method is proposed and applied. The pulsed laser is equivalent to the surface load in time and space domain to meet the Gaussian profile. The relationship between the physical parameters of the laser and the load is established by the correction factor. The numerical solution is in good agreement with the experimental result. The simple and effective numerical and experimental methods for laser ultrasonic measurement of the elastic constants are demonstrated. Copyright © 2016. Published by Elsevier B.V.
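
    The linear-fitting step used to remove the fixed time delay of the measurement chain can be sketched as follows: arrival time versus propagation distance is fitted to t = t0 + d/c, so the slope gives the wave speed and the intercept absorbs the system delay. A minimal illustration with hypothetical units (mm and µs), not the study's actual data:

```python
def wave_speed_from_arrivals(distances_mm, arrivals_us):
    """Least-squares fit of t = t0 + d/c. Returns (c, t0): the wave
    speed in mm/us (numerically km/s) and the fixed system delay."""
    n = len(distances_mm)
    mx = sum(distances_mm) / n
    my = sum(arrivals_us) / n
    sxx = sum((d - mx) ** 2 for d in distances_mm)
    sxy = sum((d - mx) * (t - my)
              for d, t in zip(distances_mm, arrivals_us))
    slope = sxy / sxx          # microseconds per millimetre
    t0 = my - slope * mx       # system time delay, microseconds
    return 1.0 / slope, t0
```

    Because the delay appears only in the intercept, the extracted speed (and hence the elastic constants derived from it) is insensitive to it.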

  4. Mechanical Properties of Nylon Harp Strings

    PubMed Central

    Lynch-Aird, Nicolas; Woodhouse, Jim

    2017-01-01

    Monofilament nylon strings with a range of diameters, commercially marketed as harp strings, have been tested to establish their long-term mechanical properties. Once a string had settled into a desired stress state, the Young’s modulus was measured by a variety of methods that probe different time-scales. The modulus was found to be a strong function of testing frequency and also a strong function of stress. Strings were also subjected to cyclical variations of temperature, allowing various thermal properties to be measured: the coefficient of linear thermal expansion and the thermal sensitivities of tuning, Young’s modulus and density. The results revealed that the particular strings tested are divided into two groups with very different properties: stress-strain behaviour differing by a factor of two and some parametric sensitivities even having the opposite sign. Within each group, correlation studies allowed simple functional fits to be found to the key properties, which have the potential to be used in automated tuning systems for harp strings. PMID:28772858

  5. Monitoring trace gases in downtown Toronto using open-path Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Byrne, B.; Strong, K.; Colebatch, O.; Fogal, P.; Mittermeier, R. L.; Wunch, D.; Jones, D. B. A.

    2017-12-01

    Emissions of greenhouse gases (GHGs) in urban environments can be highly heterogeneous. For example, vehicles produce point source emissions which can result in heterogeneous GHG concentrations on scales <10 m. The highly localized scale of these emissions can make it difficult to measure mean GHG concentrations on scales of 100-1000 m. Open-Path Fourier Transform Infrared Spectroscopy (OP-FTIR) measurements offer spatial averaging and continuous measurements of several trace gases simultaneously in the same airmass. We have set up an open-path system in downtown Toronto to monitor trace gases in the urban boundary layer. Concentrations of CO2, CO, CH4, and N2O are derived from atmospheric absorption spectra recorded over a two-way atmospheric open path of 320 m using non-linear least squares fitting. Using a simple box model and co-located boundary layer height measurements, we estimate surface fluxes of these gases in downtown Toronto from our OP-FTIR observations.
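
    The box-model flux estimate described above can be sketched in one function: if horizontal advection and entrainment are neglected (a strong assumption), the surface flux is the boundary-layer column times the rate of change of the mixing ratio. The molar density of air and all numbers below are illustrative, not values from the study:

```python
def box_model_flux(conc_ppm, dt_s, mixing_height_m,
                   air_density_mol_m3=41.6):
    """One-box surface flux F ~ H * n_air * d(mixing ratio)/dt,
    in mol m^-2 s^-1. `conc_ppm` are equally spaced samples taken
    `dt_s` seconds apart; advection and entrainment are ignored."""
    dxdt = ((conc_ppm[-1] - conc_ppm[0]) * 1e-6
            / (dt_s * (len(conc_ppm) - 1)))
    return mixing_height_m * air_density_mol_m3 * dxdt
```

    For example, a 0.2 ppm CO2 rise over two hours under a 500 m boundary layer implies a flux of a few times 10^-7 mol m^-2 s^-1.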

  6. Mechanical Properties of Nylon Harp Strings.

    PubMed

    Lynch-Aird, Nicolas; Woodhouse, Jim

    2017-05-04

    Monofilament nylon strings with a range of diameters, commercially marketed as harp strings, have been tested to establish their long-term mechanical properties. Once a string had settled into a desired stress state, the Young's modulus was measured by a variety of methods that probe different time-scales. The modulus was found to be a strong function of testing frequency and also a strong function of stress. Strings were also subjected to cyclical variations of temperature, allowing various thermal properties to be measured: the coefficient of linear thermal expansion and the thermal sensitivities of tuning, Young's modulus and density. The results revealed that the particular strings tested are divided into two groups with very different properties: stress-strain behaviour differing by a factor of two and some parametric sensitivities even having the opposite sign. Within each group, correlation studies allowed simple functional fits to be found to the key properties, which have the potential to be used in automated tuning systems for harp strings.

  7. The Light Side of Dark Matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia

    2013-04-01

    We present a new, heuristic, two-parameter model for predicting the rotation curves of disc galaxies. The model is tested on 22 randomly chosen galaxies, represented in 35 data sets. This Lorentz Convolution [LC] model is derived from a non-linear, relativistic solution of a Kerr-type wave equation, where small changes in the photon's frequencies, resulting from the curved spacetime, are convolved into a sequence of Lorentz transformations. The LC model is parametrized with only the diffuse, luminous stellar and gaseous masses reported with each data set of observations used. The LC model predicts observed rotation curves across a wide range of disc galaxies. The LC model was constructed to occupy the same place in the explanation of rotation curves that Dark Matter does, so that a simple investigation of the relation between luminous and dark matter might be made via a parameter (a). We find the parameter (a) to demonstrate interesting structure. We compare the new model's predictions to both NFW-model and MOND fits when available.

  8. Recovering Galaxy Properties Using Gaussian Process SED Fitting

    NASA Astrophysics Data System (ADS)

    Iyer, Kartheik; Awan, Humna

    2018-01-01

    Information about physical quantities like the stellar mass, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys like SDSS, CANDELS, LSST, etc. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a Semi-Analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of other methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in cases where the distribution of the training data is not identical to the target data, which can be extremely useful when generalized to more subtle galaxy properties.

  9. Influence of Two-Photon Absorption Anisotropy on Terahertz Emission Through Optical Rectification in Zinc-Blende Crystals

    NASA Astrophysics Data System (ADS)

    Sanjuan, Federico; Gaborit, Gwenaël; Coutaz, Jean-Louis

    2018-04-01

    We report for the first time on the observation of an angular anisotropy of the THz signal generated by optical rectification in a < 111 > ZnTe crystal. This cubic (zinc-blende) crystal in the < 111 > orientation exhibits transverse isotropy for optical effects involving both the linear χ(1) and nonlinear χ(2) susceptibilities. Thus, the observed anisotropy can only be related to a χ(3) effect, namely two-photon absorption, which leads to the photo-generation of free carriers that absorb the generated THz signal. Two-photon absorption in zinc-blende crystals is known to be due to a spin-orbit interaction between the valence and higher-conduction bands. We perform a couple of measurements that confirm our hypothesis, and fit the recorded data with a simple model. This two-photon absorption effect makes it difficult to efficiently generate, through optical rectification in < 111 > zinc-blende crystals, THz beams of any given polarization state by controlling only the pump laser polarization.

  10. Impact of kerogen heterogeneity on sorption of organic pollutants. 2. Sorption equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, C.; Yu, Z.Q.; Xiao, B.H.

    2009-08-15

    Phenanthrene and naphthalene sorption isotherms were measured for three different series of kerogen materials using completely mixed batch reactors. Sorption isotherms were nonlinear for each sorbate-sorbent system, and the Freundlich isotherm equation fit the sorption data well. The Freundlich isotherm linearity parameter n ranged from 0.192 to 0.729 for phenanthrene and from 0.389 to 0.731 for naphthalene. The n values correlated linearly with rigidity and aromaticity of the kerogen matrix, but the single-point, organic carbon-normalized distribution coefficients varied dramatically among the tested sorbents. A dual-mode sorption equation consisting of a linear partitioning domain and a Langmuir adsorption domain adequately quantified the overall sorption equilibrium for each sorbent-sorbate system. Both models fit the data well, with r{sup 2} values of 0.965 to 0.996 for the Freundlich model and 0.963 to 0.997 for the dual-mode model for the phenanthrene sorption isotherms. The dual-mode model fitting results showed that as the rigidity and aromaticity of the kerogen matrix increased, the contribution of the linear partitioning domain to the overall sorption equilibrium decreased, whereas the contribution of the Langmuir adsorption domain increased. The present study suggested that kerogen materials found in soils and sediments should not be treated as a single, unified, carbonaceous sorbent phase.
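
    The Freundlich fits reported above are commonly obtained as a linear regression in log-log space, since q = Kf * C^n gives log q = log Kf + n log C. A minimal sketch of that linearized fit (the dual-mode model, with its Langmuir term, needs a non-linear solver and is not reproduced here):

```python
import math

def fit_freundlich(conc, sorbed):
    """Least-squares fit of the linearized Freundlich isotherm
    ln(q) = ln(Kf) + n * ln(C). Returns (Kf, n)."""
    xs = [math.log(c) for c in conc]
    ys = [math.log(q) for q in sorbed]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    n = sxy / sxx
    return math.exp(my - n * mx), n
```

    A fitted n well below 1, as found for these kerogens, signals strongly nonlinear sorption.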

  11. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  12. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.

  13. EVOLUTION OF HIGH-ENERGY PARTICLE DISTRIBUTION IN MATURE SHELL-TYPE SUPERNOVA REMNANTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Houdun; Xin, Yuliang; Liu, Siming

    Multi-wavelength observations of mature supernova remnants (SNRs), especially with recent advances in γ-ray astronomy, make it possible to constrain the energy distribution of energetic particles within these remnants. In consideration of the SNR origin of Galactic cosmic rays and physics related to particle acceleration and radiative processes, we use a simple one-zone model to fit the nonthermal emission spectra of three shell-type SNRs located within 2° on the sky: RX J1713.7−3946, CTB 37B, and CTB 37A. Although radio images of these three sources all show a shell (or half-shell) structure, their radio, X-ray, and γ-ray spectra are quite different, offering an ideal case to explore the evolution of the energetic particle distribution in SNRs. Our spectral fitting shows that (1) the particle distribution becomes harder with aging of these SNRs, implying a continuous acceleration process, and the particle distributions of CTB 37A and CTB 37B in the GeV range are harder than the hardest distribution that can be produced at a shock via the linear diffusive shock particle acceleration process, so spatial transport may play a role; (2) the energy loss timescale of electrons at the high-energy cutoff due to synchrotron radiation appears to be always a bit (within a factor of a few) shorter than the age of the corresponding remnant, which also requires continuous particle acceleration; (3) double power-law distributions are needed to fit the spectra of CTB 37B and CTB 37A, which may be attributed to shock interaction with molecular clouds.

  14. On the Observed Changes in Upper Stratospheric and Mesospheric Temperatures from UARS HALOE

    NASA Technical Reports Server (NTRS)

    Remsberg, Ellis E.

    2006-01-01

    Temperature versus pressure or T(p) time series from the Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS) have been extended and re-analyzed for the period of 1991-2005 and for the upper stratosphere and mesosphere in 10-degree wide latitude zones from 60S to 60N. Even though sampling from a solar occultation experiment is somewhat limited, it is shown to be quite adequate for developing both the seasonal and longer-term variations in T(p). Multiple linear regression (MLR) techniques were used in the re-analyses for the seasonal and the significant interannual, solar cycle (SC-like or decadal-scale), and linear trend terms. A simple SC-like term of 11-yr period was fitted to the time series residuals after accounting for the seasonal and interannual terms. Highly significant SC-like responses were found for both the upper mesosphere and the upper stratosphere. The phases of these SC-like terms were checked for their continuity with latitude and pressure-altitude, and in almost all cases they are directly in phase with those of standard proxies for the solar flux variations. The analyzed (max minus min) responses at low latitudes are of order 1 K, while at middle latitudes they are as large as 3 K in the upper mesosphere. Highly significant, linear cooling trends were found at middle latitudes of the middle to upper mesosphere (about -2 K/decade), at tropical latitudes of the middle mesosphere (about -1 K/decade), and at 2 hPa (of order -1 K/decade).

  15. Dose Response for Chromosome Aberrations in Human Lymphocytes and Fibroblasts After Exposure to Very Low Dose of High Let Radiation

    NASA Technical Reports Server (NTRS)

    Hada, M.; George, K.; Chappell, L.; Cucinotta, F. A.

    2011-01-01

    The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high LET radiation exposure. Estimates of risks from low doses and low dose rates are often extrapolated from Japanese atomic bomb survivor data using either linear or linear-quadratic dose-response models. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (0.01-0.20 Gy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions, including doses where on average less than one direct ion traversal per cell nucleus occurs. Chromosomes were analyzed using the whole-chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving >2 breaks in 2 or more chromosomes). The responses for doses above 0.1 Gy (more than one ion traverses a cell) were linear in dose. However, for doses less than 0.1 Gy, both Si-28 ions and Fe-56 ions showed a dose-independent response above background chromosome aberration frequencies. Possible explanations for our results are non-targeted effects due to aberrant cell signaling [1], or delta-ray dose fluctuations [2] whereby a fraction of cells receive significant delta-ray doses due to the contributions of multiple ion tracks that do not directly traverse cell nuclei where chromosome aberrations are scored.

  16. Iron oxide bands in the visible and near-infrared reflectance spectra of primitive asteroids

    NASA Technical Reports Server (NTRS)

    Jarvis, Kandy S.; Vilas, Faith; Gaffey, Michael J.

    1993-01-01

    High resolution reflectance spectra of primitive asteroids (C, P, and D class and associated subclasses) have commonly revealed an absorption feature centered at 0.7 microns, attributed to an Fe(2+)-Fe(3+) charge transfer transition in iron oxides and/or oxidized iron in phyllosilicates. A smaller feature identified at 0.43 microns has been attributed to an Fe(3+) spin-forbidden transition in iron oxides. In the spectra of the two main-belt primitive asteroids 368 Haidea (D) and 877 Walkure (F), weak absorption features centered near 0.60-0.65 microns and 0.80-0.90 microns prompted a search for features at these wavelengths and an attempt to identify their origin(s). The CCD reflectance spectra obtained between 1982 and 1992 were reviewed for similar absorption features located near these wavelengths. The spectra of asteroids in which these absorption features have been identified are shown, plotted in order of increasing heliocentric distance. No division of the asteroids by class has been attempted here (although the absence of these features in the anhydrous S-class asteroids, many of which have presumably undergone full heating and differentiation, should be noted). For this study, each spectrum was treated as a continuum with discrete absorption features superimposed on it. For each object, a linear least-squares fit to the data points defined a simple linear continuum. The linear continuum was then divided into each spectrum, removing the sloped continuum and permitting the intercomparison of residual spectral features.
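    The continuum-removal step described (fit a straight line by least squares, then divide it out) can be sketched as follows. The spectrum below is synthetic, with a made-up absorption dip near 0.7 microns standing in for an observed feature; it is not data from the study.

```python
import numpy as np

def remove_linear_continuum(wavelength, reflectance):
    """Divide out a linear least-squares continuum, leaving residual features.

    A straight line is fitted to all data points and the spectrum is
    divided by it, flattening the overall slope so weak absorption
    features can be compared between objects.
    """
    slope, intercept = np.polyfit(wavelength, reflectance, deg=1)
    continuum = slope * wavelength + intercept
    return reflectance / continuum

# Synthetic spectrum: sloped continuum times a weak Gaussian dip near 0.7 µm
wl = np.linspace(0.4, 1.0, 61)
spec = (1.0 + 0.3 * wl) * (1.0 - 0.02 * np.exp(-((wl - 0.7) / 0.03) ** 2))

residual = remove_linear_continuum(wl, spec)
dip_wavelength = float(wl[residual.argmin()])   # deepest residual feature
```

    After division the residual spectrum sits near 1.0 everywhere except at the absorption feature, which is what makes intercomparison between objects possible.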

  17. Electrostatic polymer-based microdeformable mirror for adaptive optics

    NASA Astrophysics Data System (ADS)

    Zamkotsian, Frederic; Conedera, Veronique; Granier, Hugues; Liotard, Arnaud; Lanzoni, Patrick; Salvagnac, Ludovic; Fabre, Norbert; Camon, Henri

    2007-02-01

    Future adaptive optics (AO) systems require deformable mirrors with very challenging parameters, up to 250 000 actuators and inter-actuator spacing around 500 μm. MOEMS-based devices are promising for the development of a complete generation of new deformable mirrors. Our micro-deformable mirror (MDM) is based on an array of electrostatic actuators with attachments to a continuous mirror on top. The originality of our approach lies in the elaboration of layers made of polymer materials. Mirror layers and active actuators have been demonstrated. Based on the design of this actuator and our polymer process, a complete polymer MDM has been realized using two process flows: the first involves exclusively polymer materials, while the second uses SU8 polymer for the structural layers and SiO2 and sol-gel for the sacrificial layers. The latter shows a better capability to produce completely released structures. The electrostatic force provides a non-linear actuation, while AO systems are based on linear matrix operations. We have therefore developed dedicated 14-bit electronics in order to "linearize" the actuation, using a calibration and a sixth-order polynomial fitting strategy. The response is nearly perfect over our 3×3 MDM prototype, with a standard deviation of 3.5 nm; the influence function of the central actuator has been measured. A first evaluation of the cross non-linearities has also been carried out on the OKO mirror, and a simple look-up table is sufficient for determining the position of each actuator whatever the positions of the neighboring actuators. Electrostatic MDMs are particularly well suited for open-loop AO applications.
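    The linearization strategy described (calibrate the actuator, then fit a polynomial so that position commands can be issued linearly) can be sketched roughly as below. The quadratic voltage response and all numbers are invented for illustration; this is not the actual MDM calibration.

```python
import numpy as np

# Hypothetical calibration: measured deflection (nm) versus drive voltage (V).
# For an electrostatic actuator the deflection grows roughly with V^2,
# so commanding positions directly in volts would be non-linear.
voltages = np.linspace(0.0, 40.0, 41)
deflection_nm = 0.12 * voltages**2          # toy quadratic response

# "Linearization": fit a sixth-order polynomial mapping the desired
# deflection to the voltage that produces it (an inverse calibration),
# mirroring the sixth-order polynomial fitting strategy described above.
inverse_fit = np.polynomial.Polynomial.fit(deflection_nm, voltages, deg=6)

def command_voltage(target_nm: float) -> float:
    """Voltage to request so the actuator reaches target_nm."""
    return float(inverse_fit(target_nm))

v = command_voltage(100.0)   # voltage whose (toy) response is ~100 nm
```

    With the inverse polynomial in front of the driver, the controller can work in displacement units and the AO system's linear matrix operations apply unchanged.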

  18. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI: 72,500-87,600] vs. $139,700 [95% CI: 79,900-182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI: 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
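    The notion of Pareto-optimality used here is easy to state in code. A minimal sketch with made-up calibration-error tuples (lower is better on every target):

```python
def pareto_frontier(input_sets):
    """Return names of input sets not dominated on any calibration target.

    Each element is (name, errors), where errors is a tuple of fit
    errors to the calibration targets. A set is dominated if another
    set fits every target at least as well and at least one target
    strictly better.
    """
    frontier = []
    for name, errs in input_sets:
        dominated = any(
            all(o <= e for o, e in zip(other, errs)) and
            any(o < e for o, e in zip(other, errs))
            for other_name, other in input_sets if other_name != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

candidates = [
    ("A", (0.10, 0.30)),   # best on target 1
    ("B", (0.30, 0.10)),   # best on target 2
    ("C", (0.20, 0.20)),   # a compromise; still non-dominated
    ("D", (0.40, 0.40)),   # dominated by A, B, and C
]
frontier = pareto_frontier(candidates)
```

    Note that no weights appear anywhere: the frontier retains every defensible trade-off between targets, which is exactly the point of the approach.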

  19. Statistical Approaches for Spatiotemporal Prediction of Low Flows

    NASA Astrophysics Data System (ADS)

    Fangmann, A.; Haberlandt, U.

    2017-12-01

    An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion into model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three performs a spatiotemporal prediction of an index value, method four an estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows a high performance in the near-future period, but since it relies on a stationary distribution, its application for prediction of far-future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results promote the inclusion of simple statistical methods in climate change impact assessment.
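    The panel-style regression of method two can be illustrated with a toy multiple linear regression that pools temporal (drought index) and spatial (catchment descriptor) predictors in one model. Predictor names, coefficients, and noise levels below are invented, not from the study.

```python
import numpy as np

# Toy panel: annual low-flow index modeled as a linear function of a
# meteorological drought index plus one catchment descriptor.
rng = np.random.default_rng(0)
n = 200
drought_index = rng.normal(0.0, 1.0, n)            # temporal predictor
catchment_area = rng.uniform(10.0, 500.0, n)       # spatial predictor (km^2)
low_flow = (2.0 + 0.8 * drought_index + 0.004 * catchment_area
            + rng.normal(0.0, 0.1, n))             # synthetic target

# One pooled multiple linear regression estimates low flows from both
# predictors at once, in the spirit of the panel data model.
X = np.column_stack([np.ones(n), drought_index, catchment_area])
coef, *_ = np.linalg.lstsq(X, low_flow, rcond=None)  # [intercept, b_drought, b_area]
```

    The fitted coefficients recover the generating values, showing how a single model can predict across both stations and years once catchment descriptors enter as regressors.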

  20. Quantitative analysis of crystalline pharmaceuticals in powders and tablets by a pattern-fitting procedure using X-ray powder diffraction data.

    PubMed

    Yamamura, S; Momose, Y

    2001-01-16

    A pattern-fitting procedure for quantitative analysis of crystalline pharmaceuticals in solid dosage forms using X-ray powder diffraction data is described. This method is based on a procedure for pattern-fitting in crystal structure refinement, and observed X-ray scattering intensities were fitted to analytical expressions including some fitting parameters, i.e. scale factor, peak positions, peak widths and degree of preferred orientation of the crystallites. All fitting parameters were optimized by the non-linear least-squares procedure. Then the weight fraction of each component was determined from the optimized scale factors. In the present study, well-crystallized binary systems, zinc oxide-zinc sulfide (ZnO-ZnS) and salicylic acid-benzoic acid (SA-BA), were used as the samples. In analysis of the ZnO-ZnS system, the weight fraction of ZnO or ZnS could be determined quantitatively in the range of 5-95% in the case of both powders and tablets. In analysis of the SA-BA systems, the weight fraction of SA or BA could be determined quantitatively in the range of 20-80% in the case of both powders and tablets. Quantitative analysis applying this pattern-fitting procedure showed better reproducibility than other X-ray methods based on the linear or integral intensities of particular diffraction peaks. Analysis using this pattern-fitting procedure also has the advantage that the preferred orientation of the crystallites in solid dosage forms can be determined in the course of quantitative analysis.
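    The core idea, that the optimized scale factors of the component patterns yield the weight fractions, can be sketched on synthetic patterns. In this toy example the peak positions and widths are held fixed (the paper optimizes them too, by non-linear least squares) and the two phases are assumed to have equal scattering power; all peak parameters are invented.

```python
import numpy as np

def gaussian_peak(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Illustrative reference patterns for two phases (peak positions made up).
two_theta = np.linspace(10.0, 60.0, 500)
phase_a = gaussian_peak(two_theta, 25.0, 0.3) + 0.6 * gaussian_peak(two_theta, 43.0, 0.3)
phase_b = 0.8 * gaussian_peak(two_theta, 31.0, 0.3) + gaussian_peak(two_theta, 52.0, 0.3)

# A 30:70 mixture "observed" pattern with a little counting noise.
rng = np.random.default_rng(1)
observed = 0.3 * phase_a + 0.7 * phase_b + rng.normal(0.0, 0.005, two_theta.size)

# With peak positions and widths fixed, the scale factors are linear
# parameters and follow from least squares; each phase's weight fraction
# is its optimized scale factor over the total.
A = np.column_stack([phase_a, phase_b])
scales, *_ = np.linalg.lstsq(A, observed, rcond=None)
weight_fraction_a = float(scales[0] / scales.sum())
```

    Because the whole pattern is fitted rather than a single peak, the estimate uses all the diffraction information at once, which is the source of the better reproducibility reported above.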

  1. The relationship of gross upper and lower limb motor competence to measures of health and fitness in adolescents aged 13-14 years.

    PubMed

    Weedon, Benjamin David; Liu, Francesca; Mahmoud, Wala; Metz, Renske; Beunder, Kyle; Delextrat, Anne; Morris, Martyn G; Esser, Patrick; Collett, Johnny; Meaney, Andy; Howells, Ken; Dawes, Helen

    2018-01-01

    Motor competence (MC) is an important factor in the development of health and fitness in adolescence. This cross-sectional study aims to explore the distribution of MC across school students aged 13-14 years old and the extent of the relationship of MC to measures of health and fitness across genders. A total of 718 participants were tested from three different schools in the UK, 311 girls and 407 boys (aged 13-14 years); pairwise deletion for correlation variables reduced this to 555 (245 girls, 310 boys). Assessments consisted of body mass index, aerobic capacity, anaerobic power, and upper limb and lower limb MC. The distribution of MC and the strength of the relationships between MC and health/fitness measures were explored. Girls scored lower than boys on MC and health/fitness measures. Both measures of MC showed a normal distribution and a significant linear relationship of MC to all health and fitness measures for boys, girls and combined genders. A stronger relationship was reported for upper limb MC and aerobic capacity when compared with lower limb MC and aerobic capacity in boys (t=-2.21, degrees of freedom=307, P=0.03, 95% CI -0.253 to -0.011). Normally distributed measures of upper and lower limb MC are linearly related to health and fitness measures in adolescents in a UK sample. NCT02517333.

  2. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
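    Why censoring matters for regression-based QTL analysis can be illustrated with a toy simulation: treating right-censored age-at-onset values as if they were observed biases a naive linear regression toward zero. This demonstrates the problem the grouped method addresses; it is not the authors' grouped estimator, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
genotype = rng.integers(0, 3, n)                     # 0/1/2 copies of a QTL allele
true_slope = 5.0                                     # years of onset delay per allele
age_at_onset = 50.0 + true_slope * genotype + rng.normal(0.0, 8.0, n)

# Right-censoring: the study ends at age 60, so later onsets are unobserved.
censor_time = 60.0
observed = np.minimum(age_at_onset, censor_time)

# Naive regression that ignores censoring and treats censored values as
# real observations; its slope estimate is attenuated toward zero.
naive_slope = float(np.polyfit(genotype, observed, 1)[0])
```

    With these settings the naive slope comes out well below the true effect of 5 years per allele, because the genotype groups with later onset are censored most heavily; a censoring-aware method such as the grouped linear regression avoids this attenuation.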

  3. Revisiting Isotherm Analyses Using R: Comparison of Linear, Non-linear, and Bayesian Techniques

    EPA Science Inventory

    Extensive adsorption isotherm data exist for an array of chemicals of concern on a variety of engineered and natural sorbents. Several isotherm models exist that can accurately describe these data from which the resultant fitting parameters may subsequently be used in numerical ...

  4. Quantification and parametrization of non-linearity effects by higher-order sensitivity terms in scattered light differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Puķīte, Jānis; Wagner, Thomas

    2016-05-01

    We address the application of differential optical absorption spectroscopy (DOAS) to scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements, due to the many light paths contributing to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorptions non-linear effects cannot always be neglected. This is especially the case for observation geometries in which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In these cases, full retrieval algorithms are often applied to address the non-linearities, requiring iterative forward modelling of absorption spectra involving time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used e.g. to build up a lookup table. Together with the widely used box air mass factors (effective light paths) describing the linear response to an increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need to repeat the radiative transfer modelling when the absorption scenario is modified, even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (the so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements). Therefore, we introduce an iterative retrieval algorithm correcting for the higher-order absorption structures not yet considered in the DOAS fit, as well as for the absorption dependence on temperature and scattering processes.
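    The role of the higher-order sensitivity terms can be illustrated with a toy two-path model: because the measured intensity mixes paths of different lengths, the optical depth is non-linear in the absorber amount, and a quadratic sensitivity term recovers most of the error of the purely linear (box-AMF-like) description. The weights and path lengths below are invented.

```python
import numpy as np

weights = np.array([0.5, 0.5])   # fraction of light taking each path
paths = np.array([1.0, 5.0])     # relative path lengths through the absorber

def optical_depth(c):
    """Effective optical depth of scattered light mixing two Beer-Lambert paths."""
    return -np.log(np.sum(weights * np.exp(-c * paths)))

# Linear and quadratic sensitivity terms from a small-c Taylor expansion:
# tau(c) ≈ A*c + B*c^2, with A the effective (box-AMF-like) light path.
A = np.sum(weights * paths)                       # d tau / dc at c = 0
B = 0.5 * (A**2 - np.sum(weights * paths**2))     # (1/2) d^2 tau / dc^2 at c = 0

c = 0.2
exact = optical_depth(c)
linear = A * c                 # linear approximation overestimates tau here
quadratic = A * c + B * c**2   # second-order term corrects most of the error
```

    The same idea scales up: tabulating a few sensitivity coefficients per geometry replaces repeated wavelength-by-wavelength radiative transfer runs when the absorption scenario changes.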

  5. Estimation of time- and state-dependent delays and other parameters in functional differential equations

    NASA Technical Reports Server (NTRS)

    Murphy, K. A.

    1988-01-01

    A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.

  6. A New Metrics for Countries' Fitness and Products' Complexity

    NASA Astrophysics Data System (ADS)

    Tacchella, Andrea; Cristelli, Matthieu; Caldarelli, Guido; Gabrielli, Andrea; Pietronero, Luciano

    2012-10-01

    Classical economic theories prescribe specialization of countries' industrial production. Inspection of country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is to assess quantitatively the non-monetary competitive advantage of diversification, which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measuring the competitiveness of countries is the one presented in this work. Furthermore, our metrics appears to be economically well-grounded.
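    The coupled non-linear map can be sketched as follows on a toy binary country-product export matrix. This is a simplified rendering (mean normalization at each step, a hand-built matrix); details of the published algorithm may differ.

```python
import numpy as np

# Toy export matrix M: rows are countries, columns are products,
# M[c, p] = 1 if country c exports product p.
M = np.array([
    [1, 1, 1, 1],   # diversified country: exports everything
    [1, 1, 0, 0],
    [0, 1, 0, 0],   # specialized country: exports one ubiquitous product
], dtype=float)

def fitness_complexity(M, n_iter=50):
    n_c, n_p = M.shape
    F = np.ones(n_c)                        # country fitness
    Q = np.ones(n_p)                        # product complexity
    for _ in range(n_iter):
        F_new = M @ Q                       # fitness: sum of exported complexities
        Q_new = 1.0 / (M.T @ (1.0 / F))     # non-linear step: complexity is bounded
                                            # by the least-fit countries exporting it
        F = F_new / F_new.mean()            # normalize at each iteration
        Q = Q_new / Q_new.mean()
    return F, Q

F, Q = fitness_complexity(M)
```

    The harmonic-mean-like step is the essential non-linearity: a product exported by a weak country cannot be complex, no matter which strong countries also export it, which is why a linear averaging would rank this toy matrix differently.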

  7. Estimation of time- and state-dependent delays and other parameters in functional differential equations

    NASA Technical Reports Server (NTRS)

    Murphy, K. A.

    1990-01-01

    A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.

  8. A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

    2007-09-01

    Trigonometrically fitted symplectic partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the set of functions sin(wx), cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.

  9. A Method For Modeling Discontinuities In A Microwave Coaxial Transmission Line

    NASA Technical Reports Server (NTRS)

    Otoshi, Tom Y.

    1994-01-01

    A methodology for modeling discontinuities in a coaxial transmission line is presented. The method uses a non-linear least-squares fit program to optimize the fit between a theoretical model and experimental data. When the method was applied to modeling discontinuities in a damaged S-band antenna cable, excellent agreement was obtained.
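    The general idea of optimizing the fit between a theoretical model and measured data by non-linear least squares can be sketched with a small Gauss-Newton loop. The exponential model and the synthetic data below are illustrative stand-ins, not the coaxial-cable model from the abstract.

```python
import numpy as np

def model(x, a, b):
    return a * np.exp(b * x)

def fit_exponential(x, y, n_iter=30):
    """Non-linear least squares via Gauss-Newton iterations."""
    # A log-linear fit provides a good starting point for the iteration.
    b, log_a = np.polyfit(x, np.log(y), 1)
    a = np.exp(log_a)
    for _ in range(n_iter):
        r = y - model(x, a, b)                       # residuals
        J = np.column_stack([np.exp(b * x),          # d model / d a
                             a * x * np.exp(b * x)]) # d model / d b
        step, *_ = np.linalg.lstsq(J, r, rcond=None) # linearized update
        a, b = a + step[0], b + step[1]
    return a, b

x = np.linspace(0.0, 2.0, 40)
y = 3.0 * np.exp(-1.5 * x) + 0.01 * np.sin(5 * x)   # "measurements" with a small wiggle
a_fit, b_fit = fit_exponential(x, y)
```

    Each iteration linearizes the model around the current parameters and solves a small linear least-squares problem, so the optimizer needs only residuals and a Jacobian, the same skeleton a transmission-line model fit would use.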

  10. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
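    The linearity check described in these calibration provisions can be sketched as follows: fit a least-squares straight line to the calibration points and verify that each point deviates by no more than 2 percent of its value, otherwise fall back to a best-fit non-linear curve. The span points and analyzer responses below are invented for illustration.

```python
import numpy as np

def within_two_percent(concentration, response):
    """True if every point lies within 2% of the least-squares line."""
    slope, intercept = np.polyfit(concentration, response, 1)
    predicted = slope * concentration + intercept
    deviation = np.abs(predicted - response)
    return bool(np.all(deviation <= 0.02 * np.abs(response)))

span = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # % of full scale
linear_response = 0.98 * span + 0.1                       # nearly ideal analyzer
curved_response = span + 0.002 * span**2                  # noticeably curved analyzer
```

    The first response passes the check and the straight line can be used directly; the second fails it, which is the case where the regulation calls for the best-fit non-linear equation instead.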

  11. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  12. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  13. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  14. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  15. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  16. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  17. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  18. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  19. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  20. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  1. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  2. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  3. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  4. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  5. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  6. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  7. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  8. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  9. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  10. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  11. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  12. Basilar-membrane responses to broadband noise modeled using linear filters with rational transfer functions.

    PubMed

    Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A

    2011-05-01

    Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE

  13. An in-situ Raman study on pristane at high pressure and ambient temperature

    NASA Astrophysics Data System (ADS)

    Wu, Jia; Ni, Zhiyong; Wang, Shixia; Zheng, Haifei

    2018-01-01

    The C-H Raman spectroscopic band (2800-3000 cm-1) of pristane was measured in a diamond anvil cell at 1.1-1532 MPa and ambient temperature. Three models were used for peak-fitting of this C-H Raman band, and the linear correlations between pressure and the corresponding peak positions were calculated. The results demonstrate that 1) the number of peaks chosen to fit the spectrum affects the results, which indicates that spectroscopic barometry based on a functional group of organic matter suffers significant limitations; and 2) the linear correlation between pressure and the fitted peak position from the one-peak model is stronger than that from the multiple-peak model, and the standard error of the latter is much higher than that of the former. This indicates that the Raman shift of the C-H band fitted with the one-peak model, which can be treated as a spectroscopic barometer, is more realistic in mixture systems than the traditional strategy of using the characteristic Raman shift of one functional group.
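The one-peak barometry idea above amounts to a linear regression of peak position on pressure, which can then be inverted to read pressure from an observed shift. A minimal sketch, with invented peak positions and pressures (not the paper's data):

```python
import numpy as np

# Hypothetical calibration data: pressure (MPa) and fitted C-H peak position (cm^-1).
pressure_mpa = np.array([1.1, 300.0, 600.0, 900.0, 1200.0, 1532.0])
peak_cm1 = np.array([2875.0, 2876.5, 2878.1, 2879.4, 2881.0, 2882.7])

# Linear correlation between pressure and peak position.
slope, intercept = np.polyfit(pressure_mpa, peak_cm1, 1)
residuals = peak_cm1 - (slope * pressure_mpa + intercept)

# Standard error of the regression (two fitted parameters).
std_err = float(np.sqrt(np.sum(residuals**2) / (len(peak_cm1) - 2)))

def pressure_from_peak(peak):
    """Invert the calibration: estimate pressure from an observed peak position."""
    return (peak - intercept) / slope
```

A lower `std_err` for the one-peak model than for a multiple-peak model is exactly the comparison the abstract reports.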

  14. Revision of laser-induced damage threshold evaluation from damage probability data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas

    2013-04-15

    In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum-likelihood fitting technique is introduced and studied. This approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of the parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
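The "widely accepted linear fitting" the abstract critiques works roughly as sketched below: regress the 1-on-1 damage probability on fluence and extrapolate the line to zero probability to read off the threshold. Fluences and probabilities here are invented for illustration.

```python
import numpy as np

# Hypothetical 1-on-1 test results: fluence (J/cm^2) and fraction of sites damaged.
fluence = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
damage_prob = np.array([0.0, 0.1, 0.35, 0.6, 0.85])

# Conventional approach: fit a straight line through the rising part of the
# damage-probability curve (nonzero probabilities only).
rising = damage_prob > 0
slope, intercept = np.polyfit(fluence[rising], damage_prob[rising], 1)

# LIDT estimate: fluence at which the fitted line crosses zero probability.
lidt = -intercept / slope
```

The paper's point is that this extrapolation carries systematic errors, motivating the Bayesian parametric-regression alternative.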

  15. PyFDAP: automated analysis of fluorescence decay after photoconversion (FDAP) experiments.

    PubMed

    Bläßle, Alexander; Müller, Patrick

    2015-03-15

    We developed the graphical user interface PyFDAP for the fitting of linear and non-linear decay functions to data from fluorescence decay after photoconversion (FDAP) experiments. PyFDAP structures and analyses large FDAP datasets and features multiple fitting and plotting options. PyFDAP was written in Python and runs on Ubuntu Linux, Mac OS X and Microsoft Windows operating systems. The software, a user guide and a test FDAP dataset are freely available for download from http://people.tuebingen.mpg.de/mueller-lab. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
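The kind of non-linear decay fit PyFDAP automates can be sketched with a generic least-squares routine. This is not PyFDAP's API; the model, parameters, and data below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, y0, c0, k):
    """Exponential decay to a baseline: y0 + c0 * exp(-k * t)."""
    return y0 + c0 * np.exp(-k * t)

# Synthetic FDAP-like intensity trace generated from known parameters plus noise.
t = np.linspace(0.0, 100.0, 50)
rng = np.random.default_rng(0)
y = decay(t, 10.0, 90.0, 0.05) + rng.normal(0.0, 1.0, t.size)

# Non-linear least-squares fit; p0 is a rough initial guess.
popt, pcov = curve_fit(decay, t, y, p0=(5.0, 50.0, 0.01))
y0_fit, c0_fit, k_fit = popt
```

The fitted decay rate `k_fit` (and its covariance in `pcov`) is the quantity of interest in FDAP, since it reflects protein clearance kinetics.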

  16. Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference

    PubMed Central

    Park, Hyoung-Jun; Song, Minho

    2008-01-01

    The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method. PMID:27873898
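The nonlinearity correction described above can be sketched as a polynomial fit through the ITU reference peaks: fit scan time against the known passband wavelengths, then evaluate the polynomial at an FBG reflection peak's time to assign its wavelength. Peak times and grid values below are illustrative, not measured.

```python
import numpy as np

# Temporal positions (a.u.) of the ITU filter passband peaks during one scan...
peak_times = np.array([0.00, 0.90, 1.95, 3.10, 4.35, 5.70])
# ...and the known wavelengths (nm) of those equidistant passbands.
itu_wavelengths = np.array([1550.0, 1550.8, 1551.6, 1552.4, 1553.2, 1554.0])

# A third-order polynomial absorbs the nonlinear tuning of the wavelength scan.
coeffs = np.polyfit(peak_times, itu_wavelengths, 3)

def wavelength_at(t):
    """Assign a wavelength to any temporal peak position via the fitted polynomial."""
    return float(np.polyval(coeffs, t))

# Hypothetical FBG reflection peak observed at t = 2.5 in the same scan.
fbg_wavelength = wavelength_at(2.5)
```

Because the calibration is refitted from the reference peaks on every scan, the assigned wavelengths stay accurate regardless of scan range and frequency, which is the point of the method.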

  17. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because it requires no image segmentation and little computation, image correlation matching is a basic method of target tracking. This paper mainly studies a grayscale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, along with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms cannot be used in real-time systems because they are too complex; however, target tracking often requires high real-time performance. Based on this consideration, we put forward a fitting algorithm, named the paraboloidal fitting algorithm, which is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm described above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image respectively, the image was processed by a mean filter and a median filter, and image matching was then performed. The results show that when the noise is small, the mean filter and median filter achieve a good result; but when the density of the salt-and-pepper noise exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the result of image matching will be wrong.
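SAD matching with parabolic sub-pixel refinement, as the abstract describes, can be sketched as follows. This is a generic illustration (synthetic arrays, one 1-D parabola per axis through the three SAD values around the integer minimum), not the authors' implementation.

```python
import numpy as np

def sad_match(image, template):
    """Return (row, col) of the best integer-pixel match and the full SAD surface."""
    ih, iw = image.shape
    th, tw = template.shape
    sad = np.empty((ih - th + 1, iw - tw + 1))
    for r in range(sad.shape[0]):
        for c in range(sad.shape[1]):
            sad[r, c] = np.abs(image[r:r + th, c:c + tw] - template).sum()
    r0, c0 = np.unravel_index(np.argmin(sad), sad.shape)
    return (int(r0), int(c0)), sad

def parabolic_offset(left, center, right):
    """Sub-pixel offset of the minimum of a parabola through three samples."""
    denom = left - 2.0 * center + right
    return 0.0 if denom == 0 else 0.5 * (left - right) / denom

# Synthetic test: take the template directly from a known location in the image.
rng = np.random.default_rng(1)
image = rng.random((32, 32))
template = image[10:18, 14:22].copy()

(r0, c0), sad = sad_match(image, template)
# Refine each axis independently using the neighbouring SAD values.
dr = parabolic_offset(sad[r0 - 1, c0], sad[r0, c0], sad[r0 + 1, c0])
dc = parabolic_offset(sad[r0, c0 - 1], sad[r0, c0], sad[r0, c0 + 1])
```

The refined match location is `(r0 + dr, c0 + dc)`; the per-axis parabola keeps the cost low enough for real-time use, which is the trade-off the paper argues for.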

  18. The relationship between health-related fitness and quality of life in postmenopausal women from Southern Taiwan

    PubMed Central

    Hsu, Wei-Hsiu; Chen, Chi-lung; Kuo, Liang Tseng; Fan, Chun-Hao; Lee, Mel S; Hsu, Robert Wen-Wei

    2014-01-01

    Background Health-related fitness has been reported to be associated with improved quality of life (QoL) in the elderly. Health-related fitness comprises several dimensions that could be enhanced by specific training regimens. It has remained unclear how various dimensions of health-related fitness interact with QoL in postmenopausal women. Objective The purpose of the current study was to investigate the relationship between the dimensions of health-related fitness and QoL in elderly women. Methods A cohort of 408 postmenopausal women in a rural area of Taiwan was prospectively collected. Dimensions of health-related fitness, consisting of muscular strength, balance, cardiorespiratory endurance, flexibility, muscle endurance, and agility, were assessed. QoL was determined using the Short Form Health Survey (SF-36). Differences between age groups (stratified by decades) were calculated using a one-way analysis of variance (ANOVA) and multiple comparisons using a Scheffé test. A Spearman’s correlation analysis was performed to examine differences between QoL and each dimension of fitness. Multiple linear regression with a forced-entry procedure was performed to evaluate the effects of health-related fitness. A P-value of <0.05 was considered statistically significant. Results Age-related decreases in health-related fitness were shown for sit-ups, back strength, grip strength, side steps, trunk extension, and agility (P<0.05). An age-related decrease in QoL, specifically in physical functioning, role limitation due to physical problems, and physical component score, was also demonstrated (P<0.05). Multiple linear regression analyses demonstrated that back strength significantly contributed to the physical component of QoL (adjusted beta of 0.268 [P<0.05]). Conclusion Back strength was positively correlated with the physical component of QoL among the examined dimensions of health-related fitness. 
Health-related fitness, as well as the physical component of QoL, declined with increasing age. PMID:25258526

  19. ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox

    NASA Astrophysics Data System (ADS)

    Mede, Kyle; Brandt, Timothy D.

    2017-08-01

    ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.

  20. Biological growth functions describe published site index curves for Lake States timber species.

    Treesearch

    Allen L. Lundgren; William A. Dolid

    1970-01-01

    Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...
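Fitting the simple monomolecular function mentioned above, H(t) = a(1 - e^(-bt)), is a standard non-linear least-squares problem. A minimal sketch with invented height/age pairs (not the published site index curves):

```python
import numpy as np
from scipy.optimize import curve_fit

def monomolecular(t, a, b):
    """Monomolecular growth: asymptote a, approach rate b."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical site-index-style data: stand age (years) and dominant height (m).
age = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
height = np.array([6.2, 10.8, 14.2, 16.8, 18.6, 20.0])

# Non-linear least-squares fit; p0 is a rough initial guess for (a, b).
popt, _ = curve_fit(monomolecular, age, height, p0=(25.0, 0.02))
a_fit, b_fit = popt
```

The fitted asymptote `a_fit` corresponds to the maximum attainable height for the site, which is what makes this family of functions biologically interpretable compared with a purely empirical polynomial.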
