Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
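As a concrete illustration of the model class this abstract discusses, the sketch below fits a random-intercept, random-slope LMM in Python with statsmodels rather than SPSS; the file and variable names (longitudinal.csv, subject, week, score) are hypothetical stand-ins for a long-format longitudinal data set.

```python
# Minimal sketch: a random-intercept, random-slope LMM for longitudinal data,
# the same model class fitted by SPSS's MIXED procedure. Variable names are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("longitudinal.csv")  # long format: one row per subject-occasion

# Fixed effect of time; random intercept and slope per subject.
model = smf.mixedlm("score ~ week", df, groups=df["subject"], re_formula="~week")
result = model.fit(reml=True)  # REML, also the default in SPSS MIXED
print(result.summary())
```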
NASA Astrophysics Data System (ADS)
Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove
2018-02-01
We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regard to the fit-basis functions need to be taken. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
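A minimal sketch of the core idea: fit a Morse-shaped fit-basis function to single-point energies by optimizing its non-linear parameters with a conjugate-gradient routine. The grid, parameter values, and synthetic energies below are assumptions for illustration, not output of the authors' ADGA implementation.

```python
# Illustrative only: fit a Morse function V(r) = D*(1 - exp(-a*(r - r0)))**2
# to sampled potential-cut energies, optimizing the non-linear parameters
# with a conjugate-gradient routine as the abstract describes.
import numpy as np
from scipy.optimize import minimize

r = np.linspace(0.7, 3.0, 25)                      # hypothetical grid (Angstrom)
v = 4.5 * (1.0 - np.exp(-1.9 * (r - 0.96)))**2     # synthetic "single points"

def sse(p):
    d, a, r0 = p
    return np.sum((d * (1.0 - np.exp(-a * (r - r0)))**2 - v)**2)

fit = minimize(sse, x0=[4.0, 2.0, 1.0], method="CG")  # non-linear CG
print(fit.x)  # should recover ~(4.5, 1.9, 0.96)
```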
Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert
2012-01-01
Purpose: To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods: Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R^2. Hypothesis testing of coefficients over the patient population was performed. Results: Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost ~ 0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost ~ FDGpre^0.93, p<0.001). Univariate mixture model fits of FDGpre improved R^2 from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions: Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748
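The log-linear (power-law) relation reported here, FDGpost ~ c * FDGpre^b, can be illustrated with a short regression on log-transformed voxel values; the SUV numbers below are invented stand-ins, not the study's data.

```python
# Sketch of the log-linear (power-law) fit FDGpost ~ c * FDGpre**b across
# tumour voxels; data here are synthetic stand-ins for voxel SUVs.
import numpy as np

rng = np.random.default_rng(0)
suv_pre = rng.uniform(1.0, 10.0, 500)                      # baseline voxel SUVs
suv_post = 0.8 * suv_pre**0.93 * rng.lognormal(0, 0.2, 500)

b, log_c = np.polyfit(np.log(suv_pre), np.log(suv_post), 1)
print(f"exponent ~ {b:.2f}, scale ~ {np.exp(log_c):.2f}")
```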
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
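For readers who want to experiment with this distribution, the sketch below fits an intercept-only hyper-Poisson model by maximum likelihood, assuming the standard parameterization P(Y=y) = lam^y / ((gam)_y * 1F1(1; gam; lam)), where (gam)_y is the rising factorial; gam > 1 gives overdispersion and gam < 1 underdispersion. In the paper's GLM, covariates would enter through links on the parameters; the counts here are synthetic.

```python
# Hedged sketch: ML fit of the (intercept-only) hyper-Poisson distribution.
import numpy as np
from scipy.special import gammaln, hyp1f1
from scipy.optimize import minimize

y = np.random.default_rng(1).poisson(2.0, 200)  # stand-in crash counts

def negloglik(theta):
    lam, gam = np.exp(theta)                    # enforce positivity
    logpoch = gammaln(gam + y) - gammaln(gam)   # log rising factorial (gam)_y
    return -np.sum(y * np.log(lam) - logpoch - np.log(hyp1f1(1.0, gam, lam)))

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
lam_hat, gam_hat = np.exp(fit.x)
print(lam_hat, gam_hat)   # gam_hat ~ 1 recovers the Poisson special case
```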
Using Stocking or Harvesting to Reverse Period-Doubling Bifurcations in Discrete Population Models
James F. Selgrade
1998-01-01
This study considers a general class of 2-dimensional, discrete population models where each per capita transition function (fitness) depends on a linear combination of the densities of the interacting populations. The fitness functions are either monotone decreasing functions (pioneer fitnesses) or one-humped functions (climax fitnesses). Four sets of necessary...
James F. Selgrade; James H. Roberds
1998-01-01
This study considers a general class of two-dimensional, discrete population models where each per capita transition function (fitness) depends on a linear combination of the densities of the interacting populations. The fitness functions are either monotone decreasing functions (pioneer fitnesses) or one-humped functions (climax fitnesses). Conditions are derived...
Testing goodness of fit in regression: a general approach for specified alternatives.
Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J
2012-12-10
When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper allows us to treat different types of lack of fit within a unified general framework and to consider many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
Brown, Angus M
2006-04-01
The objective of the present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fitted and the function(s) describing the data, using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
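A rough Python analogue of the described SOLVER workflow, minimizing the sum of squared residuals for a sum of three Gaussians; the waveform below is synthetic, not an optic-nerve recording.

```python
# Least-squares fit of a sum of three Gaussians to a waveform (a synthetic
# stand-in for the compound action potential).
import numpy as np
from scipy.optimize import curve_fit

def three_gauss(t, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-((t - m) ** 2) / (2 * s ** 2))
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

t = np.linspace(0, 10, 400)
y = three_gauss(t, 1.0, 2.0, 0.4, 0.6, 3.5, 0.6, 0.3, 5.5, 0.9)
y += np.random.default_rng(2).normal(0, 0.02, t.size)

p0 = [1, 2, 0.5, 0.5, 3.5, 0.5, 0.3, 5.5, 1.0]     # rough initial guesses
popt, _ = curve_fit(three_gauss, t, y, p0=p0)
print(np.round(popt, 2))
```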
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
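One way to realize the non-iterative idea, assuming the identity dy/dt = B(y - C) for y = Ae^(Bt) + C; this is a hedged reconstruction of the general approach, not necessarily the reported algorithm.

```python
# Sketch: for y = A*exp(B*t) + C we have dy/dt = B*y - B*C, so a
# finite-difference (discrete calculus) derivative regressed linearly on y
# yields B and C without iteration; A follows from one more linear fit.
import numpy as np

t = np.linspace(0, 5, 200)
y = 2.0 * np.exp(0.8 * t) + 1.5          # noise-free demonstration data

dydt = np.gradient(y, t)                  # discrete derivative
B, intercept = np.polyfit(y, dydt, 1)     # dy/dt = B*y - B*C
C = -intercept / B
A = np.linalg.lstsq(np.exp(B * t)[:, None], y - C, rcond=None)[0][0]
print(A, B, C)                            # ~ (2.0, 0.8, 1.5)
```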
Wood, Phillip Karl; Jackson, Kristina M
2013-08-01
Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating "protective" or "launch" factors or as "developmental snares." These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of "general deviance" over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the "general deviance" model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers should consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue that the construct of "general deviance" can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the "snares" alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control.
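As we read the abstract, the FCSI model can be written as a growth model with free loadings; a sketch of that form follows, with one common identification choice (the specific constraints used by Meredith & Tisak may differ):

```latex
% FCSI: intercept factor plus a "free curve" factor whose loadings trace the
% shape of growth; fixing two loadings sets location and scale.
y_{it} = \eta_{0i} + \lambda_t\,\eta_{1i} + \varepsilon_{it},
\qquad \lambda_1 = 0,\ \lambda_T = 1,\ \lambda_2,\dots,\lambda_{T-1}\ \text{free};
% the linear slope-intercept model is the special case \lambda_t = (t-1)/(T-1).
```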
Van Vlaenderen, Ilse; Van Bellinghen, Laure-Anne; Meier, Genevieve; Nautrup, Barbara Poulsen
2013-01-22
Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses.
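The fitting step described (a linear function fitted to point estimates by the sum of squared residuals) amounts to ordinary least squares on a handful of coverage/reduction pairs; the numbers below are hypothetical, not the estimates extracted in the review.

```python
# Illustrative sketch: fit a linear herd-protection function (reduction in
# infection risk vs. effective coverage) to point estimates by least squares.
import numpy as np

coverage = np.array([0.20, 0.35, 0.50, 0.65, 0.80])   # effective coverage
reduction = np.array([0.15, 0.28, 0.42, 0.55, 0.70])  # risk reduction, unvaccinated

slope, intercept = np.polyfit(coverage, reduction, 1) # minimizes squared residuals
print(f"reduction ~ {slope:.2f} * coverage + {intercept:.2f}")
```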
Brittle failure of rock: A review and general linear criterion
NASA Astrophysics Data System (ADS)
Labuz, Joseph F.; Zeng, Feitao; Makhnenko, Roman; Li, Yuan
2018-07-01
A failure criterion typically is phenomenological since few models exist to theoretically derive the mathematical function. Indeed, a successful failure criterion is a generalization of experimental data obtained from strength tests on specimens subjected to known stress states. For isotropic rock that exhibits a pressure dependence on strength, a popular failure criterion is a linear equation in major and minor principal stresses, independent of the intermediate principal stress. A general linear failure criterion called Paul-Mohr-Coulomb (PMC) contains all three principal stresses with three material constants: friction angles for axisymmetric compression ϕc and extension ϕe and isotropic tensile strength V0. PMC provides a framework to describe a nonlinear failure surface by a set of planes "hugging" the curved surface. Brittle failure of rock is reviewed and multiaxial test methods are summarized. Equations are presented to implement PMC for fitting strength data and determining the three material parameters. A piecewise linear approximation to a nonlinear failure surface is illustrated by fitting two planes with six material parameters to form either a 6- to 12-sided pyramid or a 6- to 12- to 6-sided pyramid. The particular nature of the failure surface is dictated by the experimental data.
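A hedged sketch of fitting one plane of such a linear criterion, A*s1 + B*s2 + C*s3 = 1, to multiaxial strength data by least squares; the stress states are invented, and the conversion of (A, B, C) to the friction angles and V0 follows equations in the paper that are not reproduced here.

```python
# Fit one plane of a general linear (PMC-type) failure criterion to
# hypothetical failure stress states by least squares.
import numpy as np

# Columns: major, intermediate, minor principal stress at failure (MPa).
S = np.array([[120.0,  20.0,  5.0],
              [150.0,  40.0, 10.0],
              [180.0,  80.0, 15.0],
              [200.0, 120.0, 20.0]])

coeffs, *_ = np.linalg.lstsq(S, np.ones(len(S)), rcond=None)
A, B, C = coeffs
print(A, B, C)
```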
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
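The "explosion" step and the Poisson-with-offset backbone can be sketched as below; the random (frailty) intercept itself needs a mixed-model Poisson fitter such as the paper's %PCFrailty macro, so only the fixed-effects part is fitted here, with invented survival times.

```python
# Sketch of the data "explosion" that makes a piecewise-exponential survival
# model fittable as a Poisson GLM: one row per subject per hazard piece, the
# event indicator as response and log(exposure) as offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

cuts = np.array([0.0, 1.0, 2.0, 4.0])          # piece boundaries, fixed in advance
subjects = pd.DataFrame({"time":  [0.7, 2.5, 3.1, 1.4, 3.8, 0.9],
                         "event": [1,   0,   1,   1,   0,   1],
                         "x":     [0.2, -1.0, 0.5, 1.2, -0.3, 0.8]})

rows = []
for _, s in subjects.iterrows():
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        if s.time <= lo:
            break
        exposure = min(s.time, hi) - lo
        rows.append({"piece": lo, "x": s.x,
                     "y": int(s.event and s.time <= hi),
                     "log_exp": np.log(exposure)})
expl = pd.DataFrame(rows)

# Piece dummies give the piecewise-constant baseline log-hazard.
X = pd.get_dummies(expl["piece"].astype(str)).assign(x=expl["x"]).astype(float)
fit = sm.GLM(expl["y"], X, family=sm.families.Poisson(), offset=expl["log_exp"]).fit()
print(fit.params)
```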
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
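The division of labor the abstract describes, linear fitting nested inside a nonlinear search, can be illustrated with a Roger-type rational approximation in which the coefficients are linear given the lag roots; the data and functional details below are assumptions, not the paper's aerodynamic force tables.

```python
# Two-level idea: for fixed lag roots b_j the rational approximation
#   Q(ik) ~ A0 + A1*(ik) + A2*(ik)**2 + sum_j A_{2+j} * ik/(ik + b_j)
# is LINEAR in the A's (inner least-squares step); the outer, nonlinear
# level tunes the b_j. Tabular data here are synthetic.
import numpy as np
from scipy.optimize import minimize

k = np.linspace(0.05, 1.5, 30)               # reduced frequencies
q_tab = 1.0 + 0.5j * k - 0.2 * k**2 + 0.3 * (1j * k) / (1j * k + 0.4)  # "data"

def basis(b):
    ik = 1j * k
    b = np.abs(b)                            # keep lags positive
    cols = [np.ones_like(ik), ik, ik**2] + [ik / (ik + bj) for bj in b]
    return np.column_stack(cols)

def inner_fit_error(b):
    X = basis(b)
    A, *_ = np.linalg.lstsq(X, q_tab, rcond=None)   # linear level
    return np.sum(np.abs(X @ A - q_tab) ** 2)

outer = minimize(inner_fit_error, x0=[0.2, 0.8], method="Nelder-Mead")  # nonlinear level
print(outer.x, inner_fit_error(outer.x))
```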
Limitations of inclusive fitness.
Allen, Benjamin; Nowak, Martin A; Wilson, Edward O
2013-12-10
Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed.
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M
2011-09-10
The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
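For the linear case, the standard error of the fitted function follows directly from the parameter covariance matrix, var f(x) = g(x)^T C g(x) with g(x) the basis vector; a small sketch for a straight-line fit:

```python
# Standard error of a fitted polynomial as a function of x, propagated from
# the parameter covariance returned by the least-squares fit.
import numpy as np

x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + np.random.default_rng(3).normal(0, 0.3, x.size)

p, C = np.polyfit(x, y, 1, cov=True)       # parameters and their covariance
G = np.vander(x, 2)                        # basis [x, 1], matching polyfit order
se_fit = np.sqrt(np.einsum("ij,jk,ik->i", G, C, G))   # diag(G C G^T)
print(se_fit[:5])                          # standard error of the fit vs. x
```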
Fitting a Point Cloud to a 3D Polyhedral Surface
NASA Astrophysics Data System (ADS)
Popov, E. V.; Rotkov, S. I.
2017-05-01
The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem is usually associated with the ambiguous task of comparing two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to replace a non-linear problem solution with several linear steps. The squared distance (SD) is the general criterion used to control the convergence of a set of points to a target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the remote measurement of the geometry of large-scale objects in a contactless fashion.
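The PCA stage can be sketched as an SVD of the centered points, giving the frame into which the cloud (or target surface) is rotated before the SGM refinement; the cloud below is synthetic.

```python
# Minimal sketch of the PCA alignment step for an unorganized point cloud.
import numpy as np

def principal_frame(pts):
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    return c, vt            # centroid and principal axes (rows of vt)

cloud = np.random.default_rng(4).normal(size=(500, 3)) * [5.0, 2.0, 0.3]
c_cloud, axes_cloud = principal_frame(cloud)
aligned = (cloud - c_cloud) @ axes_cloud.T   # express cloud in its own frame
print(aligned.std(axis=0))                   # spread per principal axis, largest first
```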
A General Family of Limited Information Goodness-of-Fit Statistics for Multinomial Data
ERIC Educational Resources Information Center
Joe, Harry; Maydeu-Olivares, Alberto
2010-01-01
Maydeu-Olivares and Joe (J. Am. Stat. Assoc. 100:1009-1020, "2005"; Psychometrika 71:713-732, "2006") introduced classes of chi-square tests for (sparse) multidimensional multinomial data based on low-order marginal proportions. Our extension provides general conditions under which quadratic forms in linear functions of cell residuals are…
ERIC Educational Resources Information Center
Cheshire, Daniel C.
2017-01-01
The introduction to general topology represents a challenging transition for students of advanced mathematics. It requires the generalization of their previous understanding of ideas from fields like geometry, linear algebra, and real or complex analysis to fit within a more abstract conceptual system. Students must adopt a new lexicon of…
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximization...
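Of the three techniques, iterative proportional fitting is the most self-contained to illustrate; a minimal sketch with an invented 2x2 exposure table:

```python
# Iterative proportional fitting (IPF): scale a seed table until its margins
# match target row/column totals.
import numpy as np

seed = np.array([[10.0, 20.0], [30.0, 40.0]])
row_targets = np.array([40.0, 60.0])
col_targets = np.array([55.0, 45.0])

table = seed.copy()
for _ in range(100):
    table *= (row_targets / table.sum(axis=1))[:, None]   # match row margins
    table *= (col_targets / table.sum(axis=0))[None, :]   # match column margins
print(table.round(2))
```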
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain approximately 10,000 references - this doesn't include those who use it, but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter.
This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand the downward gradient methods have a much wider domain of convergence, but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. There have been ways found to use successive non-linear least squares fitting to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to the non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
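The report's central trick can be imitated with any L-M least-squares driver by handing it Poisson deviance residuals, since their sum of squares is the Poisson deviance and its minimizer is the Poisson MLE; a hedged sketch on a synthetic decay histogram (not the authors' code):

```python
# L-M minimization of the Poisson deviance via deviance residuals; a
# single-exponential decay histogram stands in for lifetime data.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 50)
counts = np.random.default_rng(5).poisson(100.0 * np.exp(-t / 2.5))

def dev_residuals(p):
    amp, tau = p
    m = amp * np.exp(-t / tau)                       # model intensities
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(counts > 0, counts * np.log(counts / m), 0.0)
    return np.sign(counts - m) * np.sqrt(2.0 * (m - counts + term))

fit = least_squares(dev_residuals, x0=[80.0, 2.0], method="lm")  # MINPACK L-M
print(fit.x)   # ~ (100, 2.5)
```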
Trait-fitness relationships determine how trade-off shapes affect species coexistence.
Ehrlich, Elias; Becks, Lutz; Gaedke, Ursula
2017-12-01
Trade-offs between functional traits are ubiquitous in nature and can promote species coexistence depending on their shape. Classic theory predicts that convex trade-offs facilitate coexistence of specialized species with extreme trait values (extreme species) while concave trade-offs promote species with intermediate trait values (intermediate species). We show here that this prediction becomes insufficient when the traits translate non-linearly into fitness which frequently occurs in nature, e.g., an increasing length of spines reduces grazing losses only up to a certain threshold resulting in a saturating or sigmoid trait-fitness function. We present a novel, general approach to evaluate the effect of different trade-off shapes on species coexistence. We compare the trade-off curve to the invasion boundary of an intermediate species invading the two extreme species. At this boundary, the invasion fitness is zero. Thus, it separates trait combinations where invasion is or is not possible. The invasion boundary is calculated based on measurable trait-fitness relationships. If at least one of these relationships is not linear, the invasion boundary becomes non-linear, implying that convex and concave trade-offs not necessarily lead to different coexistence patterns. Therefore, we suggest a new ecological classification of trade-offs into extreme-favoring and intermediate-favoring which differs from a purely mathematical description of their shape. We apply our approach to a well-established model of an empirical predator-prey system with competing prey types facing a trade-off between edibility and half-saturation constant for nutrient uptake. We show that the survival of the intermediate prey depends on the convexity of the trade-off. Overall, our approach provides a general tool to make a priori predictions on the outcome of competition among species facing a common trade-off in dependence of the shape of the trade-off and the shape of the trait-fitness relationships. © 2017 by the Ecological Society of America.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J^2) statistics can be applied directly. In a simulation study, TG, HL, and J^2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J^2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J^2. © 2015 John Wiley & Sons Ltd/London School of Economics.
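For orientation, the classic grouped statistic that TG and J^2 build on can be computed in a few lines; this sketch implements only the familiar Hosmer-Lemeshow recipe, not the generalized TG statistic of the paper.

```python
# Hosmer-Lemeshow-type grouped GOF statistic: group by deciles of fitted
# probability and compare observed vs. expected counts with chi-square.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, g=10):
    order = np.argsort(p_hat)
    groups = np.array_split(order, g)                 # ~equal-size risk deciles
    stat = 0.0
    for idx in groups:
        o, e, n = y[idx].sum(), p_hat[idx].sum(), len(idx)
        stat += (o - e) ** 2 / (e * (1 - e / n))
    return stat, chi2.sf(stat, g - 2)                 # df = g - 2 for logistic

rng = np.random.default_rng(6)
p = rng.uniform(0.05, 0.95, 500)
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```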
García-Rubio, Javier; Olivares, Pedro R; Lopez-Legarrea, Patricia; Gómez-Campos, Rossana; Cossio-Bolaños, Marco A; Merellano-Navarro, Eugenio
2015-10-01
The objective of this study was to analyze the potential relationships of Health Related Quality of Life (HRQoL) with weight status, physical activity (PA) and fitness in Chilean adolescents, in both independent and combined analyses. A sample of 767 participants (47.5% females), aged between 12 and 18 (mean age 15.5), was employed. All measurements were carried out using self-reported instruments; Kidscreen-10, IPAQ and IFIS were used to assess HRQoL, PA and fitness, respectively. One-factor ANOVA and linear regression models were applied to analyze associations between HRQoL, weight status, PA and fitness, using age and sex as confounders. Body mass index, level of PA and fitness were independently associated with HRQoL in Chilean adolescents. However, when these associations were analyzed jointly and adjusted by sex and age, only fitness remained significantly related to HRQoL. General fitness is associated with HRQoL independently of sex, age, bodyweight status and level of PA. The relationships of nutritional status and weekly PA with HRQoL are mediated by sex, age and general fitness. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L
2014-01-01
Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models" and "multilevel generalized linear model", and the research domain was refined to Science Technology. Papers reporting methodological considerations without application, and those that were not involved in clinical medicine or written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Most of the useful information about GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, estimation method, validation, and selection of the model.
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes were compared with those of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
NASA Astrophysics Data System (ADS)
Fan, Zuhui
2000-01-01
The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.
Code of Federal Regulations, 2011 CFR
2011-07-01
... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...
NASA Astrophysics Data System (ADS)
Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki
2002-05-01
To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits results of both the all atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests that the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve, whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved, and charges distributed near the molecular surface were shown to lead to the apparent linearity.
Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.
Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin
2017-02-01
The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
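The comparison at the heart of the paper can be mimicked by fitting a single-pole Cole-Cole model on a linear versus a logarithmic frequency grid; parameter values and grids below are generic assumptions, not the measured tissue data.

```python
# Fit a single-pole Cole-Cole model,
#   eps(w) = eps_inf + d_eps / (1 + (1j*w*tau)**(1 - alpha)),
# to complex permittivity sampled on linear vs. logarithmic frequency grids.
import numpy as np
from scipy.optimize import least_squares

true = dict(eps_inf=4.0, d_eps=50.0, tau=8e-9, alpha=0.1)

def cole_cole(w, eps_inf, d_eps, tau, alpha):
    return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))

def resid(p, w, data):
    diff = cole_cole(w, *p) - data
    return np.concatenate([diff.real, diff.imag])    # stack real/imag parts

for name, f in [("linear", np.linspace(1e6, 1e10, 60)),
                ("log", np.logspace(6, 10, 60))]:
    w = 2 * np.pi * f
    data = cole_cole(w, **true)                      # noise-free demo data
    fit = least_squares(resid, x0=[5.0, 40.0, 5e-9, 0.2],
                        x_scale=[1.0, 10.0, 1e-9, 0.1], args=(w, data))
    print(name, fit.x)
```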
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that the vine copula mixed model can improve on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite being three-dimensional.
Variational and robust density fitting of four-center two-electron integrals in local metrics
NASA Astrophysics Data System (ADS)
Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł
2008-09-01
Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.
Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T
2015-01-01
Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses the indicator value 2/n to identify a linear fit, where n is the number of terms in the series. The ratios R_max = (a_max - a_min)/(S_n - n*a_min) and R_min = (a_max - a_min)/(n*a_max - S_n) are both equal to 2/n for an exactly linear series, where a_max is the maximum element, a_min is the minimum element, and S_n is the sum of all elements. If a series expected to follow y = c contains data that do not agree with the form y = c, then R_max > 2/n and R_min > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as (2/n)(1 + k1) and (2/n)(1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and a transformation technique that transforms the data into the form y = c, we show that it is possible to remove all data that do not agree with the linear fit. Furthermore, the method is independent of the number of data points, missing data, removed data points, and the nature of the distribution (Gaussian or non-Gaussian) of the outliers, noise, and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit even when the percentage of data agreeing with the linear fit is less than 50% and the deviation of the data that do not agree with the linear fit is very small, of the order of ±10^-4%. The method produces incorrect detections only when the numerical accuracy in the calculation process is insufficient.
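A minimal sketch of the indicator computation (variable names are ours, not the authors'; the y = c transformation and threshold bookkeeping are omitted):

```python
import numpy as np

def linear_fit_indicators(a):
    """Return the 2/n reference value and the R_max / R_min ratios
    described above. Assumes the series is not constant (so the
    denominators are non-zero)."""
    a = np.asarray(a, dtype=float)
    n = a.size
    s_n, a_max, a_min = a.sum(), a.max(), a.min()
    r_max = (a_max - a_min) / (s_n - n * a_min)   # flags the maximum element
    r_min = (a_max - a_min) / (n * a_max - s_n)   # flags the minimum element
    return 2.0 / n, r_max, r_min

# For an exactly linear (arithmetic) series, both ratios equal 2/n...
ref, r_max, r_min = linear_fit_indicators(np.arange(10.0))
print(ref, r_max, r_min)          # 0.2 0.2 0.2

# ...while an outlier pushes R_max above a 2/n * (1 + k1) threshold.
ref, r_max, r_min = linear_fit_indicators(np.append(np.arange(10.0), 100.0))
print(r_max > ref)                # True
```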
Low dose radiation risks for women surviving the A-bombs in Japan: generalized additive model.
Dropkin, Greg
2016-11-24
Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low-dose subcohorts, using anonymous public data for the 1958-1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from the 0-100 mGy and 0-20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit and the capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, when removing exceptional data cells, or when adjusting the neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10-45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasipoisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis and of Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. These risks should be reflected in protection standards.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
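To make the fitting step concrete, here is a minimal maximum-likelihood fit of a cumulative Gaussian psychometric function (plain MLE on made-up staircase-style data; the bias-reduction correction described above is not implemented here):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data: stimulus levels, trial counts, and correct responses.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([20, 20, 20, 20, 20])
k = np.array([9, 11, 14, 18, 20])

def neg_log_likelihood(params):
    mu, log_sigma = params                        # log-sigma keeps spread > 0
    p = norm.cdf((x - mu) / np.exp(log_sigma))    # cumulative Gaussian
    p = np.clip(p, 1e-9, 1 - 1e-9)                # guard the logarithms
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[2.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)
```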
Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G
2013-12-01
Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with a residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for the MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by the deviance information criterion (DIC) and residual mean square, indicated the superiority of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between the MLOG and MPOI models. The Poisson model can therefore be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Mattei, G.; Ahluwalia, A.
2018-04-01
We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped-parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates ε̇. The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
Tight-binding study of stacking fault energies and the Rice criterion of ductility in the fcc metals
NASA Astrophysics Data System (ADS)
Mehl, Michael J.; Papaconstantopoulos, Dimitrios A.; Kioussis, Nicholas; Herbranson, M.
2000-02-01
We have used the Naval Research Laboratory (NRL) tight-binding (TB) method to calculate the generalized stacking fault energy and the Rice ductility criterion in the fcc metals Al, Cu, Rh, Pd, Ag, Ir, Pt, Au, and Pb. The method works well for all classes of metals, i.e., simple metals, noble metals, and transition metals. We compared our results with full potential linear-muffin-tin orbital and embedded atom method (EAM) calculations, as well as experiment, and found good agreement. This is impressive, since the NRL-TB approach only fits to first-principles full-potential linearized augmented plane-wave equations of state and band structures for cubic systems. Comparable accuracy with EAM potentials can be achieved only by fitting to the stacking fault energy.
Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Obreschkow, D.
2015-09-01
Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (http://github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (http://hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper, we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases, the hyper-fit solutions are in good agreement with published values, but uncover more information regarding the fitted model.
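The general likelihood is derived in the paper; for orientation, a standard form for a hyperplane defined by unit normal n̂ and offset c, with per-point Gaussian covariances C_i and intrinsic scatter σ normal to the plane (our reconstruction, not a quotation), is:

$$
\ln \mathcal{L} \;=\; -\frac{1}{2} \sum_{i=1}^{N}
\left[
\ln\!\left( 2\pi \left( \hat{n}^{\mathsf{T}} C_i \hat{n} + \sigma^2 \right) \right)
+ \frac{\left( \hat{n}^{\mathsf{T}} x_i - c \right)^2}{\hat{n}^{\mathsf{T}} C_i \hat{n} + \sigma^2}
\right].
$$

Each term projects the data covariance onto the plane normal and adds the intrinsic scatter in quadrature, which is why the fit handles heteroscedastic and covariant errors naturally.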
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability
General job stress: a unidimensional measure and its non-linear relations with outcome variables.
Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley
2012-04-01
This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
The survival time of patients with a disease and the incidence of that disease (a count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that rarely occurring diseases have shorter survival times, or vice versa. Because of this, jointly modelling these two variables will provide interesting and certainly improved results compared with modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the discrete-time hazard model with the Poisson regression model, to jointly model survival and count. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of the survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as the root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
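A GRNN is, in essence, kernel-weighted regression with a Gaussian kernel (Specht's formulation); a minimal sketch follows, on made-up toy data, with nothing taken from the Dengue study itself:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network prediction: a Gaussian-kernel
    weighted average of the training targets (width sigma)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)          # summation/output layer

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))              # toy inputs
y = np.sin(4 * X[:, 0]) + X[:, 1] + rng.normal(0, 0.1, 200)
Xq = rng.uniform(0, 1, size=(5, 2))
print(grnn_predict(X, y, Xq))
```

The single smoothing parameter sigma is typically chosen by cross-validation, which is part of what makes the GRNN attractive for small clustered data sets.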
Pang, Haowen; Sun, Xiaoyang; Yang, Bo; Wu, Jingbo
2018-05-01
To ensure good quality intensity-modulated radiation therapy (IMRT) planning, we proposed the use of a quality control method based on the generalized equivalent uniform dose (gEUD) that predicts absorbed radiation doses in organs at risk (OAR). We conducted a retrospective analysis of patients who underwent IMRT for the treatment of cervical carcinoma, nasopharyngeal carcinoma (NPC), or non-small cell lung cancer (NSCLC). IMRT plans were randomly divided into data acquisition and data verification groups. OAR in the data acquisition group for cervical carcinoma and NPC were further classified into sub-organs at risk (sOAR). The normalized volume of the sOAR and the normalized gEUD (a = 1) were analyzed using multiple linear regression to establish a fitting formula. For NSCLC, the normalized intersection volume of the planning target volume (PTV) and lung, the maximum diameter of the PTV (left-right, anterior-posterior, and superior-inferior), and the normalized gEUD (a = 1) were analyzed using multiple linear regression to establish a fitting formula for the lung gEUD (a = 1). The r-squared and P values indicated that the fitting formula was a good fit. In the data verification group, the IMRT plans were used to verify the accuracy of the fitting formula and to compare the gEUD (a = 1) for each OAR between the subjective method and the gEUD-based method. In conclusion, the gEUD-based method can be used effectively for quality control and can reduce the influence of subjective factors on IMRT planning optimization. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
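For reference, the generalized equivalent uniform dose underlying the method is conventionally defined (Niemierko's form; this definition is supplied by us, not quoted from the paper) as

$$
\mathrm{gEUD}(a) \;=\; \left( \sum_i v_i\, D_i^{\,a} \right)^{1/a},
$$

where v_i is the fractional organ volume receiving dose D_i. For a = 1, as used throughout the study above, the gEUD reduces to the volume-weighted mean dose.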
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting a regression model that relates the number of samples included inside a ball around an inspection point to a power of the ball's radius, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
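The idea exploits the fact that, on a d-dimensional manifold, the number of points inside a ball grows as N(r) ∝ r^d. A crude log-log slope estimate (an illustration of the principle, not the authors' GLM-plus-MLE procedure) looks like this:

```python
import numpy as np

def local_dimension(points, center, radii):
    """Estimate local intrinsic dimension as the slope of
    log N(r) versus log r around a given inspection point."""
    dist = np.linalg.norm(points - center, axis=1)
    counts = np.array([(dist <= r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
data = rng.normal(size=(5000, 3))          # points filling a 3-D space
print(local_dimension(data, data[0], np.linspace(0.2, 0.8, 8)))  # close to 3
```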
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
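A minimal one-dimensional sketch of the two-stage idea (local-constant rather than local-polynomial smoothing, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 300))
sigma = 0.2 + 0.8 * x                        # heteroscedastic noise level
y = 1.0 + 2.0 * x + sigma * rng.normal(size=x.size)

# Stage 1: OLS fit, then kernel-smooth squared residuals to get sigma^2(x).
X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid2 = (y - X @ beta_ols) ** 2
h = 0.1                                      # kernel bandwidth
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
sigma2_hat = (K @ resid2) / K.sum(axis=1)    # non-parametric variance estimate

# Stage 2: generalized (weighted) least squares with the estimated variances.
W = 1.0 / sigma2_hat
beta_gls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
print(beta_ols, beta_gls)
```

No parametric form for the variance function is assumed anywhere, which is the point of the approach.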
Modelling the isometric force response to multiple pulse stimuli in locust skeletal muscle.
Wilson, Emma; Rustighi, Emiliano; Mace, Brian R; Newland, Philip L
2011-02-01
An improved model of locust skeletal muscle will inform on the general behaviour of invertebrate and mammalian muscle, with the eventual aim of improving biomedical models of human muscles, embracing prosthetic construction and muscle therapy. In this article, the isometric response of the locust hind leg extensor muscle to input pulse trains is investigated. Experimental data were collected by stimulating the muscle directly and measuring the force at the tibia. The responses to constant-frequency stimulus trains of various frequencies and numbers of pulses were decomposed into the response to each individual stimulus. Each individual pulse response was then fitted to a model, it being assumed that the response to each pulse could be approximated as a linear impulse response; no assumptions were made about the model order. When the interpulse frequency (IPF) was low and the number of pulses in the train small, a second-order model provided a good fit to each pulse. For moderate IPF or for long pulse trains, a linear third-order model provided a better fit to the response to each pulse. The fit using a second-order model deteriorated with increasing IPF. When the input comprised higher IPFs with a large number of pulses, the assumption that the response was linear could not be confirmed. A generalised model is also presented. This model is second-order and contains two nonlinear terms. The model is able to capture the force response to a range of inputs, including cases where the input comprised higher-frequency pulse trains and the assumption of quasi-linear behaviour could not be confirmed.
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the strain tensors that are best-fitted in the least-squares sense at the mesh points where the deflections are given. The selection of local coordinate systems, whenever necessary, is automatic. Core memory is used efficiently by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions at assembly time.
Helgesson, P; Sjöstrand, H
2017-11-01
Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or has a model defect, it is not trivial to do this correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, in which three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one of two overlapping peaks is studied. Synthetic data are used to investigate the effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
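One standard way to combine Levenberg-Marquardt least squares with a Gaussian prior, shown here as an illustration of the general idea rather than the authors' code, is to append the prior as pseudo-observations to the residual vector:

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, x):
    a, b = theta
    return a * np.exp(-b * x)                    # hypothetical model

x = np.linspace(0, 4, 40)
rng = np.random.default_rng(3)
y = model([2.0, 0.7], x) + rng.normal(0, 0.05, x.size)
sigma_y = 0.05 * np.ones_like(y)

theta_prior = np.array([1.8, 1.0])               # assumed prior means
sigma_prior = np.array([0.5, 0.5])               # assumed prior std devs

def residuals(theta):
    data_part = (y - model(theta, x)) / sigma_y
    prior_part = (theta - theta_prior) / sigma_prior   # prior as pseudo-data
    return np.concatenate([data_part, prior_part])

fit = least_squares(residuals, x0=theta_prior, method="lm")
print(fit.x)
```

Minimizing this augmented sum of squares is equivalent to maximizing the posterior under a Gaussian prior, which is why implementing prior knowledge this way is preferable to freezing parameters at assumed values.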
A generalized multivariate regression model for modelling ocean wave heights
NASA Astrophysics Data System (ADS)
Wang, X. L.; Feng, Y.; Swail, V. R.
2012-04-01
In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of the Pierce skill score, frequency bias index, and correlation skill score. Because they are not normally distributed, wave heights are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without the Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct the historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subjected to a trend analysis that allows for non-linear (polynomial) trends.
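A minimal sketch of the transform-then-regress step (toy data; the paper's multivariate MSLP predictors are reduced here to a single stand-in predictor):

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(4)
mslp_index = rng.normal(size=500)                              # stand-in predictor
hs = np.exp(0.5 + 0.3 * mslp_index + rng.normal(0, 0.2, 500))  # skewed "Hs"

hs_bc, lam = stats.boxcox(hs)                  # data-adaptive (ML) Box-Cox
slope, intercept = np.polyfit(mslp_index, hs_bc, 1)

pred_bc = intercept + slope * mslp_index       # predict on transformed scale
pred_hs = inv_boxcox(pred_bc, lam)             # back to wave-height units
print(lam, pred_hs[:3])
```

The lag-1 autocorrelation correction of the actual model is omitted here; it would enter as an AR(1) structure on the regression residuals.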
Are non-linearity effects of absorption important for MAX-DOAS observations?
NASA Astrophysics Data System (ADS)
Pukite, Janis; Wang, Yang; Wagner, Thomas
2017-04-01
For scattered light observations, the absorption optical depth depends non-linearly on the trace gas concentrations if their absorption is strong. This is because the Beer-Lambert law is generally not applicable to scattered light measurements, where many (i.e., more than one) light paths contribute to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorption non-linear effects cannot always be neglected. This is especially the case for observation geometries with spatially extended and diffuse light paths, most notably satellite limb geometry but also nadir measurements. Fortunately, non-linear effects can be quantified by expanding the radiative transfer equation in a Taylor series with respect to the trace gas absorption coefficients. Thereby, if necessary, (1) the higher-order absorption structures can be described as separate fit parameters in the DOAS fit, and (2) the algorithm constraints of retrievals of VCDs and profiles can be improved by considering higher-order sensitivity parameters. In this study, we investigate the contribution of the higher-order absorption structures for the MAX-DOAS observation geometry for different atmospheric and ground properties (cloud and aerosol effects, trace gas amount, albedo) and geometries (different Sun and viewing angles).
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.
Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe
2015-01-15
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
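The linearization amounts to writing the measured signal as a non-negative combination of precomputed response atoms and solving a linear inverse problem. A generic sketch (simple non-negative least squares; AMICO itself adds regularization terms on top of this) is:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_meas, n_atoms = 60, 40
A = np.abs(rng.normal(size=(n_meas, n_atoms)))   # dictionary of model responses
x_true = np.zeros(n_atoms)
x_true[[3, 17]] = [0.7, 0.3]                     # sparse ground-truth weights
y = A @ x_true + rng.normal(0, 0.01, n_meas)     # noisy measured signal

x_hat, resid = nnls(A, y)                        # convex and very fast to solve
print(np.flatnonzero(x_hat > 0.05), resid)
```

Because each voxel reduces to one small convex problem, the fit parallelizes trivially, which is where the reported orders-of-magnitude speed-up comes from.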
Effect Size Measure and Analysis of Single Subject Designs
ERIC Educational Resources Information Center
Society for Research on Educational Effectiveness, 2013
2013-01-01
One of the vexing problems in the analysis of SSD is in the assessment of the effect of intervention. Serial dependence notwithstanding, the linear model approach that has been advanced involves, in general, the fitting of regression lines (or curves) to the set of observations within each phase of the design and comparing the parameters of these…
NASA Astrophysics Data System (ADS)
Puķīte, Jānis; Wagner, Thomas
2016-05-01
We address the application of differential optical absorption spectroscopy (DOAS) to scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements, where many light paths contribute to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorption non-linear effects cannot always be neglected. This is especially the case for observation geometries in which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In such cases, full retrieval algorithms are often applied to address the non-linearities, requiring iterative forward modelling of absorption spectra and time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used, e.g., to build up a lookup table. Together with the widely used box air mass factors (effective light paths) describing the linear response to an increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need to repeat the radiative transfer modelling when the absorption scenario is modified, even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (the so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements). Therefore, we introduce an iterative retrieval algorithm that corrects for the higher-order absorption structures not yet considered in the DOAS fit, as well as for the absorption dependence on temperature and scattering processes.
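Schematically (our notation, condensing the idea shared by this and the preceding MAX-DOAS study), the measured optical depth is expanded around zero absorber amount:

$$
\tau(\lambda) \;=\; -\ln\frac{I(\lambda,\mathbf{c})}{I(\lambda,\mathbf{0})}
\;\approx\; \sum_i \frac{\partial \tau}{\partial c_i}\, c_i
\;+\; \frac{1}{2}\sum_{i,j} \frac{\partial^2 \tau}{\partial c_i\, \partial c_j}\, c_i c_j
\;+\; \dots,
$$

where the first-order derivatives correspond to the familiar box air mass factors and the second-order terms are the additional sensitivity parameters that become significant for strong absorbers.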
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
The long-solved problem of the best-fit straight line: application to isotopic mixing lines
NASA Astrophysics Data System (ADS)
Wehr, Richard; Saleska, Scott R.
2017-01-01
It has been almost 50 years since York published an exact and general solution for the best-fit straight line to independent points with normally distributed errors in both x and y. York's solution is highly cited in the geophysical literature but almost unknown outside of it, so that there has been no ebb in the tide of books and papers wrestling with the problem. Much of the post-1969 literature on straight-line fitting has sown confusion not merely by its content but by its very existence. The optimal least-squares fit is already known; the problem is already solved. Here we introduce the non-specialist reader to York's solution and demonstrate its application in the interesting case of the isotopic mixing line, an analytical tool widely used to determine the isotopic signature of trace gas sources for the study of biogeochemical cycles. The most commonly known linear regression methods - ordinary least-squares regression (OLS), geometric mean regression (GMR), and orthogonal distance regression (ODR) - have each been recommended as the best method for fitting isotopic mixing lines. In fact, OLS, GMR, and ODR are all special cases of York's solution that are valid only under particular measurement conditions, and those conditions do not hold in general for isotopic mixing lines. Using Monte Carlo simulations, we quantify the biases in OLS, GMR, and ODR under various conditions and show that York's general - and convenient - solution is always the least biased.
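For reference, York's solution is a short fixed-point iteration. A compact implementation for uncorrelated x and y errors (our sketch; correlated errors add one extra term per point) might look like this:

```python
import numpy as np

def york_fit(x, y, sx, sy, tol=1e-12, max_iter=100):
    """Best-fit line y = a + b*x for points with Gaussian errors sx, sy
    in both coordinates (York's least-squares solution, zero x-y error
    correlation assumed). Returns (intercept, slope)."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2
    b = np.polyfit(x, y, 1)[0]                 # OLS slope as starting value
    for _ in range(max_iter):
        W = wx * wy / (wx + b**2 * wy)         # combined weight per point
        xbar = np.average(x, weights=W)
        ybar = np.average(y, weights=W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    return ybar - b * xbar, b
```

With equal y-weights and negligible x-errors, the iteration collapses to OLS, consistent with the special cases discussed above.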
LIGO GW150914 and GW151226 gravitational wave detection and generalized gravitation theory (MOG)
NASA Astrophysics Data System (ADS)
Moffat, J. W.
2016-12-01
The nature of gravitational waves in a generalized gravitation theory is investigated. The linearized field equations, the metric tensor quadrupole moment power, and the decrease in radius of an inspiralling binary system of two compact objects are derived. The generalized Kerr metric describing a spinning black hole is determined by its mass M and the spin parameter a = cS/GM². The LIGO-Virgo collaboration data are fitted with smaller binary black hole masses, in agreement with the currently observed electromagnetic X-ray binary upper bound for a black hole mass, M ≲ 10 M⊙.
ERIC Educational Resources Information Center
Lazar, Ann A.; Zerbe, Gary O.
2011-01-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…
Doona, Christopher J; Feeherry, Florence E; Ross, Edward W
2005-04-15
Predictive microbial models generally rely on the growth of bacteria in laboratory broth to approximate the microbial growth kinetics expected to take place in actual foods under identical environmental conditions. Sigmoidal functions such as the Gompertz or logistic equation accurately model the typical microbial growth curve from the lag to the stationary phase and provide the mathematical basis for estimating parameters such as the maximum growth rate (MGR). Stationary phase data can begin to show a decline, making it difficult to discern which data to include in the analysis of the growth curve, a factor that influences the calculated values of the growth parameters. In contradistinction, the quasi-chemical kinetics model provides additional capabilities in microbial modelling and fits growth-death kinetics (all four phases of the microbial lifecycle, continuously) for a general set of microorganisms in a variety of actual food substrates. The quasi-chemical model is a set of ordinary differential equations (ODEs) derived from a hypothetical four-step chemical mechanism involving an antagonistic metabolite (quorum sensing), and it successfully fits the kinetics of pathogens (Staphylococcus aureus, Escherichia coli and Listeria monocytogenes) in various foods (bread, turkey meat, ham and cheese) as functions of different hurdles (a_w, pH, temperature and anti-microbial lactate). The calculated value of the MGR depends on whether growth-death data or only growth data are used in the fitting procedure. The quasi-chemical kinetics model is also exploited for use with the novel food processing technology of high-pressure processing. The high-pressure inactivation kinetics of E. coli are explored in a model food system over the pressure (P) range of 207-345 MPa (30,000-50,000 psi) and the temperature (T) range of 30-50 degrees C. At relatively low combinations of P and T, the inactivation curves are non-linear and exhibit a shoulder prior to a more rapid rate of microbial destruction. In the higher P, T regime, the inactivation plots tend to be linear. In all cases, the quasi-chemical model successfully fit the linear and curvilinear inactivation plots for E. coli in model food systems. The experimental data and the quasi-chemical mathematical model described herein are candidates for inclusion in ComBase, the developing database that combines data and models from the USDA Pathogen Modeling Program and the UK Food MicroModel.
Trapé, Átila Alexandre; Marques, Renato Francisco Rodrigues; Lizzi, Elisângela Aparecida da Silva; Yoshimura, Fernando Eidi; Franco, Laercio Joel; Zago, Anderson Saranz
2017-01-01
To investigate the association of demographic and socioeconomic conditions with physical fitness and the regular practice of physical exercise in participants of community projects supervised by a physical education teacher. This enabled us to investigate whether the adoption of an active lifestyle depends only on personal choice or is also influenced by socioeconomic factors. 213 individuals aged over 50 years joined the study and provided information about their socioeconomic status (age, gender, education/years of study, and income), usual level of physical activity (ULPA), and physical fitness, assessed by a battery of physical tests from which a general functional fitness index (GFFI) was calculated. The generalized linear model showed that participants ranked in the highest GFFI groups (good and very good) had more years of study and higher income (p < 0.05). The multiple linear regression model complements the previous analysis, demonstrating the magnitude of the change in the GFFI in association with years of study (group > 15), income (all groups) and age (p < 0.05). By means of analysis of variance, a difference between the groups was verified, and longer practice of exercise (> 6 months) was also associated with education and income (p < 0.05); among the groups with at least six months of exercise practice, the supervised group showed better results in the GFFI (p < 0.05). The association between variables strengthens the hypothesis that adherence to and maintenance of physical exercise might depend not only on the individual's choice but also on socioeconomic factors, which can influence the choice of an active lifestyle.
Potential pitfalls when denoising resting state fMRI data using nuisance regression.
Bright, Molly G; Tench, Christopher R; Murphy, Kevin
2017-07-01
In resting state fMRI, it is necessary to remove signal variance associated with noise sources, leaving cleaned fMRI time-series that more accurately reflect the underlying intrinsic brain fluctuations of interest. This is commonly achieved through nuisance regression, in which the fit is calculated of a noise model of head motion and physiological processes to the fMRI data in a General Linear Model, and the "cleaned" residuals of this fit are used in further analysis. We examine the statistical assumptions and requirements of the General Linear Model, and whether these are met during nuisance regression of resting state fMRI data. Using toy examples and real data we show how pre-whitening, temporal filtering and temporal shifting of regressors impact model fit. Based on our own observations, existing literature, and statistical theory, we make the following recommendations when employing nuisance regression: pre-whitening should be applied to achieve valid statistical inference of the noise model fit parameters; temporal filtering should be incorporated into the noise model to best account for changes in degrees of freedom; temporal shifting of regressors, although merited, should be achieved via optimisation and validation of a single temporal shift. We encourage all readers to make simple, practical changes to their fMRI denoising pipeline, and to regularly assess the appropriateness of the noise model used. By negotiating the potential pitfalls described in this paper, and by clearly reporting the details of nuisance regression in future manuscripts, we hope that the field will achieve more accurate and precise noise models for cleaning the resting state fMRI time-series. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
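A bare-bones version of the regress-and-keep-residuals step (without the pre-whitening, filtering, and regressor-shifting refinements the authors recommend) is:

```python
import numpy as np

def nuisance_regress(ts, confounds):
    """Remove nuisance variance from a time-series by OLS: fit the
    confounds (plus an intercept) and return the residuals."""
    n_t = ts.shape[0]
    X = np.column_stack([np.ones(n_t), confounds])   # design: intercept + noise model
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta                             # "cleaned" residuals

rng = np.random.default_rng(6)
motion = rng.normal(size=(200, 6))                   # e.g. 6 head-motion regressors
signal = rng.normal(size=200)                        # intrinsic fluctuation of interest
data = signal + motion @ np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.4])
cleaned = nuisance_regress(data, motion)
print(np.corrcoef(cleaned, signal)[0, 1])            # close to 1
```

The paper's point is that this simple OLS step is only statistically valid once autocorrelation (via pre-whitening) and filtering-induced loss of degrees of freedom are handled explicitly.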
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion, and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness of fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposes a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness of fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the NB GLM. Considering that the DP GLM can be estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers modeling count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
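For context, Efron's double Poisson density, in the form usually quoted in the literature (reproduced here from the general literature, not from this paper), is

$$
f(y \mid \mu, \theta) \;=\; c(\mu,\theta)\, \theta^{1/2}\, e^{-\theta\mu}
\left( \frac{e^{-y}\, y^{y}}{y!} \right)
\left( \frac{e\mu}{y} \right)^{\theta y},
$$

where θ < 1 gives over-dispersion and θ > 1 under-dispersion. The normalizing constant c(μ, θ) has no closed form; Efron's own rough approximation is 1/c(μ,θ) ≈ 1 + (1-θ)/(12μθ) · (1 + 1/(μθ)), and the study above proposes a more accurate alternative.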
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salam, Norfatin; Kassim, Suraiya
2013-04-01
Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model with a linear trend in the location parameter, while the Kota Kinabalu and Sibu stations are better fitted by a model with a trend in the logarithm of the scale parameter. The return level, i.e. the level of events (maximum temperature) expected to be exceeded once, on average, in a given number of years, is also obtained.
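A minimal version of the stationary fit-and-return-level computation (using SciPy's sign convention for the shape parameter; the trend models above would additionally let the location or scale vary with time, which this sketch omits):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
annual_max = genextreme.rvs(c=0.1, loc=34.0, scale=1.2, size=50)  # toy data

c, loc, scale = genextreme.fit(annual_max)         # ML fit of the GEV
T = 100                                            # return period in years
return_level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(c, loc, scale, return_level)
```

The T-year return level is simply the (1 - 1/T) quantile of the fitted distribution, which is what the quoted definition ("exceeded once, on average, in a given number of years") amounts to.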
Physical fitness reference standards in fibromyalgia: The al-Ándalus project.
Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B
2017-11-01
We aimed (1) to report age-specific physical fitness levels in people with fibromyalgia of a representative sample from Andalusia; and (2) to compare the fitness levels of people with fibromyalgia with non-fibromyalgia controls. This cross-sectional study included 468 (21 men) patients with fibromyalgia and 360 (55 men) controls. The fibromyalgia sample was geographically representative from southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using a linear regression for men. Our results show that people with fibromyalgia reached worse performance in all fitness tests than controls (P < 0.001) in all age ranges (P < 0.001). This study provides a comprehensive description of age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample of patients with fibromyalgia from southern of Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with age-matched healthy controls. This information could be useful to correctly interpret physical fitness assessments and helping health care providers to identify individuals at risk for losing physical independence. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Commisso, Maria S; Martínez-Reina, Javier; Mayo, Juana; Domínguez, Jaime
2013-02-01
The main objectives of this work are: (a) to introduce an algorithm for adjusting the quasi-linear viscoelastic model to fit a material using a stress relaxation test and (b) to validate a protocol for performing such tests in temporomandibular joint discs. This algorithm is intended for fitting the Prony series coefficients and the hyperelastic constants of the quasi-linear viscoelastic model, taking into account that the relaxation test is performed with an initial ramp loading at a certain rate. This algorithm was validated before being applied to achieve the second objective. Generally, the complete three-dimensional formulation of the quasi-linear viscoelastic model is very complex. Therefore, it is necessary to design an experimental test that ensures a simple stress state, such as uniaxial compression, to facilitate obtaining the viscoelastic properties. This work provides some recommendations about the experimental setup, which are important to follow, as an inadequate setup could produce a stress state far from uniaxial and thus distort the material constants determined from the experiment. The test considered is a stress relaxation test using unconfined compression, performed on cylindrical specimens extracted from temporomandibular joint discs. To validate the experimental protocol, the test was numerically simulated using finite-element modelling. The disc was arbitrarily assigned a set of quasi-linear viscoelastic constants (c1) in the finite-element model. Another set of constants (c2) was obtained by fitting the results of the simulated test with the proposed algorithm. The deviation of constants c2 from constants c1 measures how far the stresses are from the uniaxial state. The effects of the following features of the experimental setup on this deviation have been analysed: (a) the friction coefficient between the compression plates and the specimen (which should be as low as possible); (b) the portion of the specimen glued to the compression plates (smaller glued areas are better); and (c) the variation in the thickness of the specimen. The specimen's faces should be parallel to ensure a uniaxial stress state. However, this is not possible in real specimens, and a criterion must be defined for accepting a specimen in terms of its thickness variation and the deviation of the fitted constants arising from such a variation.
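Stripped to its core (instantaneous step loading, two Prony terms, no ramp correction, and all names ours), fitting the reduced relaxation function looks like:

```python
import numpy as np
from scipy.optimize import curve_fit

def prony_relaxation(t, g_inf, g1, tau1, g2, tau2):
    """Two-term Prony series for the normalized relaxation modulus."""
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

t = np.linspace(0.01, 100.0, 400)
rng = np.random.default_rng(8)
g_obs = prony_relaxation(t, 0.3, 0.4, 0.5, 0.3, 20.0) + rng.normal(0, 0.005, t.size)

p0 = [0.5, 0.3, 1.0, 0.2, 10.0]                    # initial guesses
params, _ = curve_fit(prony_relaxation, t, g_obs, p0=p0,
                      bounds=(0, [1, 1, 10, 1, 200]))
print(params)
```

The algorithm proposed in the paper goes further by correcting for the finite-rate loading ramp, which matters because a real relaxation test never applies the strain instantaneously.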
Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth
2007-01-01
A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model by the stringent use of linear algebra to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers, without recourse to extensive approximations, in a field in which other analytical methods are difficult. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers regarding, for instance, the degree of polymerization, the distribution of a given monomer in different copolymers, and its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data by using the program Matlab, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4 + porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated with each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4 + 5 could be characterized with respect to the above-mentioned parameters.
NASA Astrophysics Data System (ADS)
Mardirossian, Narbe; Head-Gordon, Martin
2015-02-01
A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Kumar, K Vasanth; Sivanesan, S
2006-08-25
The pseudo second order kinetic expressions of Ho, Sobkowsk and Czerwinski, Blanchard et al., and Ritchie were fitted to experimental kinetic data for the sorption of malachite green onto activated carbon by both non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo second order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho embody similar ideas about the pseudo second order model but with different assumptions. The best fit of the experimental data by Ho's pseudo second order expression under both linear and non-linear regression showed that it was a better kinetic expression than the other pseudo second order expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from Ho's pseudo second order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best fitting pseudo isotherms were the Langmuir and Redlich-Peterson isotherms; Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
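To make the linear-versus-non-linear comparison concrete, here is a minimal sketch (written for this article, not taken from the paper) that fits Ho's pseudo second order model q(t) = qe²kt/(1 + qe·k·t) both ways: via its linearized form t/q = 1/(k·qe²) + t/qe with ordinary least squares, and directly with non-linear regression. The qe and k values are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def ho_pso(t, qe, k):
    """Ho's pseudo second order model: q(t) = qe**2 * k * t / (1 + qe * k * t)."""
    return qe**2 * k * t / (1.0 + qe * k * t)

# Synthetic kinetic data with invented "true" constants plus noise.
t = np.linspace(1, 120, 25)
q = ho_pso(t, qe=85.0, k=0.002) + np.random.normal(0, 1.0, t.size)

# Linear method: regress t/q on t; slope = 1/qe, intercept = 1/(k*qe**2).
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin = 1.0 / slope
k_lin = 1.0 / (intercept * qe_lin**2)

# Non-linear method: fit the rate expression directly.
(qe_nl, k_nl), _ = curve_fit(ho_pso, t, q, p0=(50.0, 0.01))
print(f"linear:     qe={qe_lin:.1f}, k={k_lin:.5f}")
print(f"non-linear: qe={qe_nl:.1f}, k={k_nl:.5f}")
```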
The long-solved problem of the best-fit straight line: Application to isotopic mixing lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wehr, Richard; Saleska, Scott R.
2017-01-03
It has been almost 50 years since York published an exact and general solution for the best-fit straight line to independent points with normally distributed errors in both x and y. York's solution is highly cited in the geophysical literature but almost unknown outside of it, so that there has been no ebb in the tide of books and papers wrestling with the problem. Much of the post-1969 literature on straight-line fitting has sown confusion not merely by its content but by its very existence. The optimal least-squares fit is already known; the problem is already solved. Here we introduce the non-specialist reader to York's solution and demonstrate its application in the interesting case of the isotopic mixing line, an analytical tool widely used to determine the isotopic signature of trace gas sources for the study of biogeochemical cycles. The most commonly known linear regression methods – ordinary least-squares regression (OLS), geometric mean regression (GMR), and orthogonal distance regression (ODR) – have each been recommended as the best method for fitting isotopic mixing lines. In fact, OLS, GMR, and ODR are all special cases of York's solution that are valid only under particular measurement conditions, and those conditions do not hold in general for isotopic mixing lines. Here, using Monte Carlo simulations, we quantify the biases in OLS, GMR, and ODR under various conditions and show that York's general – and convenient – solution is always the least biased.
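For readers who want to experiment, below is a compact implementation of York's iterative solution for the case of uncorrelated errors in x and y, following the formulation in York et al., Am. J. Phys. 72, 367 (2004). It is a sketch written for this article, not code from the paper, and the example data are invented.

```python
import numpy as np

def york_fit(x, y, sx, sy, tol=1e-12, max_iter=100):
    """Best-fit line y = a + b*x with normally distributed errors in both
    coordinates (York et al. 2004, assuming uncorrelated x/y errors)."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2
    b = np.polyfit(x, y, 1)[0]            # OLS slope as the starting guess
    for _ in range(max_iter):
        W = wx * wy / (wx + b**2 * wy)    # combined weights for current slope
        xbar, ybar = np.average(x, weights=W), np.average(y, weights=W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = ybar - b * xbar
    return a, b

# Example: noisy mixing-line-like data with errors in both x and y.
rng = np.random.default_rng(0)
x_true = np.linspace(1, 10, 20)
x = x_true + rng.normal(0, 0.2, 20)
y = 2.0 + 3.0 * x_true + rng.normal(0, 0.5, 20)
print(york_fit(x, y, sx=np.full(20, 0.2), sy=np.full(20, 0.5)))
```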
Discounting of reward sequences: a test of competing formal models of hyperbolic discounting
Zarr, Noah; Alexander, William H.; Brown, Joshua W.
2014-01-01
Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting was elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate between these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. PMID:24639662
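As a toy illustration of the quantities being compared, the sketch below values a three-reward sequence under a Parallel hyperbolic model (sum of individually discounted rewards) and under simple exponential discounting. The amounts, delays, and discount rates are invented for the demonstration and do not come from the study.

```python
import numpy as np

def hyperbolic(amount, delay, k=0.1):
    """Hyperbolic discounting of a single delayed reward: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

def exponential(amount, delay, gamma=0.9):
    """Exponential discounting: V = A * gamma**D."""
    return amount * gamma**delay

amounts = np.array([10.0, 10.0, 10.0])   # a three-reward sequence
delays = np.array([5.0, 10.0, 15.0])     # delays in, say, days

# Parallel model: the sequence is worth the sum of its discounted parts.
v_parallel = hyperbolic(amounts, delays).sum()
v_exponential = exponential(amounts, delays).sum()
print(f"Parallel hyperbolic value: {v_parallel:.2f}")
print(f"Exponential value:         {v_exponential:.2f}")
```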
Bayesian inference in an item response theory model with a generalized student t link function
NASA Astrophysics Data System (ADS)
Azevedo, Caio L. N.; Migon, Helio S.
2012-10-01
In this paper we introduce a new item response theory (IRT) model with a generalized Student t link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from that of the two-parameter models and the usual Student t link, since in GtL the curves obtained for different df can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation, and many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We examine the sensitivity of the results to the choice of prior for the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
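A minimal sketch of the item response function described here, assuming the probability of a correct response is the Student t CDF evaluated at the latent trait minus the item difficulty; the df values and difficulty below are illustrative, not estimates from the paper.

```python
import numpy as np
from scipy.stats import t, norm

def gtl_irf(theta, b, df):
    """Generalized t-link item response function: P(X=1 | theta) = T_df(theta - b)."""
    return t.cdf(theta - b, df)

theta = np.linspace(-4, 4, 9)
b = 0.5  # illustrative item difficulty
for df in (1, 4, 30):
    print(f"df={df:>2}:", np.round(gtl_irf(theta, b, df), 3))
# As df grows, the curve approaches the probit (normal-ogive) model:
print("probit:", np.round(norm.cdf(theta - b), 3))
```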
Kholeif, S A
2001-06-01
A new method belonging to the differential category for determining the end points of potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is presented. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods of the equivalence point category, such as those of Gran or Fortuin.
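The inverse parabolic interpolation step has a closed form: the abscissa of the extremum of the parabola through three points. The sketch below implements that standard formula as a generic illustration; it is not the paper's full end-point procedure, which first builds derivative values from a non-linear pre-fit.

```python
def parabolic_extremum(x1, f1, x2, f2, x3, f3):
    """Abscissa of the extremum of the parabola through three points
    (the standard successive-parabolic-interpolation formula)."""
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# Example: samples of f(x) = -(x - 2.3)**2 bracket its maximum at x = 2.3.
f = lambda x: -(x - 2.3) ** 2
print(parabolic_extremum(1.0, f(1.0), 2.0, f(2.0), 3.0, f(3.0)))  # -> 2.3
```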
A nonlinear model for analysis of slug-test data
McElwee, C.D.; Zenner, M.A.
1998-01-01
While doing slug tests in high-permeability aquifers, we have consistently seen deviations from the expected response of linear theoretical models. Normalized curves do not coincide for various initial heads, as would be predicted by linear theories, and are shifted to larger times for higher initial heads. We have developed a general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the well bore, and a Hvorslev model for the aquifer, which explains these data features. The model produces a very good fit for both oscillatory and nonoscillatory field data, using a single set of physical parameters to predict the field data for various initial displacements at a given well. This is in contrast to linear models which have a systematic lack of fit and indicate that hydraulic conductivity varies with the initial displacement. We recommend multiple slug tests with a considerable variation in initial head displacement to evaluate the possible presence of nonlinear effects. Our conclusion is that the nonlinear model presented here is an excellent tool to analyze slug tests, covering the range from the underdamped region to the overdamped region.
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General expression of the Nonlinear AutoRegressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is used to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure how both the newly introduced and the originally included variables improve the model characteristics, and these measures determine which model variables to retain or eliminate. The optimal model is then obtained through goodness-of-fit measures or significance tests. Results from simulations and from experiments on classic time-series data show that the proposed method is simple, reliable, and applicable to practical engineering.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
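The goodness-of-fit winner here, a Poisson GLM with a square-root link, can be written down from scratch in a few lines of IRLS. The sketch below is a generic illustration on simulated data, not the authors' estimation code, and the design matrix and coefficients are invented.

```python
import numpy as np

def glm_poisson_sqrt(X, y, n_iter=25):
    """IRLS for a Poisson GLM with square-root link: sqrt(mu) = X @ beta.
    Assumes the linear predictor stays positive during iteration."""
    beta = np.linalg.lstsq(X, np.sqrt(y + 0.5), rcond=None)[0]  # crude start
    for _ in range(n_iter):
        eta = X @ beta
        mu = eta**2
        w = (2 * eta) ** 2 / mu          # (dmu/deta)^2 / Var(mu); constant 4 here
        z = eta + (y - mu) / (2 * eta)   # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
beta_true = np.array([3.0, 0.5])         # on the square-root scale
y = rng.poisson((X @ beta_true) ** 2)
print(glm_poisson_sqrt(X, y))            # should be close to beta_true
```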
Karnoe, Astrid; Furstrand, Dorthe; Christensen, Karl Bang; Norgaard, Ole; Kayser, Lars
2018-05-10
To achieve full potential in user-oriented eHealth projects, we need to ensure a match between the eHealth technology and the user's eHealth literacy, described as knowledge and skills. However, there is a lack of multifaceted eHealth literacy assessment tools suitable for screening purposes. The objective of our study was to develop and validate an eHealth literacy assessment toolkit (eHLA) that assesses individuals' health literacy and digital literacy using a mix of existing and newly developed scales. From 2011 to 2015, scales were continuously tested and developed in an iterative process, which led to 7 tools being included in the validation study. The eHLA validation version consisted of 4 health-related tools (tool 1: "functional health literacy," tool 2: "health literacy self-assessment," tool 3: "familiarity with health and health care," and tool 4: "knowledge of health and disease") and 3 digitally-related tools (tool 5: "technology familiarity," tool 6: "technology confidence," and tool 7: "incentives for engaging with technology") that were tested in 475 respondents from a general population sample and an outpatient clinic. Statistical analyses examined floor and ceiling effects, interitem correlations, item-total correlations, and Cronbach coefficient alpha (CCA). Rasch models (RM) examined the fit of the data. The tools were then shortened to yield robust instruments suitable for screening purposes; reductions were made on the basis of psychometrics, face validity, and content validity. Tool 1 was not reduced and consequently consists of 10 items. The overall fit to the RM was acceptable (Anderson conditional likelihood ratio, CLR=10.8; df=9; P=.29), and CCA was .67. Tool 2 was reduced from 20 to 9 items. The overall fit to a log-linear RM was acceptable (Anderson CLR=78.4, df=45, P=.002), and CCA was .85. Tool 3 was reduced from 23 to 5 items. The final version showed excellent fit to a log-linear RM (Anderson CLR=47.7, df=40, P=.19), and CCA was .90. Tool 4 was reduced from 12 to 6 items. The fit to a log-linear RM was acceptable (Anderson CLR=42.1, df=18, P=.001), and CCA was .59. Tool 5 was reduced from 20 to 6 items. The fit to the RM was acceptable (Anderson CLR=30.3, df=17, P=.02), and CCA was .94. Tool 6 was reduced from 5 to 4 items. The fit to a log-linear RM taking local dependency (LD) into account was acceptable (Anderson CLR=26.1, df=21, P=.20), and CCA was .91. Tool 7 was reduced from 6 to 4 items. The fit to a log-linear RM taking LD and differential item functioning into account was acceptable (Anderson CLR=23.0, df=29, P=.78), and CCA was .90. The eHLA consists of 7 short, robust scales that assess an individual's knowledge and skills related to digital literacy and health literacy. ©Astrid Karnoe, Dorthe Furstrand, Karl Bang Christensen, Ole Norgaard, Lars Kayser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.05.2018.
Testing approximations for non-linear gravitational clustering
NASA Technical Reports Server (NTRS)
Coles, Peter; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel'dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is 'enhanced' by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel'dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.
Cyclotron resonance in bilayer graphene.
Henriksen, E A; Jiang, Z; Tung, L-C; Schwartz, M E; Takita, M; Wang, Y-J; Kim, P; Stormer, H L
2008-02-29
We present the first measurements of cyclotron resonance of electrons and holes in bilayer graphene. In magnetic fields up to B = 18 T, we observe four distinct intraband transitions in both the conduction and valence bands. The transition energies are roughly linear in B between the lowest Landau levels, whereas they follow √B for the higher transitions. This highly unusual behavior represents a change from a parabolic to a linear energy dispersion. The density of states derived from our data generally agrees with the existing lowest order tight binding calculation for bilayer graphene. However, in comparing data to theory, a single set of fitting parameters fails to describe the experimental results.
Visual Tracking Using 3D Data and Region-Based Active Contours
2016-09-28
adaptive control strategies which explicitly take uncertainty into account. Filtering methods ranging from the classical Kalman filters valid for...linear systems to the much more general particle filters also fit into this framework in a very natural manner. In particular, the particle filtering ...the number of samples required for accurate filtering increases with the dimension of the system noise. In our approach, we approximate curve
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
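To give a feel for the embedding-based derivative estimators compared here, the sketch below builds a time-delay embedding and applies fixed polynomial-basis weights in the spirit of GLLA. It is a simplified illustration written for this article (a sine wave whose first derivative should track a cosine), not the exact weighting scheme or settings used in the study.

```python
import math
import numpy as np

def glla_derivatives(x, dt, embed=5, order=2):
    """Estimate derivatives 0..order from a time-delay embedding of x,
    in the spirit of generalized local linear approximation (GLLA)."""
    n = len(x) - embed + 1
    X = np.column_stack([x[i:i + n] for i in range(embed)])  # n x embed
    offsets = (np.arange(embed) - (embed - 1) / 2.0) * dt
    theta = np.column_stack([offsets**k / math.factorial(k)
                             for k in range(order + 1)])     # polynomial basis
    W = theta @ np.linalg.inv(theta.T @ theta)               # fixed weights
    return X @ W  # columns: smoothed x, dx/dt, d2x/dt2 at window centres

t = np.arange(0, 10, 0.1)
est = glla_derivatives(np.sin(t), dt=0.1)
print(np.round(est[:5, 1], 3))  # should track cos(t) at the window centres
```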
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging information about properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), for quantifying the reliability of non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested here only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
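The core idea can be sketched generically: fit the model once, estimate the noise level from the residuals, then repeatedly add re-simulated noise to the fitted curve and refit, taking the spread of the refitted parameters as the reliability measure. The mono-exponential model and constants below are illustrative stand-ins, not the article's elastography model.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau):
    """Illustrative mono-exponential decay with time constant tau."""
    return a * np.exp(-t / tau)

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 60)
data = model(t, 1.0, 1.3) + rng.normal(0, 0.03, t.size)

# Stage 1: single non-linear LSE fit and residual noise estimate.
p_hat, _ = curve_fit(model, t, data, p0=(0.8, 1.0))
sigma = np.std(data - model(t, *p_hat), ddof=2)

# Stage 2: Resimulation of Noise: refit many noise re-simulations and
# report the spread of the time-constant estimates.
taus = []
for _ in range(500):
    resim = model(t, *p_hat) + rng.normal(0, sigma, t.size)
    refit, _ = curve_fit(model, t, resim, p0=p_hat)
    taus.append(refit[1])
print(f"tau = {p_hat[1]:.3f} +/- {np.std(taus):.3f} (RoN spread)")
```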
Fit Point-Wise AB Initio Calculation Potential Energies to a Multi-Dimension Long-Range Model
NASA Astrophysics Data System (ADS)
Zhai, Yu; Li, Hui; Le Roy, Robert J.
2016-06-01
A potential energy surface (PES) is a fundamental tool and source of understanding for theoretical spectroscopy and for dynamical simulations. Making correct assignments for high-resolution rovibrational spectra of floppy polyatomic and van der Waals molecules often relies heavily on predictions generated from a high-quality ab initio potential energy surface. Moreover, having an effective analytic model to represent such surfaces can be as important as the ab initio results themselves. For the one-dimensional potentials of diatomic molecules, the most successful such model to date is arguably the "Morse/Long-Range" (MLR) function developed by R. J. Le Roy and coworkers. It is very flexible and everywhere differentiable to all orders. It incorporates the correct predicted long-range behaviour, extrapolates sensibly at both large and small distances, and two of its defining parameters are always the physically meaningful well depth D_e and equilibrium distance r_e. Extensions of this model, called the Multi-Dimensional Morse/Long-Range (MD-MLR) function, have been applied successfully to atom-plus-linear-molecule, linear-molecule-linear-molecule, and atom-plus-non-linear-molecule systems. However, there are several technical challenges in modelling the interactions of general molecule-molecule systems, such as the absence of radial minima for some relative alignments, difficulties in fitting short-range potential energies, and challenges in determining relative-orientation-dependent long-range coefficients. This talk will illustrate some of these challenges and describe our ongoing work in addressing them. Mol. Phys. 105, 663 (2007); J. Chem. Phys. 131, 204309 (2009); Mol. Phys. 109, 435 (2011); Phys. Chem. Chem. Phys. 10, 4128 (2008); J. Chem. Phys. 130, 144305 (2009); J. Chem. Phys. 132, 214309 (2010); J. Chem. Phys. 140, 214309 (2010).
Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M
2017-04-01
A carboxymethyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt% dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its sorption efficiency for the pesticides atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to the Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested that the sorption data fitted best to the Type II Langmuir and Freundlich isotherms. To avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity Q0 (μg/g) was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination from linear regression alone cannot be used to compare the fits of the Langmuir and Freundlich models, and that non-linear error analysis is needed to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kunutsor, Setor K; Laukkanen, Tanjaniina; Laukkanen, Jari A
2017-10-01
Cardiorespiratory fitness (CRF), an index of cardiac and respiratory functioning, is strongly associated with a reduced risk of adverse health outcomes. We aimed to assess the prospective association of CRF with the risk of respiratory diseases (defined as chronic obstructive pulmonary disease, pneumonia, or asthma). Cardiorespiratory fitness, as measured by maximal oxygen uptake, was assessed in 1974 middle-aged men. During a median follow-up of 25.7 years, 382 hospital diagnosed respiratory diseases were recorded. Cardiorespiratory fitness was linearly associated with risk of respiratory diseases. In analysis adjusted for several established and potential risk factors, the hazard ratio (HR) (95% CI) for respiratory diseases was 0.63 (0.45-0.88), when comparing extreme quartiles of CRF levels. The corresponding multivariate adjusted HR (95% CI) for pneumonia was 0.67 (0.48-0.95). Our findings indicate a graded inverse and independent association between CRF and the future risk of respiratory diseases in a general male Caucasian population.
Cho, Jaehee; Park, Dong Jin; Ordonez, Zoa
2013-11-01
The main goal of this study was to assess how the millennial generation perceives companies that have different social media policies and how such perception influences key variables for job-seeking behaviors, including perceived person-organization fit (POF), organizational attraction, and job pursuit intention. Results from a univariate general linear model and path analysis supported all of the established hypotheses. In particular, the results revealed that millennials perceived higher POF for a company with organizational policies supporting employees' social media use. Further, organizational attractiveness significantly mediated the relationship between communication-oriented POF and job pursuit intention.
Kumar, K Vasanth
2006-08-21
The experimental equilibrium data of malachite green onto activated carbon were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherms by linear and non-linear methods. A comparison between linear and non-linear methods of estimating the isotherm parameters is discussed, along with the four different linearized forms of the Langmuir isotherm. The results confirmed that the non-linear method is a better way to obtain the isotherm parameters. The best fitting isotherms were the Langmuir and Redlich-Peterson isotherms; Redlich-Peterson is a special case of Langmuir when the Redlich-Peterson isotherm constant g is unity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hao; Yang, Weitao
We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which is itself a linear response model. The orientation of the uniform external electric fields is integrated over all directions. This integration over orientations, together with the QM linear response calculations, makes the fitting results independent of the orientations and magnitudes of the applied uniform external electric fields. Another advantage of our method is that the QM calculation is needed only once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show accuracy comparable to those from fitting directly to the experimental or theoretical molecular polarizabilities. Since the ESP is fitted directly, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
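SciPy has no particle swarm optimizer, so the sketch below substitutes differential evolution for the global stage, followed by Powell's method for local polishing, minimizing a weighted sum of squared residuals for an illustrative Freundlich isotherm. The dataset, weights, and bounds are invented; this is a generic hybrid global-local fit, not the paper's 3-stage procedure.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Illustrative isotherm data: aqueous concentration C vs sorbed mass q.
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q = np.array([1.2, 1.9, 2.8, 4.6, 6.5, 9.0])
w = 1.0 / q  # illustrative residual weights

def wssr(p):
    """Weighted sum of squared residuals for a Freundlich isotherm q = Kf*C**n."""
    kf, n = p
    return np.sum((w * (q - kf * C**n)) ** 2)

bounds = [(1e-3, 50.0), (0.1, 1.5)]
stage1 = differential_evolution(wssr, bounds, seed=0)   # global search
stage2 = minimize(wssr, stage1.x, method="Powell")      # local polishing
print("global:", np.round(stage1.x, 4), "polished:", np.round(stage2.x, 4))
```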
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
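As a quick cross-check outside Mplus and SAS, a Gompertz mean curve can be fit in a few lines with SciPy. Note this fits only the fixed-effects (mean) trajectory on simulated data, without the random effects that the paper's mixed-effects models include.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, asym, b, c):
    """Gompertz growth curve: y = asym * exp(-b * exp(-c * t))."""
    return asym * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(9)
t = np.linspace(0, 10, 40)
y = gompertz(t, 100.0, 3.0, 0.6) + rng.normal(0, 2.0, t.size)
popt, _ = curve_fit(gompertz, t, y, p0=(80.0, 2.0, 0.5))
print(np.round(popt, 2))  # should be close to (100, 3, 0.6)
```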
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
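A minimal sketch of the Tikhonov-regularized minimum-norm solution for an underdetermined linear system, written for this article as a generic illustration (random "leadfield" matrix, invented sparse sources), not the authors' MEG pipeline.

```python
import numpy as np

def regularized_min_norm(A, b, lam):
    """Tikhonov-regularized minimum-norm solution of the underdetermined
    system A x = b: x = A.T @ inv(A @ A.T + lam * I) @ b."""
    m = A.shape[0]
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b)

rng = np.random.default_rng(3)
A = rng.normal(size=(10, 200))           # few sensors, many source voxels
x_true = np.zeros(200)
x_true[[20, 90]] = 1.0                   # two active sources
b = A @ x_true + rng.normal(0, 0.01, 10) # noisy measurements
for lam in (0.0, 0.1, 1.0):
    x = regularized_min_norm(A, b, lam)
    print(f"lam={lam}: residual={np.linalg.norm(A @ x - b):.3f}, "
          f"norm={np.linalg.norm(x):.3f}")
```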
Massive parallelization of serial inference algorithms for a complex generalized linear model
Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David
2014-01-01
Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units, relatively inexpensive highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
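The paper parallelizes cyclic coordinate descent for a conditioned GLM at very large scale; as a small serial illustration of the underlying idea, here is cyclic coordinate descent for an L1-penalized linear model with soft-thresholding updates, written for this article rather than taken from the authors' software.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                      # running residual y - X @ b
    col_sq = (X**2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]       # remove coordinate j from the fit
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]       # add the updated coordinate back
    return b

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 50))
beta = np.zeros(50)
beta[:3] = (2.0, -1.0, 0.5)           # sparse ground truth
y = X @ beta + rng.normal(0, 0.1, 200)
print(np.round(lasso_cd(X, y, lam=5.0)[:6], 3))
```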
Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting
NASA Astrophysics Data System (ADS)
Palenichka, Roman M.; Zaremba, Marek B.
2003-03-01
Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a rather rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. This consists of changing the positions of existing vertices so as to minimize the mean orthogonal distances and, if necessary, adding new vertices in between when a given accuracy is not yet satisfied. Vertices of the initial piecewise-linear skeletons are extracted using a multi-scale image relevance function. The relevance function is a local image operator that has local maxima at the centers of the objects of interest.
D'Agostino, Emily M; Day, Sophia E; Konty, Kevin J; Larkin, Michael; Saha, Subir; Wyka, Katarzyna
2018-03-23
Extensive research demonstrates the benefits of fitness on children's health and academic performance. Although decreases in health-related fitness may increase school absenteeism, multiple years of prospective, child-level data are needed to examine whether fitness changes predict subsequent chronic absenteeism status. Six cohorts of New York City public school students were followed from grades 5-8 (2006/2007-2012/2013; N = 349,381). A longitudinal 3-level logistic generalized linear mixed model with random intercepts was used to test the association of individual children's changes in fitness and 1-year lagged chronic absenteeism. The odds of chronic absenteeism increased 27% [odds ratio (OR) 95% confidence interval (CI), 1.25-1.30], 15% (OR 95% CI, 1.13-1.18), 9% (OR 95% CI, 1.07-1.11), and 1% (OR 95% CI, 0.98-1.04), for students who had a >20% decrease, 10%-20% decrease, <10% increase or decrease, and 10%-20% increase in fitness, respectively, compared with >20% fitness increase. These findings contribute important longitudinal evidence to a cross-sectional literature, demonstrating reductions in youth fitness may increase absenteeism. Given only 25% of youth aged 12-15 years achieve the recommended daily 60 minutes or more of moderate to vigorous physical activity, future work should examine the potential for youth fitness interventions to reduce absenteeism and foster positive attitudes toward lifelong physical activity.
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution by item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. For negative affect items, the gap between the total score curve and the boundary curve increased continuously with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
NASA Astrophysics Data System (ADS)
Yeung, Yau Yuen; Tanner, Peter A.
2013-12-01
The experimental free-ion 4f^2 energy level data sets comprising 12 or 13 J-multiplets of La^+, Ce^2+, Pr^3+ and Nd^4+ have been fitted by a semiempirical atomic Hamiltonian comprising 8, 10, or 12 freely-varying parameters. The root mean square errors were 16.1, 1.3, 0.3 and 0.3 cm^-1, respectively, for fits with 10 parameters. The fitted inter-electronic repulsion and magnetic parameters vary linearly with ionic charge, i, but better linear fits are obtained with (4-i)^2, although the reason is unclear at present. The two-body configuration interaction parameters α and β exhibit a linear relation with [ΔE(bc)]^-1, where ΔE(bc) is the energy difference between the 4f^2 barycentre and that of the interacting configuration, namely 4f6p for La^+, Ce^2+, and Pr^3+, and 5p^5 4f^3 for Nd^4+. The linear fit provides the rationale for the negative value of α for the case of La^+, where the interacting configuration is located below 4f^2.
The validation of a generalized Hooke's law for coronary arteries.
Wang, Chong; Zhang, Wei; Kassab, Ghassan S
2008-01-01
The exponential form of constitutive model is widely used in biomechanical studies of blood vessels. There are two main issues, however, with this model: 1) the curve fits of experimental data are not always satisfactory, and 2) the material parameters may be oversensitive. A new type of strain measure in a generalized Hooke's law for blood vessels was recently proposed by our group to address these issues. The new model has one nonlinear parameter and six linear parameters. In this study, the stress-strain equation is validated by fitting the model to experimental data of porcine coronary arteries. Material constants of left anterior descending artery and right coronary artery for the Hooke's law were computed with a separable nonlinear least-squares method with an excellent goodness of fit. A parameter sensitivity analysis shows that the stability of material constants is improved compared with the exponential model and a biphasic model. A boundary value problem was solved to demonstrate that the model prediction can match the measured arterial deformation under experimental loading conditions. The validated constitutive relation will serve as a basis for the solution of various boundary value problems of cardiovascular biomechanics.
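The separable least-squares idea used here can be sketched generically: for each trial value of the single non-linear parameter, the linear constants solve an ordinary linear least-squares problem exactly, so the outer search is one-dimensional. The toy model below (one non-linear rate, two linear coefficients) is illustrative only, not the arterial constitutive law.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
t = np.linspace(0, 4, 80)
y = 2.0 + 3.0 * np.exp(-1.7 * t) + rng.normal(0, 0.05, t.size)

def profiled_sse(k):
    """For a fixed non-linear parameter k, solve the linear part exactly."""
    basis = np.column_stack([np.ones_like(t), np.exp(-k * t)])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.sum((y - basis @ coef) ** 2)

# One-dimensional search over the non-linear parameter only.
res = minimize_scalar(profiled_sse, bounds=(0.01, 10.0), method="bounded")
k_hat = res.x
basis = np.column_stack([np.ones_like(t), np.exp(-k_hat * t)])
coef = np.linalg.lstsq(basis, y, rcond=None)[0]
print("k =", round(k_hat, 3), "linear coefs =", np.round(coef, 3))
```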
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
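As a concrete instance of such a factorization, the sketch below runs the classical multiplicative updates that minimize the generalized KL divergence between V and WH, which corresponds to a Poisson likelihood. It is a generic illustration on simulated counts, not the authors' quasi-likelihood algorithm.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-9):
    """Multiplicative updates minimizing the generalized KL divergence
    between V and W @ H (Lee & Seung), matching a Poisson likelihood."""
    rng = np.random.default_rng(6)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ np.ones_like(V) + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H

V = np.random.default_rng(7).poisson(5.0, size=(30, 20)).astype(float)
W, H = nmf_kl(V, rank=4)
print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```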
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
Spectral embedding finds meaningful (relevant) structure in image and microarray data
Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L
2006-01-01
Background Accurate methods for extraction of meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, that minimizes the requirements for such parameter optimization for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and is a general, rather than data type or experiment specific approach, for the two datasets analyzed here. Tuning parameter optimization is minimized in the DR step to each subsequent classification method, enabling the possibility of valid cross-experiment comparisons. Conclusion Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359
Stationary and non-stationary extreme value modeling of extreme temperature in Malaysia
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salleh, Nur Hanim Mohd; Kassim, Suraiya
2014-09-01
Extreme annual temperature of eighteen stations in Malaysia is fitted to the Generalized Extreme Value distribution. Stationary and non-stationary models with trend are considered for each station and the Likelihood Ratio test is used to determine the best-fitting model. Results show that three out of eighteen stations i.e. Bayan Lepas, Labuan and Subang favor a model which is linear in the location parameter. A hierarchical cluster analysis is employed to investigate the existence of similar behavior among the stations. Three distinct clusters are found in which one of them consists of the stations that favor the non-stationary model. T-year estimated return levels of the extreme temperature are provided based on the chosen models.
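A sketch of the stationary-versus-trend comparison under stated assumptions: the stationary GEV is fit with SciPy, the model with location linear in time is fit by direct likelihood maximization, and the two are compared with a 1-df likelihood ratio test. The annual maxima below are simulated, not Malaysian station data.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
years = np.arange(40)
temps = 35 + 0.03 * years + rng.gumbel(0, 0.8, size=years.size)  # fake annual maxima

# Stationary GEV (note: scipy's shape c corresponds to -xi)
c, loc, scale = stats.genextreme.fit(temps)
ll0 = np.sum(stats.genextreme.logpdf(temps, c, loc, scale))

# Non-stationary GEV with mu(t) = mu0 + mu1 * t
def nll(p):
    c, mu0, mu1, scale = p
    if scale <= 0:
        return np.inf
    return -np.sum(stats.genextreme.logpdf(temps, c, mu0 + mu1 * years, scale))

res = optimize.minimize(nll, x0=[c, loc, 0.0, scale], method="Nelder-Mead")
ll1 = -res.fun

# Likelihood ratio statistic ~ chi-square with 1 df under the stationary null
lr = 2 * (ll1 - ll0)
print(lr, stats.chi2.sf(lr, df=1))
```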
Building generalized inverses of matrices using only row and column operations
NASA Astrophysics Data System (ADS)
Stuart, Jeffrey
2010-12-01
Most students complete their first and only course in linear algebra with the understanding that a real, square matrix A has an inverse if and only if rref(A), the reduced row echelon form of A, is the identity matrix I_n. That is, if they apply elementary row operations via the Gauss-Jordan algorithm to the partitioned matrix [A | I_n] to obtain [rref(A) | P], then the matrix A is invertible exactly when rref(A) = I_n, in which case, P = A^{-1}. Many students must wonder what happens when A is not invertible, and what information P conveys in that case. That question is, however, seldom answered in a first course. We show that investigating that question emphasizes the close relationships between matrix multiplication, elementary row operations, linear systems, and the four fundamental spaces associated with a matrix. More important, answering that question provides an opportunity to show students how mathematicians extend results by relaxing hypotheses and then exploring the strengths and limitations of the resulting generalization, and how the first relaxation found is often not the best relaxation to be found. Along the way, we introduce students to the basic properties of generalized inverses. Finally, our approach should fit within the time and topic constraints of a first course in linear algebra.
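For readers who want to experiment, here is a compact sympy sketch of the standard row-and-column-operations construction the article builds toward (our own illustration, not code from the article): row operations give P with P*A = rref(A), column operations give Q with P*A*Q = [[I_r, 0], [0, 0]], and G = Q*E*P (E the transposed block pattern) satisfies A*G*A = A.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3], [2, 4, 6], [1, 0, 1]])  # rank 2, not invertible
m, n = A.shape

# Row stage: rref([A | I_m]) = [rref(A) | P], so P*A = rref(A)
aug = sp.Matrix.hstack(A, sp.eye(m)).rref()[0]
R, P = aug[:, :n], aug[:, n:]

# Column stage: column operations on R are row operations on R.T;
# rref([R.T | I_n]) = [D.T | Q.T] with D = P*A*Q = [[I_r, 0], [0, 0]]
augT = sp.Matrix.hstack(R.T, sp.eye(n)).rref()[0]
D_T, Q = augT[:, :m], augT[:, m:].T

# G = Q * D.T * P is then a generalized inverse of A
G = Q * D_T * P
assert A * G * A == A  # defining property of a {1}-inverse
print(G)
```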
Fitting and forecasting coupled dark energy in the non-linear regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, Santiago; Amendola, Luca; Pettorino, Valeria
2016-01-01
We consider cosmological models in which dark matter feels a fifth force mediated by the dark energy scalar field, also known as coupled dark energy. Our interest resides in estimating forecasts for future surveys like Euclid when we take into account non-linear effects, relying on new fitting functions that reproduce the non-linear matter power spectrum obtained from N-body simulations. We obtain fitting functions for models in which the dark matter-dark energy coupling is constant. Their validity is demonstrated for all available simulations in the redshift range z = 0–1.6 and wave modes below k = 1 h/Mpc. These fitting formulas can be used to test the predictions of the model in the non-linear regime without the need for additional computing-intensive N-body simulations. We then use these fitting functions to perform forecasts on the constraining power that future galaxy-redshift surveys like Euclid will have on the coupling parameter, using the Fisher matrix method for galaxy clustering (GC) and weak lensing (WL). We find that by using information in the non-linear power spectrum, and combining the GC and WL probes, we can constrain the dark matter-dark energy coupling constant squared, β², with precision smaller than 4% and all other cosmological parameters better than 1%, which is a considerable improvement of more than an order of magnitude compared to corresponding linear power spectrum forecasts with the same survey specifications.
Determination of time zero from a charged particle detector
Green, Jesse Andrew [Los Alamos, NM
2011-03-15
A method, system, and computer program are used to determine a linear track with a good fit to the most likely or expected path of a charged particle passing through a charged particle detector having a plurality of drift cells. Hit signals from the charged particle detector are associated with a particular charged particle track. An initial estimate of time zero is made from these hit signals and linear tracks are then fit to drift radii for each particular time-zero estimate. The linear track with the best fit is then selected and errors in fit and tracking parameters are computed. By adopting this method and system, the use of large and expensive fast detectors otherwise needed to determine time zero in charged particle detectors can be avoided.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
ABSTRACT The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gómez-Valent, Adrià; Karimkhani, Elahe; Solà, Joan, E-mail: adriagova@ecm.ub.edu, E-mail: e.karimkhani91@basu.ac.ir, E-mail: sola@ecm.ub.edu
We determine the Hubble expansion and the general cosmic perturbation equations for a general system consisting of self-conserved matter, ρ_m, and self-conserved dark energy (DE), ρ_D. While at the background level the two components are non-interacting, they do interact at the perturbations level. We show that the coupled system of matter and DE perturbations can be transformed into a single, third order, matter perturbation equation, which reduces to the (derivative of the) standard one in the case that the DE is just a cosmological constant. As a nontrivial application we analyze a class of dynamical models whose DE density ρ_D(H) consists of a constant term, C_0, and a series of powers of the Hubble rate. These models were previously analyzed from the point of view of dynamical vacuum models, but here we treat them as self-conserved DE models with a dynamical equation of state. We fit them to the wealth of expansion history and linear structure formation data and compare their fit quality with that of the concordance ΛCDM model. Those with C_0 = 0 include the so-called 'entropic-force' and 'QCD-ghost' DE models, as well as the pure linear model ρ_D ∼ H, all of which appear strongly disfavored. The models with C_0 ≠ 0, in contrast, emerge as promising dynamical DE candidates whose phenomenological performance is highly competitive with the rigid Λ-term inherent to the ΛCDM.
Brown, A M
2001-06-01
The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
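The same SOLVER-style iterative least-squares idea can be sketched outside Excel; here scipy.optimize.curve_fit plays SOLVER's role, and the saturating exponential y = f(x) is an assumed example of a user-input function.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, k, c):
    # User-input function, e.g. a saturating exponential common in biology
    return a * (1.0 - np.exp(-k * x)) + c

x = np.linspace(0, 10, 40)
rng = np.random.default_rng(3)
y = f(x, 2.5, 0.6, 0.2) + rng.normal(0, 0.05, size=x.size)

# curve_fit iteratively adjusts (a, k, c) to minimize the sum of squared
# residuals, exactly the criterion SOLVER is asked to minimize.
popt, pcov = curve_fit(f, x, y, p0=[1.0, 1.0, 0.0])
print(popt, np.sqrt(np.diag(pcov)))  # estimates and standard errors
```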
Ichikawa, Shintaro; Motosugi, Utaroh; Hernando, Diego; Morisaka, Hiroyuki; Enomoto, Nobuyuki; Matsuda, Masanori; Onishi, Hiroshi
2018-04-10
To compare the abilities of three intravoxel incoherent motion (IVIM) imaging approximation methods to discriminate the histological grade of hepatocellular carcinomas (HCCs). Fifty-eight patients (60 HCCs) underwent IVIM imaging with 11 b-values (0–1000 s/mm²). Slow (D) and fast diffusion coefficients (D*) and the perfusion fraction (f) were calculated for the HCCs using the mean signal intensities in regions of interest drawn by two radiologists. Three approximation methods were used. First, all three parameters were obtained simultaneously using non-linear fitting (method A). Second, D was obtained using linear fitting (b = 500 and 1000), followed by non-linear fitting for D* and f (method B). Third, D was obtained by linear fitting, f was obtained using the regression line intersection and signals at b = 0, and non-linear fitting was used for D* (method C). A receiver operating characteristic analysis was performed to reveal the abilities of these methods to distinguish poorly-differentiated from well-to-moderately-differentiated HCCs. Inter-reader agreements were assessed using intraclass correlation coefficients (ICCs). The measurements of D, D*, and f in methods B and C (Az-value, 0.658–0.881) had better discrimination abilities than did those in method A (Az-value, 0.527–0.607). The ICCs of D and f were good to excellent (0.639–0.835) with all methods. The ICCs of D* were moderate with methods B (0.580) and C (0.463) and good with method A (0.705). The IVIM parameters may vary depending on the fitting methods, and therefore, further technical refinement may be needed.
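A hedged sketch of a segmented fit in the spirit of method B (D from a log-linear fit at b = 500-1000, then a non-linear fit for D* and f): the bi-exponential IVIM signal model is standard, but all parameter values and the noise model below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, D, Dstar):
    # Bi-exponential IVIM model, S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)
    return f * np.exp(-b * Dstar) + (1.0 - f) * np.exp(-b * D)

b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 700, 850, 1000], float)
rng = np.random.default_rng(4)
S = ivim(b, 0.12, 1.0e-3, 30e-3) * np.exp(rng.normal(0, 0.005, b.size))

# Step 1: linear fit of the log-signal at high b (here b >= 500), where the
# perfusion term has decayed; the slope gives -D
hi = b >= 500
D = -np.polyfit(b[hi], np.log(S[hi]), 1)[0]

# Step 2: non-linear fit for f and D* with D held fixed
popt, _ = curve_fit(lambda bb, f, Dstar: ivim(bb, f, D, Dstar),
                    b, S, p0=[0.1, 10e-3], bounds=([0, 0], [1, 1]))
print(D, popt)  # D, then (f, D*)
```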
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We did simple linear regression with intercept analysis and simple linear regression through the origin and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
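A small sketch of the comparison the authors run, on simulated track-index data: ordinary least squares with and without an intercept, compared by residual error and a Gaussian-likelihood AIC. The 3.26 slope is borrowed from the abstract only to generate the fake data.

```python
import numpy as np

rng = np.random.default_rng(5)
density = rng.uniform(0.3, 3.0, 25)               # carnivores / 100 km^2
track = 3.26 * density + rng.normal(0, 0.4, 25)   # observed track density

# With intercept: y = alpha*x + beta
X1 = np.column_stack([density, np.ones_like(density)])
coef1, rss1, *_ = np.linalg.lstsq(X1, track, rcond=None)

# Through the origin: y = alpha*x
X0 = density[:, None]
coef0, rss0, *_ = np.linalg.lstsq(X0, track, rcond=None)

n = track.size
# AIC for Gaussian least squares: n*log(RSS/n) + 2k, k = number of parameters
aic1 = n * np.log(rss1[0] / n) + 2 * 2
aic0 = n * np.log(rss0[0] / n) + 2 * 1
print(coef1, coef0, aic1 - aic0)  # positive difference favors the origin model
```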
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
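To make the two calibration routes concrete, here is an illustrative sketch with a one-term Padé-style response R(x) = a*x/(1 + b*x), an assumed stand-in with the predictor/response roles simplified relative to the UNCL setup: a direct non-linear fit versus a linearizing reciprocal transform, 1/R = (1/a)(1/x) + b/a.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade(x, a, b):
    return a * x / (1.0 + b * x)

x = np.linspace(50, 400, 12)                          # e.g. linear density grid
rng = np.random.default_rng(6)
R = pade(x, 2.0, 0.004) + rng.normal(0, 0.5, x.size)  # noisy measured rate

# Non-linear route: fit the Padé form directly
(a_nl, b_nl), _ = curve_fit(pade, x, R, p0=[1.0, 0.001])

# Linearized route: note the transform also reshapes the error structure,
# which is why the paper cautions against it when predictor errors are large.
slope, intercept = np.polyfit(1.0 / x, 1.0 / R, 1)
a_lin, b_lin = 1.0 / slope, intercept / slope
print(a_nl, b_nl, a_lin, b_lin)
```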
On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms
NASA Astrophysics Data System (ADS)
Lässig, Jörg; Hoffmann, Karl Heinz
The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover to find a solution with high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending sequence by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population but probabilities equal to zero to individuals with lower fitness, assuming that the probability distribution to choose individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized also to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
Parametric resonance in the early Universe—a fitting analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es
Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.
Wang, Jye; Lin, Wender; Chang, Ling-Hui
2018-01-01
The Vulnerable Elders Survey-13 (VES-13) has been used as a screening tool to identify vulnerable community-dwelling older persons for more in-depth assessment and targeted interventions. Although many studies supported its use in different populations, few have addressed Asian populations. The optimal scaling system for the VES-13 in predicting health outcomes also has not been adequately tested. This study (1) assesses the applicability of the VES-13 to predict the mortality of community-dwelling older persons in Taiwan, (2) identifies the best scaling system for the VES-13 in predicting mortality using generalized additive models (GAMs), and (3) determines whether including covariates, such as socio-demographic factors and common geriatric syndromes, improves model fitting. This retrospective longitudinal cohort study analyzed the data of 2184 community-dwelling persons 65 years old or older from the 2003 wave of the nationwide Taiwan Longitudinal Study on Aging. Cox proportional hazards models and GAMs were used. The VES-13 significantly predicted the mortality of Taiwan's community-dwelling elders. A one-point increase in the VES-13 score raised the risk of death by 26% (hazard ratio, 1.26; 95% confidence interval, 1.21-1.32). The hazard ratio of death increased linearly with each additional VES-13 score point, suggesting that using a continuous scale is appropriate. Inclusion of socio-demographic factors and geriatric syndromes improved the model fitting. The VES-13 is appropriate for an Asian population. VES-13 scores linearly predict the mortality of this population. Adjusting the weighting of the physical activity items may improve the performance of the VES-13.
Predicting phenotypes of asthma and eczema with machine learning
2014-01-01
Background There is increasing recognition that asthma and eczema are heterogeneous diseases. We investigated the predictive ability of a spectrum of machine learning methods to disambiguate clinical sub-groups of asthma, wheeze and eczema, using a large heterogeneous set of attributes in an unselected population. The aim was to identify to what extent such heterogeneous information can be combined to reveal specific clinical manifestations. Methods The study population comprised a cross-sectional sample of adults, and included representatives of the general population enriched by subjects with asthma. Linear and non-linear machine learning methods, from logistic regression to random forests, were fit on a large attribute set including demographic, clinical and laboratory features, genetic profiles and environmental exposures. Outcomes of interest were asthma, wheeze and eczema encoded by different operational definitions. Model validation was performed via bootstrapping. Results The study population included 554 adults, 42% male, 38% previous or current smokers. Proportion of asthma, wheeze, and eczema diagnoses was 16.7%, 12.3%, and 21.7%, respectively. Models were fit on 223 non-genetic variables plus 215 single nucleotide polymorphisms. In general, non-linear models achieved higher sensitivity and specificity than other methods, especially for asthma and wheeze, less so for eczema, with areas under the receiver operating characteristic curve of 84%, 76% and 64%, respectively. Our findings confirm that allergen sensitisation and lung function characterise asthma better in combination than separately. The predictive ability of genetic markers alone is limited. For eczema, new predictors such as bio-impedance were discovered. Conclusions More usefully-complex modelling is the key to a better understanding of disease mechanisms and personalised healthcare: further advances are likely with the incorporation of more factors/attributes and longitudinal measures. PMID:25077568
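A minimal sketch of the linear-versus-non-linear comparison with bootstrap validation, on synthetic stand-ins for the attribute set (sklearn's generator replaces the real clinical and SNP variables):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# ~223 non-genetic variables plus 215 SNP stand-ins, 554 "adults"
X, y = make_classification(n_samples=554, n_features=438, n_informative=20,
                           random_state=0)
rng = np.random.default_rng(7)

for model in (LogisticRegression(max_iter=2000), RandomForestClassifier(300)):
    aucs = []
    for _ in range(50):  # bootstrap resamples; out-of-bag cases for validation
        idx = rng.integers(0, len(y), len(y))
        oob = np.setdiff1d(np.arange(len(y)), idx)
        model.fit(X[idx], y[idx])
        aucs.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))
    print(type(model).__name__, np.mean(aucs))
```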
Polynomials to model the growth of young bulls in performance tests.
Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B
2014-03-01
The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais+11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit with the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameters estimates nor the ranking of the Nellore young bulls. However, fitting different type of polynomials affected the genetic parameters estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
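A sketch of the Legendre side of this comparison: fitting a cubic Legendre series to an average growth trajectory after mapping age to [-1, 1], as is usual in random regression models; the ages and weights are made up.

```python
import numpy as np
from numpy.polynomial import legendre

age = np.linspace(210, 420, 15)   # days on test (assumed range)
weight = 250 + 1.1 * (age - 210) + np.random.default_rng(8).normal(0, 8, 15)

t = 2 * (age - age.min()) / (age.max() - age.min()) - 1  # map ages to [-1, 1]
coef = legendre.legfit(t, weight, deg=3)                  # up to cubic terms
fitted = legendre.legval(t, coef)
print(coef, np.sqrt(np.mean((weight - fitted) ** 2)))     # coefficients, RMSE
```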
Recognizing Banknote Fitness with a Visible Light One Dimensional Line Image Sensor
Pham, Tuyen Danh; Park, Young Ho; Kwon, Seung Yong; Nguyen, Dat Tien; Vokhidov, Husan; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo
2015-01-01
In general, dirty banknotes that have creases or soiled surfaces should be replaced by new banknotes, whereas clean banknotes should be recirculated. Therefore, the accurate classification of banknote fitness when sorting paper currency is an important and challenging task. Most previous research has focused on sensors that used visible, infrared, and ultraviolet light. Furthermore, there was little previous research on the fitness classification for Indian paper currency. Therefore, we propose a new method for classifying the fitness of Indian banknotes, with a one-dimensional line image sensor that uses only visible light. The fitness of banknotes is usually determined by various factors such as soiling, creases, and tears, etc. although we just consider banknote soiling in our research. This research is novel in the following four ways: first, there has been little research conducted on fitness classification for the Indian Rupee using visible-light images. Second, the classification is conducted based on the features extracted from the regions of interest (ROIs), which contain little texture. Third, 1-level discrete wavelet transformation (DWT) is used to extract the features for discriminating between fit and unfit banknotes. Fourth, the optimal DWT features that represent the fitness and unfitness of banknotes are selected based on linear regression analysis with ground-truth data measured by densitometer. In addition, the selected features are used as the inputs to a support vector machine (SVM) for the final classification of banknote fitness. Experimental results showed that our method outperforms other methods. PMID:26343654
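A heavily simplified sketch of the described pipeline: a 1-level 2-D DWT per ROI, subband statistics as features, and an SVM classifier. The ROIs are random arrays, and the feature choice below is an assumption, not the paper's exact descriptor set.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(9)

def dwt_features(roi):
    cA, (cH, cV, cD) = pywt.dwt2(roi, "haar")  # 1-level 2-D DWT
    # Mean absolute value of each subband as a simple soiling descriptor
    return [np.abs(c).mean() for c in (cA, cH, cV, cD)]

# 200 fake ROIs: "unfit" notes get darker, noisier texture
fit_rois = [rng.normal(0.8, 0.05, (64, 64)) for _ in range(100)]
unfit_rois = [rng.normal(0.5, 0.15, (64, 64)) for _ in range(100)]
X = np.array([dwt_features(r) for r in fit_rois + unfit_rois])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy of the sketch
```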
Efficacy of Toric Contact Lenses in Fitting and Patient-Reported Outcomes in Contact Lens Wearers.
Cox, Stephanie M; Berntsen, David A; Bickle, Katherine M; Mathew, Jessica H; Powell, Daniel R; Little, B Kim; Lorenz, Kathrine Osborn; Nichols, Jason J
2018-06-05
To assess whether patient-reported measures are improved with soft toric contact lenses (TCLs) compared with soft spherical contact lenses (SCLs) and whether clinical time needed to fit TCL is greater than SCL. Habitual contact lens wearers with vertexed spherical refraction +4.00 to +0.25 D or -0.50 to -9.00 D and cylinder -0.75 to -1.75 DC were randomly assigned to be binocularly fitted into a TCL or SCL, and masked to treatment assignment. Time to successful fit was recorded. After 5 days, the National Eye Institute Refractive Error Quality of Life Instrument (NEI-RQL-42) and modified Convergence Insufficiency Symptom Survey (CISS) were completed. After washout, subjects were fit into the alternative lens design (TCL or SCL). Outcomes were evaluated using linear mixed models for the time to fit and CISS score, generalized linear model for the successful fit, and Wilcoxon tests for the NEI-RQL-42. Sixty subjects (71.7% women, mean age [±SD] = 27.5±5.0 years) completed the study. The mean time to fit the TCL was 10.2±4.3 and 9.0±6.5 min for the SCL (least square [LS] mean difference (TCL-SCL)=1.2, P=0.22). Toric contact lens scored better than SCL in global NEI-RQL-42 score (P=0.006) and the clarity of vision (P=0.006) and satisfaction with correction subscales (P=0.006). CISS showed a 15% reduction in symptoms (LS mean difference [TCL-SCL]=-2.20, P=0.02). TCLs are a good option when trying to improve the vision of patients with low-to-moderate astigmatism given the subjective improvements in outcomes.
Variability simulations with a steady, linearized primitive equations model
NASA Technical Reports Server (NTRS)
Kinter, J. L., III; Nigam, S.
1985-01-01
Solutions of the steady, primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed, winter monthly mean, zonal means of zonal and meridional velocities, temperatures and surface pressures computed from the 15 year NMC time series. A least squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block, penta-diagonal matrix. The model simulates the climatology of the GFDL nine level, spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine variability of the steady, linear solution.
Babaei, Behzad; Velasquez-Mao, Aaron J; Thomopoulos, Stavros; Elson, Elliot L; Abramowitch, Steven D; Genin, Guy M
2017-05-01
The time- and frequency-dependent properties of connective tissue define their physiological function, but are notoriously difficult to characterize. Well-established tools such as linear viscoelasticity and the Fung quasi-linear viscoelastic (QLV) model impose forms on responses that can mask true tissue behavior. Here, we applied a more general discrete quasi-linear viscoelastic (DQLV) model to identify the static and dynamic time- and frequency-dependent behavior of rabbit medial collateral ligaments. Unlike the Fung QLV approach, the DQLV approach revealed that energy dissipation is elevated at a loading period of ~10 s. The fitting algorithm was applied to the entire loading history of each specimen, enabling accurate estimation of the material's viscoelastic relaxation spectrum from data gathered from transient rather than only steady states. The application of the DQLV method to cyclic loading regimens has broad applicability for the characterization of biological tissues, and the results suggest a mechanistic basis for the stretching regimens most favored by athletic trainers.
Gerrard, Paul
2012-10-01
To determine whether there is a relationship between the level of education and the accuracy of self-reported physical activity as a proxy measure of aerobic fitness. Data from the National Health and Nutrition Examination Survey from the years 1999 to 2004 were used. Linear regression was performed for measured maximum oxygen consumption (VO2max) versus self-reported physical activity for 5 different levels of education. This was a national survey in the United States. Participants included adults from the general U.S. population (N=3290). None. Coefficients of determination obtained from models for each education level were used to compare how well self-reported physical activity represents cardiovascular fitness. These coefficients were the main outcome measure. Coefficients of determination for VO2max versus reported physical activity increased as the level of education increased. In this preliminary study, self-reported physical activity is a better proxy measure for aerobic fitness in highly educated individuals than in poorly educated individuals.
Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.
Deboeck, Pascal R
2010-08-06
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
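In the same spirit as the proposal (local polynomial fits rather than the paper's exact orthogonal-polynomial estimator), derivatives can be estimated with Savitzky-Golay filters and then related through a differential equation model; the damped-oscillator data below are simulated.

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.arange(0, 30, 0.1)
x = np.exp(-0.05 * t) * np.sin(1.2 * t) \
    + np.random.default_rng(10).normal(0, 0.01, t.size)

dt = 0.1
# Local cubic fits over an 11-sample window give the smoothed series and
# its first and second derivatives
x0 = savgol_filter(x, 11, 3)
x1 = savgol_filter(x, 11, 3, deriv=1, delta=dt)
x2 = savgol_filter(x, 11, 3, deriv=2, delta=dt)

# Differential equation model: regress x'' on x and x'
A = np.column_stack([x0, x1])
eta, zeta = np.linalg.lstsq(A, x2, rcond=None)[0]
print(np.sqrt(-eta), zeta)  # ~1.2 rad/unit frequency, ~-0.1 damping term
```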
Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi
2017-10-09
Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.
NASA Astrophysics Data System (ADS)
Dullo, Bililign T.; Graham, Alister W.
2014-11-01
New surface brightness profiles from 26 early-type galaxies with suspected partially depleted cores have been extracted from the full radial extent of Hubble Space Telescope images. We have carefully quantified the radial stellar distributions of the elliptical galaxies using the core-Sérsic model whereas for the lenticular galaxies a core-Sérsic bulge plus an exponential disc model gives the best representation. We additionally caution about the use of excessive multiple Sérsic functions for decomposing galaxies and compare with past fits in the literature. The structural parameters obtained from our fitted models are, in general, in good agreement with our initial study using radially limited (R ≲ 10 arcsec) profiles, and are used here to update several 'central' as well as 'global' galaxy scaling relations. We find near-linear relations between the break radius R_b and the spheroid luminosity L such that R_b ∝ L^{1.13±0.13}, and with the supermassive black hole mass M_BH such that R_b ∝ M_BH^{0.83±0.21}. This is internally consistent with the notion that major, dry mergers add the stellar and black hole mass in equal proportion, i.e. M_BH ∝ L. In addition, we observe a linear relation R_b ∝ R_e^{0.98±0.15} for the core-Sérsic elliptical galaxies - where R_e is the galaxies' effective half-light radii - which is collectively consistent with the approximately linear, bright end of the curved L-R_e relation. Finally, we measure accurate stellar mass deficits M_def that are in general 0.5-4 M_BH, and we identify two galaxies (NGC 1399, NGC 5061) that, due to their high M_def/M_BH ratio, may have experienced oscillatory core-passage by a (gravitational radiation)-kicked black hole. The galaxy scaling relations and stellar mass deficits favour core-Sérsic galaxy formation through a few 'dry' major merger events involving supermassive black holes such that M_def ∝ M_BH^{3.70±0.76}, for M_BH ≳ 2 × 10^8 M_⊙.
ERIC Educational Resources Information Center
Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.
1980-10-01
[Abstract unavailable: the record contains only an OCR-garbled fragment of the report's FORTRAN listing and flow charts. The legible portions refer to linear curve fits, a variable SECON holding the intercept of a linear curve fit (as returned by subroutine CURVE), and a flow chart for subroutine CALIB.]
NASA Astrophysics Data System (ADS)
Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.
2015-05-01
Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: Linear programming was applied to minimize the total production cost of LCD manufacturing. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs, and adjustments for changes in labour levels. The model was developed for a manufacturer with several product types, at most N, over a total time period of T. Results: An industrial case study based in Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The model is a good fit under stable environmental conditions. Overall, the proven linear programming model can be recommended for adaptation to production planning in the Malaysian flat panel display industry.
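A toy sketch of such an aggregate production planning LP (a generic formulation with invented costs, capacities and demands, not the paper's model):

```python
import numpy as np
from scipy.optimize import linprog

T = 3
demand = [100, 140, 120]
c_reg, c_ot, c_inv = 10.0, 15.0, 2.0   # regular, overtime, inventory costs
cap_reg, cap_ot = 110, 30              # per-period capacities

# Variables per period: [regular production, overtime, ending inventory]
c = [c_reg, c_ot, c_inv] * T

# Balance: inv[t-1] + reg[t] + ot[t] - inv[t] = demand[t]
A_eq = np.zeros((T, 3 * T))
for tt in range(T):
    A_eq[tt, 3 * tt:3 * tt + 3] = [1, 1, -1]
    if tt > 0:
        A_eq[tt, 3 * (tt - 1) + 2] = 1  # carry previous ending inventory
b_eq = demand

bounds = [(0, cap_reg), (0, cap_ot), (0, None)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x.reshape(T, 3), res.fun)  # plan per period and minimum total cost
```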
The entrainment matrix of a superfluid nucleon mixture at finite temperatures
NASA Astrophysics Data System (ADS)
Leinson, Lev B.
2018-06-01
A closed system of non-linear equations is considered for the entrainment matrix of a non-relativistic mixture of superfluid nucleons at arbitrary temperatures below the onset of neutron superfluidity; the system takes into account the essential dependence of the superfluid energy gap in the nucleon spectra on the velocities of superfluid flows. It is assumed that the protons condense into the isotropic 1S0 state, and the neutrons are paired into the spin-triplet 3P2 state. An analytic solution to the non-linear equations for the entrainment matrix is derived for temperatures just below the critical value for the neutron superfluidity onset. In the general case of an arbitrary temperature of the superfluid mixture, the non-linear equations are solved numerically and fitted by simple formulas convenient for practical use with an arbitrary set of Landau parameters.
Synchrotron speciation data for zero-valent iron nanoparticles
This data set encompasses a complete analysis of synchrotron speciation data for 5 iron nanoparticle samples (P1, P2, P3, S1, S2) plus metallic iron, including linear combination fitting results (Table 6 and Figure 9) and ab-initio extended X-ray absorption fine structure spectroscopy fitting (Figure 10 and Table 7).
Table 6: Linear combination fitting of the XAS data for the 5 commercial nZVI/ZVI products tested. Species proportions are presented as percentages. Goodness of fit is indicated by the χ² value.
Figure 9: Normalised Fe K-edge k³-weighted EXAFS of the 5 commercial nZVI/ZVI products tested. Dotted lines show the best 4-component linear combination fit of reference spectra.
Figure 10: Fourier transformed radial distribution functions (RDFs) of the five samples and an iron metal foil. The black lines represent the sample data and the red dotted curves represent the non-linear fitting results of the EXAFS data.
Table 7: Coordination parameters of Fe in the samples.
This dataset is associated with the following publication: Chekli, L., B. Bayatsarmadi, R. Sekine, B. Sarkar, A. Maoz Shen, K. Scheckel, W. Skinner, R. Naidu, H. Shon, E. Lombi, and E. Donner. Analytical Characterisation of Nanoscale Zero-Valent Iron: A Methodological Review. Analytica Chimica Acta 903: 13-35 (2016).
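Linear combination fitting of this kind can be sketched as a non-negative least-squares problem: express the sample spectrum as a weighted sum of reference spectra and report the weights as percentages. The spectra below are synthetic Gaussians, not the actual XAS references.

```python
import numpy as np
from scipy.optimize import nnls

E = np.linspace(7100, 7200, 200)  # energy grid (eV), Fe K-edge region

def peak(center, width):
    return np.exp(-((E - center) / width) ** 2)

# Three synthetic "reference species" spectra as design-matrix columns
refs = np.column_stack([peak(7120, 8), peak(7135, 10), peak(7150, 12)])
true_w = np.array([0.6, 0.3, 0.1])
sample = refs @ true_w + np.random.default_rng(11).normal(0, 0.005, E.size)

# Non-negative weights, as species proportions must be >= 0
w, resid = nnls(refs, sample)
print(100 * w / w.sum(), resid ** 2)  # species percentages, misfit measure
```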
Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine
NASA Astrophysics Data System (ADS)
Santoso, Noviyanti; Wibowo, Wahyu
2018-03-01
A financial difficulty is the early stage before bankruptcy. Bankruptcies caused by financial distress can be seen from the financial statements of the company. The ability to predict financial distress became an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research develops a financial distress prediction model for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The result of this research is that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.
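A minimal sketch of the comparison, with stepwise selection approximated by univariate screening and synthetic stand-ins for the financial ratios:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Fake financial ratios versus a distress flag
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)

for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    # Variable selection + classifier, assessed by cross-validated accuracy
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), clf)
    print(type(clf).__name__, cross_val_score(pipe, X, y, cv=5).mean())
```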
Kumar, K Vasanth; Sivanesan, S
2005-08-31
A comparative analysis of the linear least squares method and the non-linear method for estimating isotherm parameters was made using the experimental equilibrium data of safranin onto activated carbon at two different solution temperatures, 305 and 313 K. Equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations showed a good fit to the experimental equilibrium data. The results showed that the non-linear method could be a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
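The two estimation routes can be sketched for the Langmuir case, q = qm*K*C/(1 + K*C): a non-linear fit of the raw data versus the linearized form C/q = C/qm + 1/(qm*K); the equilibrium data below are simulated.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qm, K):
    return qm * K * C / (1.0 + K * C)

C = np.linspace(5, 200, 12)  # equilibrium concentration
rng = np.random.default_rng(12)
q = langmuir(C, 85.0, 0.05) * (1 + rng.normal(0, 0.02, C.size))

# Non-linear estimation on the raw data
(qm_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=[50.0, 0.01])

# Linearized estimation (C/q versus C): slope = 1/qm, intercept = 1/(qm*K)
slope, intercept = np.polyfit(C, C / q, 1)
qm_lin, K_lin = 1.0 / slope, slope / intercept
print(qm_nl, K_nl, qm_lin, K_lin)  # the two routes can disagree under noise
```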
Rank-based methods for modeling dependence between loss triangles.
Côté, Marie-Pier; Genest, Christian; Abdallah, Anas
2016-01-01
In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.
Computer user's manual for a generalized curve fit and plotting program
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.
1973-01-01
A FORTRAN coded program has been developed for generating plotted output graphs on 8-1/2 by 11-inch paper. The program is designed to be used by engineers, scientists, and non-programming personnel on any IBM 1130 system that includes a 1627 plotter. The program has been written to provide a fast and efficient method of displaying plotted data without having to write any additional code. Various output options are available to the program user for displaying data in four different types of formatted plots. These options include discrete, linear, continuous, and histogram graphical outputs. The manual contains information about the use and operation of this program. A mathematical description of the least squares goodness of fit test is presented. A program listing is also included.
Interaction Models for Functional Regression.
Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab
2016-02-01
A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.
Optimization with Fuzzy Data via Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Kosiński, Witold
2010-09-01
Ordered fuzzy numbers (OFN), which make it possible to deal with fuzzy inputs quantitatively, exactly in the same way as with real numbers, have been recently defined by the author and his two coworkers. The set of OFN forms a normed space and is a partially ordered ring. The case when the numbers are presented in the form of step functions, with finite resolution, simplifies all operations and the representation of defuzzification functionals. A general optimization problem with fuzzy data is formulated. Its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.
Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region
NASA Astrophysics Data System (ADS)
Mazurek, Grzegorz; Iwański, Marek
2017-10-01
Stiffness modulus is a fundamental parameter used in the modelling of the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers' model, in comparison to the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W) intended for the binder course layer and for traffic category KR3 (5×10⁵
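A hedged sketch of fitting the MEPDG-style sigmoidal master curve, log|E*| = δ + α/(1 + exp(β + γ·log f_r)); the stiffness data and reduced-frequency grid below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_fr, delta, alpha, beta, gamma):
    # MEPDG sigmoidal master curve in log reduced frequency
    return delta + alpha / (1.0 + np.exp(beta + gamma * log_fr))

log_fr = np.linspace(-5, 4, 30)  # log10 reduced frequency after T-shifting
rng = np.random.default_rng(13)
log_E = sigmoid(log_fr, 1.0, 3.4, -1.0, -0.5) + rng.normal(0, 0.03, 30)

popt, _ = curve_fit(sigmoid, log_fr, log_E, p0=[1.0, 3.0, 0.0, -1.0])
print(popt)  # fitted delta, alpha, beta, gamma
```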
De Lara, Michel
2006-05-01
In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild completely its vegetative body (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the environmental variability type: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint at the effect of different mathematical assumptions.
Four Theorems on the Psychometric Function
May, Keith A.; Solomon, Joshua A.
2013-01-01
In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β_Noise × β_Transducer, where β_Noise is the β of the Weibull function that fits best to the cumulative noise distribution, and β_Transducer depends on the transducer. We derive general expressions for β_Noise and β_Transducer, from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding that, when d′ ∝ (Δx)^b, β ≈ 0.88b. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4–0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, the Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian. PMID:24124456
NASA Astrophysics Data System (ADS)
Hutterer, Rudi
2018-01-01
The author discusses methods for the fluorometric determination of affinity constants by linear and nonlinear fitting methods. This is outlined in particular for the interaction between cyclodextrins and several anesthetic drugs including benzocaine. Special emphasis is given to the limitations of certain fits, and the impact of such studies on enzyme-substrate interactions are demonstrated. Both the experimental part and methods of analysis are well suited for students in an advanced lab.
On Least Squares Fitting Nonlinear Submodels.
ERIC Educational Resources Information Center
Bechtel, Gordon G.
Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…
A study of data analysis techniques for the multi-needle Langmuir probe
NASA Astrophysics Data System (ADS)
Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.
2018-06-01
In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique seems to agree better with measurements from incoherent scatter radar and other in situ instruments, m-NLPs can be made longer and can be cleaned during operation to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes are deployed.
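A heavily hedged sketch of the linear-fit technique: under the textbook orbital-motion-limited (OML) expression for cylindrical-probe electron saturation current, I² grows linearly with bias, so the density follows from the slope of I² versus V. The geometry, biases, and OML prefactor below are assumptions, not the instrument's calibrated values.

```python
import numpy as np
from scipy.constants import e, m_e

# Assumed probe geometry and biases; the OML form
# I = n*e*A*(2/sqrt(pi))*sqrt(e*V/(2*pi*m_e)) makes I^2 linear in V.
A = 2 * np.pi * 0.25e-3 * 25e-3      # cylinder lateral area (m^2), assumed
V = np.array([2.5, 4.0, 5.5, 10.0])  # bias voltages relative to plasma (V)
n_true = 1.0e11                       # electron density (m^-3)

rng = np.random.default_rng(14)
I = n_true * e * A * (2 / np.sqrt(np.pi)) * np.sqrt(e * V / (2 * np.pi * m_e))
I *= 1 + rng.normal(0, 0.01, V.size)  # measurement noise

slope = np.polyfit(V, I ** 2, 1)[0]   # d(I^2)/dV from the multi-probe samples
n_est = (np.pi / A) * np.sqrt(m_e * slope / (2 * e ** 3))  # invert the OML slope
print(n_est / n_true)                 # ~1 when the OML assumptions hold
```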
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Kaufmann, Eric; Levene, Mark; Loizou, George
Human dynamics and sociophysics suggest statistical models that may explain and provide us with better insight into social phenomena. Contextual and selection effects tend to produce extreme values in the tails of rank-ordered distributions of both census data and district-level election outcomes. Models that account for this nonlinearity generally outperform linear models. Fitting nonlinear functions based on rank-ordering census and election data therefore improves the fit of aggregate voting models. This may help improve ecological inference, as well as election forecasting in majoritarian systems. We propose a generative multiplicative decrease model that gives rise to a rank-order distribution and facilitates the analysis of the recent UK EU referendum results. We supply empirical evidence that the beta-like survival function, which can be generated directly from our model, is a close fit to the referendum results, and also may have predictive value when covariate data are available.
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-03-15
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness of fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
Skeletal muscle tensile strain dependence: hyperviscoelastic nonlinearity
Wheatley, Benjamin B; Morrow, Duane A; Odegard, Gregory M; Kaufman, Kenton R; Donahue, Tammy L Haut
2015-01-01
Introduction: Computational modeling of skeletal muscle requires characterization at the tissue level. While most skeletal muscle studies focus on hyperelasticity, the goal of this study was to examine and model the nonlinear behavior of both the time-independent and time-dependent properties of skeletal muscle as a function of strain. Materials and Methods: Nine tibialis anterior muscles from New Zealand White rabbits were subjected to five consecutive stress relaxation cycles of roughly 3% strain. Individual relaxation steps were fit with a three-term linear Prony series. Prony series coefficients and the relaxation ratio were assessed for strain dependence using a general linear statistical model. A fully nonlinear constitutive model was employed to capture the strain dependence of both the viscoelastic and instantaneous components. Results: Instantaneous modulus (p<0.0005) and mid-range relaxation (p<0.0005) increased significantly with strain level, while relaxation at longer time periods decreased with strain (p<0.0005). Time constants and the overall relaxation ratio did not change with strain level (p>0.1). Additionally, the fully nonlinear hyperviscoelastic constitutive model provided an excellent fit to experimental data, while other models which included linear components failed to capture muscle function as accurately. Conclusions: Material properties of skeletal muscle are strain-dependent at the tissue level. This strain dependence can be included in computational models of skeletal muscle performance with a fully nonlinear hyperviscoelastic model. PMID:26409235
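A three-term Prony series of the kind fitted to each relaxation step can be estimated with ordinary nonlinear least squares. The sketch below uses synthetic relaxation data and invented coefficients; it is a generic illustration, not the authors' fitting code.

    import numpy as np
    from scipy.optimize import curve_fit

    def prony3(t, g_inf, g1, g2, g3, tau1, tau2, tau3):
        # Normalized three-term Prony series for stress relaxation.
        return (g_inf + g1 * np.exp(-t / tau1)
                      + g2 * np.exp(-t / tau2)
                      + g3 * np.exp(-t / tau3))

    t = np.linspace(0.0, 100.0, 200)
    g_true = prony3(t, 0.4, 0.3, 0.2, 0.1, 0.5, 5.0, 50.0)
    g_obs = g_true + np.random.default_rng(1).normal(0, 0.005, t.size)

    p0 = [0.5, 0.2, 0.2, 0.1, 1.0, 10.0, 100.0]   # rough starting guesses
    popt, _ = curve_fit(prony3, t, g_obs, p0=p0,
                        bounds=(0, [1, 1, 1, 1, 10, 100, 1000]))
    print("fitted Prony parameters:", popt.round(3))

Repeating the fit at each strain level and regressing the coefficients on strain mirrors the strain-dependence analysis described above.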
Yu, Kyung-Hun; Suk, Min-Hwa; Kang, Shin-Woo; Shin, Yun-A
2014-10-01
The purpose of this study was to investigate the effect of combined linear and nonlinear periodic training on physical fitness and competition times in finswimmers. The linear resistance training model (6 days/week) and nonlinear underwater training (4 days/week) were applied to 12 finswimmers (age, 16.08 ± 1.44 yr; career, 3.78 ± 1.90 yr) for 12 weeks. Body composition measures included weight, body mass index (BMI), percent fat, and fat-free mass. Physical fitness measures included trunk flexion forward, trunk extension backward, Sargent jump, 1-repetition-maximum (1 RM) squat, 1 RM dead lift, knee extension, knee flexion, trunk extension, trunk flexion, and competition times. Body composition and physical fitness were improved after the 12-week periodic training program. Weight, BMI, and percent fat were significantly decreased, and trunk flexion forward, trunk extension backward, Sargent jump, 1 RM squat, 1 RM dead lift, and knee extension (right) were significantly increased. The 50- and 100-m times significantly decreased in all 12 athletes. After 12 weeks of training, all finswimmers who participated in this study improved their times in a public competition. These data indicate that combined linear and nonlinear periodic training enhanced the physical fitness and competition times in finswimmers.
Individual differences in long-range time representation.
Agostino, Camila S; Caetano, Marcelo S; Balci, Fuat; Claessens, Peter M E; Zana, Yossi
2017-04-01
On the basis of experimental data, long-range time representation has been proposed to follow a highly compressed power function, which has been hypothesized to explain the time inconsistency found in financial discount rate preferences. The aim of this study was to evaluate how well linear and power function models explain empirical data from individual participants tested in different procedural settings. The line paradigm was used in five different procedural variations with 35 adult participants. Data aggregated over the participants showed that fitted linear functions explained more than 98% of the variance in all procedures. A linear regression fit also outperformed a power model fit for the aggregated data. An individual-participant-based analysis showed better fits of a linear model to the data of 14 participants; better fits of a power function with an exponent β > 1 to the data of 12 participants; and better fits of a power function with β < 1 to the data of the remaining nine participants. Of the 35 volunteers, the null hypothesis β = 1 was rejected for 20. The dispersion of the individual β values was approximated well by a normal distribution. These results suggest that, on average, humans perceive long-range time intervals not in a highly compressed, biased manner, but rather in a linear pattern. However, individuals differ considerably in their subjective time scales. This contribution sheds new light on the average and individual psychophysical functions of long-range time representation, and suggests that any attribution of deviation from exponential discount rates in intertemporal choice to the compressed nature of subjective time must entail the characterization of subjective time on an individual-participant basis.
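The central model comparison, a linear fit against a power function judged by explained variance, is easy to reproduce per participant. The sketch below uses invented line-paradigm responses purely for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    x = np.array([1, 3, 6, 12, 24, 36, 48], float)          # nominal intervals
    y = np.array([1.1, 3.2, 5.8, 12.5, 23.0, 37.1, 46.8])   # responses

    lin = np.polyfit(x, y, 1)                    # linear model
    y_lin = np.polyval(lin, x)

    power = lambda x, c, b: c * x ** b           # power model y = c * x^beta
    (c, beta), _ = curve_fit(power, x, y, p0=[1.0, 1.0])
    y_pow = power(x, c, beta)

    def r2(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    print("linear R2:", r2(y, y_lin), " power R2:", r2(y, y_pow), " beta:", beta)

Testing whether beta differs from 1 across participants is then a one-sample test on the fitted exponents, as in the study.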
Wen, Cheng; Dallimer, Martin; Carver, Steve; Ziv, Guy
2018-05-06
Despite their great potential for mitigating carbon emissions, wind farm developments are often opposed by local communities due to their visual impact on the landscape. A growing number of studies have applied nonmarket valuation methods like Choice Experiments (CE) to value the visual impact by eliciting respondents' willingness to pay (WTP) or willingness to accept (WTA) for hypothetical wind farms through survey questions. Several meta-analyses in the literature synthesize results from different valuation studies, but they have various limitations related to the use of the prevailing multivariate meta-regression analysis. In this paper, we propose a new meta-analysis method to establish general functions for the relationships between the estimated WTP or WTA and three wind farm attributes, namely the distance to residential/coastal areas, the number of turbines and turbine height. This method involves establishing WTA or WTP functions for individual studies, fitting the average derivative functions and deriving the general integral functions of WTP or WTA against wind farm attributes. Results indicate that respondents in different studies consistently showed increasing WTP for moving wind farms to greater distances, which can be fitted by non-linear (natural logarithm) functions. However, divergent preferences for the number of turbines and turbine height were found in different studies. We argue that the new analysis method proposed in this paper is an alternative to the mainstream multivariate meta-regression analysis for synthesizing CE studies and that the general integral functions of WTP or WTA against wind farm attributes are useful for future spatial modelling and benefit transfer studies. We also suggest that future multivariate meta-analyses should include non-linear components in the regression functions. Copyright © 2018. Published by Elsevier B.V.
[Equilibrium sorption isotherm for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum].
Yan, Chang-zhou; Zeng, A-yan; Jin, Xiang-can; Wang, Sheng-rui; Xu, Qiu-jin; Zhao, Jing-zhu
2006-06-01
Equilibrium sorption isotherms for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum were studied. Both linear and non-linear fitting methods were applied to describe the sorption isotherms, and their applicability was analyzed and compared. The results were: (1) The applicability of fitted equations cannot be compared only by R2 and chi2 when equilibrium sorption models are used to quantify and contrast the performance of different biosorbents. Both linear and non-linear fitting should be applied to the candidate equations describing the equilibrium sorption isotherms in order to obtain credible fitting results, so that the equation best according with the experimental data can be selected; (2) In this experiment, the Langmuir model is more suitable for describing the sorption isotherm of Cu2+ biosorption by H. verticillata and M. spicatum, and there is a greater difference between the experimental data and the values calculated from the Freundlich model, especially its linear form; (3) The content of crude cellulose in dry matter is one of the main factors affecting the biosorption capacity of a submerged aquatic plant, and the -OH and -CONH2 groups of polysaccharides on the cell wall may be the active centers of biosorption; (4) According to the coefficient qm of the linear form of the Langmuir model, the maximum sorption capacity for Cu2+ was found to be 21.55 mg/g for H. verticillata and 10.80 mg/g for M. spicatum. The maximum specific surface area for binding Cu2+ was 3.23 m2/g for H. verticillata and 1.62 m2/g for M. spicatum.
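The linear-versus-nonlinear fitting issue raised in point (1) can be made concrete with the Langmuir isotherm, where the two routes generally return different parameter estimates from the same data. The sorption data below are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    C = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium conc. (mg/L)
    q = np.array([4.1, 8.0, 12.4, 16.2, 18.9, 20.3])   # sorbed amount (mg/g)

    # Non-linear fit of the Langmuir isotherm q = qm*b*C / (1 + b*C).
    langmuir = lambda C, qm, b: qm * b * C / (1 + b * C)
    (qm_nl, b_nl), _ = curve_fit(langmuir, C, q, p0=[20.0, 0.1])

    # Linearized form: C/q = 1/(qm*b) + C/qm, fitted by least squares.
    slope, intercept = np.polyfit(C, C / q, 1)
    qm_lin, b_lin = 1 / slope, slope / intercept

    print("non-linear qm, b:", qm_nl, b_nl)
    print("linearized qm, b:", qm_lin, b_lin)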
Generalized Fractional Derivative Anisotropic Viscoelastic Characterization.
Hilton, Harry H
2012-01-18
Isotropic linear and nonlinear fractional derivative constitutive relations are formulated and examined in terms of many parameter generalized Kelvin models and are analytically extended to cover general anisotropic homogeneous or non-homogeneous as well as functionally graded viscoelastic material behavior. Equivalent integral constitutive relations, which are computationally more powerful, are derived from fractional differential ones and the associated anisotropic temperature-moisture-degree-of-cure shift functions and reduced times are established. Approximate Fourier transform inversions for fractional derivative relations are formulated and their accuracy is evaluated. The efficacy of integer and fractional derivative constitutive relations is compared and the preferential use of either characterization in analyzing isotropic and anisotropic real materials must be examined on a case-by-case basis. Approximate protocols for curve fitting analytical fractional derivative results to experimental data are formulated and evaluated.
Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions
NASA Astrophysics Data System (ADS)
El-Kalla, I. L.; Al-Bugami, A. M.
2010-11-01
In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE) with a generalized singular kernel are discussed and proved in the space L2(Ω)×C(0,T). The Fredholm integral term (FIT) is considered in position while the Volterra integral term (VIT) is considered in time. Using a numerical technique, we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by using two different methods: the Toeplitz matrix method and the product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: Carleman function, logarithmic form, Cauchy kernel, and Hilbert kernel.
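The core numerical step, reducing the integral equation to a linear algebraic system by quadrature, is the Nyström idea. The sketch below applies it to a smooth example kernel; the singular kernels treated in the paper (Carleman, Cauchy, Hilbert) need the Toeplitz-matrix or product-Nyström treatments the authors describe, which are not shown here.

    import numpy as np

    # Solve u(x) = f(x) + lam * int_0^1 K(x, t) u(t) dt at quadrature nodes
    # by forming (I - lam * K * W) u = f.
    n, lam = 40, 0.5
    x, w = np.polynomial.legendre.leggauss(n)    # Gauss-Legendre on [-1, 1]
    x, w = 0.5 * (x + 1.0), 0.5 * w              # map nodes/weights to [0, 1]

    K = np.exp(-np.abs(x[:, None] - x[None, :]))  # smooth example kernel
    f = np.sin(np.pi * x)

    u = np.linalg.solve(np.eye(n) - lam * K * w[None, :], f)
    print("solution at first nodes:", u[:5].round(4))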
Labrada-Martagón, Vanessa; Méndez-Rodríguez, Lia C; Mangel, Marc; Zenteno-Savín, Tania
2013-09-01
Generalized linear models were fitted to evaluate the relationship between 17β-estradiol (E2), testosterone (T) and thyroxine (T4) levels in immature East Pacific green sea turtles (Chelonia mydas) and their body condition, size, mass, blood biochemistry parameters, handling time, year, season and site of capture. According to external (tail size) and morphological (<77.3 straight carapace length) characteristics, 95% of the individuals were juveniles. Hormone levels, assessed on sea turtles subjected to a capture stress protocol, were <34.7nmolTL(-1), <532.3pmolE2 L(-1) and <43.8nmolT4L(-1). The statistical model explained biologically plausible metabolic relationships between hormone concentrations and blood biochemistry parameters (e.g. glucose, cholesterol) and the potential effect of environmental variables (season and study site). The variables handling time and year did not contribute significantly to explain hormone levels. Differences in sex steroids between season and study sites found by the models coincided with specific nutritional, physiological and body condition differences related to the specific habitat conditions. The models correctly predicted the median levels of the measured hormones in green sea turtles, which confirms the fitted model's utility. It is suggested that quantitative predictions could be possible when the model is tested with additional data. Copyright © 2013 Elsevier Inc. All rights reserved.
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
Variable selection for marginal longitudinal generalized linear models.
Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio
2005-06-01
Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast the results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while at the same time emphasizing some important robust features inherent to GC(p).
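As a rough illustration of criterion-based variable selection for GEE-fitted marginal models, the sketch below scores two candidate models with a crude Cp-style quantity (scaled residual sum of squares plus a 2p penalty). This is an intuition-level analogue only, not the authors' GC(p); data, effect sizes and the working correlation are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_subj, n_obs = 60, 4
    ids = np.repeat(np.arange(n_subj), n_obs)
    x1 = rng.normal(size=n_subj * n_obs)
    x2 = rng.normal(size=n_subj * n_obs)               # irrelevant candidate
    b = np.repeat(rng.normal(0, 0.5, n_subj), n_obs)   # subject effects
    y = 1 + 0.8 * x1 + b + rng.normal(size=n_subj * n_obs)

    def cp_like(X):
        res = sm.GEE(y, sm.add_constant(X), groups=ids,
                     cov_struct=sm.cov_struct.Exchangeable()).fit()
        rss = np.sum((y - res.fittedvalues) ** 2)
        p = X.shape[1] + 1
        return rss / res.scale + 2 * p                 # crude Cp-style score

    print("x1 only:  ", cp_like(x1[:, None]))
    print("x1 and x2:", cp_like(np.column_stack([x1, x2])))

The smaller score should usually point to the model without the irrelevant covariate, mimicking how GC(p) is used for selection.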
Progress on a generalized coordinates tensor product finite element 3DPNS algorithm for subsonic
NASA Technical Reports Server (NTRS)
Baker, A. J.; Orzechowski, J. A.
1983-01-01
A generalized coordinates form of the penalty finite element algorithm for the 3-dimensional parabolic Navier-Stokes equations for turbulent subsonic flows was derived. This algorithm formulation requires only three distinct hypermatrices and is applicable using any boundary fitted coordinate transformation procedure. The tensor matrix product approximation to the Jacobian of the Newton linear algebra matrix statement was also derived. The Newton algorithm was restructured to replace large sparse matrix solution procedures with grid sweeping using alpha-block tridiagonal matrices, where alpha equals the number of dependent variables. Numerical experiments were conducted and the resultant data give guidance on potentially preferred tensor product constructions for the penalty finite element 3DPNS algorithm.
Ergon, T; Ergon, R
2017-03-01
Genetic assimilation emerges from selection on phenotypic plasticity. Yet, commonly used quantitative genetics models of linear reaction norms considering intercept and slope as traits do not mimic the full process of genetic assimilation. We argue that intercept-slope reaction norm models are insufficient representations of genetic effects on linear reaction norms and that considering reaction norm intercept as a trait is unfortunate because the definition of this trait relates to a specific environmental value (zero) and confounds genetic effects on reaction norm elevation with genetic effects on environmental perception. Instead, we suggest a model with three traits representing genetic effects that, respectively, (i) are independent of the environment, (ii) alter the sensitivity of the phenotype to the environment and (iii) determine how the organism perceives the environment. The model predicts that, given sufficient additive genetic variation in environmental perception, the environmental value at which reaction norms tend to cross will respond rapidly to selection after an abrupt environmental change, and eventually becomes equal to the new mean environment. This readjustment of the zone of canalization becomes completed without changes in genetic correlations, genetic drift or imposing any fitness costs of maintaining plasticity. The asymptotic evolutionary outcome of this three-trait linear reaction norm generally entails a lower degree of phenotypic plasticity than the two-trait model, and maximum expected fitness does not occur at the mean trait values in the population. © 2016 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of European Society for Evolutionary Biology.
Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh
2012-03-22
The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare the performance of the SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of the SMA vs. the LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was often not robust and led to non-sensible results when the covariance structure for errors was misspecified. The results emphasized discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that the summary measure approach is simple, safe and powerful, and that its loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice for reliably analyzing linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
HOLEGAGE 1.0 - Strain-Gauge Drilling Analysis Program
NASA Technical Reports Server (NTRS)
Hampton, Roy V.
1992-01-01
Interior stresses are inferred from changes in surface strains as a hole is drilled. The program computes stresses using strain data from each drilled-hole depth layer. Planar stresses are computed in three ways: a least-squares fit for linear variation with depth, an integral method giving incremental stress data for each layer, and/or a linear fit to the integral data. Written in FORTRAN 77.
Carbon dioxide stripping in aquaculture -- part III: model verification
Colt, John; Watten, Barnaby; Pfeiffer, Tim
2012-01-01
Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLaCO2 less than approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas phase enrichment remains to be determined. The one-parameter linear regression model was used to vary C*CO2 over the test, but it did not result in a better fit to the experimental data when compared to the ASCE or fixed C*CO2 assumptions.
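The two estimation routes compared here are simple to state: a nonlinear fit of the exponential approach to equilibrium, and a 2-point estimate from two concentration measurements with an assumed saturation value. The sketch below uses invented stripping-column data; variable names and values are illustrative only.

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0, 5, 10, 20, 30, 45, 60], float)        # time (min)
    C = np.array([18.0, 14.9, 12.5, 9.1, 7.0, 5.2, 4.3])   # dissolved CO2 (mg/L)

    # Exponential approach to equilibrium: C(t) = C* + (C0 - C*) exp(-KLa t).
    model = lambda t, KLa, Cstar, C0: Cstar + (C0 - Cstar) * np.exp(-KLa * t)
    (KLa, Cstar, C0), _ = curve_fit(model, t, C, p0=[0.05, 3.0, 18.0])
    print("non-linear KLa (1/min):", KLa)

    # 2-point estimate between t = 5 and t = 30 min with an assumed C*.
    Cs = 3.0
    KLa_2pt = np.log((C[1] - Cs) / (C[4] - Cs)) / (30 - 5)
    print("2-point KLa (1/min):", KLa_2pt)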
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
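Ported out of the spreadsheet, the same Monte-Carlo procedure reads: fit once, repeatedly refit 'virtual' data sets built from the fitted curve plus resampled noise, and take percentile confidence intervals and correlations from the collection of refitted parameters. The model and data below are illustrative stand-ins for the growth curves analyzed in the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    f = lambda t, K, r, t0: K / (1 + np.exp(-r * (t - t0)))  # logistic growth
    t = np.linspace(0, 24, 25)
    rng = np.random.default_rng(3)
    y = f(t, 9.0, 0.5, 10.0) + rng.normal(0, 0.2, t.size)

    popt, _ = curve_fit(f, t, y, p0=[8, 0.3, 8])
    sigma = np.std(y - f(t, *popt), ddof=3)        # residual scatter

    # Refit 200 virtual data sets (fitted curve + resampled noise).
    draws = np.array([
        curve_fit(f, t, f(t, *popt) + rng.normal(0, sigma, t.size), p0=popt)[0]
        for _ in range(200)])
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    print("95% CIs:", list(zip(lo.round(2), hi.round(2))))
    print("parameter correlations:\n", np.corrcoef(draws.T).round(2))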
Kumar, K Vasanth
2007-04-02
Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second-order models of Ho, of Sobkowsk and Czerwinski, of Blanchard et al. and of Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho expressed similar ideas in their pseudo-second-order models but with different assumptions. The best fit of the experimental data by Ho's pseudo-second-order expression under both linear and non-linear regression showed that Ho's model was a better kinetic expression when compared to the other pseudo-second-order kinetic expressions.
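The disagreement between linear and non-linear estimation of Ho's pseudo-second-order model is easy to reproduce; as the abstract notes, the two routes need not return the same parameters. The kinetic data below are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([5, 10, 20, 30, 60, 90, 120], float)          # time (min)
    q = np.array([12.1, 18.3, 24.8, 28.0, 32.1, 33.4, 34.0])   # uptake (mg/g)

    # Ho's pseudo-second-order model: q(t) = k*qe^2*t / (1 + k*qe*t).
    pso = lambda t, qe, k: k * qe ** 2 * t / (1 + k * qe * t)
    (qe_nl, k_nl), _ = curve_fit(pso, t, q, p0=[35.0, 0.001])

    # Common linearized form: t/q = 1/(k*qe^2) + t/qe.
    slope, intercept = np.polyfit(t, t / q, 1)
    qe_lin = 1 / slope
    k_lin = slope ** 2 / intercept     # since intercept = 1/(k*qe^2)
    print("non-linear qe, k:", qe_nl, k_nl)
    print("linearized qe, k:", qe_lin, k_lin)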
Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G
2008-09-01
A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents real systems than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fits than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fits as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input a prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems a serious mistake may arise from neglecting the different bicarbonate contents in the particular water components.
Rice, T. Maurice; Robinson, Neil J.; Tsvelik, Alexei M.
2017-12-11
Here, the high-temperature normal state of the unconventional cuprate superconductors has resistivity linear in temperature T, which persists to values well beyond the Mott-Ioffe-Regel upper bound. At low temperatures, within the pseudogap phase, the resistivity is instead quadratic in T, as would be expected from Fermi liquid theory. Developing an understanding of these normal phases of the cuprates is crucial to explain the unconventional superconductivity. We present a simple explanation for this behavior, in terms of the umklapp scattering of electrons. This fits within the general picture emerging from functional renormalization group calculations that spurred the Yang-Rice-Zhang ansatz: Umklapp scattering is at the heart of the behavior in the normal phase.
Fitting Higgs data with nonlinear effective theory.
Buchalla, G; Catà, O; Celis, A; Krause, C
2016-01-01
In a recent paper we showed that the electroweak chiral Lagrangian at leading order is equivalent to the conventional [Formula: see text] formalism used by ATLAS and CMS to test Higgs anomalous couplings. Here we apply this fact to fit the latest Higgs data. The new aspect of our analysis is a systematic interpretation of the fit parameters within an EFT. Concentrating on the processes of Higgs production and decay that have been measured so far, six parameters turn out to be relevant: [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. A global Bayesian fit is then performed with the result [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. Additionally, we show how this leading-order parametrization can be generalized to next-to-leading order, thus improving the [Formula: see text] formalism systematically. The differences with a linear EFT analysis including operators of dimension six are also discussed. One of the main conclusions of our analysis is that since the conventional [Formula: see text] formalism can be properly justified within a QFT framework, it should continue to play a central role in analyzing and interpreting Higgs data.
GLASS VISCOSITY AS A FUNCTION OF TEMPERATURE AND COMPOSITION: A MODEL BASED ON ADAM-GIBBS EQUATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrma, Pavel R.
2008-07-01
Within the temperature range and composition region of processing and product forming, the viscosity of commercial and waste glasses spans over 12 orders of magnitude. This paper shows that a generalized Adam-Gibbs relationship reasonably approximates the real behavior of glasses with four temperature-independent parameters, of which two are linear functions of the composition vector. The equation is subjected to two constraints, one requiring that the viscosity-temperature relationship approaches the Arrhenius function at high temperatures with a composition-independent pre-exponential factor, and the other that the viscosity value is independent of composition at the glass-transition temperature. Several sets of constant coefficients were obtained by fitting the generalized Adam-Gibbs equation to data of two glass families: float glass and Hanford waste glass. Other equations (the Vogel-Fulcher-Tammann equation, original and modified, the Avramov equation, and the Douglass-Doremus equation) were fitted to the float glass data series and compared with the Adam-Gibbs equation, showing that the generalized Adam-Gibbs equation is an excellent approximation for real glasses even when compared with other candidate constitutive relations.
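Of the candidate viscosity-temperature relations compared in the paper, the Vogel-Fulcher-Tammann (VFT) form is the simplest to fit and serves to illustrate the procedure. The data points below are invented, and the paper's composition-dependent Adam-Gibbs parameterization is not reproduced here.

    import numpy as np
    from scipy.optimize import curve_fit

    T = np.array([800, 900, 1000, 1100, 1200, 1300, 1400], float)  # K
    log_eta = np.array([12.0, 8.9, 6.7, 5.1, 3.9, 3.0, 2.3])       # log10(Pa s)

    # VFT form: log10(eta) = A + B / (T - T0).
    vft = lambda T, A, B, T0: A + B / (T - T0)
    (A, B, T0), _ = curve_fit(vft, T, log_eta, p0=[-3.0, 5000.0, 400.0])
    print("A, B, T0:", A, B, T0)

A composition-dependent model like the paper's can then be built by letting some of the fitted parameters be linear functions of the composition vector.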
Cosmological power spectrum in a noncommutative spacetime
NASA Astrophysics Data System (ADS)
Kothari, Rahul; Rath, Pranati K.; Jain, Pankaj
2016-09-01
We propose a generalized star product that deviates from the standard one when the fields are considered at different spacetime points by introducing a form factor in the standard star product. We also introduce a recursive definition by which we calculate the explicit form of the generalized star product at any number of spacetime points. We show that our generalized star product is associative and cyclic at linear order. As a special case, we demonstrate that our recursive approach can be used to prove the associativity of standard star products for same or different spacetime points. The introduction of a form factor has no effect on the standard Lagrangian density in a noncommutative spacetime because it reduces to the standard star product when spacetime points become the same. We show that the generalized star product leads to physically consistent results and can fit the observed data on hemispherical anisotropy in the cosmic microwave background radiation.
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
NASA Astrophysics Data System (ADS)
Lasche, George; Coldwell, Robert; Metzger, Robert
2017-09-01
A new application (known as "VRF", or "Visual RobFit") for analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak-search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, it adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks that are masked by larger, overlapping peaks that would not otherwise be possible. The application and method are briefly described and two examples are presented.
Correlation of Respirator Fit Measured on Human Subjects and a Static Advanced Headform
Bergman, Michael S.; He, Xinjian; Joseph, Michael E.; Zhuang, Ziqing; Heimbuch, Brian K.; Shaffer, Ronald E.; Choe, Melanie; Wander, Joseph D.
2015-01-01
This study assessed the correlation of N95 filtering face-piece respirator (FFR) fit between a Static Advanced Headform (StAH) and 10 human test subjects. Quantitative fit evaluations were performed on test subjects who made three visits to the laboratory. On each visit, one fit evaluation was performed on eight different FFRs of various model/size variations. Additionally, subject breathing patterns were recorded. Each fit evaluation comprised three two-minute exercises: “Normal Breathing,” “Deep Breathing,” and again “Normal Breathing.” The overall test fit factors (FF) for human tests were recorded. The same respirator samples were later mounted on the StAH and the overall test manikin fit factors (MFF) were assessed utilizing the recorded human breathing patterns. Linear regression was performed on the mean log10-transformed FF and MFF values to assess the relationship between the values obtained from humans and the StAH. This is the first study to report a positive correlation of respirator fit between a headform and test subjects. The linear regression by respirator resulted in R2 = 0.95, indicating a strong linear correlation between FF and MFF. For all respirators the geometric mean (GM) FF values were consistently higher than those of the GM MFF. For 50% of respirators, GM FF and GM MFF values were significantly different between humans and the StAH. For data grouped by subject/respirator combinations, the linear regression resulted in R2 = 0.49. A weaker correlation (R2 = 0.11) was found using only data paired by subject/respirator combination where both the test subject and StAH had passed a real-time leak check before performing the fit evaluation. For six respirators, the difference in passing rates between the StAH and humans was < 20%, while two respirators showed a difference of 29% and 43%. For data by test subject, GM FF and GM MFF values were significantly different for 40% of the subjects. Overall, the advanced headform system has potential for assessing fit for some N95 FFR model/sizes. PMID:25265037
Li, Yan; Deng, Jianxin; Zhou, Jun; Li, Xueen
2016-11-01
Corresponding to pre-puncture and post-puncture insertion, the elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation are investigated, respectively. Elastic mechanical properties in pre-puncture are investigated through pre-puncture needle insertion experiments using whole porcine brains. A linear polynomial and a second-order polynomial are fitted to the average insertion force in pre-puncture. The Young's modulus in pre-puncture is calculated from the slope of the two fittings. Viscoelastic mechanical properties of brain tissues in post-puncture insertion are investigated through indentation stress relaxation tests for six regions of interest along a planned trajectory. A linear viscoelastic model with a Prony series approximation is fitted to the average load trace of each region using the Boltzmann hereditary integral. Shear relaxation moduli of each region are calculated using the parameters of the Prony series approximation. The results show that, in pre-puncture insertion, needle force increases almost linearly with needle displacement. Both fits reproduce the average insertion force well. The Young's moduli calculated from the slopes of the two fittings are reliable for modeling the linear and nonlinear instantaneous elastic responses of brain tissues, respectively. In post-puncture insertion, both region and time significantly affect the viscoelastic behaviors. The six tested regions can be classified into three categories in stiffness. Shear relaxation moduli decay dramatically on short time scales but equilibrium is never truly achieved. The regional and temporal viscoelastic mechanical properties in post-puncture insertion are valuable for guiding probe insertion into each region on the implanting trajectory.
Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.
2009-01-01
Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
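Of the methods compared, the beta-binomial is the least standard; an intercept-only version can be fitted by maximum likelihood in a few lines. The sketch below simulates adherence counts and omits the covariates a full BB regression (or the recommended GEE analysis) would include.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import betabinom

    rng = np.random.default_rng(4)
    n_meds = rng.integers(3, 12, size=150)          # medications per person
    p_i = rng.beta(6, 2, size=150)                  # latent adherence propensity
    k_adh = rng.binomial(n_meds, p_i)               # adherent medications

    def negloglik(params):
        a, b = np.exp(params)                       # keep a, b positive
        return -np.sum(betabinom.logpmf(k_adh, n_meds, a, b))

    fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
    a, b = np.exp(fit.x)
    print("alpha, beta:", a, b, " mean adherence:", a / (a + b))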
NASA Astrophysics Data System (ADS)
Mapes, B. E.; Kelly, P.; Song, S.; Hu, I. K.; Kuang, Z.
2015-12-01
An economical 10-layer global primitive equation solver is driven by time-independent forcing terms, derived from a training process, to produce a realistic eddying basic state with a tracer q trained to act like water vapor mixing ratio. Within this basic state, linearized anomaly moist physics in the column are applied in the form of a 20x20 matrix. The control matrix was derived from the results of Kuang (2010, 2012), who fitted a linear response function from a cloud resolving model in a state of deep convecting equilibrium. By editing this matrix in physical space and eigenspace, scaling and clipping its action, and optionally adding terms for processes that do not conserve moist static energy (radiation, surface fluxes), we can decompose and explain the model's diverse moist process coupled variability. Rectified effects of this variability on the general circulation and climate, even in strictly zero-mean centered anomaly physics cases, are also sometimes surprising.
Hannigan, Ailish; Bargary, Norma; Kinsella, Anthony; Clarke, Mary
2017-06-14
Although the relationships between duration of untreated psychosis (DUP) and outcomes are often assumed to be linear, few studies have explored the functional form of these relationships. The aim of this study is to demonstrate the potential of recent advances in curve fitting approaches (splines) to explore the form of the relationship between DUP and global assessment of functioning (GAF). Curve fitting approaches were used in models to predict change in GAF at long-term follow-up using DUP for a sample of 83 individuals with schizophrenia. The form of the relationship between DUP and GAF was non-linear. Accounting for non-linearity increased the percentage of variance in GAF explained by the model, resulting in better prediction and understanding of the relationship. The relationship between DUP and outcomes may be complex and model fit may be improved by accounting for the form of the relationship. This should be routinely assessed and new statistical approaches for non-linear relationships exploited, if appropriate. © 2017 John Wiley & Sons Australia, Ltd.
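A spline-versus-linear comparison of the kind described can be run with regression splines inside a model formula. The sketch below simulates a DUP-GAF relationship purely for illustration; the cr() basis and its df are one reasonable choice, not the authors' specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    dup = rng.exponential(12, size=83)            # months untreated (synthetic)
    gaf = 70 - 15 * np.log1p(dup) / np.log1p(dup).max() + rng.normal(0, 5, 83)
    df = pd.DataFrame({"dup": dup, "gaf": gaf})

    linear = smf.ols("gaf ~ dup", df).fit()                # straight line
    spline = smf.ols("gaf ~ cr(dup, df=4)", df).fit()      # natural cubic spline
    print("linear adj. R2:", linear.rsquared_adj)
    print("spline adj. R2:", spline.rsquared_adj)

If the spline's adjusted R2 is clearly higher, the linearity assumption is doing real harm, which is the paper's point.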
Recruit Fitness as a Predictor of Police Academy Graduation.
Shusko, M; Benedetti, L; Korre, M; Eshleman, E J; Farioli, A; Christophi, C A; Kales, S N
2017-10-01
Suboptimal recruit fitness may be a risk factor for poor performance, injury, illness, and lost time during police academy training. This study assessed the probability of successful completion and graduation from a police academy as a function of recruits' baseline fitness levels at the time of academy entry. It was a retrospective study in which all available records from recruit training courses held (2006-2012) at all Massachusetts municipal police academies were reviewed and analysed. Entry fitness levels were quantified from the following measures, as recorded at the start of each training class: body composition, push-ups, sit-ups, sit-and-reach, and 1.5-mile run time. The primary outcome of interest was the odds of not successfully graduating from an academy. We used generalized linear mixed models in order to fit logistic regression models with random intercepts for assessing the probability of not graduating, based on entry-level fitness. The primary analyses were restricted to recruits with complete entry-level fitness data. The fitness measures most strongly associated with academy failure were a lower number of push-ups completed (odds ratio [OR] = 5.2, 95% confidence interval [CI] 2.3-11.7, for 20 versus 41-60 push-ups) and slower run times (OR = 3.8, 95% CI 1.8-7.8, [1.5-mile run time of ≥15'20″] versus [12'33″ to 10'37″]). Baseline push-ups and 1.5-mile run time showed the best ability to predict successful academy graduation, especially when considered together. Future research should include prospective validation of entry-level fitness as a predictor of subsequent police academy success. © The Author 2017. Published by Oxford University Press on behalf of the Society of Occupational Medicine.
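The flavor of the reported analysis can be conveyed with a plain logistic regression of academy failure on entry-level fitness (the authors' random intercepts for training classes are omitted here for brevity). Data and coefficients below are simulated so that more push-ups lower, and slower runs raise, the odds of failure.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    pushups = rng.integers(10, 70, size=400).astype(float)
    run_sec = rng.normal(780, 90, size=400)       # 1.5-mile time in seconds
    logit_fail = -1.5 - 0.05 * (pushups - 40) + 0.01 * (run_sec - 780)
    fail = rng.binomial(1, 1 / (1 + np.exp(-logit_fail)))

    X = sm.add_constant(np.column_stack([pushups, run_sec]))
    res = sm.Logit(fail, X).fit(disp=False)
    print("odds ratios (const, pushups, run_sec):", np.exp(res.params).round(3))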
Injury Incidence and Patterns Among Dutch CrossFit Athletes
Mehrab, Mirwais; de Vos, Robert-Jan; Kraan, Gerald A.; Mathijssen, Nina M.C.
2017-01-01
Background: CrossFit is a strength and conditioning program that has gained widespread recognition, with 11,000 affiliated gyms worldwide. The incidence of injuries during CrossFit training is poorly analyzed. Purpose: To investigate the incidence of injuries for persons participating in CrossFit. Risk factors for injury and injury mechanisms were also explored through athlete demographics and characteristics. Study Design: Descriptive epidemiology study. Methods: A questionnaire that focused on injury incidence in CrossFit in the past year and included data on athlete demographics and characteristics was distributed to all 130 CrossFit gyms in the Netherlands and was also available online in active Facebook groups. Data were collected from July 2015 to January 2016. Inclusion criteria consisted of age ≥18 years and training at a registered CrossFit gym in the Netherlands. A total of 553 participants completed the survey. Univariable and multivariable generalized linear mixed models were used to identify potential risk factors for injury. Results: A total of 449 participants met the inclusion criteria. Of all respondents, 252 athletes (56.1%) sustained an injury in the preceding 12 months. The most injured body parts were the shoulder (n = 87, 28.7%), lower back (n = 48, 15.8%), and knee (n = 25, 8.3%). The duration of participation in CrossFit significantly affected the injury incidence rates (<6 months vs ≥24 months; odds ratio, 3.687 [95% CI, 2.091-6.502]; P < .001). The majority of injuries were caused by overuse (n = 148, 58.7%). Conclusion: The injury incidence for athletes participating in CrossFit was 56.1%. The most frequent injury locations were the shoulder, lower back, and knee. A short duration of participation (<6 months) was significantly associated with an increased risk for injury. PMID:29318170
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milani, G., E-mail: gabriele.milani@polimi.it; Hanel, T.; Donetti, R.
The paper is aimed at studying the possible interaction between two different accelerators (DPG and TBBS) in the chemical kinetics of Natural Rubber (NR) vulcanized with sulphur. The same blend with several DPG and TBBS concentrations is analyzed in depth from an experimental point of view, varying the curing temperature in the range 150-180°C and obtaining rheometer curves in steps of 10°C. In order to study any possible interaction between the two accelerators, and to evaluate its engineering relevance, rheometer data are normalized by means of the well-known Sun and Isayev normalization approach, and two output parameters are taken as meaningful indicators of possible interaction, namely the time at maximum torque and the reversion percentage. Two different numerical meta-models, which belong to the family of so-called response surfaces (RS), are compared. The first is linear in TBBS and DPG and therefore reproduces the case of no interaction between the accelerators, whereas the second is a non-linear RS with a bilinear term. Both RS are deduced from standard best fitting of the available experimental data. It is found that, generally, there is some interaction between TBBS and DPG, but that the error introduced by using a linear model (no interaction) is generally lower than 10%, i.e. fully acceptable from an engineering standpoint.
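The comparison between the two meta-models reduces to ordinary least squares with and without a bilinear interaction term. The sketch below fits both response surfaces to invented rheometer-style output on a small concentration grid.

    import numpy as np

    # Synthetic response z (e.g. normalized time at maximum torque) over a
    # 3x3 grid of TBBS (x) and DPG (y) concentrations.
    x, y = np.meshgrid([0.5, 1.0, 1.5], [0.2, 0.6, 1.0])
    x, y = x.ravel(), y.ravel()
    z = (10 - 2.0 * x - 1.5 * y + 0.4 * x * y
         + np.random.default_rng(7).normal(0, 0.1, x.size))

    A_lin = np.column_stack([np.ones_like(x), x, y])   # linear RS (no interaction)
    A_bil = np.column_stack([A_lin, x * y])            # RS with bilinear term
    _, rss_lin, *_ = np.linalg.lstsq(A_lin, z, rcond=None)
    _, rss_bil, *_ = np.linalg.lstsq(A_bil, z, rcond=None)
    print("RSS linear:", rss_lin, " RSS bilinear:", rss_bil)

Comparing the two residual sums of squares quantifies how much the interaction term actually buys, which is the engineering question the paper poses.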
Patel, Deepak; Lambert, Estelle V; da Silva, Roseanne; Greyling, Mike; Kolbe-Alexander, Tracy; Noach, Adam; Conradie, Jaco; Nossel, Craig; Borresen, Jill; Gaziano, Thomas
2011-01-01
A retrospective, longitudinal study examined changes in participation in fitness-related activities and hospital claims over 5 years amongst members of an incentivized health promotion program offered by a private health insurer. The design was a 3-year retrospective observational analysis measuring gym visits and participation in documented fitness-related activities, probability of hospital admission, and associated costs of admission. The setting was a South African private health plan, Discovery Health, and the Vitality health promotion program. Subjects were 304,054 adult members of the Discovery medical plan, 192,467 of whom registered for the health promotion program and 111,587 members who were not on the program. Members were incentivized for fitness-related activities on the basis of the frequency of gym visits. Measures were changes in electronically documented gym visits and registered participation in fitness-related activities over 3 years, and measures of association between changes in participation (years 1-3) and the subsequent probability and costs of hospital admission (years 4-5). Hospital admissions and associated costs were based on claims extracted from the health insurer database. The probability of a claim was modeled using linear logistic regression, and the costs of claims were examined using general linear models. Propensity scores were estimated and included age, gender, registration for chronic disease benefits, plan type, and the presence of a claim during the transition period, and these were used as covariates in the final model. There was a significant decrease in the prevalence of inactive members (76% to 68%) over 5 years. Members who remained highly active (years 1-3) had a lower probability (p < .05) of hospital admission in years 4 to 5 (20.7%) compared with those who remained inactive (22.2%). The odds of admission were 13% lower for two additional gym visits per week (odds ratio, .87; 95% confidence interval [CI], .801-.949). We observed an increase in fitness-related activities over time amongst members of this incentive-based health promotion program, which was associated with a lower probability of hospital admission and lower hospital costs in the subsequent 2 years. Copyright © 2011 by American Journal of Health Promotion, Inc.
Meta-analysis in Stata using gllamm.
Bagos, Pantelis G
2015-12-01
There are several user-written programs for performing meta-analysis in Stata (Stata Statistical Software: College Station, TX: Stata Corp LP). These include metan, metareg, mvmeta, and glst. However, there are several cases for which these programs do not suffice. For instance, there is no software for performing univariate meta-analysis with correlated estimates, for multilevel or hierarchical meta-analysis, or for meta-analysis of longitudinal data. In this work, we show with practical applications that many disparate models, including but not limited to the ones mentioned earlier, can be fitted using gllamm. The software is very versatile and can handle a wide variety of models with applications in a wide range of disciplines. The method presented here takes advantage of these modeling capabilities and makes use of appropriate transformations, based on the Cholesky decomposition of the inverse of the covariance matrix, known as generalized least squares, in order to handle correlated data. The models described earlier can be thought of as special instances of a general linear mixed-model formulation, but to the author's knowledge, a general exposition in order to incorporate all the available models for meta-analysis as special cases and the instructions to fit them in Stata has not been presented so far. Source code is available at http://www.compgen.org/tools/gllamm. Copyright © 2015 John Wiley & Sons, Ltd.
Merecz, Dorota; Andysz, Aleksandra
2012-06-01
The Person-Environment fit (P-E fit) paradigm seems to be especially useful in explaining phenomena related to work attitudes and occupational health. The study explores the relationship between a specific facet of P-E fit, Person-Organization fit (P-O fit), and health. Research was conducted on a random sample of 600 employees. The Person-Organization Fit Questionnaire was used to assess the level of Person-Organization fit; mental health status was measured by the General Health Questionnaire (GHQ-28); and items from the Work Ability Index allowed for evaluation of somatic health. Data were analyzed using nonparametric statistical tests. The predictive value of P-O fit for various aspects of health was checked by means of linear regression models. A comparison between the groups distinguished on the basis of their somatic and mental health indicators showed significant differences in the level of overall P-O fit (χ(2) = 23.178; p < 0.001) and its subdimensions: complementary fit (χ(2) = 29.272; p < 0.001), supplementary fit (χ(2) = 23.059; p < 0.001), and identification with the organization (χ(2) = 8.688; p = 0.034). From the perspective of mental health, supplementary P-O fit seems to be important for men's well-being and explains almost 9% of the variance in GHQ-28 scores, while in women, complementary fit (5% of explained variance in women's GHQ scores) and identification with the organization (1% of explained variance in GHQ scores) are significant predictors of mental well-being. Interestingly, better supplementary and complementary fit are related to better mental health, but stronger identification with the organization in women has an adverse effect on their mental health. The results show that obtaining an optimal level of P-O fit can be beneficial not only for the organization (e.g. lower turnover, better work effectiveness and commitment), but also for the employees themselves. An optimal level of P-O fit can be considered a factor maintaining workers' health. However, prospective research is needed to confirm the results obtained in this exploratory study.
Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb
2014-10-01
Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX
2015-06-15
Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94 – 19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
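The traditional fitting step mentioned in the methods, estimating alpha and beta of the linear quadratic model, becomes linear in the parameters once survival is log-transformed. The clonogenic survival data below are invented.

    import numpy as np

    D = np.array([0, 1, 2, 4, 6, 8], float)               # dose (Gy)
    S = np.array([1.0, 0.75, 0.52, 0.21, 0.06, 0.013])    # surviving fraction

    # Linear-quadratic model: S = exp(-(alpha*D + beta*D^2)), so
    # -ln(S) = alpha*D + beta*D^2 is linear in alpha and beta.
    A = np.column_stack([D, D ** 2])
    alpha, beta = np.linalg.lstsq(A, -np.log(S), rcond=None)[0]
    print("alpha:", alpha, " beta:", beta, " alpha/beta:", alpha / beta)

Fitting this at each LETd column and plotting alpha and beta against LETd reproduces the trend analysis the abstract describes.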
Reynolds, Matthew R
2013-03-01
The linear loadings of intelligence test composite scores on a general factor (g) have been investigated recently in factor analytic studies. Spearman's law of diminishing returns (SLODR), however, implies that the g loadings of test scores likely decrease in magnitude as g increases, or they are nonlinear. The purpose of this study was to (a) investigate whether the g loadings of composite scores from the Differential Ability Scales (2nd ed.) (DAS-II, C. D. Elliott, 2007a, Differential Ability Scales (2nd ed.). San Antonio, TX: Pearson) were nonlinear and (b) if they were nonlinear, to compare them with linear g loadings to demonstrate how SLODR alters the interpretation of these loadings. Linear and nonlinear confirmatory factor analysis (CFA) models were used to model Nonverbal Reasoning, Verbal Ability, Visual Spatial Ability, Working Memory, and Processing Speed composite scores in four age groups (5-6, 7-8, 9-13, and 14-17) from the DAS-II norming sample. The nonlinear CFA models provided better fit to the data than did the linear models. In support of SLODR, estimates obtained from the nonlinear CFAs indicated that g loadings decreased as g level increased. The nonlinear portion for the nonverbal reasoning loading, however, was not statistically significant across the age groups. Knowledge of general ability level informs composite score interpretation because g is less likely to produce differences, or is measured less, in those scores at higher g levels. One implication is that it may be more important to examine the pattern of specific abilities at higher general ability levels. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Mowlavi, Ali Asghar; Fornasier, Maria Rossa; Mirzaei, Mohammd; Bregant, Paola; de Denaro, Mario
2014-10-01
The beta and gamma absorbed fractions in organs and tissues are key factors in radionuclide internal dosimetry based on the Medical Internal Radiation Dose (MIRD) approach. The aim of this study is to find suitable analytical functions for the beta and gamma absorbed fractions in spherical and ellipsoidal volumes with a uniform distribution of the iodine-131 radionuclide. The MCNPX code has been used to calculate the energy absorption from beta and gamma rays of iodine-131 uniformly distributed inside different ellipsoids and spheres, and the absorbed fractions have then been evaluated. We have found the fit parameters of a suitable analytical function for the beta absorbed fraction, depending on a generalized radius for the ellipsoid based on the radius of a sphere, and a linear fit function for the gamma absorbed fraction. The analytical functions obtained by fitting the Monte Carlo data can be used to obtain the absorbed fractions of iodine-131 beta and gamma rays for any volume of the thyroid lobe. Moreover, our results for the spheres are in good agreement with the results of MIRD and other published studies.
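For the gamma component, the abstract reports a linear fit function; a sketch of such a fit, with invented (generalized radius, absorbed fraction) pairs standing in for the MCNPX results.

```python
import numpy as np

# Illustrative Monte Carlo results: gamma absorbed fraction phi versus a
# generalized radius R (cm); placeholder values, not the paper's data.
R = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
phi = np.array([0.021, 0.038, 0.056, 0.071, 0.089, 0.105])

# Linear fit phi ≈ a*R + b, as used for the gamma absorbed fraction
(a, b), cov = np.polyfit(R, phi, 1, cov=True)
print(f"phi(R) ≈ {a:.4f}*R + {b:.4f}")
```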
PGOPHER: A program for simulating rotational, vibrational and electronic spectra
NASA Astrophysics Data System (ADS)
Western, Colin M.
2017-01-01
The PGOPHER program is a general purpose program for simulating and fitting molecular spectra, particularly the rotational structure. The current version can handle linear molecules, symmetric tops and asymmetric tops and many possible transitions, both allowed and forbidden, including multiphoton and Raman spectra in addition to the common electric dipole absorptions. Many different interactions can be included in the calculation, including those arising from electron and nuclear spin, and external electric and magnetic fields. Multiple states and interactions between them can also be accounted for, limited only by available memory. Fitting of experimental data can be to line positions (in many common formats), intensities or band contours and the parameters determined can be level populations as well as rotational constants. PGOPHER is provided with a powerful and flexible graphical user interface to simplify many of the tasks required in simulating, understanding and fitting molecular spectra, including Fortrat diagrams and energy level plots in addition to overlaying experimental and simulated spectra. The program is open source, and can be compiled with open source tools. This paper provides a formal description of the operation of version 9.1.
Golestanirad, Laleh; Keil, Boris; Angelone, Leonardo M.; Bonmassar, Giorgio; Mareyam, Azma; Wald, Lawrence L.
2016-01-01
Purpose MRI of patients with deep brain stimulation (DBS) implants is strictly limited due to safety concerns, including high levels of local specific absorption rate (SAR) of radiofrequency (RF) fields near the implant and related RF-induced heating. This study demonstrates the feasibility of using a rotating linearly polarized birdcage transmitter and a 32-channel close-fit receive array to significantly reduce local SAR in MRI of DBS patients. Methods Electromagnetic simulations and phantom experiments were performed with generic DBS lead geometries and implantation paths. The technique was based on mechanically rotating a linear birdcage transmitter to align its zero electric-field region with the implant while using a close-fit receive array to significantly increase signal to noise ratio of the images. Results It was found that the zero electric-field region of the transmitter is thick enough at 1.5 Tesla to encompass DBS lead trajectories with wire segments that were up to 30 degrees out of plane, as well as leads with looped segments. Moreover, SAR reduction was not sensitive to tissue properties, and insertion of a close-fit 32-channel receive array did not degrade the SAR reduction performance. Conclusion The ensemble of rotating linear birdcage and 32-channel close-fit receive array introduces a promising technology for future improvement of imaging in patients with DBS implants. PMID:27059266
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes, collected on 523 dairy herds from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively, while the Wood, Dhanoa and Sikka mixed models provided the best fit in third-parity buffaloes. Evaluation of first-, second- and third-lactation features showed that all models, except the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at its minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
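A sketch of fitting one of the compared curves, Wood's function y(t) = a·t^b·exp(-ct), to monthly FPR records; here it is fitted to one illustrative lactation with scipy rather than PROC NLMIXED, and without the random effects of the mixed-model formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y(t) = a * t**b * exp(-c*t)."""
    return a * t**b * np.exp(-c * t)

months = np.arange(1, 11, dtype=float)  # test month within lactation
fpr = np.array([1.30, 1.18, 1.10, 1.06, 1.05, 1.06, 1.08, 1.11, 1.15, 1.20])

# Negative b and c allow the U-shaped FPR curve (decline, minimum, rise)
(a, b, c), _ = curve_fit(wood, months, fpr, p0=[1.2, -0.05, -0.01], maxfev=10000)
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}")
```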
Ramo, Nicole L.; Puttlitz, Christian M.
2018-01-01
Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
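The single-history-variable idea can be illustrated with a one-term exponential (Prony-type) relaxation kernel, for which the hereditary integral admits an exact one-step recursion; the kernel and coefficients are assumptions for illustration, not the authors' strain-dependent formulation.

```python
import numpy as np

# Relaxation modulus G(t) = G_inf + G1 * exp(-t / tau)  (illustrative values)
G_inf, G1, tau = 1.0, 0.5, 0.2
dt = 0.01
t = np.arange(0, 2, dt)
strain = 0.1 * np.clip(t / 0.5, 0, 1)   # ramp-and-hold strain history

stress = np.zeros_like(t)
h = 0.0                                 # history state variable
decay = np.exp(-dt / tau)
for n in range(1, len(t)):
    d_eps = strain[n] - strain[n - 1]
    # One-step recursion: only the previous state h is needed, not the
    # full strain history (midpoint rule for the current increment)
    h = decay * h + G1 * np.exp(-dt / (2 * tau)) * d_eps
    stress[n] = G_inf * strain[n] + h
```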
Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong
2016-04-07
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
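GMMAT's per-variant step is a score test under a null model fitted once. A stripped-down sketch using ordinary logistic regression in place of the logistic mixed model (random effects omitted entirely, so this is not GMMAT itself, only the score-test mechanics), with simulated data:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 2)))        # null-model covariates
g = rng.binomial(2, 0.3, size=n).astype(float)      # genotype (0/1/2), no true effect
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([-0.5, 0.3, -0.2])))))

fit = sm.Logit(y, X).fit(disp=0)                    # null model: no genotype
p = fit.predict(X)
W = p * (1 - p)

U = g @ (y - p)                                     # score for the variant
# Efficient information: adjust g for the null-model covariates
WX = X * W[:, None]
V = (g * W) @ g - (g @ WX) @ np.linalg.solve(X.T @ WX, WX.T @ g)
print("p-value:", stats.chi2.sf(U**2 / V, df=1))
```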
Molina, J; Sued, M; Valdora, M
2018-06-05
Generalized linear models are often assumed for propensity scores, which are used to compute inverse probability weighted (IPW) estimators. To derive the asymptotic properties of IPW estimators, the propensity score is supposed to be bounded away from zero. This condition is known in the literature as strict positivity (or the positivity assumption), and, in practice, when it does not hold, IPW estimators are very unstable and have large variability. Although strict positivity is often assumed, it is not upheld when some of the covariates are unbounded. In real data sets, a data-generating process that violates the positivity assumption may lead to wrong inference because of the inaccuracy in the estimations. In this work, we attempt to reconcile the strict positivity condition with the theory of generalized linear models by incorporating an extra parameter, which results in an explicit lower bound for the propensity score. An additional parameter is added to fulfil the overlap assumption in the causal framework. Copyright © 2018 John Wiley & Sons, Ltd.
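A sketch of the idea with the extra parameter replaced by a fixed lower bound δ (the paper treats it as an additional parameter of the model); the data, the bound, and the estimator choice (a Hajek-type IPW mean) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
a = rng.binomial(1, 0.05 + 0.95 / (1 + np.exp(-1.5 * x)))  # treatment indicator
y = 2.0 + 1.0 * x + 0.5 * a + rng.normal(size=n)           # outcome

DELTA = 0.05  # explicit lower bound on the propensity score

def propensity(beta):
    """Logistic propensity with a built-in floor, so p >= DELTA always."""
    return DELTA + (1 - DELTA) / (1 + np.exp(-(beta[0] + beta[1] * x)))

def negloglik(beta):
    p = propensity(beta)
    return -np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))

beta_hat = minimize(negloglik, x0=np.zeros(2)).x
p_hat = propensity(beta_hat)
ipw_mean = np.sum(a * y / p_hat) / np.sum(a / p_hat)  # Hajek IPW mean of Y(1)
print(ipw_mean)
```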
Inversion for the driving forces of plate tectonics
NASA Technical Reports Server (NTRS)
Richardson, R. M.
1983-01-01
Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.
Huang, Yi-Fei; Gulko, Brad; Siepel, Adam
2017-04-01
Many genetic variants that influence phenotypes of interest are located outside of protein-coding genes, yet existing methods for identifying such variants have poor predictive power. Here we introduce a new computational method, called LINSIGHT, that substantially improves the prediction of noncoding nucleotide sites at which mutations are likely to have deleterious fitness consequences, and which, therefore, are likely to be phenotypically important. LINSIGHT combines a generalized linear model for functional genomic data with a probabilistic model of molecular evolution. The method is fast and highly scalable, enabling it to exploit the 'big data' available in modern genomics. We show that LINSIGHT outperforms the best available methods in identifying human noncoding variants associated with inherited diseases. In addition, we apply LINSIGHT to an atlas of human enhancers and show that the fitness consequences at enhancers depend on cell type, tissue specificity, and constraints at associated promoters.
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-06-01
This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application to the instrumental analysis of nutraceuticals (specifically, fluorescence quenching of the merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: ordinary least squares (OLS), LMS and IRLS. Linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully assessed for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and in all regression parameters for both methods and both conditions. Under the ideal linearity condition, the intercept and slope changed insignificantly, whereas a dramatic change was observed in the intercept under the non-ideal linearity condition. Under both linearity conditions, LOD and LOQ values after robust regression line fitting of the data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be extended to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
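LMS requires a combinatorial search over subsets, so only the IRLS half is sketched here, with Huber weights as an assumed weighting function; the calibration data are invented and include one gross outlier.

```python
import numpy as np

def irls_line(x, y, c=1.345, n_iter=50):
    """IRLS straight-line fit with Huber weights and a MAD-based robust scale."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y, dtype=float)
    beta = np.zeros(2)
    for _ in range(n_iter):
        # Weighted least squares with the current weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)  # Huber weights downweight outliers
    return beta  # [intercept, slope]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.0, 6.2, 8.1, 25.0, 12.1])  # one gross outlier at x=5
print("OLS :", np.polyfit(x, y, 1))
print("IRLS:", irls_line(x, y)[::-1])  # reversed to match polyfit's (slope, intercept)
```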
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-01-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
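The classical continuous-time version of the procedure (the abstract's contribution is the discrete-time correction, which is not reproduced here) can be sketched as follows; the rate function and simulation are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative inhomogeneous Poisson model: rate lambda(t) in spikes/s
t = np.linspace(0, 10, 10001)
lam = 20 + 15 * np.sin(2 * np.pi * 0.5 * t)

# Simulate spikes by thinning a homogeneous process at the peak rate
lam_max = lam.max()
cand = np.cumsum(rng.exponential(1 / lam_max, size=2000))
cand = cand[cand < 10]
keep = rng.uniform(size=cand.size) < np.interp(cand, t, lam) / lam_max
spikes = cand[keep]

# Time rescaling: evaluate the cumulative intensity at each spike time;
# ISIs of the rescaled times are Exp(1) if the model is correct
Lam = np.interp(spikes, t, np.cumsum(lam) * (t[1] - t[0]))
rescaled_isi = np.diff(Lam)

print(stats.kstest(rescaled_isi, "expon"))  # KS test against unit exponential
```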
Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution
NASA Astrophysics Data System (ADS)
Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.
2017-09-01
In this study, the Tweedie distribution was used to fit monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether the distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals, but the Poisson-gamma distribution is more useful for describing two important features of rainfall pattern: the occurrences (dry months) and the amounts (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, a Tweedie generalised linear model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. The QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of stations on the west coast and in the midland than for those on the east coast of the Peninsula. This finding suggests that the best fitting distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We have shown that data simulated using the Tweedie distribution have a frequency histogram fairly similar to that of the actual data. Both the mean number of rainfall events and the mean amount of rain for a month were estimated simultaneously for the case where the Poisson-gamma distribution fits the data reasonably well. This work thus complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
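A sketch of an intercept-only Tweedie GLM of the kind described, using statsmodels; the variance power 1.6 and the simulated data are illustrative (variance powers strictly between 1 and 2 give the Poisson-gamma case, which accommodates exact zeros for dry months).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Illustrative monthly rainfall totals (mm), including exact zeros (dry months)
rain = np.concatenate([np.zeros(12), rng.gamma(2.0, 80.0, size=76)])

X = np.ones((rain.size, 1))  # intercept-only model, no explanatory variable
res = sm.GLM(rain, X, family=sm.families.Tweedie(var_power=1.6)).fit()
print("fitted mean (mm):", np.exp(res.params[0]))  # log link is the default
```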
Four theorems on the psychometric function.
May, Keith A; Solomon, Joshua A
2013-01-01
In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β(Noise) × β(Transducer), where β(Noise) is the β of the Weibull function that fits best to the cumulative noise distribution, and β(Transducer) depends on the transducer. We derive general expressions for β(Noise) and β(Transducer), from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding that, when d' ∝ (Δx)^b, β ≈ β(Noise) × b. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4-0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian.
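A sketch of the common practice the theorems analyze, fitting a Weibull psychometric function to 2AFC proportion-correct data; stimulus levels and proportions are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(dx, alpha, beta):
    """2AFC Weibull psychometric function, rising from 0.5 (chance) to 1."""
    return 0.5 + 0.5 * (1 - np.exp(-(dx / alpha) ** beta))

dx = np.array([0.02, 0.04, 0.08, 0.16, 0.32])        # stimulus differences
p_correct = np.array([0.53, 0.61, 0.74, 0.90, 0.98])  # observed proportions

(alpha, beta), _ = curve_fit(weibull_2afc, dx, p_correct, p0=[0.1, 2.0])
print(f"threshold alpha = {alpha:.3f}, slope beta = {beta:.2f}")
```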
D'Agostino, Emily M; Day, Sophia E; Konty, Kevin J; Larkin, Michael; Saha, Subir; Wyka, Katarzyna
2018-03-01
One-fifth to one-third of students in high poverty, urban school districts do not attend school regularly (missing ≥6 days/year). Fitness is shown to be associated with absenteeism, although this relationship may differ across poverty and gender subgroups. Six cohorts of New York City public school students were followed up from grades 5 to 8 during 2006/2007-2012/2013 (n = 349,381). Stratified three-level longitudinal generalized linear mixed models were used to test the association between changes in fitness and 1-year lagged child-specific days absent across gender and poverty. In girls attending schools in high/very high poverty areas, greater improvements in fitness the prior year were associated with greater reductions in absenteeism (P = .034). Relative to the reference group (>20% decrease in fitness composite percentile scores from the prior year), girls with a large increase in fitness (>20%) demonstrated 10.3% fewer days absent (incidence rate ratio [IRR] 95% confidence interval [CI]: 0.834, 0.964), followed by those who had a 10%-20% increase in fitness (9.2%; IRR 95% CI: 0.835, 0.987), no change (5.4%; IRR 95% CI: 0.887, 1.007), and a 10%-20% decrease in fitness (3.8%; IRR 95% CI: 0.885, 1.045). In girls attending schools in low/mid poverty areas, fitness and absenteeism also had an inverse relationship, but no clear trend emerged. In boys, fitness and absenteeism had an inverse relationship but was not significant in either poverty group. Fitness improvements may be more important to reducing absenteeism in high/very high poverty girls compared with low/mid poverty girls and both high/very high and low/mid poverty boys. Expanding school-based physical activity programs for youth particularly in high poverty neighborhoods may increase student attendance. Copyright © 2018 Elsevier Inc. All rights reserved.
A Kp-based model of auroral boundaries
NASA Astrophysics Data System (ADS)
Carbary, James F.
2005-10-01
The auroral oval can serve as both a representation and a prediction of space weather on a global scale, so a competent model of the oval as a function of a geomagnetic index could conveniently appraise space weather itself. A simple model of the auroral boundaries is constructed by binning several months of images from the Polar Ultraviolet Imager by Kp index. The pixel intensities are first averaged into magnetic latitude-magnetic local time (MLAT-MLT) bins, and intensity profiles are then derived for each Kp level at 1 hour intervals of MLT. After background correction, the boundary latitudes of each profile are determined at a threshold of 4 photons cm⁻² s⁻¹. The peak locations and peak intensities are also found. The boundary and peak locations vary linearly with Kp index, and the coefficients of the linear fits are tabulated for each MLT. As a general rule of thumb, the UV intensity peak shifts 1° in magnetic latitude for each increment in Kp. The fits are surprisingly good for Kp < 6 but begin to deteriorate at high Kp because of auroral boundary irregularities and poor statistics. The statistical model allows calculation of the auroral boundaries at most MLTs as a function of Kp and can serve as an approximation to the shape and extent of the statistical oval.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.
2003-01-01
This paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
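A typical SFC key for such reorderings is the Morton (Z-order) code, formed by interleaving the bits of a cell's integer coordinates; this 2-D sketch is illustrative of SFC keys generally, not of the specific curve used in the paper.

```python
def part1by1(n: int) -> int:
    """Spread the low 16 bits of n so that bit k moves to position 2k."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(i: int, j: int) -> int:
    """Morton (Z-order) key for integer cell coordinates (i, j)."""
    return part1by1(i) | (part1by1(j) << 1)

# Sorting cells by their SFC key keeps spatial neighbors close in the
# ordering, which is what makes single-pass partitioning possible.
cells = [(3, 5), (0, 0), (7, 2), (4, 4)]
print(sorted(cells, key=lambda c: morton2d(*c)))
```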
Analyses of Field Test Data at the Atucha-1 Spent Fuel Pools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, S.
A field test was conducted at the Atucha-1 spent nuclear fuel pools to validate a software package for gross defect detection that is used in conjunction with the inspection tool, Spent Fuel Neutron Counter (SFNC). A set of measurements was taken with the SFNC and the software predictions were compared with these data and analyzed. The data spanned a wide range of cooling times and a set of burnup levels leading to count rates from the several hundreds to around twenty per second. The current calibration in the software using linear fitting required the use of multiple calibration factors to cover the entire range of count rates recorded. The solution to this was to use power regression data fitting to normalize the predicted response and derive one calibration factor that can be applied to the entire set of data. The resulting comparisons between the predicted and measured responses were generally good and provided a quantitative method of detecting missing fuel in virtually all situations. Since the current version of the software uses the linear calibration method, it would need to be updated with the new power regression method to make it more user-friendly for real time verification and fieldable for the range of responses that will be encountered.
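Power regression of the kind described reduces to a straight-line fit in log-log space, yielding a single calibration relation; the count rates below are invented placeholders.

```python
import numpy as np

# Illustrative predicted vs. measured count rates (counts/s); a power law
# y = a * x**b is a straight line in log-log coordinates.
predicted = np.array([25.0, 60.0, 140.0, 320.0, 700.0])
measured = np.array([21.0, 55.0, 150.0, 300.0, 760.0])

b, log_a = np.polyfit(np.log(predicted), np.log(measured), 1)
a = np.exp(log_a)
print(f"measured ≈ {a:.3f} * predicted**{b:.3f}")

# One calibration relation then applies across the whole count-rate range
calibrated = a * predicted**b
```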
NASA Astrophysics Data System (ADS)
Mert, Bayram Ali; Dag, Ahmet
2017-12-01
In this study, a practical and educational geostatistical program (JeoStat) was developed, and an example analysis of porosity parameter distribution using oilfield data is presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to experimental models using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and cross-validation testing for validation of the fitted theoretical model. All results obtained by the analysis, as well as graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive. The numerical values of any point in a map can be inspected using the mouse and text boxes. The program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
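A sketch of fitting one of the listed models, the spherical variogram, to an experimental variogram; JeoStat does this interactively with a mouse, and the lag/semivariance values here are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical variogram model with nugget, sill, and range a."""
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

# Illustrative experimental variogram: lag distance vs. semivariance
lags = np.array([10, 20, 30, 40, 60, 80, 100.0])
gamma = np.array([0.8, 1.5, 2.1, 2.5, 2.9, 3.0, 3.05])

(nugget, sill, a), _ = curve_fit(spherical, lags, gamma, p0=[0.5, 3.0, 70.0])
print(f"nugget={nugget:.2f}, sill={sill:.2f}, range={a:.1f}")
```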
Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O
1997-10-13
In the present work, a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model, OD = b0 + b1·log(titer), and a nonlinear model, log(titer) = α·OD^β, similar to the Dopotka-Giesendorf model. The two proposed models adequately fit the dependence between optical density values at a single-point dilution and titers obtained by the end-point dilution method (EPDM). Nevertheless, the nonlinear model fits the experimental data better, according to residuals analysis. Classical EPDM was compared with the new single-point dilution method (SPDM) using both models. The best correlation between titers calculated using the models and titers obtained by EPDM was achieved with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model, and this reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents and is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
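A sketch of fitting both models to paired (OD, titer) data; the values are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative paired data: OD at a single dilution vs. end-point titer
od = np.array([0.35, 0.62, 0.95, 1.30, 1.72, 2.10])
log_titer = np.log10([200, 800, 3200, 12800, 51200, 204800])

# Linear model: OD = b0 + b1 * log(titer)
b1, b0 = np.polyfit(log_titer, od, 1)

# Nonlinear model: log(titer) = alpha * OD**beta
(alpha, beta), _ = curve_fit(lambda x, a, b: a * x**b, od, log_titer, p0=[3.0, 0.5])

print(f"linear:    OD = {b0:.2f} + {b1:.2f}*log(titer)")
print(f"nonlinear: log(titer) = {alpha:.2f}*OD^{beta:.2f}")
```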
Measuring the intangibles: a metrics for the economic complexity of countries and products.
Cristelli, Matthieu; Gabrielli, Andrea; Tacchella, Andrea; Caldarelli, Guido; Pietronero, Luciano
2013-01-01
We investigate a recent methodology we have proposed to extract valuable information on the competitiveness of countries and complexity of products from trade data. Standard economic theories predict a high level of specialization of countries in specific industrial sectors. However, a direct analysis of the official databases of exported products by all countries shows that the actual situation is very different. Countries commonly considered as developed ones are extremely diversified, exporting a large variety of products from very simple to very complex. At the same time countries generally considered as less developed export only the products also exported by the majority of countries. This situation calls for the introduction of a non-monetary and non-income-based measure for country economy complexity which uncovers the hidden potential for development and growth. The statistical approach we present here consists of coupled non-linear maps relating the competitiveness/fitness of countries to the complexity of their products. The fixed point of this transformation defines a metrics for the fitness of countries and the complexity of products. We argue that the key point to properly extract the economic information is the non-linearity of the map which is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We present a detailed comparison of the results of this approach directly with those of the Method of Reflections by Hidalgo and Hausmann, showing the better performance of our method and a more solid economic, scientific and consistent foundation.
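A minimal sketch of the coupled non-linear maps, in the commonly stated fitness-complexity form with per-iteration normalization: country fitness sums the complexities of exported products, while product complexity is bounded by the fitness of the least fit exporters through a harmonic-mean-like term. The binary country-product export matrix is a toy example.

```python
import numpy as np

# M[c, p] = 1 if country c exports product p (toy example)
M = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)

F = np.ones(M.shape[0])  # country fitness
Q = np.ones(M.shape[1])  # product complexity

for _ in range(200):
    F_new = M @ Q                    # fitness: sum of exported-product complexities
    Q_new = 1.0 / (M.T @ (1.0 / F))  # complexity bounded by least fit exporters
    F = F_new / F_new.mean()         # normalize at every iteration
    Q = Q_new / Q_new.mean()

print(np.round(F, 3), np.round(Q, 3))  # fixed point defines the metrics
```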
Watanabe, Hiroyuki; Miyazaki, Hiroyasu
2006-01-01
Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or mask the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (Loess Smoother) to adjust the QT interval for differences in heart rate, with improved fit over a wide range of heart rates. 240 sets of (QT, RR) observations collected from each of 8 conscious and non-treated beagle dogs were used as the material for investigation. The fit of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction models) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Although the parametric models did not fit, the nonparametric regression model improved the fit at both fast and slow heart rates. The nonparametric regression model is more flexible than the parametric methods. The fit of the linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
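A sketch of a loess/lowess adjustment of QT for RR using statsmodels; the (QT, RR) data are simulated stand-ins for the beagle records, and the smoothing fraction is an arbitrary choice.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
rr = rng.uniform(0.3, 1.2, 240)                   # RR intervals (s), illustrative
qt = 0.25 * rr**0.33 + rng.normal(0, 0.008, 240)  # QT (s) with noise

fitted = lowess(qt, rr, frac=0.5)                 # sorted (rr, fitted qt) pairs
qt_hat = np.interp(rr, fitted[:, 0], fitted[:, 1])
residuals = qt - qt_hat                           # heart-rate-adjusted QT deviations
print(np.std(residuals))
```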
Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L
2014-10-01
Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
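A sketch of the isoconversion logic under the exponential (Arrhenius) form: estimate the time to reach the specification limit at each accelerated temperature, fit ln(t) against 1/T, and extrapolate to 25 °C. All values are invented.

```python
import numpy as np

R = 8.314  # J/(mol*K)

# Illustrative isoconversion times (days) to reach a fixed degradant limit
T_celsius = np.array([50.0, 60.0, 70.0])
t_iso = np.array([120.0, 45.0, 18.0])

# Since t_iso is inversely proportional to the rate constant,
# ln(t_iso) = const + Ea/(R*T): a straight line in 1/T
slope, intercept = np.polyfit(1.0 / (T_celsius + 273.15), np.log(t_iso), 1)
Ea = slope * R
t_25C = np.exp(intercept + slope / 298.15)
print(f"Ea ≈ {Ea/1000:.0f} kJ/mol, predicted time to limit at 25 °C ≈ {t_25C:.0f} days")
```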
[Age index and an interpretation of survivorship curves (author's transl)].
Lohmann, W
1977-01-01
Clinical investigations showed that the age dependences of physiological functions do not show, as generally assumed, a linear increase with age, but an exponential one. Considering this result, one can easily interpret the survivorship curve of a population (Gompertz plot). The only requirement is that the probability of death (death rate) be proportional to a function of ageing given by μ(t) = μ0·exp(αt). When survivorship curves resulting from annual death statistics are fitted with suitable parameters, the resulting α values are in agreement with clinical data.
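The survivorship curve implied by this death rate follows from the standard survival relation; a short worked derivation (standard actuarial algebra, not spelled out in the abstract):

```latex
% Hazard (force of mortality): \mu(t) = \mu_0 e^{\alpha t}
S(t) = \exp\!\left(-\int_0^t \mu(u)\,du\right)
     = \exp\!\left(-\int_0^t \mu_0 e^{\alpha u}\,du\right)
     = \exp\!\left[-\frac{\mu_0}{\alpha}\left(e^{\alpha t}-1\right)\right]
```

which is the Gompertz survivorship curve fitted to the annual death statistics.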
Advanced Statistics for Exotic Animal Practitioners.
Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G
2017-09-01
Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.
Zonal flow as pattern formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Jeffrey B.; Krommes, John A.
2013-10-15
Zonal flows are well known to arise spontaneously out of turbulence. We show that for statistically averaged equations of the stochastically forced generalized Hasegawa-Mima model, steady-state zonal flows, and inhomogeneous turbulence fit into the framework of pattern formation. There are many implications. First, the wavelength of the zonal flows is not unique. Indeed, in an idealized, infinite system, any wavelength within a certain continuous band corresponds to a solution. Second, of these wavelengths, only those within a smaller subband are linearly stable. Unstable wavelengths must evolve to reach a stable wavelength; this process manifests as merging jets.
Understanding Solubility through Excel Spreadsheets
NASA Astrophysics Data System (ADS)
Brown, Pamela
2001-02-01
This article describes assignments related to the solubility of inorganic salts that can be given in an introductory general chemistry course. Le Châtelier's principle, solubility, unit conversion, and thermodynamics are tied together to calculate heats of solution by two methods: heats of formation and an application of the van't Hoff equation. These assignments address the need for math, graphing, and computer skills in the chemical technology program by developing skill in the use of Microsoft Excel to prepare spreadsheets and graphs and to perform linear and nonlinear curve-fitting. Background information on the value of understanding and predicting solubility is provided.
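The van't Hoff part of the assignment translates directly into a linear fit of ln K against 1/T; a sketch in Python rather than Excel, with invented solubility equilibrium constants.

```python
import numpy as np

R = 8.314  # J/(mol*K)

T = np.array([283.0, 298.0, 313.0, 328.0])  # temperature (K)
K = np.array([0.08, 0.15, 0.27, 0.45])      # solubility equilibrium constant

# van't Hoff: ln K = -dH/(R*T) + dS/R, so the slope vs. 1/T gives -dH/R
slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -slope * R  # heat of solution (J/mol); positive here, i.e. endothermic
print(f"ΔH_soln ≈ {dH/1000:.1f} kJ/mol")
```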
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signal, but they generally create systematic biases in the observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the case of the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies of ongoing direct-imaging surveys.
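For orientation, a bare-bones sketch of the classical KLIP step the abstract builds on: project the science frame onto the leading Karhunen-Loeve modes of a reference stack and subtract. This shows the baseline algorithm only, not the paper's perturbation-based forward modeling; array shapes and mode count are arbitrary.

```python
import numpy as np

def klip_subtract(science, references, n_modes=5):
    """Project a science frame onto the leading KL modes (principal
    components) of the reference stack and subtract the projection."""
    R = references - references.mean(axis=1, keepdims=True)
    s = science - science.mean()
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    modes = Vt[:n_modes]               # orthonormal modes in pixel space
    return s - (modes @ s) @ modes     # KLIP residual

rng = np.random.default_rng(0)
refs = rng.normal(size=(30, 1024))     # 30 reference frames, 1024 pixels each
sci = rng.normal(size=1024) + 0.1      # science frame with a faint offset "signal"
residual = klip_subtract(sci, refs, n_modes=10)
```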
A policy-capturing study of the simultaneous effects of fit with jobs, groups, and organizations.
Kristof-Brown, Amy L; Jansen, Karen J; Colbert, Amy E
2002-10-01
The authors report an experimental policy-capturing study that examines the simultaneous impact of person-job (PJ), person-group (PG), and person-organization (PO) fit on work satisfaction. Using hierarchical linear modeling, the authors determined that all 3 types of fit had important, independent effects on satisfaction. Work experience explained systematic differences in how participants weighted each type of fit. Multiple interactions also showed participants used complex strategies for combining fit cues.
The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses an urgent topic in assuring the quality of technical products during the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. The geometrical modeling method for real detail elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The main deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: it becomes the maximum deviation corresponding to the material limit of the element, EI (the lower deviation) for the sizes of internal elements (holes) and es (the upper deviation) for the sizes of external elements (shafts). It is the maximum-material sizes that take part in the mating of shafts and holes and determine the type of fit.
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298
Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E
2013-06-01
Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.
ACCELERATED FITTING OF STELLAR SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ting, Yuan-Sen; Conroy, Charlie; Rix, Hans-Walter
2016-07-20
Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars' labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach—Convex Hull Adaptive Tessellation (chat)—which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of "gradient spectra" that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.
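The gradient-spectra expansion itself is compact: near a grid point with labels ℓ0, an approximate spectrum is the anchor spectrum plus the label offset dotted into per-wavelength flux derivatives. The arrays below are random stand-ins for real synthetic spectra; chat's convex-hull grid construction is not shown.

```python
import numpy as np

n_pix, n_labels = 5000, 8
rng = np.random.default_rng(0)

f0 = rng.normal(1.0, 0.05, n_pix)                 # anchor spectrum at labels l0
grad = rng.normal(0.0, 0.01, (n_labels, n_pix))   # gradient spectra: df/dlabel_i

def approx_spectrum(delta_labels):
    """First-order expansion: f(l0 + dl) ≈ f(l0) + dl · grad."""
    return f0 + delta_labels @ grad

# Small offsets from the grid point in each of the 8 labels (illustrative)
spec = approx_spectrum(np.array([0.5, 0.05, 0.1, 0.0, 0.0, 0.0, 0.0, 0.02]))
```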
An alternative to the breeder's and Lande's equations.
Houchmandzadeh, Bahram
2014-01-10
The breeder's equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Denoting the mean phenotype in the parental population, the selected parents, and the progeny by E(Z0), E(ZW), and E(Z1), this equation relates the response to selection R = E(Z1) - E(Z0) to the selection differential S = E(ZW) - E(Z0) through a simple proportionality relation R = h^2 S, where the heritability coefficient h^2 is a simple function of the genotype and environment factor variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean μ, an alternative, exact linear equation of the form R' = j^2 S' can be derived, regardless of the parental genotype distribution. Here R' = E(Z1) - μ and S' = E(ZW) - μ stand for the mean phenotypic lag with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient j^2 is a simple function of the selection function and environment factor variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotype in the selected and the offspring population and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande's equation Δz̄ = G P^(-1) S. The linearity coefficients of the alternative equation are not changed by Gaussian selection.
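A toy simulation of the distribution-free linearity claim, under simplifying assumptions added here for illustration (clonal inheritance of G in Z = G + E, environment redrawn each generation, Gaussian selection weights): the ratio R'/S' comes out the same for very different parental genotype distributions and involves only the selection and environment variances, not the genotype variance.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sig_w, sig_e = 2.0, 1.0, 0.8   # fitness mean/width, environmental std

def ratio(G):
    """R'/S' for a clonal toy model: select on Z = G + E, offspring keep G."""
    Z0 = G + rng.normal(0, sig_e, G.size)
    w = np.exp(-(Z0 - mu) ** 2 / (2 * sig_w ** 2))  # Gaussian fitness weights
    S_prime = np.average(Z0, weights=w) - mu         # selected-parent lag
    R_prime = np.average(G, weights=w) - mu          # offspring lag (fresh E averages out)
    return R_prime / S_prime

n = 2_000_000
for G in (rng.normal(0, 1, n),           # Gaussian genotypes
          rng.exponential(1, n) - 1,     # skewed genotypes
          rng.choice([-1.0, 1.5], n)):   # bimodal genotypes
    print(round(ratio(G), 3))  # ≈ 1 + sig_e**2 / sig_w**2 in every case
```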
Model-Free CUSUM Methods for Person Fit
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Shi, Min
2009-01-01
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
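As a rough illustration of evidence-based window selection: the paper uses objective Bayesian linear regression, whereas this sketch substitutes a BIC approximation to the evidence, and all names are hypothetical:

```python
# Scan nested windows of I-V points near Isc; a lower BIC approximates higher
# Bayesian evidence for the straight-line model on that window.
import numpy as np

def bic_of_line(v, i):
    X = np.column_stack([np.ones_like(v), v])
    beta, rss, *_ = np.linalg.lstsq(X, i, rcond=None)
    rss = float(rss[0]) if rss.size else float(((i - X @ beta) ** 2).sum())
    n = len(i)
    return n * np.log(rss / n) + 3 * np.log(n)  # 3 params: intercept, slope, noise

def best_window(v, i, n_min=5):
    order = np.argsort(np.abs(v))             # points closest to V = 0 first
    v, i = v[order], i[order]
    scores = [(bic_of_line(v[:k], i[:k]), k) for k in range(n_min, len(v) + 1)]
    return min(scores)[1]                     # window size with the lowest BIC
```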
46 CFR 45.129 - Hull fittings: General.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 45.129 Hull fittings: General. Hull fittings must be securely mounted in the hull so as to avoid increases in hull stresses and must be protected from local damage caused by movement of equipment or cargo.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
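A minimal sketch of the core step, assuming an L1 error measure and scipy's linprog (the authors' exact LP formulation may differ): express the current state as a convex (barycentric) combination of library states, with explicit approximation error:

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(library, query):
    """library: (m, d) past state vectors; query: (d,) current state.
    Minimise sum |e| subject to query = library.T @ w + e, w >= 0, sum(w) = 1."""
    m, d = library.shape
    # Decision variables: w (m), e_plus (d), e_minus (d)
    c = np.concatenate([np.zeros(m), np.ones(2 * d)])
    A_eq = np.block([[library.T, np.eye(d), -np.eye(d)],
                     [np.ones((1, m)), np.zeros((1, 2 * d))]])
    b_eq = np.concatenate([query, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + 2 * d))
    return res.x[:m]

# A free-running prediction then maps the weights onto the library's
# successor states: next_state = successors.T @ w.
```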
Analysis technique for controlling system wavefront error with active/adaptive optics
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
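A minimal sketch of the underlying linear-optics correction step (names assumed, not from SigFit): with an influence matrix A giving WFE per unit actuator command, the commands minimising the residual WFE follow from least squares:

```python
import numpy as np

def actuator_commands(A, w, rcond=1e-6):
    """A: (n_wfe_samples, n_actuators) influence matrix; w: measured WFE."""
    c, *_ = np.linalg.lstsq(A, w, rcond=rcond)   # least-squares commands
    return c, w - A @ c                          # commands, residual WFE
```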
Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.
Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z
2017-03-01
A nonparametric model of smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant energy constraint. A linear model did not fit the data, whereas a second-order model fit accurately, so higher-order models were not required. The results show that the smooth muscle force response is not linearly related to stimulation power.
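A sketch of a Laguerre-expansion fit of a second-order kernel model (alpha and L are assumed illustrative values, not taken from the paper; the recursion follows the standard discrete-Laguerre form):

```python
import numpy as np
from itertools import combinations_with_replacement

def laguerre_outputs(x, alpha=0.5, L=5):
    # Recursive discrete-Laguerre filter bank driven by the stimulus x
    n = len(x)
    v = np.zeros((L, n))
    ra = np.sqrt(alpha)
    for t in range(n):
        prev0 = v[0, t - 1] if t else 0.0
        v[0, t] = ra * prev0 + np.sqrt(1 - alpha) * x[t]
        for j in range(1, L):
            prevj = v[j, t - 1] if t else 0.0
            prevj1 = v[j - 1, t - 1] if t else 0.0
            v[j, t] = ra * prevj + ra * v[j - 1, t] - prevj1
    return v

def fit_second_order(x, y, alpha=0.5, L=5):
    # Linear regression of the response on first- and second-order products
    v = laguerre_outputs(x, alpha, L)
    cols = [np.ones(len(x))] + list(v)
    cols += [v[j] * v[k] for j, k in combinations_with_replacement(range(L), 2)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef    # kernel-expansion coefficients, fitted force
```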
Inference of gene regulatory networks from genome-wide knockout fitness data
Wang, Liming; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.
2013-01-01
Motivation: Genome-wide fitness is an emerging type of high-throughput biological data generated for individual organisms by creating libraries of knockouts, subjecting them to broad ranges of environmental conditions, and measuring the resulting clone-specific fitnesses. Since fitness is an organism-scale measure of gene regulatory network behaviour, it may offer certain advantages when insights into such phenotypical and functional features are of primary interest over individual gene expression. Previous works have shown that genome-wide fitness data can be used to uncover novel gene regulatory interactions, when compared with results of more conventional gene expression analysis. Yet, to date, few algorithms have been proposed for systematically using genome-wide mutant fitness data for gene regulatory network inference. Results: In this article, we describe a model and propose an inference algorithm for using fitness data from knockout libraries to identify underlying gene regulatory networks. Unlike most prior methods, the presented approach captures not only the structural, but also the dynamical and non-linear nature of the biomolecular systems involved. A state-space model with a non-linear basis is used for dynamically describing gene regulatory networks. The network structure is then elucidated by estimating the unknown model parameters. An unscented Kalman filter is used to cope with the non-linearities introduced in the model, which also enables the algorithm to run in on-line mode for practical use. Here, we demonstrate that the algorithm provides satisfying results for both synthetic data and empirical measurements of the GAL network in the yeast Saccharomyces cerevisiae and the TyrR-LiuR network in the bacterium Shewanella oneidensis. Availability: MATLAB code and datasets are available to download at http://www.duke.edu/∼lw174/Fitness.zip and http://genomics.lbl.gov/supplemental/fitness-bioinf/ Contact: wangx@ee.columbia.edu or mssamoilov@lbl.gov Supplementary information: Supplementary data are available at Bioinformatics online PMID:23271269
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study was conducted to perform function approximation on sampling data and to document the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, are fitted and documented using BP (backpropagation) and RBF (radial basis function) networks. For comparison, the Arrhenius model, rarefaction model, and power function are tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. The BP and RBF networks can fit non-linear functions (sampling data) with specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train, and its results are less stable, compared with the RBF network. The RBF network requires more neurons to fit functions and generally cannot be used to extrapolate them. The mathematical function underlying sampling data can be fitted exactly using artificial neural network algorithms by adjusting the desired accuracy and maximum number of iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, similar to the observed richness.
Loprinzi, Paul D; Cardinal, Bradley J; Cardinal, Marita K; Corbin, Charles B
2018-03-01
The purpose of this study was to examine the associations between physical education (PE) and sports involvement with physical activity (PA), physical fitness, and beliefs about PA among a national sample of adolescents. Data from the National Health and Nutrition Examination Survey National Youth Fitness Survey were used, comprising 459 adolescents aged 12 to 15 years. Adolescents self-reported engagement in the above parameters; muscular fitness was objectively determined. Multivariable linear regression was used. Adolescents who had PE during school days had a higher enjoyment of participating in PE (β = 0.32; P = .01), engaged in more days of being physically active for ≥60 min/d (β = 1.02; P < .001), and performed the plank fitness test longer (β = 17.2; P = .002). Adolescents who played school sports reported that more PA was needed for good health (β = 0.23; P = .04), had a higher enjoyment of participating in PE (β = 0.31; P = .003), engaged in more days of being physically active for ≥60 min/d (β = 0.70; P = .01), performed more pull-ups (β = 2.33; P = .008), had a stronger grip strength (β = 2.5; P = .01), and performed the plank fitness test longer (β = 11.6; P = .04). Adolescents who had PE during school, who had more frequent and longer-lasting PE, and who played school sports generally had more accurate perceptions of the amount of PA needed for good health, had greater enjoyment of PE, were more physically active, and performed better on several muscular fitness-related tests. This underscores the importance of PE integration in schools and encouragement of school sports participation.
Bishai, David; Opuni, Marjorie
2009-01-01
Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best-fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox-transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is neither best modelled as logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
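A minimal sketch of the profile likelihood over λ for a Box-Cox-transformed linear time trend (synthetic IMR series; all values are illustrative assumptions):

```python
# lambda = 0 corresponds to the log model, lambda = 1 to the linear model.
import numpy as np

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

def profile_loglik(y, t, lam):
    z = boxcox(y, lam)
    X = np.column_stack([np.ones_like(t), t])
    resid = z - X @ np.linalg.lstsq(X, z, rcond=None)[0]
    sigma2 = (resid ** 2).mean()
    # Gaussian log-likelihood plus the Jacobian of the transform
    return -0.5 * len(y) * np.log(sigma2) + (lam - 1) * np.log(y).sum()

rng = np.random.default_rng(2)
years = np.arange(1900, 2000, dtype=float)
imr = 150 * np.exp(-0.03 * (years - 1900)) * rng.lognormal(0, 0.05, years.size)
lams = np.linspace(-1.0, 1.5, 101)
best = lams[np.argmax([profile_loglik(imr, years, l) for l in lams])]
print("best-fitting lambda:", round(best, 3))  # near 0 favours the log model
```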
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few have applied them to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A simulated background-correction experiment indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than that obtained before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield better quantitative results for Cu than those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
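A rough illustration of spline-based continuous-background estimation (the anchor-selection rule below, local minima pinned at the endpoints, is an assumption, not the paper's exact procedure):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmin

def spline_background(wavelength, intensity, order=25):
    # Sparse local minima approximate the smooth continuous background
    idx = argrelmin(intensity, order=order)[0]
    idx = np.r_[0, idx, len(intensity) - 1]      # pin the endpoints
    background = CubicSpline(wavelength[idx], intensity[idx])(wavelength)
    return intensity - background, background   # corrected spectrum, baseline
```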
Statistical Modeling of Fire Occurrence Using Data from the Tōhoku, Japan Earthquake and Tsunami.
Anderson, Dana; Davidson, Rachel A; Himoto, Keisuke; Scawthorn, Charles
2016-02-01
In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models: generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region within one or two; prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates. © 2015 Society for Risk Analysis.
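A minimal sketch of the recommended NB GLM for overdispersed ignition counts (synthetic covariates; variable names and the dispersion value are illustrative assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
exposure = rng.lognormal(0, 1, n)        # e.g., commercial exposure proxy
inundation = rng.uniform(0, 5, n)        # e.g., tsunami depth proxy
mu = np.exp(-2 + 0.8 * np.log(exposure + 1) + 0.4 * inundation)
ignitions = rng.poisson(mu * rng.gamma(2, 0.5, n))   # overdispersed counts

X = sm.add_constant(np.column_stack([np.log(exposure + 1), inundation]))
nb = sm.GLM(ignitions, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.summary())
```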
Real longitudinal data analysis for real people: building a good enough mixed model.
Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E
2010-02-20
Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity, and some very practical recommendations help to conquer that complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights the need for additional covariance and inference tools for mixed models, and for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
Chen, Chen; Xie, Yuanchang
2016-06-01
Annual Average Daily Traffic (AADT) is often considered a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor- and major-approach AADTs are considered. Three different dependent variables are modeled: total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor- to major-approach AADT has a varying impact on intersection safety and deserves further investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Quantum algorithm for linear regression
NASA Astrophysics Data System (ADS)
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we give a quantum algorithm that estimates the quality of the least-squares fit without computing its parameters explicitly. This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
NASA Astrophysics Data System (ADS)
van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.
2009-05-01
GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and a non-linear rheology are added. This is useful because all ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that non-linear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008, with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is found for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent (A) derived from a uniaxial stress experiment is varied between 3.3 x 10^-34, 3.3 x 10^-35 and 3.3 x 10^-36 Pa^-3 s^-1, the Newtonian viscosity η is varied between 1 x 10^21 and 3 x 10^21 Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared with GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation. It is found that a delay in glaciation is a better way to adjust ice models for composite rheology, as it increases geoid rates and improves the sea level fit at some sites.
Mixture Model for Determination of Shock Equation of State
2012-07-25
[Report fragment: Section III, Comparison with Experimental Data; A. Two-constituent composites; 1. Uranium-rhodium composite.] For the uranium-rhodium (U-Rh) composite, the bulk sound speed, C0, and S were determined from linear least-squares fits to the available data, as shown in Figs. 1(a) and 1(b) for uranium and rhodium. The model overpredicts the experimental data, with an average deviation dUs/Us of 0.05, shown in Fig. 2(b).
Monitoring techniques for high accuracy interference fit assembly processes
NASA Astrophysics Data System (ADS)
Liuti, A.; Vedugo, F. Rodriguez; Paone, N.; Ungaro, C.
2016-06-01
In the automotive industry, there are many assembly processes that require high geometric accuracy, in the micrometer range; generally, open-loop controllers cannot meet these requirements. This results in an increased defect rate and high production costs. This paper presents an experimental study of the interference fit process, aimed at evaluating the aspects that have the most impact on the uncertainty in the final positioning. The press-fitting process considered consists of a press machine operating with a piezoelectric actuator to press a plug into a sleeve. Plug and sleeve are designed and machined to obtain a known interference fit. Differential displacement and velocity of the plug with respect to the sleeve are measured by a fiber optic differential laser Doppler vibrometer. Different driving signals of the piezo actuator give insight into the differences between a linear and a pulsating press action. The paper highlights how the press-fit assembly process is characterized by two main phases: the first is an elastic deformation of the plug and sleeve, which produces a reversible displacement; the second is a sliding of the plug with respect to the sleeve, which results in an irreversible displacement and finally realizes the assembly. The simultaneous measurement of displacement and force made it possible to define characteristic features in the signals that identify the start of the irreversible movement. These indicators could be used to develop a control logic in a press assembly process.
The two-state dimer receptor model: a general model for receptor dimers.
Franco, Rafael; Casadó, Vicent; Mallol, Josefa; Ferrada, Carla; Ferré, Sergi; Fuxe, Kjell; Cortés, Antoni; Ciruela, Francisco; Lluis, Carmen; Canela, Enric I
2006-06-01
Nonlinear Scatchard plots are often found for agonist binding to G-protein-coupled receptors. Because there is clear evidence of receptor dimerization, these nonlinear Scatchard plots can reflect cooperativity on agonist binding to the two binding sites in the dimer. According to this, the "two-state dimer receptor model" has been recently derived. In this article, the performance of the model has been analyzed in fitting data of agonist binding to A(1) adenosine receptors, which are an example of receptor displaying concave downward Scatchard plots. Analysis of agonist/antagonist competition data for dopamine D(1) receptors using the two-state dimer receptor model has also been performed. Although fitting to the two-state dimer receptor model was similar to the fitting to the "two-independent-site receptor model", the former is simpler, and a discrimination test selects the two-state dimer receptor model as the best. This model was also very robust in fitting data of estrogen binding to the estrogen receptor, for which Scatchard plots are concave upward. On the one hand, the model would predict the already demonstrated existence of estrogen receptor dimers. On the other hand, the model would predict that concave upward Scatchard plots reflect positive cooperativity, which can be neither predicted nor explained by assuming the existence of two different affinity states. In summary, the two-state dimer receptor model is good for fitting data of binding to dimeric receptors displaying either linear, concave upward, or concave downward Scatchard plots.
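For illustration only, a sketch of how such competing binding models can be fitted and discriminated (the dimer equation below is one common macroscopic form, assumed here rather than taken from the article, and an information criterion stands in for the article's discrimination test):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_sites(L, bmax1, kd1, bmax2, kd2):
    # Two-independent-site receptor model
    return bmax1 * L / (kd1 + L) + bmax2 * L / (kd2 + L)

def dimer(L, rt, kd1, kd2):
    # Assumed macroscopic dimer form; kd2 < 4*kd1 signals positive cooperativity
    return rt * (kd2 * L + 2 * L ** 2) / (2 * (kd1 * kd2 + kd2 * L + L ** 2))

def aic(y, yhat, k):
    n = len(y)
    return n * np.log(((y - yhat) ** 2).sum() / n) + 2 * k

# With measured ligand concentrations L and bound amounts B:
# p2, _ = curve_fit(two_sites, L, B, p0=[1, 1, 1, 10])
# pd_, _ = curve_fit(dimer, L, B, p0=[1, 1, 4])
# compare aic(B, two_sites(L, *p2), 4) against aic(B, dimer(L, *pd_), 3)
```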
NASA Astrophysics Data System (ADS)
Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano
2014-08-01
Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both increasing acquisition speed and improving accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting a substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the non-linear fitting of the three geometries. The results show that, with this combination, a higher quality of fit (a smaller norm of the residuals) can be obtained while preserving the computational cost. Fitting an incomplete point cloud, i.e., a situation where the point cloud does not cover the complete feature (e.g., only half of the total part surface), is also investigated. Finally, a case study of fitting a hemisphere is presented.
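A rough sketch of the idea for the circle case (bounds, seeds, and iteration counts are assumptions): a chaotic logistic-map scan of the parameter box supplies the initial guess, which then seeds Levenberg-Marquardt:

```python
import numpy as np
from scipy.optimize import least_squares

def circle_residuals(p, x, y):
    cx, cy, r = p
    return np.hypot(x - cx, y - cy) - r

def chaos_init(x, y, lo, hi, n_iter=200):
    z = np.array([0.31, 0.62, 0.53])          # distinct chaotic seeds
    best, best_cost = lo + z * (hi - lo), np.inf
    for _ in range(n_iter):
        z = 4.0 * z * (1.0 - z)               # logistic map stays in [0, 1]
        p = lo + z * (hi - lo)
        cost = (circle_residuals(p, x, y) ** 2).sum()
        if cost < best_cost:
            best, best_cost = p, cost
    return best

# p0 = chaos_init(x, y, lo=np.array([-1, -1, 0.1]), hi=np.array([1, 1, 5]))
# fit = least_squares(circle_residuals, p0, args=(x, y), method="lm")
```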
Geszke-Moritz, Małgorzata; Moritz, Michał
2016-12-01
The present study deals with the adsorption of boldine onto pure and propyl-sulfonic acid-functionalized SBA-15, SBA-16 and mesocellular foam (MCF) materials. Siliceous adsorbents were characterized by nitrogen sorption analysis, transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier-transform infrared (FT-IR) spectroscopy and thermogravimetric analysis. The equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Redlich-Peterson, and Temkin isotherms. Moreover, the Dubinin-Radushkevich and Dubinin-Astakhov isotherm models based on the Polanyi adsorption potential were employed. The latter was calculated using two alternative formulas: a solubility-normalized model (S-model) and an empirical C-model. In order to find the best-fit isotherm, both linear regression and nonlinear fitting analyses were carried out. The Dubinin-Astakhov (S-model) isotherm revealed the best fit to the experimental points for adsorption of boldine onto pure mesoporous materials using both linear and nonlinear fitting analysis. Meanwhile, the process of boldine sorption onto modified silicas was best described by the Langmuir and Temkin isotherms using linear regression and nonlinear fitting analysis, respectively. The values of adsorption energy (below 8 kJ/mol) indicate the physical nature of boldine adsorption onto unmodified silicas, whereas ionic interactions seem to be the main force of alkaloid adsorption onto functionalized sorbents (energy of adsorption above 8 kJ/mol). Copyright © 2016 Elsevier B.V. All rights reserved.
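As a generic illustration of the linear-vs-nonlinear fitting comparison (shown here for the Langmuir isotherm with synthetic data; not the study's data or code):

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: q = qm*K*C / (1 + K*C)
langmuir = lambda C, qm, K: qm * K * C / (1 + K * C)

rng = np.random.default_rng(4)
C = np.linspace(0.1, 10, 20)                       # equilibrium concentration
q = langmuir(C, 50, 0.8) * rng.normal(1, 0.05, C.size)

# Direct non-linear least squares
(qm_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=[30, 0.5])

# Classical linearisation: C/q = C/qm + 1/(qm*K)
slope, intercept = np.polyfit(C, C / q, 1)
qm_lin, K_lin = 1 / slope, slope / intercept
print(qm_nl, K_nl, qm_lin, K_lin)   # the two routes can disagree under noise
```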
SU-F-T-130: [18F]-FDG Uptake Dose Response in Lung Correlates Linearly with Proton Therapy Dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D; Titt, U; Mirkovic, D
2016-06-15
Purpose: Analysis of clinical outcomes in lung cancer patients treated with protons using 18F-FDG uptake in lung as a measure of dose response. Methods: A test case lung cancer patient was selected in an unbiased way. The test patient's treatment planning and post-treatment positron emission tomography (PET) were collected from the picture archiving and communication system at the UT M.D. Anderson Cancer Center. The average computerized tomography scan was registered with the post-treatment PET/CT through both rigid and deformable registrations for a selected region of interest (ROI) via VelocityAI imaging informatics software. For the voxels in the ROI, a system that extracts the Standard Uptake Value (SUV) from PET was developed, and the corresponding relative biological effectiveness (RBE) weighted (both variable and constant) dose was computed using Monte Carlo (MC) methods. The treatment planning system (TPS) dose was also obtained. Using histogram analysis, the voxel average normalized SUV vs. 3 different doses was obtained and a linear regression fit was performed. Results: The registration process produced some regions with significant artifacts near the diaphragm and heart, which yielded poor r-squared values when the linear regression fit was performed on normalized SUV vs. dose. Excluding these values, the TPS fit yielded a mean r-squared value of 0.79 (range 0.61-0.95), the constant RBE fit yielded 0.79 (range 0.52-0.94), and the variable RBE fit yielded 0.80 (range 0.52-0.94). Conclusion: A system that extracts SUV from PET to correlate normalized SUV with various dose calculations was developed. A linear relation between normalized SUV and all three different doses was found.
Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D
2011-04-01
Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of the presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS, and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental effects as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and from 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted the data slightly better than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from the BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xi; Huang, Xiaobiao
2016-05-13
Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
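A toy illustration of the mode-separation step, using scikit-learn's FastICA on synthetic tunes and mixing (the ICA variant used in the paper may differ; all values are placeholders):

```python
import numpy as np
from sklearn.decomposition import FastICA

turns = np.arange(2048)
modes = np.vstack([np.cos(2 * np.pi * 0.22 * turns),     # horizontal-like mode
                   np.cos(2 * np.pi * 0.26 * turns)])    # vertical-like mode
mixing = np.random.default_rng(7).normal(size=(40, 2))   # 40 BPM projections
bpm = mixing @ modes + 0.05 * np.random.default_rng(8).normal(size=(40, turns.size))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(bpm.T).T   # rows ~ normal modes, up to scale/order
# Betatron amplitudes and phases per BPM then follow from the mixing matrix
# (ica.mixing_), which feeds the lattice-model fit described above.
```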
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
Keidser, Gitte; Rohrseitz, Kristin; Dillon, Harvey; Hamacher, Volkmar; Carter, Lyndal; Rass, Uwe; Convery, Elizabeth
2006-10-01
This study examined the effect that signal processing strategies used in modern hearing aids, such as multi-channel WDRC, noise reduction, and directional microphones, have on interaural difference cues and horizontal localization performance relative to linear, time-invariant amplification. Twelve participants were bilaterally fitted with BTE devices. Horizontal localization testing using a 360-degree loudspeaker array and broadband pulsed pink noise was performed two weeks, and again two months, post-fitting. The effect of noise reduction was measured with a constant noise present at 80 degrees azimuth. Data were analysed independently in the left/right and front/back dimensions and showed that, of the three signal processing strategies, directional microphones had the most significant effect on horizontal localization performance and over time. Specifically, a cardioid microphone could decrease front/back errors over time, whereas left/right errors increased when different microphones were fitted to the left and right ears. Front/back confusions were generally prominent. Objective measurements of interaural differences on KEMAR explained significant shifts in left/right errors. In conclusion, there is scope for improving the sense of localization in hearing aid users.
An Inquiry-Based Linear Algebra Class
ERIC Educational Resources Information Center
Wang, Haohao; Posey, Lisa
2011-01-01
Linear algebra is a standard undergraduate mathematics course. This paper presents an overview of the design and implementation of an inquiry-based teaching material for the linear algebra course which emphasizes discovery learning, analytical thinking and individual creativity. The inquiry-based teaching material is designed to fit the needs of a…
Mamen, Asgeir; Fredriksen, Per Morten
2018-05-01
As children's fitness continues to decline, frequent and systematic monitoring of fitness is important. Easy-to-use and low-cost methods with acceptable accuracy are essential in screening situations. This study aimed to investigate how the measurements of body mass index (BMI), waist circumference (WC) and waist-to-height ratio (WHtR) relate to selected measurements of fitness in children. A total of 1731 children from grades 1 to 6 were selected who had a complete set of height, body mass, running performance, handgrip strength and muscle mass measurements. A composite fitness score was established from the sum of sex- and age-specific z-scores for the variables running performance, handgrip strength and muscle mass. This fitness z-score was compared to z-scores and quartiles of BMI, WC and WHtR using analysis of variance, linear regression and receiver operating characteristic analysis. The regression analysis showed that z-scores for BMI, WC and WHtR were all linearly related to the composite fitness score, with WHtR having the highest R² at 0.80. The correct classification of fit and unfit was relatively high for all three measurements. WHtR had the best prediction of fitness of the three, with an area under the curve of 0.92 (p < 0.001). BMI, WC and WHtR were all found to be feasible measurements, but WHtR had a higher precision in its classification into fit and unfit in this population.
NASA Astrophysics Data System (ADS)
Campbell, John L.; Ganly, Brianna; Heirwegh, Christopher M.; Maxwell, John A.
2018-01-01
Multiple ionization satellites are prominent features in X-ray spectra induced by MeV energy alpha particles. It follows that the accuracy of PIXE analysis using alpha particles can be improved if these features are explicitly incorporated in the peak model description when fitting the spectra with GUPIX or other codes for least-squares fitting PIXE spectra and extracting element concentrations. A method for this incorporation is described and is tested using spectra recorded on Mars by the Curiosity rover's alpha particle X-ray spectrometer. These spectra are induced by both PIXE and X-ray fluorescence, resulting in a spectral energy range from ∼1 to ∼25 keV. This range is valuable in determining the energy-channel calibration, which departs from linearity at low X-ray energies. It makes it possible to separate the effects of the satellites from an instrumental non-linearity component. The quality of least-squares spectrum fits is significantly improved, raising the level of confidence in analytical results from alpha-induced PIXE.
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation, and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as on possibly time-dependent covariates, without specifically modelling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performance of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
A flexible count data regression model for risk analysis.
Guikema, Seth D; Coffelt, Jeremy P; Goffelt, Jeremy P
2008-02-01
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLMs) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can provide fits to data as good as those of the commonly used existing models for overdispersed data sets, while outperforming these commonly used models for underdispersed data sets.
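The distribution at the heart of the proposed GLM can be sketched as follows (the truncation limit ymax is an assumption for computing the normalising constant; a GLM would then link log λ to covariates):

```python
# Conway-Maxwell-Poisson pmf: P(Y = y) ∝ lambda^y / (y!)^nu,
# with nu < 1 overdispersed, nu = 1 Poisson, nu > 1 underdispersed.
import numpy as np
from scipy.special import gammaln

def com_logpmf(y, lam, nu, ymax=200):
    ys = np.arange(ymax + 1)
    logterms = ys * np.log(lam) - nu * gammaln(ys + 1)
    logz = np.logaddexp.reduce(logterms)            # normalising constant
    return y * np.log(lam) - nu * gammaln(y + 1) - logz

print(np.exp(com_logpmf(np.arange(5), lam=2.0, nu=1.5)))
```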
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
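As a simpler stand-in for Expectation Propagation, the sketch below obtains a Gaussian posterior approximation for a Poisson GLM by the Laplace method (Gaussian prior; all names and settings are illustrative). It produces the same kind of output the abstract describes, a posterior mean and covariance, which EP would refine:

```python
import numpy as np
from scipy.optimize import minimize

def laplace_poisson_glm(X, y, prior_var=1.0):
    # Negative log posterior: Poisson likelihood with log link + Gaussian prior
    def neg_log_post(w):
        eta = X @ w
        return (np.exp(eta) - y * eta).sum() + (w @ w) / (2 * prior_var)

    w_map = minimize(neg_log_post, np.zeros(X.shape[1])).x
    lam = np.exp(X @ w_map)
    # Hessian at the mode: X' diag(lam) X plus the prior precision
    H = X.T @ (lam[:, None] * X) + np.eye(X.shape[1]) / prior_var
    return w_map, np.linalg.inv(H)   # Gaussian posterior mean and covariance
```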
NASA Astrophysics Data System (ADS)
Conde, P.; Iborra, A.; González, A. J.; Hernández, L.; Bellido, P.; Moliner, L.; Rigla, J. P.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Seimetz, M.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.
2016-02-01
In Positron Emission Tomography (PET) detectors based on monolithic scintillators, the photon interaction position needs to be estimated from the light distribution (LD) on the photodetector pixels. Due to the finite size of the scintillator volume, the symmetry of the LD is truncated everywhere except for the crystal center. This effect produces a poor estimation of the interaction positions towards the edges, an especially critical situation when linear algorithms, such as Center of Gravity (CoG), are used. When all the crystal faces are painted black, except the one in contact with the photodetector, the LD can be assumed to behave as the inverse square law, providing a simple theoretical model. Using this LD model, the interaction coordinates can be determined by means of fitting each event to a theoretical distribution. In that sense, the use of neural networks (NNs) has been shown to be an effective alternative to more traditional fitting techniques as nonlinear least squares (LS). The multilayer perceptron is one type of NN which can model non-linear functions well and can be trained to accurately generalize when presented with new data. In this work we have shown the capability of NNs to approximate the LD and provide the interaction coordinates of γ-photons with two different photodetector setups. One experimental setup was based on analog Silicon Photomultipliers (SiPMs) and a charge division diode network, whereas the second setup was based on digital SiPMs (dSiPMs). In both experiments NNs minimized border effects. Average spatial resolutions of 1.9 ±0.2 mm and 1.7 ±0.2 mm for the entire crystal surface were obtained for the analog and dSiPMs approaches, respectively.
Novoderezhkin, Vladimir I.; Dekker, Jan P.; van Grondelle, Rienk
2007-01-01
We propose an exciton model for the Photosystem II reaction center (RC) based on a quantitative simultaneous fit of the absorption, linear dichroism, circular dichroism, steady-state fluorescence, triplet-minus-singlet, and Stark spectra together with the spectra of pheophytin-modified RCs and so-called RC5 complexes that lack one of the peripheral chlorophylls. In this model, the excited state manifold includes a primary charge-transfer (CT) state that is supposed to be strongly mixed with the pure exciton states. We generalize the exciton theory of Stark spectra by (1) taking into account the coupling to a CT state (whose static dipole cannot be treated as a small parameter, in contrast to usual excited states); and (2) expressing the line shape functions in terms of the modified Redfield approach (the same as used for modeling of the linear responses). This allows a consistent modeling of the whole set of experimental data using a unified physical picture. We show that the fluorescence and Stark spectra are extremely sensitive to the assignment of the primary CT state, its energy, and coupling to the excited states. The best fit of the data is obtained supposing that the initial charge separation occurs within the special-pair PD1PD2. Additionally, the scheme with primary electron transfer from the accessory chlorophyll to pheophytin gave a reasonable quantitative fit. We show that the effectiveness of these two pathways is strongly dependent on the realization of the energetic disorder. Supposing a mixed scheme of primary charge separation with a disorder-controlled competition of the two channels, we can explain the coexistence of fast sub-ps and slow ps components of the Phe-anion formation as revealed by different ultrafast spectroscopic techniques. PMID:17526589
High resolution particle tracking method by suppressing the wavefront aberrations
NASA Astrophysics Data System (ADS)
Chang, Xinyu; Yang, Yuan; Kou, Li; Jin, Lei; Lu, Junsheng; Hu, Xiaodong
2018-01-01
Digital in-line holographic microscopy is one of the most efficient methods for particle tracking, as it can precisely measure the axial position of particles. However, imaging systems are often limited by detector noise, image distortions and human operator misjudgment, making particles hard to locate. A general method is used to solve this problem. The normalized holograms of particles were reconstructed to the pupil plane and then fitted to a linear superposition of Zernike polynomial functions to suppress the aberrations. Experiments were carried out to validate the method, and the results show that nanometer-scale resolution was achieved even when the holograms were poorly recorded.
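A minimal sketch of the pupil-plane fitting step, assuming a handful of low-order Zernike terms written out by hand (normalisation conventions vary, and the paper's basis size is not specified here):

```python
import numpy as np

def zernike_design(rho, theta):
    # Columns: piston, tilt x, tilt y, defocus, astigmatism 0/45 degrees
    return np.column_stack([
        np.ones_like(rho),
        rho * np.cos(theta),
        rho * np.sin(theta),
        2 * rho ** 2 - 1,
        rho ** 2 * np.cos(2 * theta),
        rho ** 2 * np.sin(2 * theta),
    ])

def fit_aberrations(rho, theta, wavefront):
    Z = zernike_design(rho, theta)
    coeffs, *_ = np.linalg.lstsq(Z, wavefront, rcond=None)
    return coeffs, wavefront - Z @ coeffs   # coefficients, aberration-suppressed residual
```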
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is complex and non-linear, and many of its parameters are not easy to measure directly on line, so soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine that can control the fitting accuracy and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and good generalization when v is chosen appropriately.
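A small sketch of v-SVR soft-sensor fitting using scikit-learn's NuSVR, whose nu parameter plays the role of v described above (placeholder data; this is batch fitting, not the paper's on-line algorithm):

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(9)
X = rng.uniform(0, 1, (300, 4))       # on-line measurable process variables
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=300)

model = NuSVR(nu=0.5, C=10.0, kernel="rbf").fit(X[:250], y[:250])
rmse = np.sqrt(((model.predict(X[250:]) - y[250:]) ** 2).mean())
print("held-out RMSE:", round(rmse, 4))   # smaller nu -> fewer support vectors
```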
Variable sound speed in interacting dark energy models
NASA Astrophysics Data System (ADS)
Linton, Mark S.; Pourtsidou, Alkistis; Crittenden, Robert; Maartens, Roy
2018-04-01
We consider a self-consistent and physical approach to interacting dark energy models described by a Lagrangian, and identify a new class of models with variable dark energy sound speed. We show that if the interaction between dark energy in the form of quintessence and cold dark matter is purely momentum exchange this generally leads to a dark energy sound speed that deviates from unity. Choosing a specific sub-case, we study its phenomenology by investigating the effects of the interaction on the cosmic microwave background and linear matter power spectrum. We also perform a global fitting of cosmological parameters using CMB data, and compare our findings to ΛCDM.
The Dangers of Estimating V̇O2max Using Linear, Nonexercise Prediction Models.
Nevill, Alan M; Cooke, Carlton B
2017-05-01
This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs. allometric) when estimating V̇O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V̇O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V̇O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V̇O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and the Akaike information criterion). Results suggest that linear models will systematically overestimate V̇O2max for participants in their 20s and underestimate V̇O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values or age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness, when estimating V̇O2max. Adopting allometric models will provide more accurate predictions of V̇O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
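To make the linear-versus-allometric contrast concrete, here is a hedged sketch with simulated data; the published models' exact covariates and coefficients are not reproduced, and each model's fit is reported on its own scale (AICs from a model of y and a model of log(y) are not directly comparable without a Jacobian correction).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
age = rng.uniform(16, 80, n)
mass = rng.normal(75, 12, n)          # body mass, kg (simulated)
stature = rng.normal(1.72, 0.09, n)   # m (simulated)
vo2 = (60 * (stature / mass) ** 0.3
       * np.exp(-0.012 * age) * np.exp(rng.normal(0, 0.08, n)))

# Linear (additive) model: VO2max ~ age + mass + stature
lin = sm.OLS(vo2, sm.add_constant(np.column_stack([age, mass, stature]))).fit()

# Allometric model fitted on the log scale:
# log(VO2max) ~ age + age^2 + log(mass) + log(stature)
X = np.column_stack([age, age ** 2, np.log(mass), np.log(stature)])
allo = sm.OLS(np.log(vo2), sm.add_constant(X)).fit()

print(lin.rsquared, allo.rsquared)  # compare fit on each model's own scale
```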
Cardiorespiratory fitness and future risk of pneumonia: a long-term prospective cohort study.
Kunutsor, Setor K; Laukkanen, Tanjaniina; Laukkanen, Jari A
2017-09-01
We aimed to assess the prospective association of cardiorespiratory fitness (CRF) with the risk of pneumonia. Cardiorespiratory fitness, as measured by maximal oxygen uptake, was assessed using a respiratory gas exchange analyzer in 2244 middle-aged men in the Kuopio Ischemic Heart Disease cohort. We corrected for within-person variability in CRF levels using data from repeat measurements taken several years apart. During a median follow-up of 25.8 years, 369 men received a hospital diagnosis of pneumonia. The age-adjusted regression dilution ratio of CRF was 0.58 (95% confidence interval: 0.53-0.63). Cardiorespiratory fitness was linearly associated with pneumonia risk. The hazard ratio (95% confidence interval) for pneumonia per 1 standard deviation increase in CRF, in analysis adjusted for several risk factors for pneumonia, was 0.77 (0.68-0.87). The association remained consistent on additional adjustment for total energy intake, socioeconomic status, physical activity, and C-reactive protein: 0.82 (0.72-0.94). The corresponding adjusted hazard ratios (95% confidence intervals) were 0.58 (0.41-0.80) and 0.67 (0.48-0.95), respectively, when comparing the extreme quartiles of CRF levels. Our findings indicate a graded, inverse and independent association between CRF and the future risk of pneumonia in a general male population. Copyright © 2017 Elsevier Inc. All rights reserved.
Fernandes, Amanda Paula; Andrade, Amanda Cristina de Souza; Ramos, Cynthia Graciane Carvalho; Friche, Amélia Augusta de Lima; Dias, Maria Angélica de Salles; Xavier, César Coelho; Proietti, Fernando Augusto; Caiaffa, Waleska Teixeira
2015-11-01
This study analyzed leisure-time physical activity among 1,621 adults who were non-users of the Academias da Cidade Program in Belo Horizonte, Minas Gerais State, Brazil, but who lived in the vicinity of a fitness center in operation (exposed Group I) or in the vicinity of two sites reserved for future installation of centers (control Groups II and III). The dependent variable was leisure-time physical activity, and linear distance from the households to the fitness centers was the exposure variable, categorized in radial buffers: < 500m; 500-1,000m; and 1,000-1,500m. Binary logistic regression was performed with the Generalized Estimating Equations method. Residents living within 500m of the fitness center gave better ratings to the physical environment when compared to those living in the 1,000-1,500m buffer and showed higher odds of leisure-time physical activity (OR = 1.16; 95%CI: 1.03-1.30), independently of socio-demographic factors; the same was not observed in the control groups (II and III). The findings suggest the program's potential for influencing physical activity in the population living closer to the fitness center, and thus provide a strategic alternative for mitigating inequalities in leisure-time physical activity.
Measurement of effective air diffusion coefficients for trichloroethene in undisturbed soil cores.
Bartelt-Hunt, Shannon L; Smith, James A
2002-06-01
In this study, we measure effective diffusion coefficients for trichloroethene in undisturbed soil samples taken from Picatinny Arsenal, New Jersey. The measured effective diffusion coefficients ranged from 0.0053 to 0.0609 cm2/s over a range of air-filled porosity of 0.23-0.49. The experimental data were compared to several previously published relations that predict diffusion coefficients as a function of air-filled porosity and porosity. A multiple linear regression analysis was developed to determine if a modification of the exponents in Millington's [Science 130 (1959) 100] relation would better fit the experimental data. The literature relations appeared to generally underpredict the effective diffusion coefficient for the soil cores studied in this work. Inclusion of a particle-size distribution parameter, d10, did not significantly improve the fit of the linear regression equation. The effective diffusion coefficient and porosity data were used to recalculate estimates of diffusive flux through the subsurface made in a previous study performed at the field site. It was determined that the method of calculation used in the previous study resulted in an underprediction of diffusive flux from the subsurface. We conclude that although Millington's [Science 130 (1959) 100] relation works well to predict effective diffusion coefficients in homogeneous soils with relatively uniform particle-size distributions, it may be inaccurate for many natural soils with heterogeneous structure and/or non-uniform particle-size distributions.
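A sketch of the exponent-modification idea, assuming a Millington-type relation of the common form D_eff/D0 = θa^m/φ^n (m = 10/3, n = 2) and illustrative data; only the end points of the measured ranges come from the abstract, and the D0 value is approximate.

```python
import numpy as np
import statsmodels.api as sm

D0 = 0.08  # free-air diffusion coefficient of TCE, cm^2/s (approximate)
theta_a = np.array([0.23, 0.28, 0.33, 0.38, 0.44, 0.49])  # air-filled porosity
phi = np.array([0.41, 0.43, 0.45, 0.47, 0.50, 0.52])      # total porosity
D_eff = np.array([0.0053, 0.010, 0.018, 0.028, 0.045, 0.0609])  # cm^2/s

# log(D_eff/D0) = c + m*log(theta_a) + n*log(phi): estimating m and n by
# ordinary least squares mirrors the exponent-modification idea.
X = sm.add_constant(np.column_stack([np.log(theta_a), np.log(phi)]))
fit = sm.OLS(np.log(D_eff / D0), X).fit()
print(fit.params)  # intercept, exponent on theta_a, exponent on phi
```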
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations, which can distort the estimated exposure-response curve, particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system, holding certain pivotal quantities, such as the information matrix, constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components, as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation-based procedure is suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post-model-fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
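For readers wanting a concrete baseline, this sketch computes the classical fixed-effects versions of the diagnostics (Cook's distance, standardized DFBETAS) with statsmodels; the paper's GLMM extension, which re-estimates variance parameters after deletion, is not available off the shelf.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 50)
y = 1.0 + 0.5 * x + rng.normal(0, 1, 50)
y[0] += 8.0                                  # plant one influential point

fit = sm.OLS(y, sm.add_constant(x)).fit()
infl = OLSInfluence(fit)
print(infl.cooks_distance[0][:3])            # Cook's D per observation
print(infl.dfbetas[:3])                      # standardized DFBETAS
```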
Oliver-Rodríguez, B; Zafra-Gómez, A; Reis, M S; Duarte, B P M; Verge, C; de Ferrer, J A; Pérez-Pascual, M; Vílchez, J L
2015-11-01
In this paper, rigorous data and adequate models of linear alkylbenzene sulfonate (LAS) adsorption/desorption on agricultural soil are presented, contributing a substantial improvement over the available adsorption literature. The kinetics of the adsorption/desorption phenomenon and the adsorption/desorption equilibrium isotherms were determined through batch studies for the total LAS amount and also for each homologue series: C10, C11, C12 and C13. The proposed multiple pseudo-first-order kinetic model provides the best fit to the kinetic data, indicating the presence of two adsorption/desorption processes within the overall phenomenon. Equilibrium adsorption and desorption data have been properly fitted by a model consisting of a Langmuir plus quadratic term, which provides a good integrated description of the experimental data over a wide range of concentrations. At low concentrations, the Langmuir term explains the adsorption of LAS on soil sites which are highly selective of the n-alkyl groups and cover a very small fraction of the soil surface area, whereas the quadratic term describes adsorption on the much larger part of the soil surface and on LAS retained at moderate to high concentrations. Since the adsorption/desorption phenomenon plays a major role in the behavior of LAS in soils, relevant conclusions can be drawn from the obtained results. Copyright © 2015 Elsevier Ltd. All rights reserved.
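A hedged sketch of fitting a "Langmuir plus quadratic" equilibrium model by non-linear least squares; the exact parameterization the authors used may differ, and the data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# q(C) = qmax*K*C/(1 + K*C) + a*C + b*C**2  (one plausible parameterization;
# the paper's quadratic term may omit the linear coefficient a).
def langmuir_plus_quadratic(C, qmax, K, a, b):
    return qmax * K * C / (1.0 + K * C) + a * C + b * C**2

C = np.array([0.5, 1, 2, 5, 10, 20, 50, 100.0])           # mg/L, hypothetical
q = np.array([0.8, 1.4, 2.2, 3.6, 5.1, 7.9, 16.0, 31.0])  # mg/kg, hypothetical

popt, pcov = curve_fit(langmuir_plus_quadratic, C, q, p0=[3.0, 1.0, 0.1, 1e-3])
print(dict(zip(["qmax", "K", "a", "b"], popt)))
```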
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Murman, S. M.; Berger, M. J.
2003-01-01
This paper presents a variety of novel uses of space-filling curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
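As a concrete illustration of SFC reordering, the sketch below computes Morton (Z-order) keys by bit interleaving; the paper may use a different curve such as Peano-Hilbert, but the sort-then-cut partitioning idea is the same.

```python
def morton3d(i, j, k, bits=10):
    """Interleave the bits of three cell indices into a Z-order (Morton) key,
    one common space-filling curve used to reorder Cartesian meshes."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (3 * b)
        key |= ((j >> b) & 1) << (3 * b + 1)
        key |= ((k >> b) & 1) << (3 * b + 2)
    return key

# Reordering cells by their Morton key costs one O(N log N) sort, after which
# partitioning is a single O(N) pass that cuts the sorted list into
# contiguous, nearly equal chunks.
cells = [(4, 1, 7), (0, 0, 0), (3, 3, 3), (7, 7, 7)]
order = sorted(range(len(cells)), key=lambda c: morton3d(*cells[c]))
print(order)
```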
NASA Astrophysics Data System (ADS)
Abdussalam, Auwal; Monaghan, Andrew; Dukic, Vanja; Hayden, Mary; Hopson, Thomas; Leckebusch, Gregor
2013-04-01
Northwest Nigeria is a region with high risk of bacterial meningitis. Since the first documented epidemic of meningitis in Nigeria in 1905, the disease has been endemic in the northern part of the country, with epidemics occurring regularly. In this study we examine the influence of climate on the interannual variability of meningitis incidence and epidemics. Monthly aggregate counts of clinically confirmed hospital-reported cases of meningitis were collected in northwest Nigeria for the 22-year period spanning 1990-2011. Several generalized linear statistical models were fit to the monthly meningitis counts, including generalized additive models. Explanatory variables included monthly records of temperatures, humidity, rainfall, wind speed, sunshine and dustiness from weather stations nearest to the hospitals, and a time series of polysaccharide vaccination efficacy. The effects of other confounding factors -- i.e., mainly non-climatic factors for which records were not available -- were estimated as a smooth, monthly-varying function of time in the generalized additive models. Results reveal that the most important explanatory climatic variables are mean maximum monthly temperature, relative humidity and dustiness. Accounting for confounding factors (e.g., social processes) in the generalized additive models explains more of the year-to-year variation of meningococcal disease compared to those generalized linear models that do not account for such factors. Promising results from several models that included only explanatory variables that preceded the meningitis case data by one month suggest there may be potential for prediction of meningitis in northwest Nigeria to aid decision makers on this time scale.
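A minimal sketch of the modeling idea: a Poisson-family GLM with climate covariates plus a regression-spline function of time standing in for unmeasured confounders. Variable names and data are hypothetical, and the paper's GAMs use penalized smooths rather than the fixed-df spline used here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 264  # 22 years of monthly counts, hypothetical
df = pd.DataFrame({
    "cases": rng.poisson(5, n),
    "tmax": rng.normal(35, 4, n),   # mean maximum monthly temperature
    "rh": rng.uniform(10, 80, n),   # relative humidity
    "dust": rng.uniform(0, 1, n),   # dustiness index
    "t": np.arange(n),
})
# bs(t, df=12) is a fixed-df regression spline standing in for the paper's
# smooth, monthly-varying confounder term.
fit = smf.glm("cases ~ tmax + rh + dust + bs(t, df=12)",
              data=df, family=sm.families.Poisson()).fit()
print(fit.aic)
```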
NASA Astrophysics Data System (ADS)
Fu, Zewei; Hu, Juntao; Hu, Wenlong; Yang, Shiyu; Luo, Yunfeng
2018-05-01
Quantitative analysis of Ni2+/Ni3+ using X-ray photoelectron spectroscopy (XPS) is important for evaluating the crystal structure and electrochemical performance of lithium-nickel-cobalt-manganese oxide (Li[NixMnyCoz]O2, NMC). However, quantitative analysis based on Gaussian/Lorentzian (G/L) peak fitting suffers from challenges of reproducibility and effectiveness. In this study, Ni2+ and Ni3+ standard samples and a series of NMC samples with different Ni doping levels were synthesized. The Ni2+/Ni3+ ratios in NMC were quantitatively analyzed by non-linear least-squares fitting (NLLSF). Two Ni 2p overall spectra of synthesized Li[Ni0.33Mn0.33Co0.33]O2 (NMC111) and bulk LiNiO2 were used as the Ni2+ and Ni3+ reference standards. Compared to G/L peak fitting, the fitting parameters required no adjustment, meaning that the spectral fitting process was free from operator dependence and reproducibility was improved. Comparison of the residual standard deviation (STD) showed that the fitting quality of NLLSF was superior to that of G/L peak fitting. Overall, these findings confirm the reproducibility and effectiveness of the NLLSF method for XPS quantitative analysis of the Ni2+/Ni3+ ratio in Li[NixMnyCoz]O2 cathode materials.
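A hedged sketch of the NLLSF idea: express a measured spectrum as a non-negative linear combination of the two reference spectra. Real analyses also align energy scales and subtract a Shirley-type background first; the spectra here are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def ni_fractions(measured, ref_ni2, ref_ni3):
    """Non-negative least-squares weights of the two reference spectra,
    plus a constant background column, returned as Ni2+/Ni3+ fractions."""
    A = np.column_stack([ref_ni2, ref_ni3, np.ones_like(measured)])
    coef, resid = nnls(A, measured)
    w2, w3 = coef[0], coef[1]
    return w2 / (w2 + w3), w3 / (w2 + w3)

# Hypothetical spectra on a common binding-energy grid:
e = np.linspace(850, 885, 351)
ref2 = np.exp(-(e - 854.5) ** 2 / 2.0)
ref3 = np.exp(-(e - 856.0) ** 2 / 2.5)
meas = 0.4 * ref2 + 0.6 * ref3 + 0.01 * np.random.default_rng(3).standard_normal(e.size)
print(ni_fractions(meas, ref2, ref3))
```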
NASA Astrophysics Data System (ADS)
Svoboda, Aaron A.; Forbes, Jeffrey M.; Miyahara, Saburo
2005-11-01
A self-consistent global tidal climatology, useful for comparing and interpreting radar observations from different locations around the globe, is created from space-based Upper Atmosphere Research Satellite (UARS) horizontal wind measurements. The climatology created includes tidal structures for horizontal winds, temperature and relative density, and is constructed by fitting local (in latitude and height) UARS wind data at 95 km to a set of basis functions called Hough mode extensions (HMEs). These basis functions are numerically computed modifications to Hough modes and are globally self-consistent in wind, temperature, and density. We first demonstrate this self-consistency with a proxy data set from the Kyushu University General Circulation Model, and then use a linear weighted superposition of the HMEs obtained from monthly fits to the UARS data to extrapolate the global, multi-variable tidal structure. A brief explanation of the HMEs’ origin is provided as well as information about a public website that has been set up to make the full extrapolated data sets available.
Zhai, Chun-Hui; Xuan, Jian-Bang; Fan, Hai-Liu; Zhao, Teng-Fei; Jiang, Jian-Lan
2018-05-03
To further optimize process design by increasing the stability of the design space, we introduced Support Vector Regression (SVR). In this work, the extraction of podophyllotoxin was investigated as a case study based on Quality by Design (QbD). We compared the fitting performance of SVR with that of the quadratic polynomial model (QPM), the model most commonly used in QbD, and analyzed the two design spaces obtained by SVR and QPM. SVR outperformed QPM in prediction accuracy, model stability, and generalization ability. The introduction of SVR into QbD made the extraction process of podophyllotoxin well designed and easier to control. The better fitting performance of SVR improves the application of QbD, and the broad applicability of SVR, especially to non-linear, complicated, and weakly regular problems, widens the application field of QbD.
A Linearized and Incompressible Constitutive Model for Arteries
Liu, Y.; Zhang, W.; Wang, C.; Kassab, G. S.
2011-01-01
In many biomechanical studies, blood vessels can be modeled as pseudoelastic orthotropic materials that are incompressible (volume-preserving) under physiological loading. To use a minimum number of elastic constants to describe the constitutive behavior of arteries, we adopt a generalized Hooke’s law for the co-rotational Cauchy stress and a recently proposed logarithmic-exponential strain. This strain tensor absorbs the material nonlinearity and its trace is zero for volume-preserving deformations. Thus, the relationships between model parameters due to the incompressibility constraint are easy to analyze and interpret. In particular, the number of independent elastic constants reduces from ten to seven in the orthotropic model. As an illustrative study, we fit this model to measured data of porcine coronary arteries in inflation-stretch tests. Four parameters, n (material nonlinearity), Young’s moduli E1 (circumferential), E2 (axial), and E3 (radial), are necessary to fit the data. The advantages and limitations of this model are discussed. PMID:21605567
Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.
Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul
2015-01-01
Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentrations in Korea and Japan did not differ greatly: 26.2 ppb and 24.2 ppb, respectively. Seven out of 13 cities showed better fits for the spline model compared with the linear model, supporting a non-linear relationship between O3 concentration and mortality. All of these 7 cities showed J- or U-shaped associations suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed non-linear concentration-response relationships with thresholds between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
The clustering of QSOs and the dark matter halos that host them
NASA Astrophysics Data System (ADS)
Zhao, Dong-Yao; Yan, Chang-Shuo; Lu, Youjun
2013-10-01
The spatial clustering of QSOs is an important measurable quantity which can be used to infer the properties of dark matter halos that host them. We construct a simple QSO model to explain the linear bias of QSOs measured by recent observations and explore the properties of dark matter halos that host a QSO. We assume that major mergers of dark matter halos can lead to the triggering of QSO phenomena, and the evolution of luminosity for a QSO generally shows two accretion phases, i.e., initially having a constant Eddington ratio due to the self-regulation of the accretion process when supply is sufficient, and then declining in rate with time as a power law due to either diminished supply or long term disk evolution. Using a Markov Chain Monte Carlo method, the model parameters are constrained by fitting the observationally determined QSO luminosity functions (LFs) in the hard X-ray and in the optical band simultaneously. Adopting the model parameters that best fit the QSO LFs, the linear bias of QSOs can be predicted and then compared with the observational measurements by accounting for various selection effects in different QSO surveys. We find that the latest measurements of the linear bias of QSOs from both the SDSS and BOSS QSO surveys can be well reproduced. The typical mass of SDSS QSOs at redshift 1.5 < z < 4.5 is ~(3-6) × 10^12 h^-1 Msolar and the typical mass of BOSS QSOs at z ~ 2.4 is ~2 × 10^12 h^-1 Msolar. For relatively faint QSOs, the mass distribution of their host dark matter halos is wider than that of bright QSOs because faint QSOs can be hosted in both big halos and smaller halos, but bright QSOs are only hosted in big halos, which is part of the reason for the predicted weak dependence of the linear biases on the QSO luminosity.
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
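A simplified fixed-effects sketch of the broken-line linear (BLL) ascending model described above; the study's NLMIXED fit additionally includes random block effects and heteroskedastic variances, and the data below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# BLL ascending model: response rises linearly with the SID Trp:Lys ratio
# up to a breakpoint xb and plateaus beyond it.
def bll(x, plateau, slope, xb):
    return plateau - slope * np.clip(xb - x, 0.0, None)

ratio = np.array([14.0, 15.0, 16.0, 16.5, 17.0, 18.0, 19.0])
gf = np.array([0.62, 0.65, 0.68, 0.69, 0.695, 0.70, 0.698])

# A good starting value for the breakpoint (as the article recommends via
# grid search) matters for convergence of the nonlinear fit.
popt, _ = curve_fit(bll, ratio, gf, p0=[0.70, 0.02, 16.5])
print(dict(zip(["plateau", "slope", "breakpoint"], popt)))
```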
Temperature dependence of elastic and strength properties of T300/5208 graphite-epoxy
NASA Technical Reports Server (NTRS)
Milkovich, S. M.; Herakovich, C. T.
1984-01-01
Experimental results are presented for the elastic and strength properties of T300/5208 graphite-epoxy at room temperature, 116K (-250 F), and 394K (+250 F). Results are presented for unidirectional 0, 90, and 45 degree laminates, and + or - 30, + or - 45, and + or - 60 degree angle-ply laminates. The stress-strain behavior of the 0 and 90 degree laminates is essentially linear for all three temperatures and that the stress-strain behavior of all other laminates is linear at 116K. A second-order curve provides the best fit for the temperature is linear at 116K. A second-order curve provides the best fit for the temperature dependence of the elastic modulus of all laminates and for the principal shear modulus. Poisson's ratio appears to vary linearly with temperature. all moduli decrease with increasing temperature except for E (sub 1) which exhibits a small increase. The strength temperature dependence is also quadratic for all laminates except the 0 degree - laminate which exhibits linear temperature dependence. In many cases the temperature dependence of properties is nearly linear.
NASA Astrophysics Data System (ADS)
Vassiliev, Oleg N.; Grosshans, David R.; Mohan, Radhe
2017-10-01
We propose a new formalism for calculating parameters α and β of the linear-quadratic model of cell survival. This formalism, primarily intended for calculating relative biological effectiveness (RBE) for treatment planning in hadron therapy, is based on a recently proposed microdosimetric revision of the single-target multi-hit model. The main advantage of our formalism is that it reliably produces α and β that have correct general properties with respect to their dependence on physical properties of the beam, including the asymptotic behavior for very low and high linear energy transfer (LET) beams. For example, in the case of monoenergetic beams, our formalism predicts that, as a function of LET, (a) α has a maximum and (b) the α/β ratio increases monotonically with increasing LET. No prior models reviewed in this study predict both properties (a) and (b) correctly, and therefore, these prior models are valid only within a limited LET range. We first present our formalism in a general form, for polyenergetic beams. A significant new result in this general case is that parameter β is represented as an average over the joint distribution of energies E1 and E2 of two particles in the beam. This result is consistent with the role of the quadratic term in the linear-quadratic model. It accounts for the two-track mechanism of cell kill, in which two particles, one after another, damage the same site in the cell nucleus. We then present simplified versions of the formalism, and discuss predicted properties of α and β. Finally, to demonstrate consistency of our formalism with experimental data, we apply it to fit two sets of experimental data: (1) α for heavy ions, covering a broad range of LETs, and (2) β for protons. In both cases, good agreement is achieved.
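To fix notation for α and β, here is a standard linear-quadratic survival fit; the paper's microdosimetric formalism for predicting α and β from LET is not reproduced, and the survival data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Linear-quadratic model: S(D) = exp(-(alpha*D + beta*D**2)).
def lq_survival(D, alpha, beta):
    return np.exp(-(alpha * D + beta * D**2))

dose = np.array([0, 1, 2, 4, 6, 8.0])                   # Gy
surv = np.array([1.0, 0.75, 0.52, 0.21, 0.06, 0.013])   # surviving fraction

popt, _ = curve_fit(lq_survival, dose, surv, p0=[0.2, 0.03])
alpha, beta = popt
print(alpha, beta, alpha / beta)  # alpha/beta ratio, Gy
```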
Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Douglas, E-mail: douglas.moore@utsouthwestern.edu; Sawant, Amit; Ruan, Dan
2016-01-15
Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real-time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real-time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average, as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al. and of Ruan and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al. approach, while performing comparably to Ruan and Keall. Conclusions: This work improves upon the quality of the Sawant et al. approach, but does so without sacrificing run-time performance. In addition, using this framework allows for complex leaf-fitting strategies that can be used to account for the PTV/OAR trade-off during real-time MLC tracking.
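A brute-force sketch of the per-row leaf-fitting objective (weighted underdose/overdose); this is our illustration of the trade-off only, not the paper's algorithm, which solves the problem far faster and for general aperture transformations.

```python
import numpy as np

def fit_leaf_row(row, w_under=1.0, w_over=1.0):
    """For one leaf pair (one row of a desired-aperture mask), choose the
    left/right leaf positions minimizing a weighted sum of underdose
    (desired-open cells blocked) and overdose (desired-closed cells open)."""
    n = row.size
    best, best_cost = (0, 0), np.inf
    for left in range(n + 1):
        for right in range(left, n + 1):
            open_mask = np.zeros(n, bool)
            open_mask[left:right] = True
            under = np.sum(row[~open_mask])
            over = np.sum((1 - row)[open_mask])
            cost = w_under * under + w_over * over
            if cost < best_cost:
                best, best_cost = (left, right), cost
    return best

aperture_row = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0], float)  # desired opening
print(fit_leaf_row(aperture_row, w_under=2.0))  # weight favors PTV coverage
```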
Estimation of Quasi-Stiffness of the Human Knee in the Stance Phase of Walking
Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.
2013-01-01
Biomechanical data characterizing the quasi-stiffness of lower-limb joints during human locomotion are limited. Understanding joint stiffness is critical for evaluating gait function and designing devices such as prostheses and orthoses intended to emulate biological properties of human legs. The knee joint moment-angle relationship is approximately linear in the flexion and extension stages of stance, exhibiting nearly constant stiffnesses, known as the quasi-stiffnesses of each stage. Using a generalized inverse dynamics analysis approach, we identify the key independent variables needed to predict knee quasi-stiffness during walking, including gait speed, knee excursion, and subject height and weight. Then, based on the identified key variables, we used experimental walking data for 136 conditions (speeds of 0.75–2.63 m/s) across 14 subjects to obtain best-fit linear regressions for a set of general models, which were further simplified for the optimal gait speed. We found R2 > 86% for the most general models of knee quasi-stiffness for the flexion and extension stages of stance. With only subject height and weight, we could predict knee quasi-stiffness for preferred walking speed with an average error of 9%, with only one outlier. These results provide a useful framework and foundation for selecting subject-specific stiffness for prosthetic and exoskeletal devices designed to emulate biological knee function during walking. PMID:23533662
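Quasi-stiffness itself is just the slope of a linear moment-angle fit over a stance sub-phase, as in this sketch with hypothetical samples from the flexion stage.

```python
import numpy as np

# Hypothetical samples from the flexion stage of stance:
angle = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 17.0])      # knee flexion, deg
moment = np.array([0.05, 0.21, 0.38, 0.52, 0.70, 0.86])  # knee moment, N*m/kg

k, intercept = np.polyfit(angle, moment, 1)  # slope = quasi-stiffness
print(f"flexion-stage quasi-stiffness ~ {k:.3f} (N*m/kg)/deg")
```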
Does childhood motor skill proficiency predict adolescent fitness?
Barnett, Lisa M; Van Beurden, Eric; Morgan, Philip J; Brooks, Lyndon O; Beard, John R
2008-12-01
To determine whether childhood fundamental motor skill proficiency predicts subsequent adolescent cardiorespiratory fitness. In 2000, children's proficiency in a battery of skills was assessed as part of an elementary school-based intervention. Participants were followed up during 2006/2007 as part of the Physical Activity and Skills Study, and cardiorespiratory fitness was measured using the Multistage Fitness Test. Linear regression was used to examine the relationship between childhood fundamental motor skill proficiency and adolescent cardiorespiratory fitness, controlling for gender. Composite object control (kick, catch, throw) and locomotor skill (hop, side gallop, vertical jump) scores were constructed for analysis. A separate linear regression examined the ability of the sprint run to predict cardiorespiratory fitness. Of the 928 original intervention participants, 481 were in 28 schools, 276 (57%) of whom were assessed. Two hundred and forty-four students (88.4%) completed the fitness test. One hundred and twenty-seven were females (52.1%), 60.1% of whom were in grade 10 and 39.0% were in grade 11. As children, almost all 244 completed each motor assessment, except for the sprint run (n = 154, 55.8%). The mean composite skill score in 2000 was 17.7 (SD 5.1). In 2006/2007, the mean number of laps on the Multistage Fitness Test was 50.5 (SD 24.4). Object control proficiency in childhood, adjusting for gender (P = 0.000), was associated with adolescent cardiorespiratory fitness (P = 0.012), accounting for 26% of fitness variation. Children with good object control skills are more likely to become fit adolescents. Fundamental motor skill development in childhood may be an important component of interventions aiming to promote long-term fitness.
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net CO2 assimilation versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimates of the A-Cc parameters [including the maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimation of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggest that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
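A hedged sketch of fitting just the Rubisco-limited portion of an A-Cc curve with Vcmax and Rd free; the kinetic constants are typical 25 °C literature values (assumed, not from this paper), the data are illustrative, and the paper's recommended approach fits all five parameters simultaneously over the whole curve.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed kinetic constants (approximate 25 C values):
GAMMA_STAR, KC, KO, O = 42.75, 404.9, 278.4, 210.0  # Kc in umol/mol; Ko, O in mmol/mol

def a_rubisco(Cc, Vcmax, Rd):
    """Rubisco-limited assimilation: Ac = Vcmax*(Cc - G*)/(Cc + Kc*(1+O/Ko)) - Rd."""
    return Vcmax * (Cc - GAMMA_STAR) / (Cc + KC * (1 + O / KO)) - Rd

Cc = np.array([50, 100, 150, 200, 250, 300.0])   # umol/mol, illustrative
A = np.array([2.1, 8.0, 12.2, 15.1, 17.3, 18.9])  # umol/m^2/s, illustrative

popt, _ = curve_fit(a_rubisco, Cc, A, p0=[80.0, 1.0])
print(dict(zip(["Vcmax", "Rd"], popt)))
```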
The H,G_1,G_2 photometric system with scarce observational data
NASA Astrophysics Data System (ADS)
Penttilä, A.; Granvik, M.; Muinonen, K.; Wilkman, O.
2014-07-01
The H,G_1,G_2 photometric system was officially adopted at the IAU General Assembly in Beijing, 2012. The system replaced the H,G system from 1985. The 'photometric system' is a parametrized model V(α; params) for the magnitude-phase relation of small Solar System bodies, and the main purpose is to predict the magnitude at backscattering, H := V(0°), i.e., the (absolute) magnitude of the object. The original H,G system was designed using the best available data in 1985, but since then new observations have been made showing certain features, especially near backscattering, to which the H,G function has trouble adjusting. The H,G_1,G_2 system was developed especially to address these issues [1]. With a sufficient number of high-accuracy observations and with a wide phase-angle coverage, the H,G_1,G_2 system performs well. However, with scarce low-accuracy data the system has trouble producing a reliable fit, as would any other three-parameter nonlinear function. Therefore, simultaneously with the H,G_1,G_2 system, a two-parameter version of the model, the H,G_{12} system, was introduced [1]. The two-parameter version ties the parameters G_1,G_2 into a single parameter G_{12} by a linear relation, and still uses the H,G_1,G_2 system in the background. This version dramatically improves the chances of obtaining a reliable phase-curve fit to scarce data. The number of observed small bodies is increasing all the time, and so is the need to produce estimates for the absolute magnitude/diameter/albedo and other size/composition related parameters. The lack of small-phase-angle observations is especially topical for near-Earth objects (NEOs). With these, even the two-parameter version faces problems. The previous procedure with the H,G system in such circumstances has been that the G-parameter has been fixed to some constant value, thus only fitting a single-parameter function. In conclusion, there is a definitive need for a reliable procedure to produce photometric fits to very scarce and low-accuracy data. There are a few details that should be considered with the H,G_1,G_2 or H,G_{12} systems with scarce data. The first point is the distribution of errors in the fit. The original H,G system allowed linear regression in the flux space, thus making the estimation computationally easier. The same principle was repeated with the H,G_1,G_2 system. There is, however, a major hidden assumption in the transformation. With regression modeling, the residuals should be distributed symmetrically around zero. If they are normally distributed, even better. We have noticed that, at least with some NEO observations, the residuals in the flux space are far from symmetric, and seem to be much more symmetric in the magnitude space. The result is that the nonlinear fit in magnitude space is far more reliable than the linear fit in the flux space. Since computers and nonlinear regression algorithms are efficient enough, we conclude that, in many cases, with low-accuracy data the nonlinear fit should be favored. In fact, there are statistical procedures that should be employed with the photometric fit. At the moment, the choice between the three-parameter and two-parameter versions is simply based on subjective decision-making. By checking parameter error and model comparison statistics, the choice could be made objectively. Similarly, the choice between the linear fit in flux space and the nonlinear fit in magnitude space should be based on a statistical test of unbiased residuals.
Furthermore, the so-called Box-Cox transform could be employed to find an optimal transformation somewhere between the magnitude and flux spaces. The H,G_1,G_2 system is based on cubic splines, and is therefore a bit more complicated to implement than a system with simpler basis functions. The same applies to a complete program that would automatically choose the best transform for the data, test whether the two- or three-parameter version of the model should be fitted, and produce the fitted parameters with their error estimates. Our group has already made implementations of the H,G_1,G_2 system publicly available [2]. We plan to implement the abovementioned improvements to the system and also make these tools public.
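A toy illustration of the magnitude-space versus flux-space choice, using a simple linear phase function in place of the spline-based H,G_1,G_2 basis; with scarce, noisy data the two H estimates can differ noticeably, which is the effect discussed above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in phase function V(alpha) = H + beta*alpha (not the H,G_1,G_2 basis).
model_mag = lambda a, H, b: H + b * a
model_flux = lambda a, H, b: 10 ** (-0.4 * (H + b * a))

alpha = np.array([8, 12, 17, 23, 28, 33.0])          # phase angles, deg
V = np.array([15.4, 15.6, 15.75, 16.1, 16.2, 16.5])  # noisy magnitudes

# Least squares on magnitudes (errors assumed symmetric in magnitude space):
(H_mag, _), _ = curve_fit(model_mag, alpha, V, p0=[15, 0.03])
# Least squares on fluxes (errors assumed symmetric in flux space):
(H_flux, _), _ = curve_fit(model_flux, alpha, 10 ** (-0.4 * V), p0=[15, 0.03])
print(H_mag, H_flux)
```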
NASA Astrophysics Data System (ADS)
Zhang, L.; Han, X. X.; Ge, J.; Wang, C. H.
2018-01-01
To determine the relationship between the compressive strength and flexural strength of pavement geopolymer grouting material, 20 groups of geopolymer grouting materials were prepared, and their compressive and flexural strengths were determined by mechanical property tests. After excluding abnormal values using boxplots, the results show that the compressive strength test results were normal, but there were two mild outliers in the 7-day flexural strength test. The compressive strength and flexural strength were fitted in SPSS, and six regression models were obtained. The relationship between compressive strength and flexural strength was best expressed by the cubic curve model, with a correlation coefficient of 0.842.
Prediction of optimum sorption isotherm: comparison of linear and non-linear method.
Kumar, K Vasanth; Sivanesan, S
2005-11-11
Equilibrium parameters for Bismarck brown onto rice husk were estimated by linear least squares and a trial-and-error non-linear method using the Freundlich, Langmuir and Redlich-Peterson isotherms. A comparison between the linear and non-linear methods of estimating the isotherm parameters is reported. The best-fitting isotherms were the Langmuir and Redlich-Peterson equations. The results show that the non-linear method could be a better way to obtain the parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
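The comparison in this abstract can be sketched as follows for the Langmuir isotherm, fitted once via its linearized Ce/qe form and once by direct non-linear least squares; the data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5, 10, 20, 40, 80, 160.0])         # equilibrium conc., mg/L
qe = np.array([3.2, 5.5, 8.6, 12.0, 14.8, 16.5])  # uptake, mg/g

# (1) Linearized form: Ce/qe = Ce/qm + 1/(K*qm) -> simple straight line.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_lin, K_lin = 1 / slope, slope / intercept

# (2) Direct non-linear least squares on the untransformed equation;
# avoids the error-structure distortion introduced by linearization.
(qm_nl, K_nl), _ = curve_fit(lambda C, qm, K: qm * K * C / (1 + K * C),
                             Ce, qe, p0=[qm_lin, K_lin])
print((qm_lin, K_lin), (qm_nl, K_nl))
```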
The Radial Variation of the Solar Wind Temperature-Speed Relationship
NASA Astrophysics Data System (ADS)
Elliott, H. A.; McComas, D. J.
2010-12-01
Generally, the solar wind temperature (T) and speed (V) are well correlated except in Interplanetary Coronal Mass Ejections, where this correlation breaks down. We have shown that at 1 AU the speed-temperature relationship is often well represented by a linear fit for a speed range spanning both the slow and fast wind. By examining all of the ACE and OMNI measurements, we found that when coronal holes are large the fast wind can have a different T-V relationship than the slow wind. The best example of this was in 2003, when there was a very large and long-lived outward-polarity coronal hole at low latitudes. The long-lived nature of the hole made it possible to clearly distinguish that large holes can have a different T-V relationship. We found it to be rare that holes are large enough and last long enough to have enough data points to clearly demonstrate this effect. In this study we compare the 2003 coronal hole observations from ACE with the Ulysses polar coronal hole measurements. In an even earlier ACE study we found that both the compression and rarefaction curves are linear, but the compression curve is shifted to higher temperatures. In this presentation we use Helios, Ulysses, and ACE measurements to examine how the T-V relationship varies with distance. The dynamic evolution of the solar wind parameters is revealed when we first separate compressions and rarefactions and then determine the radial profiles of the solar wind parameters. We find that the T-V relationship varies with distance; in particular, beyond 3 AU the differences between the compressions and rarefactions are quite important, and at such distances a simple linear fit does not represent the T-V distribution very well.
Guckenberger, Matthias; Klement, Rainer Johannes; Allgäuer, Michael; Appold, Steffen; Dieckmann, Karin; Ernst, Iris; Ganswindt, Ute; Holy, Richard; Nestle, Ursula; Nevinny-Stickel, Meinhard; Semrau, Sabine; Sterzing, Florian; Wittig, Andrea; Andratschke, Nicolaus; Flentje, Michael
2013-10-01
To compare the linear-quadratic (LQ) and the LQ-L formalism (linear cell survival curve beyond a threshold dose dT) for modeling local tumor control probability (TCP) in stereotactic body radiotherapy (SBRT) for stage I non-small cell lung cancer (NSCLC). This study is based on 395 patients from 13 German and Austrian centers treated with SBRT for stage I NSCLC. The median number of SBRT fractions was 3 (range 1-8) and the median single-fraction dose was 12.5 Gy (2.9-33 Gy); dose was prescribed to the median 65% PTV-encompassing isodose (60-100%). Assuming an α/β value of 10 Gy, we modeled TCP as a sigmoid-shaped function of the biologically effective dose (BED). Models were compared using maximum likelihood ratio tests as well as Bayes factors (BFs). There was strong evidence for a dose-response relationship in the total patient cohort (BFs>20), which was lacking in single-fraction SBRT (BFs<3). Using the PTV-encompassing dose or the maximum (isocentric) dose, our data indicated an LQ-L transition dose (dT) at 11 Gy (68% CI 8-14 Gy) or 22 Gy (14-42 Gy), respectively. However, the fit of the LQ-L models was not significantly better than a fit without the dT parameter (p=0.07, BF=2.1 and p=0.86, BF=0.8, respectively). Generally, isocentric doses resulted in much better dose-response relationships than PTV-encompassing doses (BFs>20). Our data suggest accurate modeling of local tumor control in fractionated SBRT for stage I NSCLC with the traditional LQ formalism. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
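A hedged sketch of the BED computation behind the comparison, using one common LQ-L parameterization in which the survival curve turns linear at dose dT per fraction with slope continuity; the paper's exact formulation may differ.

```python
# BED for n fractions of dose d, alpha/beta ratio ab (Gy).
def bed_lq(n, d, ab=10.0):
    return n * d * (1 + d / ab)

def bed_lql(n, d, ab=10.0, dT=11.0):
    # Below dT the LQ-L reduces to LQ; above dT the log-survival curve is
    # linear with the slope it had at dT (one common parameterization).
    if d <= dT:
        return bed_lq(n, d, ab)
    return n * (dT * (1 + dT / ab) + (d - dT) * (1 + 2 * dT / ab))

# Median schedule in the cohort, 3 x 12.5 Gy, alpha/beta = 10 Gy:
print(bed_lq(3, 12.5), bed_lql(3, 12.5, dT=11.0))
```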
Blood biomarkers in male and female participants after an Ironman-distance triathlon.
Danielsson, Tom; Carlsson, Jörg; Schreyer, Hendrik; Ahnesjö, Jonas; Ten Siethoff, Lasse; Ragnarsson, Thony; Tugetam, Åsa; Bergman, Patrick
2017-01-01
While overall physical activity is clearly associated with better short-term and long-term health, prolonged strenuous physical activity may result in a rise in acute levels of blood biomarkers used in clinical practice for diagnosis of various conditions or diseases. In this study, we explored the acute effects of a full Ironman-distance triathlon on biomarkers related to heart, liver, kidney and skeletal muscle damage immediately post-race and after one week's rest. We also examined whether sex, age, finishing time and body composition influenced the post-race values of the biomarkers. A sample of 30 subjects (50% women) was recruited to the study. The subjects were evaluated for body composition, and blood samples were taken on three occasions: before the race (T1), immediately after (T2) and one week after the race (T3). Linear regression models were fitted to analyse the independent contributions of sex and finishing time, controlled for weight, body fat percentage and age, to the biomarkers at the termination of the race (T2). Linear mixed models were fitted to examine whether the biomarkers differed between the sexes over time (T1-T3). Being male was a significant predictor of higher post-race (T2) levels of myoglobin, CK, and creatinine, and body weight was negatively associated with myoglobin. In general, the models were unable to explain the variation in the dependent variables. In the linear mixed models, an interaction between time (T1-T3) and sex was seen for myoglobin and creatinine, in which women had a less pronounced response to the race. Overall, women appear to tolerate the effects of prolonged strenuous physical activity better than men, as illustrated by their lower values of the biomarkers both post-race and during recovery.
Carsin-Vu, Aline; Corouge, Isabelle; Commowick, Olivier; Bouzillé, Guillaume; Barillot, Christian; Ferré, Jean-Christophe; Proisy, Maia
2018-04-01
To investigate changes in cerebral blood flow (CBF) in gray matter (GM) between 6 months and 15 years of age and to provide CBF values for the brain, GM, white matter (WM), hemispheres and lobes. Between 2013 and 2016, we retrospectively included all clinical MRI examinations with arterial spin labeling (ASL). We excluded subjects with a condition potentially affecting brain perfusion. For each subject, mean values of CBF in the brain, GM, WM, hemispheres and lobes were calculated. GM CBF was fitted using linear, quadratic and cubic polynomial regression against age. Regression models were compared with Akaike's information criterion (AIC) and likelihood ratio tests. 84 children were included (44 females/40 males). Mean CBF values were 64.2 ± 13.8 mL/100 g/min in GM and 29.3 ± 10.0 mL/100 g/min in WM. The best-fit model of brain perfusion was the cubic polynomial function (AIC = 672.7, versus AIC = 673.9 for the negative linear function and AIC = 674.1 for the quadratic polynomial function). However, no statistically significant difference between the tested models was found that would demonstrate superiority of the quadratic (p = 0.18) or cubic polynomial model (p = 0.06) over the negative linear regression model. No effect of general anesthesia (p = 0.34) or of gender (p = 0.16) was found. We provided values for ASL CBF in the brain, GM, WM, hemispheres, and lobes over a wide pediatric age range, showing approximately inverted U-shaped changes in GM perfusion over the course of childhood. Copyright © 2018 Elsevier B.V. All rights reserved.
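The polynomial-model comparison can be sketched as follows with hypothetical data; nested models are compared by AIC and a likelihood ratio statistic, as in the study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical GM CBF values with an inverted U-shape over childhood:
rng = np.random.default_rng(4)
age = rng.uniform(0.5, 15, 84)
cbf = 50 + 8 * age - 0.55 * age ** 2 + rng.normal(0, 6, 84)

fits = {}
for deg in (1, 2, 3):  # linear, quadratic, cubic polynomial in age
    X = sm.add_constant(np.vander(age, deg + 1, increasing=True)[:, 1:])
    fits[deg] = sm.OLS(cbf, X).fit()
print({d: round(f.aic, 1) for d, f in fits.items()})

# Likelihood ratio statistic, quadratic vs cubic (nested models):
lr = 2 * (fits[3].llf - fits[2].llf)
print("LR stat:", lr)
```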
An Alternative to the Breeder’s and Lande’s Equations
Houchmandzadeh, Bahram
2013-01-01
The breeder’s equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Noting the mean phenotype in the parental, selected-parent, and progeny populations by E(Z0), E(ZW), and E(Z1), this equation relates the response to selection R = E(Z1) − E(Z0) to the selection differential S = E(ZW) − E(Z0) through a simple proportionality relation R = h²S, where the heritability coefficient h² is a simple function of the genotype and environment variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean μ, an alternative, exact linear equation of the form R′ = j²S′ can be derived, regardless of the parental genotype distribution. Here R′ = E(Z1) − μ and S′ = E(ZW) − μ stand for the mean phenotypic lags with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient j² is a simple function of the selection-function and environment variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotype in the selected and the offspring populations and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande’s equation Δz̄ = GP⁻¹S. The linearity coefficients of the alternative equation are not changed by Gaussian selection. PMID:24212080
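A toy Monte Carlo consistent with the abstract's claim, under the simplest transmission model we could add (clonal: offspring phenotype equals parental genotype plus fresh environmental noise). Under these added assumptions R′/S′ = 1 + σE²/σS², independent of the genotype distribution; both this sketch and that expression are ours, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 500_000
var_E, var_S, mu = 1.0, 4.0, 2.0   # environmental and selection variances

# Deliberately non-Gaussian parental genotypes (bimodal mixture):
g = np.where(rng.random(N) < 0.3,
             rng.normal(-2.0, 0.5, N),
             rng.normal(1.0, 1.0, N))
z0 = g + rng.normal(0.0, np.sqrt(var_E), N)   # parental phenotypes

w = np.exp(-(z0 - mu) ** 2 / (2 * var_S))     # Gaussian fitness function
S_prime = np.average(z0, weights=w) - mu      # E(Z_W) - mu, selected parents
R_prime = np.average(g, weights=w) - mu       # E(Z_1) - mu (fresh noise has mean 0)

print(R_prime / S_prime, 1 + var_E / var_S)   # ratio matches 1 + var_E/var_S
```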
Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert
2016-01-01
Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using average census from previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. Proposed forecasting models resulted on average in 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. Presented methodology is easily applicable in clinical practice, can be generalized to other care settings, support short- and long-term census forecasting, and inform staff resource planning.
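A minimal sketch of one reported specification, ARIMA(1,0,0)x(1,1,2) with a 7-day season, on hypothetical daily census counts via statsmodels; real use would compare candidate orders on held-out data as the authors did.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical daily census with a slight trend and weekly pattern:
rng = np.random.default_rng(5)
idx = pd.date_range("2008-01-01", periods=1827, freq="D")
census = pd.Series(30 + 0.002 * np.arange(1827)
                   + 2 * np.sin(2 * np.pi * np.arange(1827) / 7)
                   + rng.normal(0, 2, 1827), index=idx)

model = SARIMAX(census, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7)).fit(disp=False)
print(model.forecast(steps=7).round(1))   # 7-day-ahead census forecast
```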
Equilibrium, kinetics and process design of acid yellow 132 adsorption onto red pine sawdust.
Can, Mustafa
2015-01-01
Linear and non-linear regression procedures were applied to the Langmuir, Freundlich, Tempkin, Dubinin-Radushkevich, and Redlich-Peterson isotherms for adsorption of acid yellow 132 (AY132) dye onto red pine (Pinus resinosa) sawdust. The effects of parameters such as particle size, stirring rate, contact time, dye concentration, adsorbent dose, pH, and temperature were investigated, and the interaction was characterized by Fourier transform infrared spectroscopy and field emission scanning electron microscopy. The non-linear method applied to the Langmuir isotherm equation was found to give the best-fitting model for the equilibrium data. The maximum monolayer adsorption capacity was found to be 79.5 mg/g. The calculated thermodynamic results suggested that AY132 adsorption onto red pine sawdust was an exothermic, physisorption-dominated, and spontaneous process. Kinetics were analyzed with four different kinetic equations using non-linear regression analysis. The pseudo-second-order equation provided the best fit to the experimental data.
Aerobic fitness does not modify the effect of FTO variation on body composition traits.
Huuskonen, Antti; Lappalainen, Jani; Oksala, Niku; Santtila, Matti; Häkkinen, Keijo; Kyröläinen, Heikki; Atalay, Mustafa
2012-01-01
Poor physical fitness and obesity are risk factors for all-cause morbidity and mortality. We aimed to clarify whether common genetic variants of key energy intake determinants in leptin (LEP), the leptin receptor (LEPR), and the fat mass and obesity-associated gene (FTO) are associated with aerobic and neuromuscular performance, and whether aerobic fitness can alter the effect of these genotypes on body composition. 846 healthy Finnish males of Caucasian origin were genotyped for FTO (rs8050136), LEP (rs7799039), and LEPR (rs8179183 and rs1137101) single nucleotide polymorphisms (SNPs), and studied for associations with maximal oxygen consumption, body fat percent, serum leptin levels, waist circumference, and maximal force of the leg extensor muscles. Genotype AA of the FTO SNP rs8050136 was associated with higher BMI and greater waist circumference compared with genotype CC. In a general linear model, no significant interaction of FTO genotype with relative VO2max (mL·kg−1·min−1) or with absolute VO2max (L·min−1) on BMI or waist circumference was found. Main effects of aerobic performance on body composition traits were significant (p<0.001). Logistic regression modelling found no significant interaction between aerobic fitness and FTO genotype. The LEP SNP rs7799039 and the LEPR SNPs rs8179183 and rs1137101 were not associated with any of the measured variables, and no significant interactions of LEP or LEPR genotype with aerobic fitness were observed. In addition, none of the studied SNPs was associated with aerobic or neuromuscular performance. Aerobic fitness may not modify the effect of FTO variation on body composition traits. However, relative aerobic capacity is associated with lower BMI and waist circumference regardless of FTO genotype. FTO, LEP, and LEPR genotypes are unlikely to be associated with physical performance.
Maternal heterozygosity and progeny fitness association in an inbred Scots pine population.
Abrahamsson, S; Ahlinder, J; Waldmann, P; García-Gil, M R
2013-03-01
Associations between heterozygosity and fitness traits have typically been investigated in populations characterized by low levels of inbreeding. We investigated the associations between standardized multilocus heterozygosity (stMLH) in mother trees (obtained from 12 nuclear microsatellite markers) and five fitness traits measured in progenies from an inbred Scots pine population. The traits studied were proportion of sound seed, mean seed weight, germination rate, mean family height of one-year-old seedlings under greenhouse conditions (GH), and mean family height of three-year-old seedlings under field conditions (FH). The relatively high average inbreeding coefficient (F) in the population under study corresponds to a mixture of trees with different levels of co-ancestry, potentially resulting from a recent bottleneck. We used both frequentist and Bayesian methods of polynomial regression to investigate the presence of linear and non-linear relations between stMLH and each of the fitness traits. No significant associations were found for any of the traits except GH, which displayed a negative linear effect of stMLH. A negative heterozygosity-fitness correlation (HFC) for GH could potentially be explained by heterosis caused by the mating of two inbred mother trees (Lippman and Zamir 2006), or by outbreeding depression in the most heterozygous trees and its negative impact on the fitness of the progeny, while their simultaneous action is also possible (Lynch 1991). However, since this effect was not detected for FH, we cannot rule out that the greenhouse conditions introduce artificial effects that disappear under more realistic field conditions.
Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?
NASA Astrophysics Data System (ADS)
Gasbarro, Andrew
2018-03-01
In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the pseudo-Nambu-Goldstone bosons (PNGBs) at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternative EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has χ²/d.o.f. = 0.5, compared with χ²/d.o.f. = 29.6 from fitting next-to-leading-order chiral perturbation theory. When the 0++ (σ) mass is included in the fit, χ²/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.
Estimation of Quasi-Stiffness of the Human Hip in the Stance Phase of Walking
Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.
2013-01-01
This work presents a framework for the selection of subject-specific quasi-stiffness of hip orthoses, exoskeletons, and other devices that are intended to emulate the biological performance of this joint during walking. The hip joint exhibits linear moment-angular excursion behavior in both the extension and flexion stages of the resilient loading-unloading phase, which consists of the terminal stance and initial swing phases. Here, we establish statistical models that can closely estimate the slope of linear fits to the moment-angle graph of the hip in this phase, termed the quasi-stiffness of the hip. Employing an inverse dynamics analysis, we identify a series of parameters that can capture the nearly linear hip quasi-stiffnesses in the resilient loading phase. We then employ regression analysis on experimental moment-angle data from 216 gait trials across 26 human adults walking over a wide range of gait speeds (0.75–2.63 m/s) to obtain a set of general-form statistical models that estimate the hip quasi-stiffnesses using body weight and height, gait speed, and hip excursion. We show that the general-form models can closely estimate the hip quasi-stiffness in the extension (R2 = 92%) and flexion portions (R2 = 89%) of the resilient loading phase of the gait. We further simplify the general-form models and present a set of stature-based models that can estimate the hip quasi-stiffness for the preferred gait speed using only body weight and height, with an average error of 27% for the extension stage and 37% for the flexion stage. PMID:24349136
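The core operation here, estimating a joint quasi-stiffness as the slope of a linear moment-angle fit, is simple to reproduce. A minimal sketch with invented extension-stage samples (units and values illustrative only; real trials would come from inverse dynamics, as in the paper):

```python
import numpy as np

# Illustrative moment-angle samples from the extension stage (rad, N·m/kg).
angle  = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35])
moment = np.array([0.22, 0.41, 0.58, 0.80, 0.99, 1.21])

slope, intercept = np.polyfit(angle, moment, 1)   # quasi-stiffness = slope
pred = slope * angle + intercept
r2 = 1 - np.sum((moment - pred) ** 2) / np.sum((moment - moment.mean()) ** 2)
print(f"quasi-stiffness = {slope:.2f} N·m/(kg·rad), R² = {r2:.3f}")
```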
Mathcad in the Chemistry Curriculum: Symbolic Software in the Chemistry Curriculum
NASA Astrophysics Data System (ADS)
Zielinski, Theresa Julia
2000-05-01
Physical chemistry is such a broad discipline that the topics we expect average students to complete in two semesters usually exceed their ability for meaningful learning. Consequently, the number and kind of topics and the efficiency with which students can learn them are important concerns. What topics are essential and what can we do to provide efficient and effective access to those topics? How do we accommodate the fact that students come to upper-division chemistry courses with a variety of nonuniformly distributed skills, a bit of calculus, and some physics studied one or more years before physical chemistry? The critical balance between depth and breadth of learning in courses and curricula may be achieved through appropriate use of technology and especially through the use of symbolic mathematics software. Software programs such as Mathcad, Mathematica, and Maple, however, have learning curves that diminish their effectiveness for novices. There are several ways to address the learning curve conundrum. First, basic instruction in the software provided during laboratory sessions should be followed by requiring laboratory reports that use the software. Second, one should assign weekly homework that requires the software and builds student skills within the discipline and with the software. Third, a complementary method, supported by this column, is to provide students with Mathcad worksheets or templates that focus on one set of related concepts and incorporate a variety of features of the software that they are to use to learn chemistry. In this column we focus on two significant topics for young chemists. The first is curve-fitting and the statistical analysis of the fitting parameters. The second is the analysis of the rotation/vibration spectrum of a diatomic molecule, HCl. A broad spectrum of Mathcad documents exists for teaching chemistry. One collection of 50 documents can be found at http://www.monmouth.edu/~tzielins/mathcad/Lists/index.htm. Another collection of peer-reviewed documents is developing through this column at the JCE Internet Web site, http://jchemed.chem.wisc.edu/JCEWWW/Features/McadInChem/index.html. With this column we add three peer-reviewed and tested Mathcad documents to the JCE site. In Linear Least-Squares Regression, Sidney H. Young and Andrzej Wierzbicki demonstrate various implicit and explicit methods for determining the slope and intercept of the regression line for experimental data. The document shows how to determine the standard deviations of the slope and the intercept, and the standard deviation of the overall fit. Students are next given the opportunity to examine the confidence level for the fit through the Student's t-test. Examination of the residuals of the fit leads students to explore the possibility of rejecting points in a set of data. The document concludes with a discussion of and practice with adding a quadratic term to create a polynomial fit to a set of data and how to determine if the quadratic term is statistically significant. There is full documentation of the various steps used throughout the exposition of the statistical concepts. Although the statistical methods presented in this worksheet are generally accessible to average physical chemistry students, an instructor would be needed to explain the finer points of the matrix methods used in some sections of the worksheet. The worksheet is accompanied by a set of data for students to use to practice the techniques presented.
It would be worthwhile for students to spend one or two laboratory periods learning to use the concepts presented and then to apply them to experimental data they have collected for themselves. Any linear or linearizable data set would be appropriate for use with this Mathcad worksheet. Alternatively, instructors may select sections of the document suited to the skill level of their students and the laboratory tasks at hand. In a second Mathcad document, Non-Linear Least-Squares Regression, Young and Wierzbicki introduce the basic concepts of nonlinear curve-fitting and develop the techniques needed to fit a variety of mathematical functions to experimental data. This approach is especially important when mathematical models for chemical processes cannot be linearized. In Mathcad the Levenberg-Marquardt algorithm is used to determine the best fitting parameters for a particular mathematical model. As in linear least-squares, the goal of the fitting process is to find the values for the fitting parameters that minimize the sum of the squares of the deviations between the data and the mathematical model. Students are asked to determine the fitting parameters, use the Hessian matrix to compute the standard deviations of the fitting parameters, test for the significance of the parameters using Student's t-test, use residual analysis to test for data points to remove, and repeat the calculations for another set of data. The nonlinear least-squares procedure follows closely the pattern set up for linear least-squares by the same authors (see above). If students master the linear least-squares worksheet content they will be able to master the nonlinear least-squares technique (see also refs 1, 2). In the third document, The Analysis of the Vibrational Spectrum of a Linear Molecule by Richard Schwenz, William Polik, and Sidney Young, the authors build on the concepts presented in the curve-fitting worksheets described above. This vibrational analysis document, which supports a classic experiment performed in the physical chemistry laboratory, shows how a Mathcad worksheet can make a complicated set of data-reduction manipulations more efficient and more accessible to students. The increase in efficiency frees up time for students to develop a fuller understanding of the physical chemistry concepts important to the interpretation of spectra and the understanding of bond vibrations in general. The analysis of the vibration/rotation spectrum of a linear molecule builds on the rich literature for this topic (3). Before analyzing their own spectral data, students practice and learn the concepts and methods of the HCl spectral analysis by using the fundamental and first harmonic vibrational frequencies provided by the authors. This approach has a fundamental pedagogical advantage. Most explanations in laboratory texts are very concise and lack the mathematical details required by average students. This Mathcad worksheet acts as a tutor; it guides students through the essential concepts for data reduction and lets them focus on learning important spectroscopic concepts. The Mathcad worksheet is amply annotated. Students who have moderate skill with the software and have learned about regression analysis from the curve-fitting worksheets described in this column will be able to complete and understand their analysis of the IR spectrum of HCl.
The three Mathcad worksheets described here stretch the physical chemistry curriculum by presenting important topics in forms that students can use with only moderate Mathcad skills. The documents facilitate learning by giving students opportunities to interact with the material in meaningful ways in addition to using the documents as sources of techniques for building their own data-reduction worksheets. However, working through these Mathcad worksheets is not a trivial task for the average student. Support needs to be provided by the instructor to ease students through more advanced mathematical and Mathcad processes. These worksheets raise the question of how much we can ask diligent students to do in one course and how much time they need to spend to master the essential concepts of that course. The Mathcad documents and associated PDF versions are available at the JCE Internet WWW site. The Mathcad documents require Mathcad version 6.0 or higher and the PDF files require Adobe Acrobat. Every effort has been made to make the documents fully compatible across the various Mathcad versions. Users may need to refer to Mathcad manuals for functions that vary with the Mathcad version number. Literature Cited 1. Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill: New York, 1969. 2. Zielinski, T. J.; Allendoerfer, R. D. J. Chem. Educ. 1997, 74, 1001. 3. Schwenz, R. W.; Polik, W. F. J. Chem. Educ. 1999, 76, 1302.
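The statistical machinery these worksheets teach (slope and intercept with standard errors and t-based confidence intervals) is language-independent. Here is a compact Python equivalent of the linear least-squares steps, offered as an analogue to the Mathcad treatment rather than a transcription of it; the data are placeholders.

```python
import numpy as np
from scipy import stats

# Placeholder x-y data standing in for a student's experimental set.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1])

res = stats.linregress(x, y)
n = len(x)
t_crit = stats.t.ppf(0.975, n - 2)     # two-sided 95% Student's t quantile
print(f"slope     = {res.slope:.3f} ± {t_crit * res.stderr:.3f} (95% CI)")
print(f"intercept = {res.intercept:.3f} ± {t_crit * res.intercept_stderr:.3f}")
print(f"R² = {res.rvalue**2:.4f}")
```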
NASA Astrophysics Data System (ADS)
Yadav, Manish; Singh, Nitin Kumar
2017-12-01
A comparison of the linear and non-linear regression methods for selecting the optimum isotherm among three commonly used adsorption isotherms (Langmuir, Freundlich, and Redlich-Peterson) was made using experimental data on fluoride (F) sorption onto Bio-F at a solution temperature of 30 ± 1 °C. The coefficient of determination (r2) was used to select the best theoretical isotherm among those investigated. A total of four linear Langmuir equations were discussed, of which the linearized forms of the most popular, Langmuir-1 and Langmuir-2, showed higher coefficients of determination (0.976 and 0.989) than the other Langmuir linear equations. The Freundlich and Redlich-Peterson isotherms showed a better fit to the experimental data with the linear least-squares method, while with the non-linear method the Redlich-Peterson isotherm showed the best fit to the tested data set. The present study showed that the non-linear method can be a better way to obtain the isotherm parameters and to identify the most suitable isotherm. The Redlich-Peterson isotherm was found to be the best representative (r2 = 0.999) of this sorption system. It is also observed that the values of β are not close to unity, which means the isotherms approach the Freundlich rather than the Langmuir isotherm.
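Different linearized Langmuir forms give different r2 values because each transformation distorts the error structure differently. The sketch below contrasts Langmuir-1 (Ce/qe vs. Ce) and Langmuir-2 (1/qe vs. 1/Ce) on invented data; the algebraic forms and recovered qmax are standard, but the numbers are not the study's.

```python
import numpy as np

# Illustrative sorption data (Ce in mg/L, qe in mg/g), not the study's.
ce = np.array([2.0, 4.0, 8.0, 15.0, 30.0, 60.0])
qe = np.array([1.1, 1.8, 2.6, 3.3, 3.9, 4.3])

# Langmuir-1: Ce/qe = Ce/qmax + 1/(KL*qmax)  ->  qmax = 1/slope
s1, i1 = np.polyfit(ce, ce / qe, 1)
r2_1 = np.corrcoef(ce, ce / qe)[0, 1] ** 2
print(f"Langmuir-1: qmax = {1/s1:.2f} mg/g, r2 = {r2_1:.4f}")

# Langmuir-2: 1/qe = (1/(KL*qmax)) * 1/Ce + 1/qmax  ->  qmax = 1/intercept
s2, i2 = np.polyfit(1 / ce, 1 / qe, 1)
r2_2 = np.corrcoef(1 / ce, 1 / qe)[0, 1] ** 2
print(f"Langmuir-2: qmax = {1/i2:.2f} mg/g, r2 = {r2_2:.4f}")
```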
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
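GLUE is easy to prototype: sample parameter sets from broad priors, score each set with an informal likelihood, and keep the "behavioral" sets whose score clears a threshold. A minimal sketch for a constant (first-order) inactivation model on synthetic data; all values are invented, and the exponential likelihood and 0.1 threshold are common but arbitrary GLUE choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic log-inactivation series: first-order decay at rate k_true.
t = np.linspace(0, 60, 13)                       # days
k_true = 0.05
obs = -k_true * t + rng.normal(0, 0.1, t.size)   # log10(C/C0)

# GLUE: sample candidate rates, keep behavioral ones by informal likelihood.
k = rng.uniform(0.0, 0.2, 20_000)
sse = ((obs[None, :] + k[:, None] * t[None, :]) ** 2).sum(axis=1)
lik = np.exp(-sse / sse.min())                   # informal likelihood measure
behavioral = k[lik > 0.1]
print(f"k = {behavioral.mean():.3f}, 90% band: "
      f"{np.percentile(behavioral, 5):.3f}-{np.percentile(behavioral, 95):.3f}")
```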
ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.
Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong
2007-09-01
Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are the complexity of estimating this kind of model and the problem of "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model. In addition, the data-fitting performance of the BPNN model is consistently worse than that of the BNN model, which suggests that the BNN model has better generalization abilities and can effectively alleviate the over-fitting problem without significantly compromising nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and the improvement of prediction capabilities for evaluating different highway design alternatives.
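The NB regression baseline in this comparison is straightforward to fit with standard tools. A hedged sketch on synthetic segment data follows; the covariates, coefficients, and dispersion value are invented, since the Texas data are not public.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic crash counts on road segments (overdispersed via a gamma mix).
rng = np.random.default_rng(3)
n = 500
aadt = rng.uniform(500, 10_000, n)               # traffic volume
length = rng.uniform(0.1, 5.0, n)                # segment length (miles)
mu = np.exp(-6.0 + 0.8 * np.log(aadt) + np.log(length))
crashes = rng.poisson(mu * rng.gamma(2.0, 0.5, n))

X = sm.add_constant(np.column_stack([np.log(aadt), np.log(length)]))
nb = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.params)         # intercept and elasticities on the log scale
```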
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.
Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although many pieces of research have been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted with the classical approach and with Bayesian approaches under both non-informative and informative priors, using the South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to construct priors for the 2014 model.
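One simple way to operationalize "priors from earlier survey years" is to center an informative normal prior on coefficient estimates from the earlier data. The sketch below does this for a toy logistic model with a hand-rolled random-walk Metropolis sampler; all data, prior values, and tuning constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy binary outcome with one covariate, standing in for the survey data.
n = 800
x = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(-2.0 + 0.7 * x)))
y = rng.binomial(1, p)

# Informative prior, e.g. posterior summaries from earlier survey years.
prior_mean = np.array([-1.8, 0.6])
prior_sd = np.array([0.5, 0.3])

def log_post(beta):
    eta = beta[0] + beta[1] * x
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))       # Bernoulli logit
    logprior = -0.5 * np.sum(((beta - prior_mean) / prior_sd) ** 2)
    return loglik + logprior

# Random-walk Metropolis sampling of the posterior.
beta, draws = prior_mean.copy(), []
for _ in range(20_000):
    prop = beta + rng.normal(0, 0.05, 2)
    if np.log(rng.random()) < log_post(prop) - log_post(beta):
        beta = prop
    draws.append(beta)
print("posterior means:", np.mean(draws[5000:], axis=0))
```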
Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems
NASA Astrophysics Data System (ADS)
Vasant, P.; Barsoum, N.
2010-06-01
In this paper, a hybridization of the general pattern search (GPS) method and simulated annealing (SA) is incorporated into the optimization process in order to find the global optimal solution for the fitness function and decision variables with minimal computational CPU time. The real strength of the SA approach is tested on this case-study problem of industrial production planning. SA has the great advantage of easily escaping from local minima by accepting uphill moves through a probabilistic procedure in the final stages of the optimization process. In his PhD thesis, Vasant [1] provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with non-linear cubic objective functions, eight decision variables, and 29 constraints. In this paper, fuzzy technological problems have been solved using the hybrid general pattern search and simulated annealing technique. The simulation and computational results are compared with those of various other evolutionary techniques.
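The SA half of the hybrid is compact enough to sketch. Below, a generic simulated annealing loop minimizes a stand-in smooth objective over eight bounded decision variables; the paper's actual cubic objective and 29 constraints are not reproduced, and the cooling rate and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x):
    # Stand-in non-linear objective over eight decision variables;
    # not the paper's production-planning model.
    return np.sum(x**3 - 5.0 * x**2 + 3.0 * x)

x = rng.uniform(0.0, 4.0, 8)
f = objective(x)
best_x, best_f = x.copy(), f
T = 1.0                                    # initial temperature
for _ in range(50_000):
    cand = np.clip(x + rng.normal(0.0, 0.1, x.size), 0.0, 4.0)
    f_cand = objective(cand)
    # Always accept downhill moves; accept uphill with Boltzmann probability.
    if f_cand < f or rng.random() < np.exp(-(f_cand - f) / T):
        x, f = cand, f_cand
        if f < best_f:
            best_x, best_f = x.copy(), f
    T *= 0.9999                            # geometric cooling schedule
print(best_f, best_x)
```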
Comparing the Fit of Item Response Theory and Factor Analysis Models
ERIC Educational Resources Information Center
Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo
2011-01-01
Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…
NASA Astrophysics Data System (ADS)
Ziegler, Benjamin; Rauhut, Guntram
2016-03-01
The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
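The computational trick, solving the direct-product least-squares problem mode-by-mode instead of forming the full Kronecker matrix, can be seen in a few lines. A toy two-mode sketch follows (polynomial fit bases and a synthetic 2D potential; this illustrates the Kronecker structure, not the paper's algorithm in full generality).

```python
import numpy as np

# 1D grids and polynomial fit bases for two modes.
x1, x2 = np.linspace(-1, 1, 15), np.linspace(-1, 1, 17)
A1 = np.vander(x1, 4, increasing=True)      # 15 x 4 basis matrix, mode 1
A2 = np.vander(x2, 5, increasing=True)      # 17 x 5 basis matrix, mode 2

# Direct-product grid of potential values (toy 2D surface).
V = np.add.outer(x1**2, x2**2) + 0.3 * np.outer(x1, x2**3)

# Kronecker-structured least squares: vec(V) ≈ (A2 ⊗ A1) vec(C),
# solved mode-by-mode without ever building the Kronecker matrix.
C = np.linalg.pinv(A1) @ V @ np.linalg.pinv(A2).T
print("max fit residual:", np.abs(A1 @ C @ A2.T - V).max())
```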
Garcia-Hermoso, A; Agostinis-Sobrinho, C; Mota, J; Santos, R M; Correa-Bautista, J E; Ramírez-Vélez, R
2017-06-01
Studies in the paediatric population have shown inconsistent associations between cardiorespiratory fitness and inflammation independent of adiposity. The purpose of this study was (i) to analyse the combined association of cardiorespiratory fitness and adiposity with high-sensitivity C-reactive protein (hs-CRP), and (ii) to determine whether adiposity acts as a mediator of the association between cardiorespiratory fitness and hs-CRP in children and adolescents. This cross-sectional study included 935 (54.7% girls) healthy children and adolescents from Bogotá, Colombia. The 20 m shuttle run test was used to estimate cardiorespiratory fitness. We assessed the following adiposity parameters: body mass index, waist circumference, fat mass index, and the sum of subscapular and triceps skinfold thicknesses. High-sensitivity assays were used to obtain hs-CRP. Linear regression models were fitted for mediation analyses, which examined whether the association between cardiorespiratory fitness and hs-CRP was mediated by each of the adiposity parameters according to the Baron and Kenny procedure. Lower levels of hs-CRP were associated with the most favourable schoolchildren profiles (high cardiorespiratory fitness + low adiposity) (p for trend <0.001 for all four adiposity parameters), compared with unfit and overweight (low cardiorespiratory fitness + high adiposity) counterparts. Linear regression models suggest a full mediation by adiposity of the association between cardiorespiratory fitness and hs-CRP levels. Our findings seem to emphasize the importance of obesity prevention in childhood, suggesting that high levels of cardiorespiratory fitness may not counteract the negative consequences ascribed to adiposity for hs-CRP. Copyright © 2017 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.
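The Baron and Kenny procedure amounts to three regressions. A minimal sketch on synthetic data constructed so that the fitness effect on hs-CRP vanishes once the adiposity mediator is controlled for; variable names and effect sizes are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 900
crf = rng.normal(50, 8, n)                        # shuttle-run laps (fitness)
fatmass = 30 - 0.25 * crf + rng.normal(0, 2, n)   # adiposity (mediator)
hscrp = 0.5 + 0.12 * fatmass + rng.normal(0, 0.3, n)

def fit(y, *cols):
    return sm.OLS(y, sm.add_constant(np.column_stack(cols))).fit()

step1 = fit(hscrp, crf)             # total effect of fitness on hs-CRP
step2 = fit(fatmass, crf)           # fitness -> mediator path
step3 = fit(hscrp, crf, fatmass)    # direct effect, mediator controlled
print("total effect: ", step1.params[1])
print("direct effect:", step3.params[1])   # shrinks toward 0 -> full mediation
```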
Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.
2017-01-01
We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
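The core fit, a weighted quadratic relating signal variance to signal mean with standard errors on the fitted parameters, can be sketched as follows; the bead-like summary statistics are synthetic, and the weighting and model form follow the general description above rather than flowQB's exact internals.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic mean-variance pairs, mimicking a multi-level bead set.
mean = np.array([120.0, 480.0, 1900.0, 7600.0, 30500.0, 122000.0])
var = (2500.0 + 1.2 * mean + 1e-6 * mean**2) * (1 + rng.normal(0, 0.03, 6))

# Weighted quadratic fit  var ≈ c0 + c1*mean + c2*mean²,
# down-weighting the high-variance points.
A = np.vander(mean, 3, increasing=True)
w = 1.0 / var
Aw, yw = A * w[:, None], var * w
coef, *_ = np.linalg.lstsq(Aw, yw, rcond=None)

resid = yw - Aw @ coef
sigma2 = (resid @ resid) / (len(var) - 3)
stderr = np.sqrt(np.diag(sigma2 * np.linalg.inv(Aw.T @ Aw)))
print("c0, c1, c2:     ", coef)
print("standard errors:", stderr)
```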
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. The time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. The rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of the time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths is examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is the most robust and is expected to be the most practical numerical technique; it is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias toward a too-early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
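For the α = 2 case, the graphical technique reduces to extrapolating a straight-line fit of inverse rate to its zero crossing on the time axis. A synthetic demonstration (values invented; real precursor data are far noisier):

```python
import numpy as np

# Synthetic precursor rates obeying d(rate)/dt = A * rate**2, for which
# the inverse rate declines linearly to zero at the failure time t_f.
A, t_f = 0.5, 100.0
t = np.linspace(0, 90, 30)
rate = 1.0 / (A * (t_f - t))                       # solution of the FFM ODE
rate *= 1 + np.random.default_rng(8).normal(0, 0.02, t.size)

# Graphical technique: extrapolate a linear fit of 1/rate to the time axis.
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
print("forecast eruption time:", -intercept / slope)   # should be near 100
```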
Trends in asthma mortality in the 0- to 4-year and 5- to 34-year age groups in Brazil
Graudenz, Gustavo Silveira; Carneiro, Dominique Piacenti; Vieira, Rodolfo de Paula
2017-01-01
Objective: To provide an update on trends in asthma mortality in Brazil for two age groups: 0-4 years and 5-34 years. Methods: Data on mortality from asthma, as defined in the International Classification of Diseases, were obtained for the 1980-2014 period from the Mortality Database maintained by the Information Technology Department of the Brazilian Unified Health Care System. To analyze time trends in standardized asthma mortality rates, we conducted an ecological time-series study, using regression models for the 0- to 4-year and 5- to 34-year age groups. Results: There was a linear trend toward a decrease in asthma mortality in both age groups, whereas the trend in the general population was best described by a third-order polynomial. Conclusions: Although asthma mortality showed a consistent, linear decrease in individuals ≤ 34 years of age, the rate of decline was greater in the 0- to 4-year age group. The 5- to 34-year group also showed a linear decline in mortality, and the rate of that decline increased after the year 2004, when treatment with inhaled corticosteroids became more widely available. The linear decrease in asthma mortality found in both age groups contrasts with the nonlinear trend observed in the general population of Brazil. The introduction of inhaled corticosteroid use through public policies to control asthma coincided with a significant decrease in asthma mortality rates in both subsets of individuals over 5 years of age. The causes of this decline in asthma-related mortality in younger age groups continue to constitute a matter of debate. PMID:28380185
Non-linearity of the collagen triple helix in solution and implications for collagen function.
Walker, Kenneth T; Nan, Ruodan; Wright, David W; Gor, Jayesh; Bishop, Anthony C; Makhatadze, George I; Brodsky, Barbara; Perkins, Stephen J
2017-06-16
Collagen adopts a characteristic supercoiled triple helical conformation which requires a repeating (Xaa-Yaa-Gly)n sequence. Despite the abundance of collagen, a combined experimental and atomistic modelling approach has not so far quantitated the degree of flexibility seen experimentally in the solution structures of collagen triple helices. To address this question, we report an experimental study on the flexibility of collagen triple helical peptides of varying lengths, composed of six, eight, ten and twelve repeats of the most stable Pro-Hyp-Gly (POG) unit. In addition, one unblocked peptide, (POG)10-unblocked, was compared with the blocked (POG)10 as a control for the significance of end effects. Complementary analytical ultracentrifugation and synchrotron small angle X-ray scattering data showed that the conformations of the longer triple helical peptides were not well explained by a linear structure derived from crystallography. To interpret these data, molecular dynamics simulations were used to generate 50 000 physically realistic collagen structures for each of the helices. These structures were fitted against their respective scattering data to reveal the best-fitting structures from this large ensemble of possible helix structures. This curve fitting confirmed that a small degree of non-linearity exists in these best-fit triple helices, with the degree of bending approximated as 4-17° from linearity. Our results open the way for further studies of other collagen triple helices with different sequences and stabilities in order to clarify the role of molecular rigidity and flexibility in collagen extracellular and immune function and disease. © 2017 The Author(s).
A generalized target theory and its applications.
Zhao, Lei; Mi, Dong; Hu, Bei; Sun, Yeqing
2015-09-28
Different radiobiological models have been proposed to estimate cell-killing effects, which are very important in radiotherapy and radiation risk assessment. However, most applied models have their own scopes of application. In this work, by generalizing the relationship between "hit" and "survival" in traditional target theory with the Yager negation operator from fuzzy mathematics, we propose a generalized target model of radiation-induced cell inactivation that takes into account both cellular repair effects and indirect effects of radiation. The simulation results of the model and a rethinking of "the number of targets in a cell" and "the number of hits per target" suggest that it is only necessary to investigate the generalized single-hit single-target (GSHST) model in the present theoretical frame. Analysis shows that the GSHST model reduces to the linear quadratic model and the multitarget model in the low-dose and high-dose regions, respectively. The fitting results show that the GSHST model agrees well with the usual experimental observations. In addition, the present model can be used to effectively predict cellular repair capacity, radiosensitivity, target size, and especially the biologically effective dose for treatment planning in clinical applications.
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods for estimating regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about .31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM again showed the weakest model fit. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE of the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may also reflect a sample size inadequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to transformation of the skewed dependent cost variable.
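The three estimators compared above are all available in standard software. A compact sketch contrasting log-OLS with Duan's smearing retransformation against a gamma GLM with log link on synthetic skewed costs; variable names and coefficients are invented, and this shows the generic techniques, not the study's code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 254
sympt = rng.normal(0, 1, n)                             # e.g. symptom score
cost = np.exp(7 + 0.5 * sympt + rng.normal(0, 1, n))    # skewed annual costs

X = sm.add_constant(sympt)

# Log-transformed OLS with Duan smearing retransformation.
ols_log = sm.OLS(np.log(cost), X).fit()
smear = np.exp(ols_log.resid).mean()
pred_log = smear * np.exp(ols_log.fittedvalues)

# Gamma GLM with log link predicts directly on the cost scale.
glm = sm.GLM(cost, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
pred_glm = glm.fittedvalues

for name, pred in [("log-OLS + smearing", pred_log), ("Gamma GLM", pred_glm)]:
    rmse = np.sqrt(np.mean((cost - pred) ** 2))
    print(name, rmse)
```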
Model-free estimation of the psychometric function
Żychaluk, Kamila; Foster, David H.
2009-01-01
A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
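Local linear fitting is simple to implement directly: at each evaluation point, fit a weighted straight line and read off the intercept. A minimal sketch with a Gaussian kernel and a fixed bandwidth follows; the paper selects the bandwidth by cross-validation, and the data here are simulated binomial trials.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                       # intercept = fitted value at x0

# Toy psychometric data: proportion of successes at each stimulus level.
levels = np.linspace(-3, 3, 13)
p_true = 1 / (1 + np.exp(-1.5 * levels))
k = np.random.default_rng(10).binomial(40, p_true)
prop = k / 40

fit = [local_linear(x0, levels, prop, h=0.8) for x0 in levels]
print(np.round(fit, 3))
```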
Auxiliary basis expansions for large-scale electronic structure calculations.
Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin
2005-05-10
One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP
2016-01-01
Purpose To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results The PDFFs determined with use of both reconstructions correlated very strongly (r=0.91). However, small mean bias between reconstructions demonstrated divergent results (3.9%; CI 2.7%-5.1%). For both reconstructions, there was linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion Liver fat quantification using the PDFF derived from MRI performed at 7.1 Tesla is feasible. PDFF has strong correlations with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806
Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains
Krumin, Michael; Shoham, Shy
2010-01-01
Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis, which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" multivariate autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method. PMID:20454705
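Classical Yule-Walker estimation, the scalar ancestor of the generalized equations used here, is a one-liner with statsmodels. The sketch below recovers AR(2) coefficients from a simulated continuous process standing in for a smoothed rate signal; it is purely illustrative, as the paper's hidden multivariate machinery is more involved.

```python
import numpy as np
from statsmodels.regression.linear_model import yule_walker

# Simulate an AR(2) process standing in for a smoothed firing-rate signal.
rng = np.random.default_rng(11)
n = 5000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

rho, sigma = yule_walker(y, order=2)
print("AR coefficients:", rho, "innovation s.d.:", sigma)
```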
Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes
Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María
2016-01-01
Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as functions of sex, age and morbidity burden was fitted, and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the chosen model was either a linear model on the log of costs or a generalized linear model with a log link. We checked the models obtained for goodness of fit, accuracy, linear structure and heteroscedasticity. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services that integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a posteriori approach, (i) the NTAL model derived from National Centers for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter-based approach, two distinct linear TRFs are estimated by combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated, accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared with those obtained from the TRF reductions during step (ii), and the consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effects of the different weighting structures adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained by restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
NASA Astrophysics Data System (ADS)
Yu, C. X.; Xue, C.; Liu, J.; Hu, X. Y.; Liu, Y. Y.; Ye, W. H.; Wang, L. F.; Wu, J. F.; Fan, Z. F.
2018-01-01
In this article, multiple eigen-systems, including linear growth rates and eigen-functions, have been obtained for the Rayleigh-Taylor instability (RTI) by numerically solving the Sturm-Liouville eigen-value problem in the case of two-dimensional plane geometry. The system called the first mode has the maximal linear growth rate and is the one extensively studied in the literature. Higher modes have smaller eigen-values but possess multi-peak eigen-functions, which give rise to multiple pairs of vortices in the vorticity field. A general fitting expression for the first four eigen-modes is presented. Direct numerical simulations show that high modes lead to the appearance of multi-layered spike-bubble pairs, and many secondary spikes and bubbles are also generated by the interactions between internal spikes and bubbles. The present work has potential applications in many research and engineering areas, e.g., in reducing the RTI growth during capsule implosions in inertial confinement fusion.
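Numerically, a Sturm-Liouville eigen-value problem of this kind discretizes to a symmetric matrix eigen-problem whose higher eigen-vectors show exactly the multi-peak structure described above. A generic finite-difference sketch follows; the coefficient profiles are stand-ins, not the RTI equations.

```python
import numpy as np
from scipy.linalg import eigh

# Generic Sturm-Liouville problem by finite differences:
# -(p u')' + q u = lambda u on (0, 1), with u(0) = u(1) = 0.
n = 400
x = np.linspace(0, 1, n + 2)[1:-1]     # interior grid points
h = x[1] - x[0]
p = np.ones(n + 1)                     # p at half-grid points (constant here)
q = 50.0 * np.sin(np.pi * x) ** 2      # stand-in coefficient profile

main = (p[:-1] + p[1:]) / h**2 + q
off = -p[1:-1] / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

vals, vecs = eigh(A)
print("first four eigenvalues:", vals[:4])
# Columns vecs[:, 1], vecs[:, 2], ... are the multi-peak higher modes.
```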
Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference.
Park, Hyoung-Jun; Song, Minho
2008-10-29
The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method.
Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.
Hu, Yue; Allen, Genevera I
2015-12-01
Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.
TESSIM: a simulator for the Athena-X-IFU
NASA Astrophysics Data System (ADS)
Wilms, J.; Smith, S. J.; Peille, P.; Ceballos, M. T.; Cobo, B.; Dauser, T.; Brand, T.; den Hartog, R. H.; Bandler, S. R.; de Plaa, J.; den Herder, J.-W. A.
2016-07-01
We present the design of tessim, a simulator for the physics of transition edge sensors developed in the framework of the Athena end-to-end simulation effort. Designed to represent the general behavior of transition edge sensors and to provide input for engineering and science studies for Athena, tessim implements a numerical solution of the linearized equations describing these devices. The simulation includes a model for the relevant noise sources and several implementations of possible trigger algorithms. Input and output of the software are standard FITS files which can be visualized and processed using standard X-ray astronomical tool packages. Tessim is freely available as part of the SIXTE package (http://www.sternwarte.uni-erlangen.de/research/sixte/).
FAST TRACK COMMUNICATION: Finite-temperature magnetism in bcc Fe under compression
NASA Astrophysics Data System (ADS)
Sha, Xianwei; Cohen, R. E.
2010-09-01
We investigate the contributions of finite-temperature magnetic fluctuations to the thermodynamic properties of bcc Fe as functions of pressure. First, we apply a tight-binding total-energy model parameterized to first-principles linearized augmented plane-wave computations to examine various ferromagnetic, anti-ferromagnetic, and noncollinear spin spiral states at zero temperature. The tight-binding data are fit to a generalized Heisenberg Hamiltonian to describe the magnetic energy functional based on local moments. We then use Monte Carlo simulations to compute the magnetic susceptibility, the Curie temperature, heat capacity, and magnetic free energy. Including the finite-temperature magnetism improves the agreement with experiment for the calculated thermal expansion coefficients.
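As a rough illustration of the Monte Carlo step, the sketch below runs Metropolis sampling of a plain classical Heisenberg model on a periodic cubic lattice. The Hamiltonian, lattice size, and temperature are toy assumptions, not the generalized Hamiltonian fitted from the tight-binding data.

```python
import numpy as np

def random_unit_vectors(n, rng):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def heisenberg_mc(L=6, J=1.0, T=1.0, sweeps=100, seed=0):
    """Metropolis sampling of H = -J * sum_<ij> S_i . S_j on an L^3
    periodic cubic lattice; returns the magnetization per spin."""
    rng = np.random.default_rng(seed)
    N = L ** 3
    spins = random_unit_vectors(N, rng)
    idx = np.arange(N).reshape(L, L, L)
    # Six nearest neighbours of each site under periodic boundaries.
    nbrs = np.stack([np.roll(idx, s, axis=a).ravel()
                     for a in range(3) for s in (1, -1)], axis=1)
    for _ in range(sweeps):
        for i in rng.permutation(N):
            field = J * spins[nbrs[i]].sum(axis=0)   # local exchange field
            trial = random_unit_vectors(1, rng)[0]
            dE = -(trial - spins[i]) @ field          # energy change of move
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                spins[i] = trial
        # (susceptibility and heat capacity would be accumulated here)
    return np.linalg.norm(spins.mean(axis=0))

print(heisenberg_mc())
```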
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, T. Maurice; Robinson, Neil J.; Tsvelik, Alexei M.
Here, the high-temperature normal state of the unconventional cuprate superconductors has resistivity linear in temperature T, which persists to values well beyond the Mott-Ioffe-Regel upper bound. At low temperatures, within the pseudogap phase, the resistivity is instead quadratic in T, as would be expected from Fermi liquid theory. Developing an understanding of these normal phases of the cuprates is crucial to explain the unconventional superconductivity. We present a simple explanation for this behavior, in terms of the umklapp scattering of electrons. This fits within the general picture emerging from functional renormalization group calculations that spurred the Yang-Rice-Zhang ansatz: umklapp scattering is at the heart of the behavior in the normal phase.
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.; Matthews, Alex; Kumar, P.; Lu, Edward
1991-01-01
It was discovered that the nonlinear evolution of the two-point correlation function in N-body experiments of galaxy clustering with Omega = 1 appears to be described to good approximation by a simple general formula. The underlying form of the formula is physically motivated, but its detailed representation is obtained empirically by fitting to N-body experiments. In this paper, the formula is presented along with an inverse formula which converts a final, nonlinear correlation function into the initial linear correlation function. The inverse formula is applied to observational data from the CfA, IRAS, and APM galaxy surveys to recover the initial spectrum of fluctuations of the universe, if Omega = 1.
Effects of an 8-Week Aerobic Dance Program on Health-Related Fitness in Patients With Schizophrenia.
Cheng, Shu-Li; Sun, Huey-Fang; Yeh, Mei-Ling
2017-12-01
Both psychiatric symptoms and the side effects of medication significantly affect patients with schizophrenia. These effects frequently result in a sedentary lifestyle and weight gain, which increase the risk of cardiovascular disease and premature death. This study developed an aerobic dance program for patients with schizophrenia and then evaluated the effect of this program on health-related fitness outcomes. An experimental research design was used. Sixty patients with schizophrenia were recruited from a daycare ward and rehabilitation center at a psychiatric hospital in Taiwan. Participants were randomly assigned into an experimental group, which received the 8-week aerobic dance program intervention, and a control group, which received no intervention. All of the participants were assessed in terms of the outcome variables, which included bodyweight, body mass index, muscular endurance, flexibility, and cardiorespiratory endurance. These variables were measured before the intervention (pretest) as well as at 8 weeks (posttest) and 12 weeks (follow-up) after the intervention. This study used a generalized linear model with a generalized estimating equation method to account for the dependence of repeated measurements and to explore the effects of the intervention on health-related fitness outcomes. Twenty-six participants were in the experimental group, and 28 were in the control group. Significant between-group differences were observed at posttest and in the follow-up for all of the health-related fitness outcomes with the exception of muscular endurance. This study suggests that an 8-week aerobic dance program may be an effective intervention for patients with schizophrenia in terms of improving bodyweight, body mass index, flexibility, and cardiorespiratory endurance for a period of at least 4 months. Furthermore, although muscular endurance was positively affected during the short-term period, the benefits did not extend into the follow-up examination. On the basis of these findings, aerobic dance is recommended as a nonpharmacological intervention for patients with schizophrenia who are in daycare or rehabilitation settings.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
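A minimal sketch of the implicit-constraint idea: in a truncated power basis, the truncation at each knot builds the continuity and smoothness side conditions into the basis itself, so an unconstrained least-squares fit yields a smooth piecewise polynomial. The numpy example below shows only a fixed-effects fit; the mixed-model machinery (random effects, the SAS/S-plus implementation) is omitted.

```python
import numpy as np

def spline_design(t, knots, order=2):
    """Truncated power basis 1, t, ..., t^order plus (t - k)_+^order for
    each knot k; the truncated terms carry the continuity and smoothness
    side conditions implicitly, so no explicit constraints are needed."""
    cols = [t ** d for d in range(order + 1)]
    cols += [np.clip(t - k, 0.0, None) ** order for k in knots]
    return np.column_stack(cols)

t = np.linspace(0.0, 10.0, 50)
X = spline_design(t, knots=[3.0, 7.0], order=2)        # quadratic spline
beta, *_ = np.linalg.lstsq(X, np.sin(t), rcond=None)   # fixed-effects fit
```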
Paper-cutting operations using scissors in Drury's law tasks.
Yamanaka, Shota; Miyashita, Homei
2018-05-01
Human performance modeling is a core topic in ergonomics. In addition to deriving models, it is important to verify the kinds of tasks that can be modeled. Drury's law is promising for path tracking tasks such as navigating a path with pens or driving a car. We conducted an experiment based on the observation that paper-cutting tasks using scissors resemble such tasks. The results showed that cutting arc-like paths (1/4 of a circle) showed an excellent fit with Drury's law (R2 > 0.98), whereas cutting linear paths showed a worse fit (R2 > 0.87). Since linear paths yielded better fits when path amplitudes were divided (R2 > 0.99 for all amplitudes), we discuss the characteristics of paper-cutting operations using scissors. Copyright © 2018 Elsevier Ltd. All rights reserved.
Linear Combination Fitting (LCF)-XANES analysis of As speciation in selected mine-impacted materials
This table provides sample identification labels and classification of sample type (tailings, calcinated, grey slime). For each sample, total arsenic and iron concentrations determined by acid digestion and ICP analysis are provided, along with arsenic in-vitro bioaccessibility (As IVBA) values to estimate arsenic risk. Lastly, the table provides linear combination fitting results from synchrotron XANES analysis showing the distribution of arsenic speciation phases present in each sample along with the fitting error (R-factor). This dataset is associated with the following publication: Ollson, C., E. Smith, K. Scheckel, A. Betts, and A. Juhasz. Assessment of arsenic speciation and bioaccessibility in mine-impacted materials. Journal of Hazardous Materials, 313: 130-137 (2016).
Rothenberg, Stephen J; Rothenberg, Jesse C
2005-09-01
Statistical evaluation of the dose-response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose-response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear-linear dose response) and natural-log-transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose-response relationship. We found that a log-linear lead-IQ relationship was a significantly better fit than was a linear-linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead-IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 microg/dL to 2.0 microg/dL) was 2.2 times (319 billion dollars) that calculated using a linear-linear dose-response function (149 billion dollars). The Centers for Disease Control and Prevention action limit of 10 microg/dL for children fails to protect against most damage and economic cost attributable to lead exposure.
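The model comparison at the core of this reanalysis can be sketched as two ordinary least-squares fits differing only in the lead term. The data below are synthetic placeholders generated for illustration, not the pooled study data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic blood lead (ug/dL) and IQ values, generated for illustration.
rng = np.random.default_rng(42)
lead = rng.uniform(1.0, 30.0, size=300)
iq = 105.0 - 6.0 * np.log(lead) + rng.normal(0.0, 8.0, size=300)

fit_lin = sm.OLS(iq, sm.add_constant(lead)).fit()          # linear-linear
fit_log = sm.OLS(iq, sm.add_constant(np.log(lead))).fit()  # log-linear

# The log-linear curve is steepest at low exposures, which is why it
# attributes more IQ loss (and economic cost) below the 10 ug/dL limit.
print(fit_lin.aic, fit_log.aic)
```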
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Fit of single tooth zirconia copings: comparison between various manufacturing processes.
Grenade, Charlotte; Mainjot, Amélie; Vanheusden, Alain
2011-04-01
Various CAD/CAM processes are commercially available to manufacture zirconia copings. Comparative data on their performance in terms of fit are needed. The purpose of this in vitro study was to compare the internal and marginal fit of single tooth zirconia copings manufactured with a CAD/CAM process (Procera; Nobel Biocare) and a mechanized manufacturing process (Ceramill; Amann Girrbach). Abutments (n=20) prepared in vivo for ceramic crowns served as a template for manufacturing both Procera and Ceramill zirconia copings. Copings were manufactured and cemented (Clearfil Esthetic Cement; Kuraray) on epoxy replicas of stone cast abutments. Specimens were sectioned. Nine measurements were performed for each coping. Over- and under-extended margins were evaluated. Comparisons between the 2 processes were performed with a generalized linear mixed model (α=.05). Internal gap values between Procera and Ceramill groups were not significantly different (P=.13). The mean marginal gap (SD) for Procera copings (51(50) μm) was significantly smaller than for Ceramill (81(66) μm) (P<.005). The percentages of over- and under-extended margins were 43% and 57% for Procera respectively, and 71% and 29% for Ceramill. Within the limitations of this in vitro study, the marginal fit of Procera copings was significantly better than that of Ceramill copings. Furthermore, Procera copings showed a smaller percentage of over-extended margins than did Ceramill copings. Copyright © 2011 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Kolu, Päivi; Tokola, Kari; Kankaanpää, Markku; Suni, Jaana
2017-06-01
A cross-sectional study, part of a randomized controlled trial. To evaluate the association of physical activity, cardiorespiratory fitness, and neuromuscular fitness with direct healthcare costs and sickness-related absence among nursing personnel with nonspecific low back pain. Low back pain creates a huge economic burden due to increased sick leave and use of healthcare services. Female nursing personnel with nonspecific low back pain were included (n = 219). Physical activity was assessed with accelerometry and a questionnaire. In addition, measurements of cardiorespiratory and muscular fitness were conducted. Direct costs and sickness-related absence for a 6-month period were collected retrospectively by questionnaire. Healthcare utilization and absence from work were analyzed with a general linear model. The mean total costs were 80.5% lower among women who met physical activity recommendations than among inactive women. Those with a higher mean daily intensity level of 10-minute activity sessions showed lower total costs than women in the lowest tertile (middle tertile: 64.0% of the lowest; highest tertile: 54.3% of the lowest). Women with good cardiorespiratory fitness (the highest tertile) as measured with the 6-minute walk test (based on walking distance) had 77.0% lower total costs when compared with the lowest tertile. Women in the highest third for the modified push-up test had 84.0% lower total costs than those with the poorest results (the bottom tertile). High cardiorespiratory and muscular fitness and meeting physical activity recommendations for aerobic and muscular fitness were strongly associated with lower total costs among nursing personnel with recurrent nonspecific low back pain. Actions to increase physical activity and muscle conditioning may yield significant savings in healthcare costs and sick-leave costs due to low back pain.
Hoekstra, Sven; Valent, Linda; Gobets, David; van der Woude, Lucas; de Groot, Sonja
2017-08-01
Recognizing the encouraging effect of challenging events, the HandbikeBattle (HBB) was created to promote exercise among wheelchair users. The purpose of this study was to reveal the effects on physical fitness and health outcomes of four-month handbike training under free-living conditions in preparation for the event. In this prospective cohort study, 59 relatively inexperienced handcyclists participated in the HBB of 2013 or 2014. Incremental exercise tests were conducted, respiratory function was tested, and anthropometrics were measured before and after the preparation period. Main outcome measures were peak power output (POpeak), peak oxygen uptake (VO2peak) and waist circumference, of which the changes were tested using repeated measures ANOVA. To detect possible determinants of changes in physical fitness, a linear regression analysis was conducted with personal characteristics, executed training volume, and upper-extremity complaints during the training period as independent variables. POpeak, VO2peak and waist circumference improved significantly, by 17%, 7% and 4.1%, respectively. None of the included variables were significant determinants of the changes in POpeak found as a result of the training. A challenging event such as the HBB provokes training regimes among participants of sufficient load to realize substantial improvements in physical fitness and health outcomes. Implications for Rehabilitation: Due to the often impaired muscle function in the lower limbs and an inactive lifestyle, wheelchair users generally show considerably lower levels of fitness compared to able-bodied individuals. This prospective cohort study showed that four months of handbike training under free-living conditions in preparation for this event resulted in substantial improvements in physical fitness and health outcomes in wheelchair users. The creation of a challenging event such as the HandbikeBattle as part of follow-up rehabilitation practice can therefore be a useful tool to help wheelchair users initiate or keep training to improve their physical fitness and health.
NASA Astrophysics Data System (ADS)
Guarnieri, R.; Padilha, L.; Guarnieri, F.; Echer, E.; Makita, K.; Pinheiro, D.; Schuch, A.; Boeira, L.; Schuch, N.
Ultraviolet radiation type B (UV-B, 280-315 nm) is well known for its damage to life on Earth, including the possibility of causing skin cancer in humans. However, atmospheric ozone has absorption bands in this spectral region, reducing its incidence on the Earth's surface. The ozone amount is therefore one of the parameters, besides clouds, aerosols, solar zenith angle, altitude, and albedo, that determine the UV-B radiation intensity reaching the Earth's surface. The total ozone column, in Dobson Units, determined by the TOMS spectrometer on board a NASA satellite, and UV-B radiation measurements obtained by a UV-B radiometer model MS-210W (Eko Instruments) were correlated. The measurements were obtained at the Observatório Espacial do Sul - Instituto Nacional de Pesquisas Espaciais (OES/CRSPE/INPE-MCT), coordinates: Lat. 29.44°S, Long. 53.82°W. The correlations were made using UV-B measurements at fixed solar zenith angles, and only days with clear sky were selected, in the period from July 1999 to December 2001. Moreover, the mathematical behavior of the correlation at different angles was examined, and correlation coefficients were determined by linear and first-order exponential fits. In both fits, high correlation coefficient values were obtained, and the difference between the linear and exponential fits can be considered small.
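A sketch of the two fits described above, with hypothetical clear-sky values standing in for the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

ozone = np.array([250.0, 260, 270, 280, 290, 300])    # Dobson Units
uvb = np.array([1.52, 1.44, 1.38, 1.30, 1.25, 1.19])  # relative UV-B level

# Linear fit.
lin = np.polyfit(ozone, uvb, 1)
r_lin = pearsonr(uvb, np.polyval(lin, ozone))[0]

# First-order exponential fit, uvb = a * exp(-b * ozone).
expo = lambda x, a, b: a * np.exp(-b * x)
(a, b), _ = curve_fit(expo, ozone, uvb, p0=(3.0, 0.003))
r_exp = pearsonr(uvb, expo(ozone, a, b))[0]

print(r_lin, r_exp)   # both close to 1, and their difference is small
```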
Connaughton, Catherine; McCabe, Marita; Karantzas, Gery
2016-03-01
Research to validate models of sexual response empirically in men with and without sexual dysfunction (MSD), as currently defined, is limited. To explore the extent to which the traditional linear or the Basson circular model best represents male sexual response for men with MSD and sexually functional men. In total, 573 men completed an online questionnaire to assess sexual function and aspects of the models of sexual response. In total, 42.2% of men (242) were sexually functional, and 57.8% (331) had at least one MSD. Models were built and tested using bootstrapping and structural equation modeling. Fit of models for men with and without MSD. The linear model and the initial circular model were a poor fit for men with and without MSD. A modified version of the circular model demonstrated adequate fit for the two groups and showed important interactions between psychological factors and sexual response for men with and without MSD. Male sexual response was not represented by the linear model for men with or without MSD, excluding possible healthy responsive desire. The circular model provided a better fit for the two groups of men but demonstrated that the relations between psychological factors and phases of sexual response were different for men with and without MSD as currently defined. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Dynamical properties of maps fitted to data in the noise-free limit
Lindström, Torsten
2013-01-01
We argue that any attempt to classify dynamical properties from nonlinear finite time-series data requires a mechanistic model fitting the data better than piecewise linear models according to standard model selection criteria. Such a procedure seems necessary but still not sufficient. PMID:23768079
Some Statistics for Assessing Person-Fit Based on Continuous-Response Models
ERIC Educational Resources Information Center
Ferrando, Pere Joan
2010-01-01
This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…
40 CFR 89.319 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.319 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.319 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.320 - Carbon monoxide analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...
40 CFR 89.320 - Carbon monoxide analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...
40 CFR 89.320 - Carbon monoxide analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...
40 CFR 89.320 - Carbon monoxide analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mardirossian, Narbe; Head-Gordon, Martin
2013-12-18
A 10-parameter, range-separated hybrid (RSH), generalized gradient approximation (GGA) density functional with nonlocal correlation (VV10) is presented in this paper. Instead of truncating the B97-type power series inhomogeneity correction factors (ICF) for the exchange, same-spin correlation, and opposite-spin correlation functionals uniformly, all 16,383 combinations of the linear parameters up to fourth order (m = 4) are considered. These functionals are individually fit to a training set and the resulting parameters are validated on a primary test set in order to identify the 3 optimal ICF expansions. Through this procedure, it is discovered that the functional that performs best on the training and primary test sets has 7 linear parameters, with 3 additional nonlinear parameters from range-separation and nonlocal correlation. The resulting density functional, ωB97X-V, is further assessed on a secondary test set, the parallel-displaced coronene dimer, as well as several geometry datasets. Finally, the basis set dependence and integration grid sensitivity of ωB97X-V are analyzed and documented in order to facilitate the use of the functional.
NASA Astrophysics Data System (ADS)
Gonçalves, Karen dos Santos; Winkler, Mirko S.; Benchimol-Barbosa, Paulo Roberto; de Hoogh, Kees; Artaxo, Paulo Eduardo; de Souza Hacon, Sandra; Schindler, Christian; Künzli, Nino
2018-07-01
Epidemiological studies generally use particulate matter measurements with diameter less than 2.5 μm (PM2.5) from monitoring networks. Satellite aerosol optical depth (AOD) data have considerable potential for predicting PM2.5 concentrations, and thus provide an alternative method for producing knowledge about the level of pollution and its health impact in areas where no ground PM2.5 measurements are available. This is the case in the Brazilian Amazon rainforest region, where forest fires are frequent sources of high pollution. In this study, we applied a non-linear model for predicting PM2.5 concentration from AOD retrievals using interaction terms between average temperature, relative humidity, the sine and cosine of the date over a period of 365.25 days, and the square of the lagged relative residual. Regression performance statistics were tested by comparing the goodness of fit and R2 of linear and non-linear regressions for six different models. The non-linear predictions showed the best performance, explaining on average 82% of the daily PM2.5 concentrations over the whole period studied. In the context of Amazonia, this is the first study to predict PM2.5 concentrations using the latest high-resolution AOD products in combination with testing the performance of a non-linear model. Our results permit reliable prediction based on the AOD-PM2.5 relationship and set the basis for further investigations of air pollution impacts in the complex context of the Brazilian Amazon region.
Campos, Rafael Viegas; Cobuci, Jaime Araujo; Kern, Elisandra Lurdes; Costa, Cláudio Napolis; McManus, Concepta Margaret
2015-04-01
The objective of this study was to estimate genetic and phenotypic parameters for linear type traits, as well as milk yield (MY), fat yield (FY) and protein yield (PY) in 18,831 Holstein cows reared in 495 herds in Brazil. Restricted maximum likelihood with a bivariate model was used for estimation genetic parameters, including fixed effects of herd-year of classification, period of classification, classifier and stage of lactation for linear type traits and herd-year of calving, season of calving and lactation order effects for production traits. The age of cow at calving was fitted as a covariate (with linear and quadratic terms), common to both models. Heritability estimates varied from 0.09 to 0.38 for linear type traits and from 0.17 to 0.24 for production traits, indicating sufficient genetic variability to achieve genetic gain through selection. In general, estimates of genetic correlations between type and production traits were low, except for udder texture and angularity that showed positive genetic correlations (>0.29) with MY, FY, and PY. Udder depth had the highest negative genetic correlation (-0.30) with production traits. Selection for final score, commonly used by farmers as a practical selection tool to improve type traits, does not lead to significant improvements in production traits, thus the use of selection indices that consider both sets of traits (production and type) seems to be the most adequate to carry out genetic selection of animals in the Brazilian herd.
Evaluating abundance and trends in a Hawaiian avian community using state-space analysis
Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.
2016-01-01
Estimating population abundances and patterns of change over time is important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate the population trajectory. However, changes in abundance estimates from year to year are due both to true variation in population size (process variation) and to variation from imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition these error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that their estimates better represent the real-world biological processes of interest, because they partition process variation (environmental and demographic variation) from observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
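A minimal sketch of the partitioning idea, assuming a local-level state-space model in statsmodels (not necessarily the model specification used in the study), with hypothetical counts:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical annual counts for one species.
counts = np.array([32.0, 35, 31, 38, 41, 39, 45, 44, 48, 52])

# Local-level model: y_t = mu_t + observation error; mu_t = mu_{t-1} + process error.
mod = sm.tsa.UnobservedComponents(counts, level='local level')
res = mod.fit(disp=False)
print(res.params)              # sigma2.irregular (observation), sigma2.level (process)
trend = res.smoothed_state[0]  # abundance trajectory with observation error removed
print(trend)
```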
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
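For readers who want the corresponding computation, a one-call example of estimation, inference, and model fit for two continuous variables (the values are illustrative):

```python
import numpy as np
from scipy import stats

# Two continuous variables, e.g., optical density vs nutrient dose.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([0.21, 0.38, 0.52, 0.71, 0.86, 1.02])

res = stats.linregress(x, y)
print(res.slope, res.intercept)   # estimation
print(res.rvalue ** 2)            # model fit (R2)
print(res.pvalue, res.stderr)     # inference on the slope
```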
General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.
de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael
2016-11-01
Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BPI and BPII), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
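A bare-bones version of the clustering step is easy to write down. The sketch below is plain fuzzy C-means on toy time-activity curves; the paper's modified variant, which additionally weights memberships by neighboring voxels, is omitted.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Alternate the standard FCM membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy "time-activity curves" (150 voxels x 6 frames) in three groups.
rng = np.random.default_rng(1)
tacs = np.vstack([rng.normal(mu, 0.2, size=(50, 6)) for mu in (0.5, 1.0, 2.0)])
U, centers = fuzzy_c_means(tacs, c=3)
```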
Impact of kerogen heterogeneity on sorption of organic pollutants. 2. Sorption equilibria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, C.; Yu, Z.Q.; Xiao, B.H.
2009-08-15
Phenanthrene and naphthalene sorption isotherms were measured for three different series of kerogen materials using completely mixed batch reactors. Sorption isotherms were nonlinear for each sorbate-sorbent system, and the Freundlich isotherm equation fit the sorption data well. The Freundlich isotherm linearity parameter n ranged from 0.192 to 0.729 for phenanthrene and from 0.389 to 0.731 for naphthalene. The n values correlated linearly with rigidity and aromaticity of the kerogen matrix, but the single-point, organic carbon-normalized distribution coefficients varied dramatically among the tested sorbents. A dual-mode sorption equation consisting of a linear partitioning domain and a Langmuir adsorption domain adequately quantified the overall sorption equilibrium for each sorbent-sorbate system. Both models fit the data well, with r^2 values of 0.965 to 0.996 for the Freundlich model and 0.963 to 0.997 for the dual-mode model for the phenanthrene sorption isotherms. The dual-mode model fitting results showed that as the rigidity and aromaticity of the kerogen matrix increased, the contribution of the linear partitioning domain to the overall sorption equilibrium decreased, whereas the contribution of the Langmuir adsorption domain increased. The present study suggested that kerogen materials found in soils and sediments should not be treated as a single, unified, carbonaceous sorbent phase.
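Both isotherm fits described above reduce to standard non-linear least squares. A sketch with scipy, using hypothetical isotherm data rather than the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(C, Kf, n):
    return Kf * C ** n

def dual_mode(C, Kd, Q, b):
    # Linear partitioning domain + Langmuir adsorption domain.
    return Kd * C + Q * b * C / (1.0 + b * C)

# Hypothetical isotherm: aqueous concentration (mg/L) vs sorbed (mg/kg).
C = np.array([0.01, 0.05, 0.1, 0.3, 0.6, 1.0])
q = np.array([3.2, 11.5, 19.0, 41.0, 63.0, 88.0])

p_f, _ = curve_fit(freundlich, C, q, p0=(80.0, 0.7))
p_d, _ = curve_fit(dual_mode, C, q, p0=(50.0, 30.0, 5.0), maxfev=10000)

def r2(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print(r2(q, freundlich(C, *p_f)), r2(q, dual_mode(C, *p_d)))
```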
Yamamura, S; Momose, Y
2001-01-16
A pattern-fitting procedure for quantitative analysis of crystalline pharmaceuticals in solid dosage forms using X-ray powder diffraction data is described. This method is based on a procedure for pattern-fitting in crystal structure refinement, and observed X-ray scattering intensities were fitted to analytical expressions including some fitting parameters, i.e. scale factor, peak positions, peak widths and degree of preferred orientation of the crystallites. All fitting parameters were optimized by the non-linear least-squares procedure. Then the weight fraction of each component was determined from the optimized scale factors. In the present study, well-crystallized binary systems, zinc oxide-zinc sulfide (ZnO-ZnS) and salicylic acid-benzoic acid (SA-BA), were used as the samples. In analysis of the ZnO-ZnS system, the weight fraction of ZnO or ZnS could be determined quantitatively in the range of 5-95% in the case of both powders and tablets. In analysis of the SA-BA systems, the weight fraction of SA or BA could be determined quantitatively in the range of 20-80% in the case of both powders and tablets. Quantitative analysis applying this pattern-fitting procedure showed better reproducibility than other X-ray methods based on the linear or integral intensities of particular diffraction peaks. Analysis using this pattern-fitting procedure also has the advantage that the preferred orientation of the crystallites in solid dosage forms can be also determined in the course of quantitative analysis.
Weedon, Benjamin David; Liu, Francesca; Mahmoud, Wala; Metz, Renske; Beunder, Kyle; Delextrat, Anne; Morris, Martyn G; Esser, Patrick; Collett, Johnny; Meaney, Andy; Howells, Ken; Dawes, Helen
2018-01-01
Motor competence (MC) is an important factor in the development of health and fitness in adolescence. This cross-sectional study aims to explore the distribution of MC across school students aged 13-14 years old and the extent of the relationship of MC to measures of health and fitness across genders. A total of 718 participants were tested from three different schools in the UK, 311 girls and 407 boys (aged 13-14 years), pairwise deletion for correlation variables reduced this to 555 (245 girls, 310 boys). Assessments consisted of body mass index, aerobic capacity, anaerobic power, and upper limb and lower limb MC. The distribution of MC and the strength of the relationships between MC and health/fitness measures were explored. Girls performed lower for MC and health/fitness measures compared with boys. Both measures of MC showed a normal distribution and a significant linear relationship of MC to all health and fitness measures for boys, girls and combined genders. A stronger relationship was reported for upper limb MC and aerobic capacity when compared with lower limb MC and aerobic capacity in boys (t=-2.21, degrees of freedom=307, P=0.03, 95% CI -0.253 to -0.011). Normally distributed measures of upper and lower limb MC are linearly related to health and fitness measures in adolescents in a UK sample. NCT02517333.
Recent Changes in Pgopher: a General Purpose Program for Simulating Rotational Structure
NASA Astrophysics Data System (ADS)
Western, Colin
2010-06-01
Key features of the PGOPHER program include the simulation and fitting of the rotational structure of linear molecules and symmetric and asymmetric tops, including effects due to unpaired electrons and nuclear spin. The program is written to be as general as possible, and can handle many effects such as multiple interacting states, predissociation and multiphoton transitions. It is designed to be easy to use, with a flexible graphical user interface. PGOPHER has been released as an open source program, and can be freely downloaded from the website at http://pgopher.chm.bris.ac.uk. Recent additions include a mode which allows the calculation of vibrational energy levels starting from a harmonic model and the multidimensional Franck-Condon factors required to calculate intensities of vibronic transitions. PGOPHER takes account of both the displacement along normal co-ordinates and mixing between modes (the Duschinsky effect). L matrices produced by ab initio programs can be read directly by PGOPHER, or the mode displacements and mixing can be fit to observed spectra. In addition, the effects of external electric and/or magnetic fields can now be calculated, including plots of energy level against electric field suitable for predicting Stark deceleration, focussing and trapping of molecules. The figure shows a typical plot: the electric field tuning of the M = 0 components of the 2_02, 1_11 and 1_10 levels of (NO)_2. Other new features include fits to combination differences, simulation of the Doppler-split peaks typical of Fourier transform microwave spectroscopy, specification of a nuclear spin temperature independent of the rotational temperature, and interactive adjustment of parameter values with the mouse in addition to typing values.
NASA Astrophysics Data System (ADS)
Durkalec, A.; Le Fèvre, O.; Pollo, A.; de la Torre, S.; Cassata, P.; Garilli, B.; Le Brun, V.; Lemaux, B. C.; Maccagni, D.; Pentericci, L.; Tasca, L. A. M.; Thomas, R.; Vanzella, E.; Zamorani, G.; Zucca, E.; Amorín, R.; Bardelli, S.; Cassarà, L. P.; Castellano, M.; Cimatti, A.; Cucciati, O.; Fontana, A.; Giavalisco, M.; Grazian, A.; Hathi, N. P.; Ilbert, O.; Paltani, S.; Ribeiro, B.; Schaerer, D.; Scodeggio, M.; Sommariva, V.; Talia, M.; Tresse, L.; Vergani, D.; Capak, P.; Charlot, S.; Contini, T.; Cuby, J. G.; Dunlop, J.; Fotopoulou, S.; Koekemoer, A.; López-Sanjuan, C.; Mellier, Y.; Pforr, J.; Salvato, M.; Scoville, N.; Taniguchi, Y.; Wang, P. W.
2015-11-01
We investigate the evolution of galaxy clustering for galaxies in the redshift range 2.0
Revisiting Isotherm Analyses Using R: Comparison of Linear, Non-linear, and Bayesian Techniques
Extensive adsorption isotherm data exist for an array of chemicals of concern on a variety of engineered and natural sorbents. Several isotherm models exist that can accurately describe these data from which the resultant fitting parameters may subsequently be used in numerical ...
Parks, David R; El Khettabi, Faysal; Chase, Eric; Hoffman, Robert A; Perfetto, Stephen P; Spidlen, Josef; Wood, James C S; Moore, Wayne A; Brinkman, Ryan R
2017-03-01
We developed a fully automated procedure for analyzing data from LED pulses and multilevel bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than that from multilevel bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. © 2017 International Society for Advancement of Cytometry. © 2017 International Society for Advancement of Cytometry.
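The weighted quadratic fit with parameter standard errors can be sketched with numpy alone; the numbers and weighting below are illustrative assumptions, not flowQB's exact procedure.

```python
import numpy as np

# Hypothetical per-population statistics: x = mean signal, y = variance.
x = np.array([50.0, 200.0, 800.0, 3200.0, 12800.0])
y = np.array([130.0, 420.0, 1650.0, 6600.0, 26800.0])
w = 1.0 / np.sqrt(y)   # illustrative weights (numpy scales residuals by w)

# Full quadratic model; cov=True also returns the parameter covariance.
coeffs, cov = np.polyfit(x, y, deg=2, w=w, cov=True)
stderr = np.sqrt(np.diag(cov))          # standard errors of the parameters
residuals = y - np.polyval(coeffs, x)   # large residuals flag bad points
print(coeffs, stderr, residuals)
```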
Williams, Sarah E; Carroll, Douglas; Veldhuijzen van Zanten, Jet J C S; Ginty, Annie T
2016-03-15
Higher cardiorespiratory fitness is associated with lower trait anxiety, but research has not examined whether fitness is associated with state anxiety levels and the interpretation of these symptoms. The aim of this paper was to (1) reexamine the association between cardiorespiratory fitness and general anxiety and (2) examine anxiety intensity and perceptions of these symptoms prior to an acute psychological stress task. Participants (N = 185; 81% female; mean age = 18.04, SD = 0.43 years) completed a 10-minute Paced Serial Addition Test. General anxiety was assessed using the anxiety subscale of the Hospital Anxiety Depression Scale. Cognitive and somatic anxiety intensity and perceptions of symptoms were assessed immediately prior to the acute psychological stress task using the Immediate Anxiety Measures Scale. Cardiorespiratory fitness was calculated using a validated standardized formula. Higher levels of cardiorespiratory fitness were associated with lower levels of general anxiety. Path analysis supported a model whereby perceptions of anxiety symptoms mediated the relationship between cardiorespiratory fitness and levels of anxiety experienced during the stress task; results remained significant after adjusting for general anxiety levels. Specifically, higher levels of cardiorespiratory fitness were positively associated with more positive perceptions of anxiety symptoms and lower levels of state anxiety. A standard formula rather than maximal testing was used to assess cardiorespiratory fitness, self-report questionnaires were used to assess anxiety, and the study was cross-sectional in design. Results suggest a potential mechanism explaining how cardiorespiratory fitness can reduce anxiety levels. Copyright © 2016 Elsevier B.V. All rights reserved.
Induction of Chromosomal Aberrations at Fluences of Less Than One HZE Particle per Cell Nucleus
NASA Technical Reports Server (NTRS)
Hada, Megumi; Chappell, Lori J.; Wang, Minli; George, Kerry A.; Cucinotta, Francis A.
2014-01-01
The assumption of a linear dose response used to describe the biological effects of high-LET radiation is fundamental in radiation protection methodologies. We investigated the dose response for chromosomal aberrations for exposures corresponding to less than one particle traversal per cell nucleus by high energy and charge (HZE) nuclei. Human fibroblast and lymphocyte cells were irradiated with several low doses of <0.1 Gy, and several higher doses of up to 1 Gy, with O (77 keV/μm), Si (99 keV/μm), Fe (175 keV/μm), Fe (195 keV/μm) or Fe (240 keV/μm) particles. Chromosomal aberrations at first mitosis were scored using fluorescence in situ hybridization (FISH) with chromosome-specific paints for chromosomes 1, 2 and 4 and DAPI staining of background chromosomes. Non-linear regression models were used to evaluate possible linear and non-linear dose response models based on these data. Dose responses for simple exchanges for human fibroblasts irradiated under confluent culture conditions were best fit by non-linear models motivated by a non-targeted effect (NTE). For human lymphocytes irradiated in blood tubes, an NTE model fit best for O, while a linear response model fit best for Si and Fe particles. Additional evidence for NTE was found in low-dose experiments measuring gamma-H2AX foci, a marker of double strand breaks (DSB), and in split-dose experiments with human fibroblasts. Our results suggest that simple exchanges in normal human fibroblasts have an important NTE contribution at low particle fluence. The current and prior experimental studies provide important evidence against the linear dose response assumption used in radiation protection for HZE particles and other high-LET radiation in the relevant range of low doses.
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1988-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
A New Metrics for Countries' Fitness and Products' Complexity
NASA Astrophysics Data System (ADS)
Tacchella, Andrea; Cristelli, Matthieu; Caldarelli, Guido; Gabrielli, Andrea; Pietronero, Luciano
2012-10-01
Classical economic theories prescribe specialization of countries' industrial production. Inspection of country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is assessing quantitatively the non-monetary competitive advantage of diversification, which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measure the competitiveness of countries is the one presented in this work. Furthermore, our metrics appears to be economically well-grounded.
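The coupled non-linear maps have a compact form: country fitness is the export-weighted sum of product complexities, while product complexity is bounded by the inverse fitness of the least competitive exporters, with both rescaled by their means at each step. A sketch of that iteration on a toy export matrix:

```python
import numpy as np

def fitness_complexity(M, n_iter=200):
    """Iterate country Fitness F and product Complexity Q from a binary
    country-product export matrix M, rescaling by the mean each step."""
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(n_iter):
        F_new = M @ Q                     # diversified countries gain fitness
        Q_new = 1.0 / (M.T @ (1.0 / F))   # bounded by least-fit exporters
        F = F_new / F_new.mean()
        Q = Q_new / Q_new.mean()
    return F, Q

# Toy nested export matrix: 4 countries x 5 products.
M = np.array([[1, 1, 1, 1, 1],
              [1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
print(F, Q)
```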
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1990-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2007-09-01
Trigonometrically fitted symplectic Partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the set of functions sin(wx), cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
A Method For Modeling Discontinuities In A Microwave Coaxial Transmission Line
NASA Technical Reports Server (NTRS)
Otoshi, Tom Y.
1994-01-01
A methodology for modeling discontinuities in a coaxial transmission line is presented. The method uses a non-linear least squares fit program to optimize the fit between a theoretical model and experimental data. When the method was applied to modeling discontinuities in a damaged S-band antenna cable, excellent agreement was obtained.
40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...
40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...
40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...
40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...
40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...
40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...
40 CFR 90.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...
40 CFR 91.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...
40 CFR 90.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...
40 CFR 91.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...
40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...
40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A
2011-05-01
Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing which incorporate minimum-phase behavior.
An in-situ Raman study on pristane at high pressure and ambient temperature
NASA Astrophysics Data System (ADS)
Wu, Jia; Ni, Zhiyong; Wang, Shixia; Zheng, Haifei
2018-01-01
The C-H Raman spectroscopic band (2800-3000 cm⁻¹) of pristane was measured in a diamond anvil cell at 1.1-1532 MPa and ambient temperature. Three models are used for peak-fitting of this C-H Raman band, and the linear correlations between pressure and the corresponding peak positions are calculated as well. The results demonstrate that 1) the number of peaks chosen to fit the spectrum affects the results, which indicates that spectroscopic barometry based on a single functional group of organic matter suffers significant limitations; and 2) the linear correlation between pressure and the fitted peak position from the one-peak model is superior to that from the multiple-peak model, and the standard error of the latter is much higher than that of the former. This indicates that the Raman shift of the C-H band fitted with the one-peak model, which can be treated as a spectroscopic barometer, is more realistic in mixture systems than the traditional strategy, which uses the Raman characteristic shift of a single functional group.
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied in Monte Carlo experiments. The reproducibility of the LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fitting resulted in systematic errors when estimating the LIDT and its error bars. To address this, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum likelihood fitting technique is introduced and studied. This approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars with realistic values are obtained as a natural outcome of the parametric fitting. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
PyFDAP: automated analysis of fluorescence decay after photoconversion (FDAP) experiments.
Bläßle, Alexander; Müller, Patrick
2015-03-15
We developed the graphical user interface PyFDAP for the fitting of linear and non-linear decay functions to data from fluorescence decay after photoconversion (FDAP) experiments. PyFDAP structures and analyses large FDAP datasets and features multiple fitting and plotting options. PyFDAP was written in Python and runs on Ubuntu Linux, Mac OS X and Microsoft Windows operating systems. The software, a user guide and a test FDAP dataset are freely available for download from http://people.tuebingen.mpg.de/mueller-lab.
Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference
Park, Hyoung-Jun; Song, Minho
2008-01-01
The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method. PMID:27873898
Auxiliary basis expansions for large-scale electronic structure calculations
Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin
2005-01-01
One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767
Knecht, William R
2013-11-01
Is there a "killing zone" (Craig, 2001)-a range of pilot flight time over which general aviation (GA) pilots are at greatest risk? More broadly, can we predict accident rates, given a pilot's total flight hours (TFH)? These questions interest pilots, aviation policy makers, insurance underwriters, and researchers alike. Most GA research studies implicitly assume that accident rates are linearly related to TFH, but that relation may actually be multiply nonlinear. This work explores the ability of serial nonlinear modeling functions to predict GA accident rates from noisy rate data binned by TFH. Two sets of National Transportation Safety Board (NTSB)/Federal Aviation Administration (FAA) data were log-transformed, then curve-fit to a gamma-pdf-based function. Despite high rate-noise, this produced weighted goodness-of-fit (Rw(2)) estimates of .654 and .775 for non-instrument-rated (non-IR) and instrument-rated pilots (IR) respectively. Serial-nonlinear models could be useful to directly predict GA accident rates from TFH, and as an independent variable or covariate to control for flight risk during data analysis. Applied to FAA data, these models imply that the "killing zone" may be broader than imagined. Relatively high risk for an individual pilot may extend well beyond the 2000-h mark before leveling off to a baseline rate. Published by Elsevier Ltd.
Zhang, Hao; Niu, Yue; Yao, Yili; Chen, Renjie; Zhou, Xianghong; Kan, Haidong
2018-02-28
Evidence concerning the acute effects of ambient air pollution on various respiratory diseases is limited in China, and the attributable medical expenditures are largely unknown. From 2013 to 2015, we collected data on daily visits to the emergency and outpatient departments for five main respiratory diseases and their medical expenditures in Shanghai, China. We used an overdispersed generalized additive model together with distributed lag models to fit the associations of criteria air pollutants with hospital visits, and used linear models to fit the associations with medical expenditures. Generally, we observed significant increments in emergency visits (8.81-17.26%) and corresponding expenditures (0.33-25.81%) for pediatric respiratory diseases, upper respiratory infection (URI), and chronic obstructive pulmonary disease (COPD) per interquartile range increase in air pollutant concentrations over four lag days. By comparison, there were significant but smaller increments in outpatient visits (1.36-4.52%) and expenditures (1.38-3.18%) for pediatric respiratory diseases and URI. No meaningful changes were observed for asthma and lower respiratory infection. Our study suggests that short-term exposure to outdoor air pollution may induce the occurrence or exacerbation of pediatric respiratory diseases, URI, and COPD, imposing considerable medical expenditures on patients.
McAuley, E; Duncan, T; Tammen, V V
1989-03-01
The present study was designed to assess selected psychometric properties of the Intrinsic Motivation Inventory (IMI) (Ryan, 1982), a multidimensional measure of subjects' experience with regard to experimental tasks. Subjects (N = 116) competed in a basketball free-throw shooting game, following which they completed the IMI. The LISREL VI computer program was employed to conduct a confirmatory factor analysis to assess the tenability of a five factor hierarchical model representing four first-order factors or dimensions and a second-order general factor representing intrinsic motivation. Indices of model acceptability tentatively suggest that the sport data adequately fit the hypothesized five factor hierarchical model. Alternative models were tested but did not result in significant improvements in the goodness-of-fit indices, suggesting the proposed model to be the most accurate of the models tested. Coefficient alphas for the four dimensions and the overall scale indicated adequate reliability. The results are discussed with regard to the importance of accurate assessment of psychological constructs and the use of linear structural equations in confirming the factor structures of measures.
Chu, Khim Hoong
2017-11-09
Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient, from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity, as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
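A minimal sketch of this kind of parameter estimation, assuming a closed form obtained from Langmuir kinetics with an approximately constant bulk concentration (the paper's exact expression is not given in the abstract); the uptake data and the conversion to a surface diffusion coefficient are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir kinetics with (assumed) constant bulk concentration C:
#   dq/dt = k_a*C*(q_m - q) - k_d*q
# has the closed-form solution
#   q(t) = q_e * (1 - exp(-k_obs*t)),  with  k_obs = k_a*C + k_d.
def langmuir_kinetics(t, q_e, k_obs):
    return q_e * (1.0 - np.exp(-k_obs * t))

# Hypothetical batch uptake data: time (min) vs amount adsorbed (mg/g)
t = np.array([0, 5, 10, 20, 40, 60, 120, 240], float)
q = np.array([0.0, 3.1, 5.4, 8.6, 11.2, 12.1, 13.0, 13.2])

(q_e, k_obs), _ = curve_fit(langmuir_kinetics, t, q, p0=(13.0, 0.05))

# Illustrative conversion of the rate coefficient to a surface diffusion
# coefficient, D_s ~ k_obs * R^2 (R = particle radius); the paper's
# actual relation is not stated in the abstract.
R_cm = 0.05
print("k_obs = %.4f 1/min, D_s ~ %.2e cm^2/min" % (k_obs, k_obs * R_cm**2))
```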
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
Hsu, Wei-Hsiu; Chen, Chi-lung; Kuo, Liang Tseng; Fan, Chun-Hao; Lee, Mel S; Hsu, Robert Wen-Wei
2014-01-01
Background Health-related fitness has been reported to be associated with improved quality of life (QoL) in the elderly. Health-related fitness is comprised of several dimensions that could be enhanced by specific training regimens. It has remained unclear how various dimensions of health-related fitness interact with QoL in postmenopausal women. Objective The purpose of the current study was to investigate the relationship between the dimensions of health-related fitness and QoL in elderly women. Methods A cohort of 408 postmenopausal women in a rural area of Taiwan was prospectively collected. Dimensions of health-related fitness, consisting of muscular strength, balance, cardiorespiratory endurance, flexibility, muscle endurance, and agility, were assessed. QoL was determined using the Short Form Health Survey (SF-36). Differences between age groups (stratified by decades) were calculated using a one-way analysis of variance (ANOVA) and multiple comparisons using a Scheffé test. A Spearman’s correlation analysis was performed to examine differences between QoL and each dimension of fitness. Multiple linear regression with forced-entry procedure was performed to evaluate the effects of health-related fitness. A P-value of <0.05 was considered statistically significant. Results Age-related decreases in health-related fitness were shown for sit-ups, back strength, grip strength, side steps, trunk extension, and agility (P<0.05). An age-related decrease in QoL, specifically in physical functioning, role limitation due to physical problems, and physical component score, was also demonstrated (P<0.05). Multiple linear regression analyses demonstrated that back strength significantly contributed to the physical component of QoL (adjusted beta of 0.268 [P<0.05]). Conclusion Back strength was positively correlated with the physical component of QoL among the examined dimensions of health-related fitness. Health-related fitness, as well as the physical component of QoL, declined with increasing age. PMID:25258526
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station, located in the Sierra Nevada national park. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as optimization, is the procedure followed in obtaining Schumann resonances from the natural electromagnetic noise. The optimization methods that have been analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The function that the different methods fit to the data is three Lorentzian curves plus a straight line. Gaussian curves have also been considered. The conclusions of this study are as follows: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges most slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
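A minimal sketch of the fit described above (three Lorentzian curves plus a straight line), using SciPy's curve_fit, which applies the Levenberg-Marquardt algorithm when no bounds are supplied; the synthetic spectrum and starting values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def spectrum_model(f, *p):
    """Three Lorentzian peaks plus a straight-line background.

    p = (A1, f1, g1, A2, f2, g2, A3, f3, g3, slope, offset)
    """
    y = p[-2] * f + p[-1]                      # linear background
    for A, f0, g in zip(p[0:9:3], p[1:9:3], p[2:9:3]):
        y += A * g**2 / ((f - f0)**2 + g**2)   # Lorentzian peak
    return y

# Synthetic amplitude spectrum around the first three Schumann modes
f = np.linspace(6.0, 25.0, 400)
true = (1.0, 7.8, 1.0, 0.7, 14.1, 1.5, 0.5, 20.3, 2.0, -0.005, 0.2)
rng = np.random.default_rng(0)
y = spectrum_model(f, *true) + 0.02 * rng.standard_normal(f.size)

# With no bounds, curve_fit uses Levenberg-Marquardt
p0 = (0.8, 8.0, 1.0, 0.6, 14.0, 1.0, 0.4, 20.0, 1.0, 0.0, 0.1)
popt, pcov = curve_fit(spectrum_model, f, y, p0=p0)
print("fitted modal frequencies:", popt[1], popt[4], popt[7])
```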
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
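A hedged sketch of the decomposition problem just described: counts are modeled as Poisson with mean given by a linear combination of known component shapes, and the fluxes are estimated by maximizing the Poisson likelihood (the paper's one-count-at-a-time formulation is not reproduced here; all data are synthetic).

```python
import numpy as np
from scipy.optimize import minimize

# Observed counts n_i ~ Poisson((A @ x)_i): A holds the known component
# shapes of the linear model, x the unknown non-negative fluxes.
rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(100, 3)))        # toy response matrix
n = rng.poisson(A @ np.array([5.0, 2.0, 0.5]))

def neg_loglike(x):
    mu = A @ x
    return np.sum(mu - n * np.log(mu))       # Poisson NLL up to a constant

res = minimize(neg_loglike, x0=np.ones(3),
               bounds=[(1e-9, None)] * 3)    # keep rates positive
print("ML flux estimates:", res.x)
```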
Number Games, Magnitude Representation, and Basic Number Skills in Preschoolers
ERIC Educational Resources Information Center
Whyte, Jemma Catherine; Bull, Rebecca
2008-01-01
The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was…
A simplified competition data analysis for radioligand specific activity determination.
Venturino, A; Rivera, E S; Bergoc, R M; Caro, R A
1990-01-01
Non-linear regression and two-step linear fit methods were developed to determine the actual specific activity of ¹²⁵I-ovine prolactin by radioreceptor self-displacement analysis. The experimental results obtained by the different methods are superposable. The non-linear regression method is considered to be the most adequate procedure to calculate the specific activity, but if its software is not available, the other described methods are also suitable.
ERIC Educational Resources Information Center
Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael
2011-01-01
This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT® scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
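For the hyperplane case, one common linear-programming formulation is least absolute deviations regression; the sketch below poses that LP with SciPy (the paper's series-of-LPs procedure for the general subspace problem is more involved, so this is only an illustration of the hyperplane special case).

```python
import numpy as np
from scipy.optimize import linprog

# L1 (least absolute deviations) hyperplane fit y ~ X @ beta, posed as an
# LP: minimize sum(t) subject to -t <= y - X @ beta <= t.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_t(df=2, size=50)  # heavy-tailed noise

n, p = X.shape
c = np.r_[np.zeros(p), np.ones(n)]             # minimize sum of residual bounds t_i
A_ub = np.block([[ X, -np.eye(n)],             #  X @ beta - t <= y
                 [-X, -np.eye(n)]])            # -X @ beta - t <= -y
b_ub = np.r_[y, -y]
bounds = [(None, None)] * p + [(0, None)] * n  # beta free, t >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("L1-fit coefficients:", res.x[:p])
```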
Health and Fitness Through Physical Activity.
ERIC Educational Resources Information Center
Pollock, Michael L.; And Others
A synthesis of research findings in exercise and physical fitness is presented to provide the general public with insights into establishing an individualized exercise program. The material is divided into seven subtopics: (1) a general overview of the need for exercise and fitness and how it is an integral part of preventive medicine programs;…
Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise
2017-08-25
The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine to that of controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24) compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
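A minimal sketch of the recommended fit, a per-individual log-linear regression with a freely estimated intercept, contrasted with a fit whose intercept is fixed to a (hypothetical) inoculum-derived level; all numbers are placeholders.

```python
import numpy as np

# Per-individual log-linear growth fit: log10 p(t) = intercept + rate * t.
t_days = np.array([4, 5, 6, 7, 8], float)          # hypothetical sampling days
p = np.array([3.2e2, 1.1e3, 4.0e3, 1.3e4, 4.5e4])  # hypothetical parasites/mL

rate, intercept = np.polyfit(t_days, np.log10(p), deg=1)
print("free-intercept rate: %.2f log10 parasites per mL/day" % rate)

# For comparison, a fit constrained through log10(inoculum) at t = 0:
log_inoc = np.log10(30.0)   # hypothetical inoculum-derived starting level
y = np.log10(p) - log_inoc
rate_fixed = np.sum(y * t_days) / np.sum(t_days**2)  # least squares through a point
print("fixed-intercept rate: %.2f" % rate_fixed)
```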
Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G
2013-01-01
Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The last model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records.
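A small sketch of the Legendre covariates that underlie such random regression test-day models, with days in milk rescaled to [-1, 1]; the DIM range is an assumption and normalization constants are omitted.

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre covariate matrix for a random regression test-day model:
# days in milk (DIM) rescaled to [-1, 1], then each Legendre basis
# polynomial evaluated up to the requested order.
def legendre_covariates(dim, dim_min=5, dim_max=305, order=3):
    t = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
    return np.column_stack([legendre.legval(t, np.eye(order + 1)[k])
                            for k in range(order + 1)])

Z = legendre_covariates(np.array([5, 60, 150, 240, 305]))
print(Z.round(3))   # one row per test day, one column per Legendre term
```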
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
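A minimal sketch of the core idea, truncating a least-squares solve to the dominant SVD modes of the derivative matrix; the thresholding rule here is an illustrative choice, not the paper's selection procedure.

```python
import numpy as np

def svd_truncated_lsq(J, r, tol=1e-6):
    """Solve the least-squares step J @ dx ~ r keeping only the dominant
    SVD modes of the derivative (response) matrix J; `tol` sets an
    illustrative relative singular-value cutoff."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > tol * s[0]                 # select dominant modes
    return Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])

# Toy ill-conditioned system with a nearly dependent column
rng = np.random.default_rng(3)
J = rng.normal(size=(200, 20))
J[:, -1] = J[:, 0] + 1e-10 * rng.normal(size=200)
r = J @ rng.normal(size=20)
print("residual norm:", np.linalg.norm(J @ svd_truncated_lsq(J, r) - r))
```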
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors
Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th out of the 22 countries hardest hit by TB. Although many pieces of research have been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted with the classical approach and with the Bayesian approach using both non-informative and informative priors, based on South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up priors for the 2014 model. PMID:28257437
Temporal and radial variation of the solar wind temperature-speed relationship
NASA Astrophysics Data System (ADS)
Elliott, H. A.; Henney, C. J.; McComas, D. J.; Smith, C. W.; Vasquez, B. J.
2012-09-01
The solar wind temperature (T) and speed (V) are generally well correlated at ~1 AU, except in Interplanetary Coronal Mass Ejections where this correlation breaks down. We perform a comprehensive analysis of both the temporal and radial variation in the temperature-speed (T-V) relationship of the non-transient wind, and our analysis provides insight into both the causes of the T-V relationship and the sources of the temperature variability. Often at 1 AU the speed-temperature relationship is well represented by a single linear fit over a speed range spanning both the slow and fast wind. However, at times the fast wind from coronal holes can have a different T-V relationship than the slow wind. A good example of this was in 2003 when there was a very large and long-lived outward magnetic polarity coronal hole at low latitudes that emitted wind with speeds as fast as a polar coronal hole. The long-lived nature of the hole made it possible to clearly distinguish that some holes can have a different T-V relationship. In an earlier ACE study, we found that both the compression and rarefaction T-V curves are linear, but the compression curve is shifted to higher temperatures. By separating compressions and rarefactions prior to determining the radial profiles of the solar wind parameters, the importance of dynamic interactions on the radial evolution of the solar wind parameters is revealed. Although the T-V relationship at 1 AU is often well described by a single linear curve, we find that the T-V relationship continually evolves with distance. Beyond ~2.5 AU the differences between the compressions and rarefactions are quite significant and affect the shape of the overall T-V distribution to the point that a simple linear fit no longer describes the distribution well. Since additional heating of the ambient solar wind outside of interaction regions can be associated with Alfvénic fluctuations and the turbulent energy cascade, we also estimate the heating rate radial profile from the solar wind speed and temperature measurements.
Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M
2014-01-01
Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast, multichannel WDRC often leads to poor music quality, whereas linear processing or slow WDRC are generally preferred. Furthermore, the effect of WDRC is more important for music preferences than music-industry CL applied to signals before the hearing-aid input stage. Variability in hearing-aid users' perceptions of music quality may be partially explained by frequency resolution abilities.
A state-based probabilistic model for tumor respiratory motion prediction
NASA Astrophysics Data System (ADS)
Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth
2010-12-01
This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.
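A heavily simplified, Markov-chain flavour of the state-based predictor (the paper's full HMM also models per-state observables and velocities, which are omitted here); the breathing-state labels and sequence are hypothetical.

```python
import numpy as np

# Breathing reduced to three states; the next state is predicted from an
# empirical transition matrix estimated from an observed state sequence.
STATES = ["exhale", "end-of-exhale", "inhale"]
seq = [0, 1, 2, 0, 1, 2, 0, 0, 1, 2, 0, 1, 1, 2, 0]  # hypothetical labels

T = np.zeros((3, 3))
for a, b in zip(seq[:-1], seq[1:]):
    T[a, b] += 1                          # count observed transitions
T /= T.sum(axis=1, keepdims=True)         # row-normalize to probabilities

current = seq[-1]
print("P(next state | %s):" % STATES[current],
      dict(zip(STATES, np.round(T[current], 2))))
```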
A SIGNIFICANCE TEST FOR THE LASSO
Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert
2014-01-01
In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ₁ penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and shrinkage—and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
PREdator: a python based GUI for data analysis, evaluation and fitting
2014-01-01
The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of the mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly Python script for data analysis, evaluation and fitting. PREdator is presented using the example of NMR paramagnetic relaxation enhancement analysis.
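A generic example of the fitting step such a tool wraps, shown here with SciPy and an exponential decay standing in for a user-supplied model (PREdator internals are not reproduced; the data are synthetic).

```python
import numpy as np
from scipy.optimize import curve_fit

# User-supplied model: here an exponential decay stands in for any of
# the model types mentioned above (linear, exponential, logarithmic, ...).
def model(x, amplitude, rate):
    return amplitude * np.exp(-rate * x)

x = np.linspace(0, 10, 30)
rng = np.random.default_rng(4)
y = model(x, 2.0, 0.4) + 0.05 * rng.standard_normal(x.size)

popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0))
perr = np.sqrt(np.diag(pcov))            # 1-sigma parameter uncertainties
print("amplitude = %.2f +/- %.2f, rate = %.2f +/- %.2f"
      % (popt[0], perr[0], popt[1], perr[1]))
```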
NASA Astrophysics Data System (ADS)
Mittal, R.; Rao, P.; Kaur, P.
2018-01-01
Elemental evaluations in scanty powdered material have been made using energy dispersive X-ray fluorescence (EDXRF) measurements, for which formulations along with a specific procedure for sample target preparation have been developed. Fractional amount evaluation involves a sequence of steps: (i) collection of elemental characteristic X-ray counts in EDXRF spectra recorded with different weights of material, (ii) a search for linearity between X-ray counts and material weights, (iii) calculation of elemental fractions from the linear fit, and (iv) linear fitting of the calculated fractions against sample weights and extrapolation of this fit to zero weight. Thus, elemental fractions at zero weight are free from material self-absorption effects for incident and emitted photons. The analytical procedure, after its verification with known synthetic samples of the macro-nutrients potassium and calcium, was used for wheat plant/soil samples obtained from a pot experiment.
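A short sketch of step (iv), the zero-weight extrapolation, with hypothetical potassium fractions.

```python
import numpy as np

# Elemental fractions computed at each sample weight are fitted linearly
# against weight; the intercept at w = 0 gives the self-absorption-free
# fraction. Numbers are hypothetical.
weights_mg = np.array([20, 40, 60, 80, 100], float)
fraction_K = np.array([0.0410, 0.0396, 0.0385, 0.0371, 0.0360])  # apparent K fraction

slope, intercept = np.polyfit(weights_mg, fraction_K, deg=1)
print("K fraction extrapolated to zero weight: %.4f" % intercept)
```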
Accurate formula for gaseous transmittance in the infrared.
Gibson, G A; Pierluissi, J H
1971-07-01
By considering the infrared transmittance model of Zachor as the equation for an elliptic cone, a quadratic generalization is proposed that yields significantly greater computational accuracy. The strong-band parameters are obtained by iterative nonlinear curve-fitting methods using a digital computer. The remaining parameters are determined with a linear least-squares technique and a weighting function that yields better results than the one adopted by Zachor. The model is applied to CO₂ over intervals of 50 cm⁻¹ between 550 cm⁻¹ and 9150 cm⁻¹ and to water vapor over similar intervals between 1050 cm⁻¹ and 9950 cm⁻¹, with mean rms deviations from the original data being 2.30 × 10⁻³ and 1.83 × 10⁻³, respectively.
Computer modeling the fatigue crack growth rate behavior of metals in corrosive environments
NASA Technical Reports Server (NTRS)
Richey, Edward, III; Wilson, Allen W.; Pope, Jonathan M.; Gangloff, Richard P.
1994-01-01
The objective of this task was to develop a method to digitize FCP (fatigue crack propagation) kinetics data, generally presented in terms of extensive da/dN–ΔK pairs, to produce a file for subsequent linear superposition or curve-fitting analysis. The method that was developed is specific to the Numonics 2400 Digitablet and is comparable to commercially available software products such as Digimatic™. Experiments demonstrated that the errors introduced by the photocopying of literature data, and by digitization, are small compared to those inherent in laboratory methods to characterize FCP in benign and aggressive environments. The digitizing procedure was employed to obtain fifteen crack growth rate data sets for several aerospace alloys in aggressive environments.
GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems
NASA Astrophysics Data System (ADS)
Goossens, Bart; Luong, Hiêp; Philips, Wilfried
2017-08-01
Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
Meteorological factors and the time of onset of chest pain in acute myocardial infarction
NASA Astrophysics Data System (ADS)
Thompson, David R.; Pohl, Jurgen E.; Tse, Yiu-Yu S.; Hiorns, Robert W.
1996-09-01
Analysis of the time of onset of chest pain in 2254 patients with a myocardial infarction admitted to a coronary care unit in Leicester during a 10-year period shows an association with temperature and humidity. During the coldest and most humid times of the year, the relationship is strong. A generalized linear model with a log link was used to fit the data, and the backward elimination selection procedure suggested that a humid, cold day might help to trigger the occurrence of myocardial infarction. In addition, cold weather was found to have a stronger effect on the male population, while men aged between 50 and 70 years were more sensitive to the effect of high humidity.
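A minimal sketch of this model class, assuming a Poisson family with the canonical log link (the abstract does not state the error distribution) and synthetic daily weather covariates, using statsmodels.

```python
import numpy as np
import statsmodels.api as sm

# GLM with a log link for daily onset counts against weather covariates;
# all data here are synthetic placeholders.
rng = np.random.default_rng(5)
temp = rng.normal(8.0, 6.0, 365)           # daily temperature, deg C
humid = rng.uniform(50, 100, 365)          # relative humidity, %
mu = np.exp(0.5 - 0.03 * temp + 0.01 * humid)
counts = rng.poisson(mu)

X = sm.add_constant(np.column_stack([temp, humid]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # effects of cold and humidity on the log onset rate
```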
Landsat test of diffuse reflectance models for aquatic suspended solids measurement
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Alfoldi, T. T.
1979-01-01
Landsat radiance data were used to test mathematical models relating diffuse reflectance to aquatic suspended solids concentration. Digital CCT data for Landsat passes over the Bay of Fundy, Nova Scotia were analyzed on a General Electric Co. Image 100 multispectral analysis system. Three data sets were studied separately and together in all combinations with and without solar angle correction. Statistical analysis and chromaticity analysis show that a nonlinear relationship between Landsat radiance and suspended solids concentration is better at curve-fitting than a linear relationship. In particular, the quasi-single-scattering diffuse reflectance model developed by Gordon and coworkers is corroborated. The Gordon model applied to 33 points of MSS 5 data combined from three dates produced r = 0.98.
US EPA OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS
The Optimal Well Locator (OWL) uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the plane fit to the water table is used to predict the direction and gradient of ground water flow. Along with ...
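A small sketch of the plane fit OWL performs, with hypothetical well coordinates and heads; the flow direction follows the negative head gradient.

```python
import numpy as np

# Least-squares plane fit to water-table elevations: z = a*x + b*y + c.
# The hydraulic gradient points down-slope, i.e. along -(a, b).
x = np.array([0.0, 100.0, 0.0, 100.0, 50.0])   # easting, m (hypothetical)
y = np.array([0.0, 0.0, 100.0, 100.0, 50.0])   # northing, m
z = np.array([10.2, 9.8, 10.6, 10.1, 10.2])    # head, m

A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

grad = np.hypot(a, b)
azimuth = np.degrees(np.arctan2(-a, -b)) % 360  # flow direction, deg from north
print("gradient = %.4f, flow azimuth = %.0f deg" % (grad, azimuth))
```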
Deriving the Regression Equation without Using Calculus
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
2004-01-01
Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…
40 CFR 89.321 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...
40 CFR 89.321 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...
40 CFR 89.321 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...
40 CFR 89.321 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...
A microcomputer program for analysis of nucleic acid hybridization data
Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.
1982-01-01
The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the 'Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
Exponential Correlation of IQ and the Wealth of Nations
ERIC Educational Resources Information Center
Dickerson, Richard E.
2006-01-01
Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form GDP = a × 10^(b × IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
ERIC Educational Resources Information Center
Pissanos, Becky W.; And Others
1983-01-01
Step-wise linear regressions were used to relate children's age, sex, and body composition to performance on basic motor abilities including balance, speed, agility, power, coordination, and reaction time, and to health-related fitness items including flexibility, muscle strength and endurance and cardiovascular functions. Eighty subjects were in…
Liu, Francesca; Mahmoud, Wala; Metz, Renske; Beunder, Kyle; Delextrat, Anne; Morris, Martyn G; Esser, Patrick; Collett, Johnny; Meaney, Andy; Howells, Ken; Dawes, Helen
2018-01-01
Introduction Motor competence (MC) is an important factor in the development of health and fitness in adolescence. Aims This cross-sectional study aims to explore the distribution of MC across school students aged 13–14 years old and the extent of the relationship of MC to measures of health and fitness across genders. Methods A total of 718 participants were tested from three different schools in the UK, 311 girls and 407 boys (aged 13–14 years); pairwise deletion for correlation variables reduced this to 555 (245 girls, 310 boys). Assessments consisted of body mass index, aerobic capacity, anaerobic power, and upper limb and lower limb MC. The distribution of MC and the strength of the relationships between MC and health/fitness measures were explored. Results Girls scored lower than boys on MC and health/fitness measures. Both measures of MC showed a normal distribution and a significant linear relationship of MC to all health and fitness measures for boys, girls, and both combined. A stronger relationship was reported for upper limb MC and aerobic capacity when compared with lower limb MC and aerobic capacity in boys (t=−2.21, degrees of freedom=307, P=0.03, 95% CI −0.253 to −0.011). Conclusion Normally distributed measures of upper and lower limb MC are linearly related to health and fitness measures in adolescents in a UK sample. Trial registration number NCT02517333. PMID:29629179
In search of average growth: describing within-year oral reading fluency growth across Grades 1-8.
Nese, Joseph F T; Biancarosa, Gina; Cummings, Kelli; Kennedy, Patrick; Alonzo, Julie; Tindal, Gerald
2013-10-01
Measures of oral reading fluency (ORF) are perhaps the most often used assessment to monitor student progress as part of a response to intervention (RTI) model. Rates of growth in research and aim lines in practice are used to characterize student growth; in either case, growth is generally defined as linear, increasing at a constant rate. Recent research suggests ORF growth follows a nonlinear trajectory, but limitations related to the datasets used in such studies, composed of only three testing occasions, curtail their ability to examine the true functional form of ORF growth. The purpose of this study was to model within-year ORF growth using up to eight testing occasions for 1448 students in Grades 1 to 8 to assess (a) the average growth trajectory for within-year ORF growth, (b) whether students vary significantly in within-year ORF growth, and (c) the extent to which findings are consistent across grades. Results demonstrated that for Grades 1 to 7, a quadratic growth model fit better than either linear or cubic growth models, and for Grade 8, there was no substantial, stable growth. Findings suggest that the expectation for linear growth currently used in practice may be unrealistic. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
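The study fit multilevel growth models; as a much simpler hedged sketch of the linear-vs-quadratic-vs-cubic comparison, one can fit polynomials of increasing degree to a single student's eight scores and compare a Gaussian AIC (all data below are invented):

```python
import numpy as np

def poly_aic(t, y, degree):
    """Least-squares polynomial fit plus a Gaussian AIC = 2k - 2*loglik."""
    resid = y - np.polyval(np.polyfit(t, y, degree), t)
    n, k = len(y), degree + 2            # coefficients + error variance
    sigma2 = np.sum(resid**2) / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

t = np.arange(8.0)                       # eight testing occasions
y = np.array([20.0, 31.0, 41.0, 49.0, 56.0, 61.0, 65.0, 67.0])
for d in (1, 2, 3):
    print(f"degree {d}: AIC = {poly_aic(t, y, d):.1f}")
```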
Preliminary SPE Phase II Far Field Ground Motion Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steedman, David W.
2014-03-06
Phase II of the Source Physics Experiment (SPE) program will be conducted in alluvium. Several candidate sites were identified, including the existing large-diameter borehole U1e. One criterion for acceptance is expected far-field ground motion. In June 2013 we were requested to estimate peak response 2 km from the borehole due to the largest planned SPE Phase II experiment: a contained 50-ton event. The cube-root scaled range for this event is 5423 m/kt^(1/3). The generally accepted first-order estimate of ground motions from an explosive event is to refer to the standard database for explosive events (Perrett and Bass, 1975). This reference is a compilation and analysis of ground motion data from numerous nuclear and chemical explosive events from the Nevada National Security Site (formerly the Nevada Test Site, or NTS) and other locations. The data were compiled and analyzed for various geologic settings including dry alluvium, which we believe is an accurate descriptor for the SPE Phase II setting. The Perrett and Bass plots of peak velocity and peak yield-scaled displacement, both vs. yield-scaled range, are provided here. Their analysis of both variables resulted in bi-linear fits: a close-in non-linear regime and a more distant linear regime.
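The cube-root scaled range quoted above follows from dividing the distance by the cube root of the yield in kilotons; the sketch below reproduces it approximately (the small difference from 5423 presumably reflects the exact range and rounding conventions used):

```python
yield_kt = 50.0 / 1000.0                  # 50 tons expressed in kilotons
range_m = 2000.0                          # 2 km standoff distance
print(range_m / yield_kt ** (1.0 / 3.0))  # ~5429 m/kt^(1/3)
```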
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards for heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of China's soil environmental quality standard. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg/kg) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R² = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil thus affected carrot yield before food quality. The major soil factors controlling Cr phytotoxicity were further identified, and prediction models were developed, using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived on the condition of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
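A linear-linear segmented model is two straight lines joined continuously at an estimated breakpoint. A minimal sketch with scipy (the function, starting values, and pH/Cr numbers are invented for illustration, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_linear(x, x0, y0, b1, b2):
    """Two lines with slopes b1 and b2 joined continuously at breakpoint x0."""
    return np.where(x < x0, y0 + b1 * (x - x0), y0 + b2 * (x - x0))

ph = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.3, 8.6])
cr = np.array([0.10, 0.11, 0.12, 0.13, 0.15, 0.20, 0.45, 0.70, 0.95])
(x0, y0, b1, b2), _ = curve_fit(linear_linear, ph, cr, p0=[7.5, 0.2, 0.01, 0.8])
print(f"breakpoint pH ~ {x0:.2f}, slopes {b1:.3f} and {b2:.3f}")
```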
Thermodynamic description of Hofmeister effects on the LCST of thermosensitive polymers.
Heyda, Jan; Dzubiella, Joachim
2014-09-18
Cosolvent effects on protein or polymer collapse transitions are typically discussed in terms of a two-state free energy change that is strictly linear in cosolute concentration. Here we investigate in detail the nonlinear thermodynamic changes of the collapse transition occurring at the lower critical solution temperature (LCST) of the model polymer poly(N-isopropylacrylamide) [PNIPAM] induced by Hofmeister salts. First, we establish an equation, based on the second-order expansion of the two-state free energy in concentration and temperature space, which fits the experimental LCST curves excellently and enables us to directly extract the corresponding thermodynamic parameters. Linear free energy changes, grounded on generic excluded-volume mechanisms, are indeed found for strongly hydrated kosmotropes. In contrast, for weakly hydrated chaotropes, we find significant nonlinear changes related to higher order thermodynamic derivatives of the preferential interaction parameter between salts and polymer. The observed non-monotonic behavior of the LCST can then be understood from a previously unrecognized sign change of the preferential interaction parameter with salt concentration. Finally, we find that solute partitioning models can possibly predict the linear free energy changes for the kosmotropes, but fail for chaotropes. Our findings cast strong doubt on their general applicability to protein unfolding transitions induced by chaotropes.
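One plausible form of the second-order expansion the abstract refers to, written as a sketch in LaTeX (the paper's exact parameterization may differ; ΔG is the two-state free energy, c the salt concentration, T₀ a reference temperature, and the g coefficients are the partial derivatives evaluated at c = 0, T = T₀):

```latex
\Delta G(c,T) \approx \Delta G_0
  + g_c\, c + g_T\,(T - T_0)
  + \tfrac{1}{2} g_{cc}\, c^2
  + g_{cT}\, c\,(T - T_0)
  + \tfrac{1}{2} g_{TT}\,(T - T_0)^2
```

Setting ΔG = 0 and solving for T as a function of c then yields a generally nonlinear LCST curve, which is the quantity fitted to experiment.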
Ambient temperature and coronary heart disease mortality in Beijing, China: a time series study
2012-01-01
Background Many studies have examined the association between ambient temperature and mortality. However, less evidence is available on the temperature effects on coronary heart disease (CHD) mortality, especially in China. In this study, we examined the relationship between ambient temperature and CHD mortality in Beijing, China during 2000 to 2011. In addition, we compared time series and time-stratified case-crossover models for the non-linear effects of temperature. Methods We examined the effects of temperature on CHD mortality using both time series and time-stratified case-crossover models. We also assessed the effects of temperature on CHD mortality by subgroups: gender (female and male) and age (age ≥ 65 and age < 65). We used a distributed lag non-linear model to examine the non-linear effects of temperature on CHD mortality up to 15 lag days. We used the Akaike information criterion to assess the model fit for the two designs. Results The time series models had a better model fit than the time-stratified case-crossover models. Both designs showed that the relationships between temperature and group-specific CHD mortality were non-linear. Extreme cold and hot temperatures significantly increased the risk of CHD mortality. Hot effects were acute and short-term, while cold effects were delayed by two days and lasted for five days. Older people and women were more sensitive to extreme cold and hot temperatures than younger people and men. Conclusions This study suggests that time series models performed better than time-stratified case-crossover models according to model fit, even though they produced similar non-linear effects of temperature on CHD mortality. In addition, our findings indicate that extreme cold and hot temperatures increase the risk of CHD mortality in Beijing, China, particularly for women and older people. PMID:22909034
Hearing aid fitting for visual and hearing impaired patients with Usher syndrome type IIa.
Hartel, B P; Agterberg, M J H; Snik, A F; Kunst, H P M; van Opstal, A J; Bosman, A J; Pennings, R J E
2017-08-01
Usher syndrome is the leading cause of hereditary deaf-blindness. Most patients with Usher syndrome type IIa start using hearing aids from a young age. A serious complaint concerns interference between sound localisation abilities and adaptive sound processing (compression), as present in today's hearing aids. The aim of this study was to investigate the effect of advanced signal processing on binaural hearing, including sound localisation. In this prospective study, patients were fitted with hearing aids offering both nonlinear (compression) and linear amplification programs. Data logging was used to objectively evaluate the use of either program. Performance was evaluated with a speech-in-noise test, a sound localisation test and two questionnaires focussing on self-reported benefit. Data logging confirmed that the reported use of hearing aids was high. The linear program was used significantly more often (average use: 77%) than the nonlinear program (average use: 17%). The results for speech intelligibility in noise and sound localisation did not show a significant difference between types of amplification. However, the self-reported outcomes showed higher scores on 'ease of communication' and overall benefit, and significantly lower scores on disability for the new hearing aids when compared with the patients' previous hearing aids with compression amplification. Patients with Usher syndrome type IIa prefer linear over nonlinear amplification when fitted with novel hearing aids. Apart from a significantly higher logged use, no difference between linear and nonlinear amplification was observed in speech-in-noise or sound localisation performance with the currently used tests. Further research is needed to evaluate the reasons behind the preference for the linear settings. © 2016 The Authors. Clinical Otolaryngology Published by John Wiley & Sons Ltd.
GWAS with longitudinal phenotypes: performance of approximate procedures
Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel
2015-01-01
Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
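A hedged sketch of the two steps (not the authors' implementation): fit a reduced LMM with random slopes using statsmodels, then regress the estimated slopes on each SNP with fast univariate OLS. The column names and the alignment of the genotype rows with the fitted groups are assumptions.

```python
import numpy as np
import statsmodels.formula.api as smf

def conditional_two_step(df, snps):
    """df: long-format DataFrame with columns 'id', 'time', 'pheno'.
    snps: (n_subjects x n_snps) genotype matrix, rows ordered as df's groups."""
    # Step 1: reduced LMM with random intercept and slope, no SNP terms.
    lmm = smf.mixedlm("pheno ~ time", df, groups=df["id"], re_formula="~time").fit()
    slopes = np.array([re["time"] for re in lmm.random_effects.values()])
    # Step 2: univariate regression of the estimated slopes on each SNP.
    x = snps - snps.mean(axis=0)
    return x.T @ (slopes - slopes.mean()) / (x ** 2).sum(axis=0)
```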
Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorrer, C.; Kang, I.
2008-04-04
Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.
Phonation threshold pressure across the pitch range: preliminary test of a model.
Solomon, Nancy Pearl; Ramanathan, Pradeep; Makashay, Matthew J
2007-09-01
This study sought to examine the specific relationship between phonation threshold pressure (PTP) and voice fundamental frequency (F0) across the pitch range. A published theoretical model of this relationship described a quadratic equation, with PTP rising steeply with F0. Prospective data from eight adults with normal, untrained voices were collected. Subjects produced their quietest phonation at 10 randomly ordered pitches from 5% to 95% of their semitone pitch range at 10% intervals. Analysis included curve fitting for individual and group data, as well as comparisons to the previous model. The group data fit a quadratic function similar to that proposed previously, but the specific quadratic coefficient and constant values differed. Four of the individual subjects' data were best fit by quartic functions, two by quadratic functions, and one by a linear function. This preliminary study indicates that PTP may be minimal at a "comfortable" pitch rather than the lowest pitch tested, and that, for some individuals, PTP may be slightly elevated during the passaggio between modal and falsetto vocal registers. These data support the general form of the theoretical PTP-F0 function for these speakers, and point to possible refinements of the model. Future studies with larger groups of male and female subjects across a wider age range may eventually reveal the specific nature of the function.
Advancing School and Community Engagement Now for Disease Prevention (ASCEND).
Treu, Judith A; Doughty, Kimberly; Reynolds, Jesse S; Njike, Valentine Y; Katz, David L
2017-03-01
To compare two intensity levels (standard vs. enhanced) of a nutrition and physical activity intervention vs. a control (usual programs) on nutrition knowledge, body mass index, fitness, academic performance, behavior, and medication use among elementary school students. Quasi-experimental with three arms. Elementary schools, students' homes, and a supermarket. A total of 1487 third-grade students. The standard intervention (SI) provided daily physical activity in classrooms and a program on making healthful food choices using food labels. The enhanced intervention (EI) provided these plus additional components for students and their families. Body mass index (zBMI), food label literacy, physical fitness, academic performance, behavior, and medication use for asthma or attention-deficit hyperactivity disorder (ADHD). Multivariable generalized linear model and logistic regression to assess change in outcome measures. Both the SI and EI groups gained less weight than the control (p < .001), but zBMI did not differ between groups (p = 1.00). There were no apparent effects on physical fitness or academic performance. Both intervention groups improved significantly but similarly in food label literacy (p = .36). Asthma medication use was reduced significantly in the SI group, and nonsignificantly (p = .10) in the EI group. Use of ADHD medication remained unchanged (p = .34). The standard intervention may improve food label literacy and reduce asthma medication use in elementary school children, but an enhanced version provides no further benefit.
Prediction of textural attributes using color values of banana (Musa sapientum) during ripening.
Jaiswal, Pranita; Jha, Shyam Narayan; Kaur, Poonam Preet; Bhardwaj, Rishi; Singh, Ashish Kumar; Wadhawan, Vishakha
2014-06-01
Banana is an important sub-tropical fruit in international trade. It undergoes significant textural and color transformations during ripening, which in turn influence the eating quality of the fruit. In the present study, color ('L', 'a' and 'b' values) and textural attributes of bananas (peel, fruit and pulp firmness; pulp toughness; stickiness) were studied simultaneously using a Hunter Color Lab and a Texture Analyser, respectively, over a ripening period of 10 days at ambient atmosphere. There was a significant effect of ripening period on all the considered textural characteristics and on all color properties except 'b'. In general, the textural descriptors (peel, fruit and pulp firmness; pulp toughness) decreased during ripening while stickiness increased, and the color values 'a' and 'b' increased with ripening, unlike 'L'. Among the various textural attributes, peel toughness and pulp firmness showed the highest correlation (r) with the 'a' value of banana peel. To predict textural properties from color values, five types of equations (linear, polynomial, exponential, logarithmic, power) were fitted. Among them, the polynomial equation was found to be the best fit (highest coefficient of determination, R²) for predicting texture from color properties. Pulp firmness, peel toughness and pulp toughness showed R² above 0.84, indicating the potential of the fitted equations for non-destructive prediction of the textural profile of bananas from the 'a' value.
Small-Scale, Local Area, and Transitional Millimeter Wave Propagation for 5G Communications
NASA Astrophysics Data System (ADS)
Rappaport, Theodore S.; MacCartney, George R.; Sun, Shu; Yan, Hangsong; Deng, Sijia
2017-12-01
This paper studies radio propagation mechanisms that impact handoffs, air interface design, beam steering, and MIMO for 5G mobile communication systems. Knife edge diffraction (KED) and a creeping wave linear model are shown to predict diffraction loss around typical building objects from 10 to 26 GHz, and human blockage measurements at 73 GHz are shown to fit a double knife-edge diffraction (DKED) model which incorporates antenna gains. Small-scale spatial fading of millimeter wave received signal voltage amplitude is generally Ricean-distributed for both omnidirectional and directional receive antenna patterns under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in most cases, although the log-normal distribution fits measured data better for the omnidirectional receive antenna pattern in the NLOS environment. Small-scale spatial autocorrelations of received voltage amplitudes are shown to fit sinusoidal exponential and exponential functions for LOS and NLOS environments, respectively, with small decorrelation distances of 0.27 cm to 13.6 cm (smaller than the size of a handset) that are favorable for spatial multiplexing. Local area measurements using cluster and route scenarios show how the received signal changes as the mobile moves and transitions from LOS to NLOS locations, with reasonably stationary signal levels within clusters. Wideband mmWave power levels are shown to fade from 0.4 dB/ms to 40 dB/s, depending on travel speed and surroundings.
Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method
ERIC Educational Resources Information Center
Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev
2018-01-01
The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal distances from the data points to the best-fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
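For reference, the closed-form slopes these methods yield can be written directly from the sample variances and covariance; a minimal numpy sketch with invented data:

```python
import numpy as np

def major_axis_slope(x, y):
    """Slope of the line minimizing summed squared orthogonal distances
    (the first principal component direction); assumes cov(x, y) != 0."""
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

def reduced_major_axis_slope(x, y):
    """Geometric mean of the y-on-x and x-on-y least-squares slopes."""
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return np.sign(sxy) * np.std(y, ddof=1) / np.std(x, ddof=1)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])
print(major_axis_slope(x, y), reduced_major_axis_slope(x, y))
```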
NASA Technical Reports Server (NTRS)
Hada, M.; George, Kerry; Cucinotta, Francis A.
2011-01-01
The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high-LET radiation exposure. Estimates of risks from low doses and low dose-rates are often extrapolated using data from Japanese atomic bomb survivors with either linear or linear-quadratic fits. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (1-20 cGy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole-chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving greater than 2 breaks in 2 or more chromosomes). The curves for doses above 10 cGy were fitted with linear or linear-quadratic functions. For Si-28 ions no dose response was observed in the 2-10 cGy dose range, suggesting a non-targeted effect in this range.
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Method for Making Measurements of the Post-Combustion Residence Time in a Gas Turbine Engine
NASA Technical Reports Server (NTRS)
Miles, Jeffrey H (Inventor)
2015-01-01
A system and method of measuring a residence time in a gas-turbine engine is provided, whereby the method includes placing pressure sensors at a combustor entrance and at a turbine exit of the gas-turbine engine and measuring a combustor pressure at the combustor entrance and a turbine exit pressure at the turbine exit. The method further includes computing cross-spectrum functions between a combustor pressure sensor signal from the measured combustor pressure and a turbine exit pressure sensor signal from the measured turbine exit pressure, applying a linear curve fit to the cross-spectrum functions, and computing a post-combustion residence time from the linear curve fit.
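A pure transport delay τ appears in the cross-spectrum as a phase that is linear in frequency, φ(f) = −2πfτ, which is presumably what the linear curve fit extracts. A hedged sketch (the function name, sampling rate, and frequency band are assumptions, not the patent's procedure):

```python
import numpy as np
from scipy.signal import csd

def residence_time(p_comb, p_exit, fs, fmax=200.0):
    """Estimate the delay between combustor-entrance and turbine-exit
    pressure signals from the slope of the cross-spectrum phase."""
    f, pxy = csd(p_comb, p_exit, fs=fs, nperseg=4096)
    band = (f > 0) & (f < fmax)
    phase = np.unwrap(np.angle(pxy[band]))
    slope, _ = np.polyfit(f[band], phase, 1)   # phase ~ -2*pi*f*tau
    return -slope / (2 * np.pi)
```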
Blood biomarkers in male and female participants after an Ironman-distance triathlon
Danielsson, Tom; Carlsson, Jörg; Schreyer, Hendrik; Ahnesjö, Jonas; Ten Siethoff, Lasse; Ragnarsson, Thony; Tugetam, Åsa
2017-01-01
Background While overall physical activity is clearly associated with better short-term and long-term health, prolonged strenuous physical activity may result in a rise in acute levels of blood biomarkers used in clinical practice for diagnosis of various conditions or diseases. In this study, we explored the acute effects of a full Ironman-distance triathlon on biomarkers related to heart, liver, kidney and skeletal muscle damage immediately post-race and after one week's rest. We also examined whether sex, age, finishing time and body composition influenced the post-race values of the biomarkers. Methods A sample of 30 subjects was recruited (50% women). The subjects were evaluated for body composition, and blood samples were taken on three occasions: before the race (T1), immediately after (T2) and one week after the race (T3). Linear regression models were fitted to analyse the independent contributions of sex and finishing time, controlled for weight, body fat percentage and age, to the biomarkers at the termination of the race (T2). Linear mixed models were fitted to examine whether the biomarkers differed between the sexes over time (T1-T3). Results Being male was a significant predictor of higher post-race (T2) myoglobin, CK and creatinine levels, and body weight was negatively associated with myoglobin. In general, the models were unable to explain the variation of the dependent variables. In the linear mixed models, an interaction between time (T1-T3) and sex was seen for myoglobin and creatinine, in which women had a less pronounced response to the race. Conclusion Overall, women appear to tolerate the effects of prolonged strenuous physical activity better than men, as illustrated by their lower values of the biomarkers both post-race and during recovery. PMID:28609447
Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S
2015-11-13
The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multilinear regression and artificial neural networks are used for QSRR development in gas chromatography (GC). In this study, an artificial-intelligence-based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that, given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study I) and adamantane derivatives (case study II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R²) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. Copyright © 2015 Elsevier B.V. All rights reserved.
Laser plasma x-ray line spectra fitted using the Pearson VII function
NASA Astrophysics Data System (ADS)
Michette, A. G.; Pfauntsch, S. J.
2000-05-01
The Pearson VII function, which is more general than the Gaussian, Lorentzian and other profiles, is used to fit the x-ray spectral lines produced in a laser-generated plasma, instead of the more usual, but computationally expensive, Voigt function. The mean full-width half-maximum of the fitted lines is 0.102 ± 0.014 nm, entirely consistent with the value expected from geometrical considerations, and the fitted line profiles are generally inconsistent with being either Lorentzian or Gaussian.
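For reference, a common parameterization of the Pearson VII profile with full width at half maximum w is sketched below; m = 1 gives a Lorentzian and m → ∞ approaches a Gaussian (the parameter names are ours, not necessarily the paper's):

```python
import numpy as np

def pearson_vii(x, amp, x0, w, m):
    """Pearson VII peak: height amp at center x0, FWHM w, shape exponent m."""
    return amp / (1.0 + 4.0 * (2.0 ** (1.0 / m) - 1.0) * ((x - x0) / w) ** 2) ** m

# Sanity check: half maximum is reached at x0 + w/2.
print(pearson_vii(0.5, 1.0, 0.0, 1.0, 2.0))   # 0.5
```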
Ryberg, Karen R.; Vecchia, Aldo V.
2006-01-01
This report presents the results of a study conducted by the U.S. Geological Survey, in cooperation with the North Dakota State Water Commission, the Devils Lake Basin Joint Water Resource Board, and the Red River Joint Water Resource District, to analyze historical water-quality trends in three dissolved major ions, three nutrients, and one dissolved trace element for eight stations in the Devils Lake Basin in North Dakota and to develop an efficient sampling design to monitor the future trends. A multiple-regression model was used to detect and remove streamflow-related variability in constituent concentrations. To separate the natural variability in concentration as a result of variability in streamflow from the variability in concentration as a result of other factors, the base-10 logarithm of daily streamflow was divided into four components: a 5-year streamflow anomaly, an annual streamflow anomaly, a seasonal streamflow anomaly, and a daily streamflow anomaly. The constituent concentrations then were adjusted for streamflow-related variability by removing the 5-year, annual, seasonal, and daily variability. Constituents used for the water-quality trend analysis were evaluated for a step trend to examine the effect of Channel A on water quality in the basin and a linear trend to detect gradual changes with time from January 1980 through September 2003. The fitted upward linear trends for dissolved calcium concentrations during 1980-2003 for two stations were significant. The fitted step trends for dissolved sulfate concentrations for three stations were positive and similar in magnitude. Of the three upward trends, one was significant. The fitted step trends for dissolved chloride concentrations were positive but insignificant. The fitted linear trends for the upstream stations were small and insignificant, but three of the downward trends that occurred during 1980-2003 for the remaining stations were significant. The fitted upward linear trends for dissolved nitrite plus nitrate as nitrogen concentrations during 1987-2003 for two stations were significant. However, concentrations during recent years appear to be lower than those for the 1970s and early 1980s but higher than those for the late 1980s and early 1990s. The fitted downward linear trend for dissolved ammonia concentrations for one station was significant. The fitted linear trends for total phosphorus concentrations for two stations were significant. Upward trends for total phosphorus concentrations occurred from the late 1980s to 2003 for most stations, but a small and insignificant downward trend occurred for one station. Continued monitoring will be needed to determine if the recent trend toward higher dissolved nitrite plus nitrate as nitrogen and total phosphorus concentrations continues in the future. For continued monitoring of water-quality trends in the upper Devils Lake Basin, an efficient sampling design consists of five major-ion, nutrient, and trace-element samples per year at three existing stream stations and at three existing lake stations. This sampling design requires the collection of 15 stream samples and 15 lake samples per year rather than 16 stream samples and 20 lake samples per year as in the 1992-2003 program. Thus, the design would result in a program that is less costly and more efficient than the 1992-2003 program but that still would provide the data needed to monitor water-quality trends in the Devils Lake Basin.
The non-linear power spectrum of the Lyman alpha forest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo
2015-12-01
The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z ∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula.
Egg production forecasting: Determining efficient modeling approaches.
Ahmad, H A
2011-12-01
Several mathematical/statistical and artificial intelligence models were developed to compare egg production forecasts in commercial layers. Initial data for these models were collected from a comparative layer trial on commercial strains conducted at the Poultry Research Farms, Auburn University. Simulated data were produced to represent new scenarios by using means and SD of egg production of the 22 commercial strains. From the simulated data, random examples were generated for neural network training and testing for the weekly egg production prediction from wk 22 to 36. Three neural network architectures (back-propagation-3, Ward-5, and the general regression neural network) were compared for their efficiency to forecast egg production, along with other traditional models. The general regression neural network gave the best-fitting line, which almost overlapped with the commercial egg production data, with an R² of 0.71. The general regression neural network-predicted curve was compared with original egg production data, the average curves of white-shelled and brown-shelled strains, linear regression predictions, and the Gompertz nonlinear model. The general regression neural network was superior in all these comparisons and may be the model of choice if the initial overprediction is managed efficiently. In general, neural network models are efficient, are easy to use, require fewer data, and are practical under farm management conditions to forecast egg production.
Cosmological structure formation in Decaying Dark Matter models
NASA Astrophysics Data System (ADS)
Cheng, Dalong; Chu, M.-C.; Tang, Jiayu
2015-07-01
The standard cold dark matter (CDM) model predicts too many and too dense small structures. We consider an alternative model in which the dark matter undergoes two-body decays, with cosmological lifetime τ, into only one type of massive daughter with non-relativistic recoil velocity Vk. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale on a time scale comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while remaining consistent with high-redshift observations. We study cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures, especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters (lifetime τ, recoil velocity Vk) and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM does not induce larger suppression than the Lyman-α constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo model predictions are still valid after considering a global decayed fraction. Finally, we point out that DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zeldovich (SZ) effect number count for τ ~ H0^-1.
Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.
Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A
2017-03-01
Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
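As a hedged sketch of the zero-inflated approach, statsmodels provides a zero-inflated Poisson model with a logit inflation component; the simulated scores below only mimic the "34% zeroes, mean about 3" pattern and ignore the NRS's 0-10 ceiling.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))       # two patient covariates
y = np.where(rng.random(500) < 0.34, 0, rng.poisson(4.0, size=500))

# Same covariates drive both the count and the excess-zero components here.
zip_res = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(maxiter=200, disp=0)
print(zip_res.params)
```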
Sex ratio variation in Iberian pigs.
Toro, M A; Fernández, A; García-Cortés, L A; Rodrigáñez, J; Silió, L
2006-06-01
Within the area of sex allocation, one of the topics that has attracted a lot of attention is the sex ratio problem. Fisher (1930) proposed that equal numbers of males and females have been promoted by natural selection and that the balanced ratio has adaptive significance. But the empirical success of Fisher's theory remains doubtful because a sex ratio of 0.50 is also expected from the chromosomal mechanism of sex determination. Another way of approaching the subject is to note that Fisher's argument relies on the underlying assumption that offspring inherit their parents' tendency toward a biased sex ratio and therefore that genetic variance for this trait exists. Here, we analyzed sex ratio data on 56,807 piglets from 550 boars and 1893 dams. In addition to a classical analysis of heterogeneity, we performed analyses fitting linear and threshold animal models in a Bayesian framework using Gibbs sampling techniques. The marginal posterior mean of heritability was 2.63 × 10^-4 under the sire linear model and 9.17 × 10^-4 under the sire threshold model. The probability of the hypothesis h² = 0 under the latter model was 0.996. Also, we did not detect any trend in sex ratio related to maternal age. From an evolutionary point of view, chromosomal sex determination acts as a constraint that precludes control of offspring sex ratio in vertebrates, and it should be included in the general theory of sex allocation. From a practical point of view, this means that the sex ratio in domestic species is hardly susceptible to modification by artificial selection.
NASA Astrophysics Data System (ADS)
Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean
1991-06-01
We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere which could be adjusted, originally by global affine transformation or local interactive adjustments, to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) into the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas on to the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mis-match of 6 - 7 mm as compared to the non-linear model. The results indicate a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems which map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, the combination of a non-linear deformation algorithm would allow for accurate measurement of individual anatomic variations and the inclusion of such variations in inter-subject averaging methodologies used for cognitive mapping with PET.
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model based on the finite-difference time-domain solving of the linearized Euler equations in quantitatively reproducing the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuations strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
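As a small sketch of the final point, scipy can fit a generalized gamma distribution to a sample of intensity magnitudes (the sample here is synthetic; the study compared the model's intensity statistics against such a fit):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=1.5, size=2000)   # stand-in intensity data

# Fit a generalized gamma with the location fixed at zero.
a, c, loc, scale = stats.gengamma.fit(sample, floc=0)
print(a, c, scale)
```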
An M-estimator for reduced-rank system identification.
Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T
2017-01-15
High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
[Approach to the Development of Mind and Persona].
Sawaguchi, Toshiko
2018-01-01
To enable health specialists working in the regional health field to access medical specialists, the possibility of using a voice-analysis approach for dissociative identity disorder (DID) patients as a health assessment for medical access (HAMA) was investigated. The first step is to investigate whether the plural personae in a single DID patient can be discriminated by voice analysis. Voices of DID patients, including those with different personae, were extracted from YouTube and analysed using the software PRAAT with basic frequency, oral factors, chin factors and tongue factors. In addition, RAKUGO storyteller voices, produced artificially and dramatically, were analysed in the same manner. Quantitative and qualitative analyses were carried out, and a nested logistic regression and a nested generalized linear model were developed. The voices of different personae in one DID patient could be easily distinguished visually using the basic frequency curve, cluster analysis and factor analysis. In the canonical analysis, only Roy's maximum root was <0.01. In the nested generalized linear model, the model using a standard deviation (SD) indicator fit best, and some other possibilities are shown here. In DID patients, a short transition time among plural personae could signal a risky situation such as suicide. So if the voice approach can show the time threshold of changes between the different personae, it would be useful as an access assessment in the form of a simple HAMA.
Characteristic operator functions for quantum input-plant-output models and coherent control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gough, John E.
We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entriesmore » that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance to model reduction and simplification definition by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.« less
Population decoding of motor cortical activity using a generalized linear model with hidden states.
Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam
2010-06-15
Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.
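For orientation, the classical GLM baseline that the hidden-state model extends is a Poisson regression of spike counts on kinematic covariates; a minimal sketch with simulated data (the paper's hidden-state EM estimation is not shown):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
kin = rng.normal(size=(1000, 3))          # position, velocity, acceleration
rate = np.exp(0.2 + kin @ np.array([0.4, 0.8, -0.3]))
spikes = rng.poisson(rate)                # counts per time bin

glm = sm.GLM(spikes, sm.add_constant(kin), family=sm.families.Poisson()).fit()
print(glm.params)                         # recovers roughly [0.2, 0.4, 0.8, -0.3]
```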
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with different cognitive loads over a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality for the development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared: a power function best fit the group curve for the more talented players, while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data, but a quadratic function best fit most individual curves. Individual variability is great, and neither a power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
Searching for oscillations in the primordial power spectrum. II. Constraints from Planck data
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Spergel, David N.; Wandelt, Benjamin D.
2014-03-01
In this second of two papers we apply our recently developed code to search for resonance features in the Planck CMB temperature data. We search for both log-spaced and linear-spaced oscillations and compare our findings with the results of our WMAP9 analysis and the Planck team analysis [P. A. R. Ade et al. (Planck Collaboration), arXiv:1303.5082]. While there are hints of log-spaced resonant features in the WMAP9 data, the significance of these features weakens with more data. With more accurate small-scale measurements, we also find that the best-fit frequency has shifted and the amplitude has been reduced. We confirm the presence of several low-frequency peaks, identified earlier by the Planck team, but with a larger improvement of fit (Δχ²_eff ~ 12). We investigate this improvement further by allowing the lensing potential to vary as well, showing a mild correlation between the amplitude of the oscillations and the lensing amplitude. We find that the improvement of fit increases even more (Δχ²_eff ~ 14) for the low frequencies that modify the spectrum in a way that mimics the lensing effect. Since these features were not present in the WMAP data, they are primarily due to Planck's better measurements at small angular scales. For linear-spaced oscillations we find a maximum Δχ²_eff ~ 13 scanning two orders of magnitude in frequency space, with the biggest improvements at extremely high frequencies. Again, we recover a best-fit frequency very close to the one found in WMAP9, which confirms that the fit improvement is driven by low ℓ. Further comparisons with WMAP9 show that Planck contains many more features, both for linear- and log-spaced oscillations, but with a smaller improvement of fit. We discuss the improvement as a function of the number of modes and study the effect of the 217 GHz map, which appears to drive most of the improvement for log-spaced oscillations. Two points strongly suggest that the detected features are fitting a combination of the noise and the dip at ℓ ~ 1800 in the 217 GHz map: the fit improvement comes mostly from a small range of ℓ, and comparison with simulations shows that the fit improvement is consistent with a statistical fluctuation. We conclude that none of the detected features are statistically significant.
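For readers unfamiliar with the template families involved, the sketch below writes down log-spaced and linear-spaced oscillatory modulations of a power-law primordial spectrum. This parameterization is a common convention in this literature and is assumed here for illustration; the exact conventions in the authors' code may differ.

```python
# Oscillatory templates on a power-law primordial spectrum (assumed convention).
import numpy as np

def p_log(k, As=2.2e-9, ns=0.96, kstar=0.05, A=0.03, omega=30.0, phi=0.0):
    """Power-law spectrum modulated by log-spaced oscillations."""
    return As * (k / kstar) ** (ns - 1) * (1 + A * np.cos(omega * np.log(k / kstar) + phi))

def p_lin(k, As=2.2e-9, ns=0.96, kstar=0.05, A=0.03, omega=500.0, phi=0.0):
    """Power-law spectrum modulated by linear-spaced oscillations."""
    return As * (k / kstar) ** (ns - 1) * (1 + A * np.cos(omega * k / kstar + phi))
```

The search then amounts to scanning the frequency omega (and phase phi) while refitting the remaining parameters, recording Δχ²_eff relative to the smooth spectrum.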
Some constraints on levels of shear stress in the crust from observations and theory.
McGarr, A.
1980-01-01
In situ stress determinations in North America, southern Africa, and Australia indicate that, on average, the maximum shear stress increases linearly with depth, to at least 5.1 km in soft rock, such as shale and sandstone, and to 3.7 km in hard rock, including granite and quartzite. Regression lines fitted to the data yield gradients of 3.8 MPa/km and 6.6 MPa/km for soft and hard rock, respectively. Generally, the maximum shear stress in compressional states of stress, for which the least principal stress is oriented near vertically, is substantially greater than in extensional stress regimes, where the greatest principal stress is in a vertical direction. The equations of equilibrium and compatibility can be used to provide functional constraints on the state of stress. If the stress is assumed to vary only with depth z in a given region, then all nonzero components must have the form A + Bz, where A and B are constants that generally differ for the various components.
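A minimal sketch of the depth regression described above: fit tau_max = A + B z and read the gradient B in MPa/km. The data points below are invented for illustration; only the reported gradients (3.8 MPa/km for soft rock, 6.6 MPa/km for hard rock) come from the paper.

```python
# Linear regression of maximum shear stress against depth (hypothetical data).
import numpy as np

z = np.array([0.5, 1.0, 1.8, 2.4, 3.1, 3.7])        # depth, km
tau = np.array([4.1, 7.2, 12.5, 16.0, 20.9, 24.2])  # maximum shear stress, MPa

B, A = np.polyfit(z, tau, 1)                         # slope B, intercept A
print(f"tau_max ~ {A:.1f} MPa + {B:.1f} MPa/km * z")
```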
Dyer, Bryce; Hassani, Hossein; Shadi, Mehran
2016-01-01
The format of cycling time trials in England, Wales and Northern Ireland involves riders competing individually over several fixed race distances of 10-100 miles and over time-constrained formats of 12 and 24 h in duration. Drawing on data provided by the national governing body covering England and Wales, an analysis of six male competition records was undertaken to illustrate their progression. Future projections are then forecast using the Singular Spectrum Analysis (SSA) technique, a method that has not previously been applied to sport-based time series data. All six records show progressive, non-linear improvement. Five records saw their highest rate of change during the 1950-1969 period. Whilst new records have generally become less frequent since this period, the magnitude of each performance improvement has generally increased. The SSA technique successfully produced short- to medium-term forecasts with a high level of fit to the time series data.
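Basic SSA, the technique applied above, can be sketched in a few lines: embed the series in a trajectory matrix, take its SVD, and rebuild a smoothed signal from the leading components by diagonal averaging. The window length, component count, and sample series below are illustrative assumptions; the recurrent forecasting step used to extend the series is omitted.

```python
# A minimal Singular Spectrum Analysis reconstruction.
import numpy as np

def ssa_reconstruct(y, window, n_components):
    n = len(y)
    k = n - window + 1
    X = np.column_stack([y[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    out, counts = np.zeros(n), np.zeros(n)
    for i in range(window):            # diagonal averaging (Hankelization)
        for j in range(k):
            out[i + j] += Xr[i, j]
            counts[i + j] += 1
    return out / counts

# Hypothetical record-progression series (e.g., minutes for a fixed distance)
y = np.array([60.0, 58.5, 57.2, 55.0, 54.1, 53.0, 52.2, 51.5, 50.9, 50.1,
              49.6, 49.0, 48.7, 48.2, 47.9, 47.5])
trend = ssa_reconstruct(y, window=6, n_components=2)
```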
Longitudinal excitations in Mg-Al-O refractory oxide melts studied by inelastic x-ray scattering.
Pozdnyakova, I; Hennet, L; Brun, J-F; Zanghi, D; Brassamin, S; Cristiglio, V; Price, D L; Albergamo, F; Bytchkov, A; Jahn, S; Saboungi, M-L
2007-03-21
The dynamic structure factor S(Q,ω) of the refractory oxide melts MgAl2O4 and MgAl4O7 is studied by inelastic x-ray scattering with aerodynamic levitation and laser heating. This technique allows the authors to measure simultaneously the elastic response and transport properties of melts under extreme temperatures. Over the wave vector Q range of 1-8 nm⁻¹ the data can be fitted with a generalized hydrodynamic model that incorporates a slow component described by a single relaxation time and an effectively instantaneous fast component. Their study provides estimates of high-frequency sound velocities and viscosities of the Mg-Al-O melts. In contrast to liquid metals, the dispersion of the high-frequency sound mode is found to be linear, and the generalized viscosity to be Q independent. Both experiment and simulation show a weak viscosity maximum around the MgAl4O7 composition.
Study on mathematical model to predict aerated power consumption in a gas-liquid stirred tank
NASA Astrophysics Data System (ADS)
Luan, Deyu; Zhang, Shengfeng; Wei, Xing; Chen, Yiming
The aerated power consumption characteristics of a transparent flat-bottomed tank, 0.3 m in diameter and stirred by a Rushton impeller, were investigated experimentally, with tap water as the liquid and air as the gas. Based on a Weibull model, a complete correlation of aerated power with the aeration flow number was established through non-linear fit analysis, and the effects of aeration rate and impeller speed on aerated power consumption were explored. Results show that the trend in aerated power consumption is similar across impeller speeds and impeller diameters: the aerated power drops nearly linearly at the onset of gas input, the rate of decline then lessens as the aeration rate increases, and finally the aerated power becomes essentially constant once the aeration rate reaches the loading state. The non-linear fit, performed in Origin on the experimental data, achieved a high goodness of fit, indicating that the established mathematical model can predict the aerated power consumption comparatively accurately. This work provides a valuable reference for the design and scale-up of stirred vessels.
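A minimal sketch of a Weibull-type non-linear fit of relative aerated power against the aeration flow number, in the spirit of the correlation above. The exact functional form used in the paper is not given in the abstract, so the Weibull-style decay and the sample data below are assumptions for illustration.

```python
# Fit a Weibull-type decay of the relative aerated power Pg/P0 vs flow number.
import numpy as np
from scipy.optimize import curve_fit

def weibull_power(Fl, a, b, c):
    """Pg/P0 falls from 1 toward (1 - a) as the flow number Fl grows (assumed form)."""
    return 1.0 - a * (1.0 - np.exp(-(Fl / b) ** c))

Fl = np.array([0.005, 0.01, 0.02, 0.03, 0.05, 0.08, 0.12])    # hypothetical
ratio = np.array([0.97, 0.92, 0.82, 0.74, 0.62, 0.55, 0.52])  # Pg/P0, hypothetical

p, _ = curve_fit(weibull_power, Fl, ratio, p0=(0.5, 0.03, 1.2))
print("a, b, c =", p)
```

The plateau parameter (1 - a) here plays the role of the constant power level reached at the loading state.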
The Optimal Well Locator (OWL) uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the plane fitted to the water table is used to predict the direction and gradient of ground water flow. Along with ...
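A minimal sketch of the plane-fitting step just described: a least-squares fit of head h = a + b x + c y to water-table elevations at monitoring wells, from which the flow gradient and down-gradient direction follow. The well coordinates and heads below are hypothetical, not OWL's code or data.

```python
# Fit a plane to water-table elevations and derive the flow direction.
import numpy as np

x = np.array([0.0, 50.0, 100.0, 30.0, 80.0])   # well easting, m (hypothetical)
y = np.array([0.0, 20.0, 10.0, 90.0, 70.0])    # well northing, m (hypothetical)
h = np.array([10.2, 10.05, 9.9, 10.1, 9.95])   # water-table elevation, m

A = np.column_stack([np.ones_like(x), x, y])
(a, b, c), *_ = np.linalg.lstsq(A, h, rcond=None)   # h ~ a + b*x + c*y

gradient = np.hypot(b, c)                 # magnitude of the head gradient
azimuth = np.degrees(np.arctan2(-b, -c))  # flow is down-gradient, azimuth from north
print(f"gradient = {gradient:.4f}, flow azimuth = {azimuth:.1f} deg")
```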
Greer, Dennis H.
2012-01-01
Background and aims Grapevines growing in Australia are often exposed to very high temperatures, and how their gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature in order to quantify the impact of the growing conditions on vine performance. Methodology Leaf gas exchange was measured along grapevine shoots, in accordance with their growth and development, over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands, and the model parameters and coefficients were applied to independent datasets to validate the model. Principal results Photosynthesis, transpiration and stomatal conductance varied along the shoot: early emerging leaves had the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. When the temperature-dependent coefficients for these parameters were applied to independent datasets, the predicted rates of photosynthesis were linearly related to the measured rates, with a 1:1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf-to-air vapour pressure deficit; applying the coefficients to independent datasets likewise gave a highly linear relationship, with a 1:1 slope between measured and modelled rates. Conclusions The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220
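A minimal sketch of the additive linear model described in the results: within one temperature band, photosynthesis is regressed on stomatal conductance, internal CO2 concentration and photon flux density. The variable names, data, and coefficients below are hypothetical, not the study's measurements.

```python
# Additive linear model of photosynthesis within a single temperature band.
import numpy as np

rng = np.random.default_rng(2)
n = 200
gs = rng.uniform(0.05, 0.4, n)     # stomatal conductance (hypothetical units)
ci = rng.uniform(180, 300, n)      # internal CO2 concentration
pfd = rng.uniform(200, 1800, n)    # photon flux density
A_net = 2 + 25 * gs + 0.01 * ci + 0.003 * pfd + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), gs, ci, pfd])
coef, *_ = np.linalg.lstsq(X, A_net, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((A_net - pred) ** 2) / np.sum((A_net - A_net.mean()) ** 2)
print("coefficients:", coef, " R^2 =", round(r2, 3))
```

Repeating this fit per temperature band yields the temperature-dependent coefficients that the study then applied to independent validation datasets.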
Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.
Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E
2007-02-15
Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods implemented with MatLab functions in a Visual Basic interface, and a graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of schemes. To ensure microscopic balance, graph theory algorithms are used to detect violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations allow fast comparisons of first-order kinetic rates and amplitudes as a function of changing ligand concentrations; for higher-order kinetics, a solution using numerical integration is also implemented. To determine rate constants from experimental data, fitting algorithms that adjust the rate constants to fit the model to imported data are implemented using the Levenberg-Marquardt or Broyden-Fletcher-Goldfarb-Shanno methods, including global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
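A minimal sketch of the analytical approach described above: a first-order reaction scheme is a linear ODE system dc/dt = K c, which has a closed-form solution via the matrix exponential. The three-state scheme (A <-> B <-> C) and rate constants below are illustrative assumptions, not a VisKin model file (the actual software uses MatLab routines behind a Visual Basic interface).

```python
# Closed-form solution of a first-order kinetic scheme via the matrix exponential.
import numpy as np
from scipy.linalg import expm

# Rate constants: A->B, B->A, B->C, C->B (hypothetical values)
k_ab, k_ba, k_bc, k_cb = 2.0, 1.0, 0.5, 0.2

# Column j holds the rates out of state j; columns sum to zero (mass conservation)
K = np.array([[-k_ab,         k_ba,        0.0 ],
              [ k_ab, -(k_ba + k_bc),      k_cb],
              [ 0.0,          k_bc,       -k_cb]])

c0 = np.array([1.0, 0.0, 0.0])                  # all material in state A at t = 0
t = np.linspace(0, 10, 200)
c = np.array([expm(K * ti) @ c0 for ti in t])   # concentrations over time
```

The eigenvalues of K give the observable relaxation rates, and the eigenvectors set the corresponding amplitudes, which is what makes first-order schemes so fast to compare.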
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
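A minimal sketch of the rocking-curve fitting step described above, using the standard symmetric Pearson VII form (m -> 1 gives a Lorentzian, m -> infinity a Gaussian). The synthetic rocking-curve data and starting values are assumptions for illustration.

```python
# Fit a symmetric Pearson VII function to a (synthetic) analyser rocking curve.
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(x, amp, x0, w, m):
    """Symmetric Pearson VII peak; w is the half-width at half-maximum."""
    return amp * (1 + ((x - x0) / w) ** 2 * (2 ** (1 / m) - 1)) ** (-m)

theta = np.linspace(-20, 20, 201)          # analyser angle (hypothetical units)
true = pearson_vii(theta, 1.0, 0.0, 5.0, 1.8)
data = true + np.random.default_rng(3).normal(0, 0.01, theta.size)

p, _ = curve_fit(pearson_vii, theta, data, p0=(1.0, 0.0, 4.0, 1.5))
print("amp, x0, HWHM, m =", p)
```

Because the fitted function is analytic, its value and derivative at any working point on the rocking curve are available in closed form, which is what makes two-image phase retrieval straightforward.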