ERIC Educational Resources Information Center
Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong
2013-01-01
Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e., four years) in young adolescents aged 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
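The quadratic expansion of chi-square that NLINEAR describes leads to the Gauss-Newton iteration: linearize the model about the current parameters and solve a small set of simultaneous linear equations for the update. The sketch below is illustrative, not NLINEAR's Fortran implementation; the model y = a*exp(b*t), the unit weights, and the function name are assumptions for the example.

```python
import math

def gauss_newton_exp(ts, ys, a, b, iters=20):
    """Fit y = a*exp(b*t) by Gauss-Newton: at each step, solve the 2x2
    normal equations J^T J delta = J^T r built from the residuals r and
    the Jacobian J of the model with respect to (a, b)."""
    for _ in range(iters):
        r, ja, jb = [], [], []
        for t, y in zip(ts, ys):
            e = math.exp(b * t)
            r.append(y - a * e)       # residual
            ja.append(e)              # d(model)/da
            jb.append(a * t * e)      # d(model)/db
        # assemble and solve the 2x2 normal equations
        aa = sum(x * x for x in ja)
        ab = sum(x * z for x, z in zip(ja, jb))
        bb = sum(x * x for x in jb)
        ra = sum(x * z for x, z in zip(ja, r))
        rb = sum(x * z for x, z in zip(jb, r))
        det = aa * bb - ab * ab
        a += (bb * ra - ab * rb) / det
        b += (aa * rb - ab * ra) / det
    return a, b
```

As the abstract notes, the iteration needs meaningful starting values; a poor initial guess for b can make the exponential linearization diverge.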
NASA Astrophysics Data System (ADS)
Tasel, Serdar F.; Hassanpour, Reza; Mumcuoglu, Erkan U.; Perkins, Guy C.; Martone, Maryann
2014-03-01
Mitochondria are sub-cellular components which are mainly responsible for synthesis of adenosine tri-phosphate (ATP) and involved in the regulation of several cellular activities such as apoptosis. The relation between some common diseases of aging and the morphological structure of mitochondria is supported by an increasing number of studies. Electron microscope tomography (EMT) provides high-resolution images of the 3D structure and internal arrangement of mitochondria. Studies that aim to reveal the correlation between mitochondrial structure and its function require the aid of special software tools for manual segmentation of mitochondria from EMT images. Automated detection and segmentation of mitochondria is a challenging problem due to the variety of mitochondrial structures, the presence of noise, artifacts and other sub-cellular structures. Segmentation methods reported in the literature require human interaction to initialize the algorithms. In our previous study, we focused on 2D detection and segmentation of mitochondria using an ellipse detection method. In this study, we propose a new approach for automatic detection of mitochondria from EMT images. First, a preprocessing step was applied in order to reduce the effect of nonmitochondrial sub-cellular structures. Then, a curve fitting approach was presented using a Hessian-based ridge detector to extract membrane-like structures and a curve-growing scheme. Finally, an automatic algorithm was employed to detect mitochondria which are represented by a subset of the detected curves. The results show that the proposed method is more robust in detection of mitochondria in consecutive EMT slices as compared with our previous automatic method.
Langbein, W.B.
1955-01-01
A common problem in hydrology is to fit a smooth curve to cyclic or periodic data, either to define the most probable values of the data or to test some principle that one wishes to demonstrate. This study treats those problems where the length or period of the cycle is known beforehand - as a day, year, or meander length, for example. Curve fitting can be done by free-hand drawing, and where the data are closely aligned this method offers the simplest and most direct course. However, there are many problems where the best fit is far from obvious, and analytical methods may be necessary.
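When the period is known beforehand, the analytical fit reduces to linear least squares on sine and cosine terms. A minimal sketch of that idea follows; it assumes samples equally spaced over whole periods (so the trigonometric basis is orthogonal and the coefficients have closed forms), and the function name is illustrative.

```python
import math

def harmonic_fit(ts, ys, period):
    """Least-squares fit of y ~ a0 + a1*cos(w*t) + b1*sin(w*t) with the
    cycle length known in advance. For equally spaced samples spanning
    whole periods the basis is orthogonal, so each coefficient is a
    simple weighted average of the data."""
    w = 2 * math.pi / period
    n = len(ys)
    a0 = sum(ys) / n
    a1 = 2 / n * sum(y * math.cos(w * t) for t, y in zip(ts, ys))
    b1 = 2 / n * sum(y * math.sin(w * t) for t, y in zip(ts, ys))
    return a0, a1, b1
```

Higher harmonics of the known period can be added the same way, one cosine/sine pair per harmonic.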
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify either the tolerable least squares error in the fitting or the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100th-degree polynomial. All computations in the program are carried out in double precision for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
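AKLSQF's error-tolerance mode - raise the polynomial degree until the least-squares error meets the tolerance - can be sketched as follows. This is an illustrative pure-Python version using ordinary normal equations; the program itself uses orthogonal factorial polynomials and a Stirling-number reduction, which are better conditioned at high degree.

```python
def polyfit_tol(xs, ys, tol, max_deg=10):
    """Increase the polynomial degree until the RMS least-squares error
    drops below tol (mirrors AKLSQF's stepwise strategy)."""
    def solve(A, b):
        # Gaussian elimination with partial pivoting on the normal equations
        n = len(b)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return x

    for deg in range(1, max_deg + 1):
        m = deg + 1
        # normal equations for coefficients of 1, x, ..., x^deg
        A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
        b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
        c = solve(A, b)
        rms = (sum((y - sum(ci * x ** i for i, ci in enumerate(c))) ** 2
                   for x, y in zip(xs, ys)) / len(xs)) ** 0.5
        if rms < tol:
            return c, deg, rms
    return c, max_deg, rms
```

For degrees beyond roughly 10, the monomial normal equations become ill-conditioned; an orthogonal-polynomial basis, as in AKLSQF, avoids that.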
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
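The "n + 1 points determine a degree-n polynomial" fact has a direct constructive form, the Lagrange interpolating polynomial. A minimal sketch (function name illustrative):

```python
def lagrange(points, x):
    """Evaluate the unique polynomial of degree <= n passing through the
    n+1 given (xi, yi) points, written in Lagrange form: a sum of basis
    polynomials, each equal to 1 at its own node and 0 at the others."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```

If the points happen to lie on a lower-degree polynomial, the leading coefficients simply come out zero, matching the article's caveat.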
Bounded Population Growth: A Curve Fitting Lesson.
ERIC Educational Resources Information Center
Mathews, John H.
1992-01-01
Presents two mathematical methods for fitting the logistic curve to population data supplied by the U.S. Census Bureau utilizing computer algebra software to carry out the computations and plot graphs. (JKK)
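One classic way to fit the logistic curve, in the spirit of the lesson described, is linearization: with an assumed carrying capacity L, the quantity ln(L/P - 1) is linear in time, so a straight-line fit recovers the remaining parameters. This sketch is illustrative (the lesson's own two methods are not specified here), and L is treated as known.

```python
import math

def logistic_fit(ts, ps, L):
    """Fit P(t) = L / (1 + A*exp(-k*t)) by linearization: z = ln(L/P - 1)
    satisfies z = ln(A) - k*t, so ordinary least squares on (t, z)
    recovers A and k. The carrying capacity L is assumed known."""
    zs = [math.log(L / p - 1) for p in ps]
    n = len(ts)
    tbar = sum(ts) / n
    zbar = sum(zs) / n
    slope = (sum((t - tbar) * (z - zbar) for t, z in zip(ts, zs))
             / sum((t - tbar) ** 2 for t in ts))
    k = -slope
    A = math.exp(zbar - slope * tbar)
    return A, k
```

With census-style data, L itself can be varied and chosen to minimize the residual of the straight-line fit.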
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Modeling and Fitting Exoplanet Transit Light Curves
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, G. T.
2013-01-01
We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model is unique in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the
Multivariate curve-fitting in GAUSS
Bunck, C.M.; Pendleton, G.W.
1988-01-01
Multivariate curve-fitting techniques for repeated measures have been developed and an interactive program has been written in GAUSS. The program implements not only the one-factor design described in Morrison (1967) but also includes pairwise comparisons of curves and rates, a two-factor design, and other options. Strategies for selecting the appropriate degree for the polynomial are provided. The methods and program are illustrated with data from studies of the effects of environmental contaminants on ducklings, nesting kestrels and quail.
Curve fitting of mixed-mode isopachics
NASA Astrophysics Data System (ADS)
Hebb, R. I.; Dulieu-Barton, J. M.; Worden, K.; Tatum, P.
2009-08-01
Recent work has focused on exploiting the observation that the stress-sum contours (isopachics), obtained from thermoelastic stress analysis (TSA), in the vicinity of the crack tip take the form of a simple curve - the cardioid. The analysis made use of the cardioid nature of the isopachics by deriving expressions for the stress intensity factors (SIFs) in terms of the cardioid area and the positions of certain tangents to the curve. Both Genetic Algorithms (GAs) and Differential Evolution (DE) have also proved successful for parameter estimation, but some of the curve fits indicated that the cardioid form was inappropriate as the base model, particularly for mixed-mode cracks. The effect of crack-tip interaction has been explored and shown to have only a small effect on the cardioid form. New, higher-resolution infra-red detectors have become available since the original data were collected, so the object of the current paper is to use new techniques to extract the cardioid form and use a GA to perform the curve fitting.
3D Catenary Curve Fitting for Geometric Calibration
NASA Astrophysics Data System (ADS)
Chan, T.-O.; Lichti, D. D.
2011-09-01
In modern road surveys, hanging power cables are among the most commonly-found geometric features. These cables are catenary curves that are conventionally modelled with three parameters in 2D Cartesian space. With the advent and popularity of the mobile mapping system (MMS), the 3D point clouds of hanging power cables can be captured within a short period of time. These point clouds, similarly to those of planar features, can be used for feature-based self-calibration of the system assembly errors of an MMS. However, to achieve this, a well-defined 3D equation for the catenary curve is needed. This paper proposes three 3D catenary curve models, each having different parameters. The models are examined by least squares fitting of simulated data and real data captured with an MMS. The outcome of the fitting is investigated in terms of the residuals and correlation matrices. Among the proposed models, one of them could estimate the parameters accurately and without any extreme correlation between the variables. This model can also be applied to those transmission lines captured by airborne laser scanning or any other hanging cable-like objects.
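The conventional 2D flavor of the catenary fitting problem can be sketched as follows (the paper's contribution is the 3D parameterizations, which this sketch does not reproduce). For a fixed shape parameter a the vertical offset enters linearly, so only a needs a one-dimensional search; the grid scan, bounds, and function name here are all illustrative choices.

```python
import math

def catenary_fit(xs, ys, a_lo=0.1, a_hi=50.0, steps=2000):
    """Fit the 2D catenary y = a*cosh(x/a) + c by least squares.
    For each trial a, the best offset c is just the mean residual,
    so a simple grid scan over a finds the minimum-SSE pair."""
    best = None
    for i in range(steps + 1):
        a = a_lo + (a_hi - a_lo) * i / steps
        resid = [y - a * math.cosh(x / a) for x, y in zip(xs, ys)]
        c = sum(resid) / len(resid)
        sse = sum((r - c) ** 2 for r in resid)
        if best is None or sse < best[0]:
            best = (sse, a, c)
    return best[1], best[2]
```

A production fit would refine the grid minimum with a nonlinear least-squares step and, as in the paper, add the horizontal offset and 3D orientation parameters.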
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 10^3 amagats.
Evaluating Latent Growth Curve Models Using Individual Fit Statistics
ERIC Educational Resources Information Center
Coffman, Donna L.; Millsap, Roger E.
2006-01-01
The usefulness of assessing individual fit in latent growth curve models was examined. The study used simulated data based on an unconditional and a conditional latent growth curve model with a linear component and a small quadratic component and a linear model was fit to the data. Then the overall fit of linear and quadratic models to these data…
Simplified curve fits for the transport properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.
1987-01-01
New, improved curve fits for the transport properties of equilibrium air have been developed. The curve fits are for viscosity and Prandtl number as functions of temperature and density, and viscosity and thermal conductivity as functions of internal energy and density. The curve fits were constructed using Grabau-type transition functions to model the transport properties of Peng and Pindroh. The resulting curve fits are sufficiently accurate and self-contained so that they can be readily incorporated into new or existing computational fluid dynamics codes. The ranges of validity of the new curve fits are temperatures up to 15,000 K and densities from 10^-5 to 10 amagats (rho/rho sub o).
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
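The key idea above - a shape parameter estimated from the data rather than fixed in advance - can be illustrated with one common parameterization of the Richards curve. The sketch below assumes the asymptote A is known and scans a grid of trial shape parameters, keeping the one whose linearization fits straightest; the parameterization, grid, and function name are assumptions for the example, not the authors' procedure.

```python
import math

def richards_fit(ts, ys, A, nus):
    """Richards curve y = A / (1 + exp(-k*(t - t0)))**(1/nu). For a trial
    shape parameter nu, z = ln((A/y)**nu - 1) is linear in t, so the best
    nu is the one whose straight-line fit leaves the smallest residual."""
    def linfit(xs, zs):
        n = len(xs)
        xb = sum(xs) / n
        zb = sum(zs) / n
        sl = (sum((x - xb) * (z - zb) for x, z in zip(xs, zs))
              / sum((x - xb) ** 2 for x in xs))
        ic = zb - sl * xb
        sse = sum((z - (ic + sl * x)) ** 2 for x, z in zip(xs, zs))
        return sl, ic, sse

    best = None
    for nu in nus:
        zs = [math.log((A / y) ** nu - 1) for y in ys]
        sl, ic, sse = linfit(ts, zs)
        if best is None or sse < best[0]:
            # z = -k*(t - t0), so k = -slope and t0 = -intercept/slope
            best = (sse, nu, -sl, -ic / sl)
    return best[1], best[2], best[3]
```

Setting nu = 1 recovers the logistic curve, which is why the Richards family is useful when the shape is unknown.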
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (rho/rho sub 0).
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R^2. The best fitting method is used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = A*e^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A*x^B + C and the general geometric growth equation y = A*k^(Bt) + C.
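A non-iterative exponential fit in the same spirit can be built from discrete differences: differencing equally spaced samples eliminates the offset C, and successive differences of an exponential form a geometric sequence that yields B directly, after which A and C follow from a simple linear fit. This sketch is an assumption-laden illustration of the general idea, not the NTRS algorithm itself.

```python
import math

def exp_fit(ts, ys):
    """Non-iterative fit of y = A*exp(B*t) + C on equally spaced samples.
    First differences satisfy d[i+1] = r*d[i] with r = exp(B*dt), which
    removes C; r comes from a least-squares line through the origin,
    then A and C follow from an ordinary 2-parameter linear fit."""
    dt = ts[1] - ts[0]
    d = [y2 - y1 for y1, y2 in zip(ys, ys[1:])]
    # common ratio of the difference sequence (least squares through origin)
    r = sum(a * b for a, b in zip(d, d[1:])) / sum(a * a for a in d[:-1])
    B = math.log(r) / dt
    # linear least squares for A, C in the basis (exp(B*t), 1)
    es = [math.exp(B * t) for t in ts]
    n = len(ts)
    see = sum(e * e for e in es)
    se = sum(es)
    sy = sum(ys)
    sey = sum(e * y for e, y in zip(es, ys))
    det = see * n - se * se
    A = (sey * n - se * sy) / det
    C = (see * sy - se * sey) / det
    return A, B, C
```

Because no iteration is involved, there are no starting values to choose and no convergence failures, which is the stability advantage the abstract highlights.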
Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm.
Jang, Daeho; Chae, Geunhyoung; Shin, Sehyun
2015-01-01
The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present curve-fitting sigmoid-asymmetric equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer. PMID:26437414
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Curve fitting for RHB Islamic Bank annual net profit
NASA Astrophysics Data System (ADS)
Nadarajan, Dineswary; Noor, Noor Fadiya Mohd
2015-05-01
The RHB Islamic Bank net profit data are obtained for 2004 to 2012. Curve fitting is performed treating the data either as exact values or as experimental values subject to smoothing. Higher-order Lagrange polynomial and cubic spline curve fits are constructed using Maple software. A normality test is performed to check data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level by ANOVA. Residual error and absolute relative true error are calculated and compared. The optimal model, based on the minimum average error, is proposed.
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
Viscosity Coefficient Curve Fits for Ionized Gas Species
NASA Technical Reports Server (NTRS)
Palmer, Grant; Arnold, James O. (Technical Monitor)
2001-01-01
Viscosity coefficient curve fits for neutral gas species are available from many sources. Many do a good job of reproducing experimental and computational chemistry data. The curve fits are usually expressed as a function of temperature only. This is consistent with the governing equations used to derive an expression for the neutral species viscosity coefficient. Ionized species pose a more complicated problem. They are subject to electrostatic as well as intermolecular forces. The electrostatic forces are affected by a shielding phenomenon where electrons shield the electrostatic forces of positively charged ions beyond a certain distance. The viscosity coefficient for an ionized gas species is therefore a function of both temperature and local electron number density. Currently available curve fits for ionized gas species, such as those presented by Gupta/Yos, are a function of temperature only because they assume a fixed electron number density -- and the assumed value is unrealistically high. The purpose of this paper is two-fold. First, the proper expression for determining the viscosity coefficient of an ionized species as a function of both temperature and electron number density is presented. Then curve fit coefficients are developed using the more realistic assumption of an equilibrium electron number density. The results are compared against previous curve fits and against highly accurate computational chemistry data.
Fitting age-specific fertility with the Makeham curve.
Luther, N Y
1984-01-01
The Makeham curve has long been recognized for its empirically good fit of adult mortality experience. However, it has never been seriously used in fertility estimation. This paper aims to show that the Makeham curve provides a very good fit of cumulative age-specific fertility over the full range of the fertility experience. Presented here is a simple linearization procedure, easily executed by hand calculator, for the estimation of cumulative age-specific fertility per woman (or parity) at exact age x. The procedure provides a check for the fit of the Makeham curve to cumulative age-specific fertility, locally or globally--that is, the fit to local ratios over any range of ages. The procedure also determines the parameters of optimum fit over any range of ages. To carry out the procedure, one must simply check the linearity of points in each of 2 data plots and determine the Makeham curve from the slopes and intercepts of the fitted straight lines. The mathematical methodology for the procedure is presented and the global goodness of fit studied. Because it is of a local nature, and since it elicits an explicit analytic formula for the fitted Makeham curve, the procedure is conducive to interpolation and extrapolation applications, including the completion of incomplete schedules of age-specific fertility rates at the tails of the reproductive age span. The use of the procedure for extrapolation purposes is illustrated with data from the 1968 Population Growth Survey of Pakistan. It suggests results that, for the most part, are consistent with the thesis of general age exaggeration by reporting women. However, further evidence is needed to be conclusive. PMID:12313262
Extracting gene-gene interactions through curve fitting.
Das, Ranajit; Mitra, Sushmita; Murthy, C A
2012-12-01
This paper presents a simple and novel curve fitting approach for generating simple gene regulatory subnetworks from time series gene expression data. Microarray experiments simultaneously generate massive data sets and help immensely in the large-scale study of gene expression patterns. Initial biclustering reduces the search space in the high-dimensional microarray data. The least-squares error of fits between gene pairs is minimized to extract a set of gene-gene interactions, involving transcriptional regulation of genes. The higher error values are eliminated to retain only the strongly interacting gene pairs in the resultant gene regulatory subnetwork. Next the algorithm is extended to a generalized framework to enhance its capability. The methodology takes care of higher-order dependencies involving multiple genes co-regulating a single gene, while eliminating the need for user-defined parameters. It has been applied to time-series yeast data, and the experimental results were biologically validated using standard databases and the literature. PMID:22997274
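The pair-screening step above - fit each candidate pair of expression profiles, keep only pairs whose fitting error is small - can be sketched as follows. This is a deliberately simplified illustration using a straight-line fit between profiles; the paper's actual curve-fitting model, the dictionary layout, and the threshold are assumptions for the example.

```python
def pairwise_interactions(expr, threshold):
    """Screen gene pairs: least-squares fit each pair of expression time
    series with a straight line and keep pairs whose mean squared fitting
    error stays below a threshold. `expr` maps gene name -> list of
    expression values over time."""
    def fit_err(xs, ys):
        n = len(xs)
        xb = sum(xs) / n
        yb = sum(ys) / n
        sxx = sum((x - xb) ** 2 for x in xs)
        sl = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / sxx
        ic = yb - sl * xb
        return sum((y - (ic + sl * x)) ** 2 for x, y in zip(xs, ys)) / n

    genes = sorted(expr)
    edges = []
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            if fit_err(expr[g1], expr[g2]) < threshold:
                edges.append((g1, g2))
    return edges
```

Retained edges form the candidate regulatory subnetwork; directionality and higher-order (many-to-one) regulation would need the extended framework the abstract describes.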
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
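The localized straight-line fit near short-circuit current that the measurement standards specify, viewed as a statistical linear regression, can be sketched as follows. This is plain ordinary least squares with the textbook intercept uncertainty, not the paper's objective Bayesian regression or its evidence-based window selection; the function name and window are illustrative.

```python
def isc_fit(vs, cs):
    """Straight-line regression of current on voltage over a window of
    I-V points near V = 0. The intercept estimates Isc, and the residual
    scatter gives a standard uncertainty for the intercept (this captures
    noise, not the model-discrepancy term discussed in the abstract)."""
    n = len(vs)
    vb = sum(vs) / n
    cb = sum(cs) / n
    svv = sum((v - vb) ** 2 for v in vs)
    slope = sum((v - vb) * (c - cb) for v, c in zip(vs, cs)) / svv
    isc = cb - slope * vb
    resid = [c - (isc + slope * v) for v, c in zip(vs, cs)]
    s2 = sum(r * r for r in resid) / (n - 2)          # residual variance
    u_isc = (s2 * (1 / n + vb ** 2 / svv)) ** 0.5     # std. uncertainty of intercept
    return isc, u_isc
```

As the abstract warns, widening the window shrinks u_isc even when the straight line stops being a valid local model, which is exactly why the window must be chosen with model discrepancy in mind.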
How Graphing Calculators Find Curves of Best Fit
ERIC Educational Resources Information Center
Shore, Mark; Shore, JoAnna; Boggs, Stacey
2004-01-01
For over a decade mathematics instructors have been using graphing calculators in courses ranging from developmental mathematics (Beginning and Intermediate Algebra) to Calculus and Statistics. One of the key functions that make them so powerful in the teaching and learning process is their ability to find curves of best fit. Instructors may use…
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry-standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to look not for specific radon progeny values but rather for transuranic activity. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples with no transuranic activity, taken over a 10-month period, an optimization of the fitting function and quality tests for this purpose was attained. PMID:14695010
BGFit: management and automated fitting of biological growth curves
2013-01-01
Background Existing tools to model cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. Given the increase in experimental time series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract the relevant parameters in an efficient way is crucial. Results BGFit provides a unified web-based platform where a rich set of dynamic models can be fitted to experimental time-series data, and the results can then be managed efficiently in a structured, hierarchical way. The data management system allows users to organize projects, experiments, and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression and the Gompertz, Baranyi, Logistic, and Live Cell Fraction models, and users can easily add new models, expanding the current set. Conclusions BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit has a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data-modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to large numbers of projects, data points, and levels of model complexity. PMID:24067087
Double-mass curves; with a section fitting curves to cyclic data
Searcy, James K.; Hardison, Clayton H.; Langbein, Walter B.
1960-01-01
The double-mass curve is used to check the consistency of many kinds of hydrologic data by comparing data for a single station with those of a pattern composed of the data from several other stations in the area. The double-mass curve can be used to adjust inconsistent precipitation data. The graph of the cumulative data of one variable versus the cumulative data of a related variable is a straight line so long as the relation between the variables is a fixed ratio. Breaks in the double-mass curve of such variables are caused by changes in the relation between the variables. These changes may be due to changes in the method of data collection or to physical changes that affect the relation. Applications of the double-mass curve to precipitation, streamflow, and sediment data, and to precipitation-runoff relations, are described. A statistical test for the significance of an apparent break in the slope of the double-mass curve is illustrated by an example. Poor correlation between the variables can prevent detection of inconsistencies in a record, but an increase in the length of record tends to offset the effect of poor correlation. The residual-mass curve, a modification of the double-mass curve, magnifies otherwise imperceptible breaks in the double-mass curve for detailed study. Of the several methods of fitting a smooth curve to cyclic or periodic data, the moving-arc method and the double-integration method deserve greater use in hydrology. Both methods are described in this manual. The moving-arc method has general applicability, and the double-integration method is useful in fitting a curve to cycles of sinusoidal form.
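The cumulative-ratio idea behind the double-mass curve can be sketched numerically. The records below are invented, and the break detector is only a naive slope comparison, not the manual's statistical significance test.

```python
# A minimal sketch of the double-mass idea with invented records: the
# cumulative sums plot as a straight line while the station/pattern
# ratio is fixed, and a slope break flags a change in the relation.
import numpy as np

station = np.array([10.0, 12.0, 9.0, 11.0, 10.0, 12.0])
pattern = np.array([20.0, 24.0, 18.0, 22.0, 10.0, 12.0])  # relation changes late in the record

cum_s = np.cumsum(station)
cum_p = np.cumsum(pattern)

# Slope of each double-mass segment (station accumulation per unit of
# pattern accumulation); a jump between consecutive slopes marks a break.
slopes = np.diff(cum_s) / np.diff(cum_p)
break_segment = int(np.argmax(np.abs(np.diff(slopes)))) + 1  # first segment with the new slope
```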
Using LISREL to Fit Nonlinear Latent Curve Models
ERIC Educational Resources Information Center
Blozis, Shelley A.; Harring, Jeffrey R.; Mels, Gerhard
2008-01-01
Latent curve models offer a flexible approach to the study of longitudinal data when the form of change in a response is nonlinear. This article considers such models that are conditionally linear with regard to the random coefficients at the 2nd level. This framework allows fixed parameters to enter a model linearly or nonlinearly, and random…
An automated fitting procedure and software for dose-response curves with multiphasic features
Veroli, Giovanni Y. Di; Fornari, Chiara; Goldlust, Ian; Mills, Graham; Koh, Siang Boon; Bramhall, Jo L; Richards, Frances M.; Jodrell, Duncan I.
2015-01-01
In cancer pharmacology (and many other areas), most dose-response curves are satisfactorily described by a classical Hill equation (i.e. a four-parameter logistic). Nevertheless, there are instances where the marked presence of more than one point of inflection, or the presence of combined agonist and antagonist effects, prevents straightforward modelling of the data via a standard Hill equation. Here we propose a modified model and automated fitting procedure to describe dose-response curves with multiphasic features. The resulting general model enables interpreting each phase of the dose-response as an independent dose-dependent process. We developed an algorithm which automatically generates and ranks dose-response models with varying degrees of multiphasic features. The algorithm was implemented in the new, freely available Dr Fit software (sourceforge.net/projects/drfit/). We show how our approach is successful in describing dose-response curves with multiphasic features. Additionally, we analysed a large cancer cell viability screen involving 11650 dose-response curves. Based on our algorithm, we found that 28% of cases were better described by a multiphasic model than by the Hill model. We thus provide a robust approach to fit dose-response curves with various degrees of complexity, which, together with the provided software implementation, should enable a wide audience to easily process their own data. PMID:26424192
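The classical four-parameter Hill fit that the abstract takes as its baseline can be sketched with SciPy on synthetic data; this is not the Dr Fit code, whose multiphasic models chain additional Hill-type transitions, one per inflection point.

```python
# A minimal sketch of the classical 4-parameter Hill (logistic) fit that
# multiphasic models generalize. Synthetic data, not the Dr Fit code.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, e0, einf, ec50, h):
    """Monophasic Hill curve: effect falls from e0 to einf around ec50."""
    return einf + (e0 - einf) / (1.0 + (x / ec50) ** h)

dose = np.logspace(-3, 2, 30)
rng = np.random.default_rng(1)
obs = hill(dose, 1.0, 0.1, 0.5, 1.2) + rng.normal(0.0, 0.01, dose.size)

# Bounds keep ec50 and h positive so fractional powers stay real.
popt, pcov = curve_fit(hill, dose, obs, p0=[1.0, 0.0, 1.0, 1.0],
                       bounds=([0, 0, 1e-4, 0.1], [2, 2, 100, 5]))
e0, einf, ec50, h = popt
```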
A curve fitting method for solving the flutter equation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cooper, J. L.
1972-01-01
A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
Asteroid Phase Curves: Two- And Three-parameter Fits
NASA Astrophysics Data System (ADS)
Oszkiewicz, Dagmara; Muinonen, K.; Bowell, E.; Trilling, D.
2010-10-01
We apply the two-parameter H, G12 and the three-parameter H, G1, G2 magnitude phase functions (Muinonen et al., Icarus, in press) to photometric data from the Minor Planet Center, as modified in observational data files at Lowell Observatory, to derive improved absolute magnitudes (H) and slope parameters for individual asteroids (G12) and for groups of asteroids: e.g., main-belt zones, families (G1, G2). The Lowell Observatory orbital data file comprises about 535,000 records. Most of the photometric data are of low precision (generally rounded to 0.1 mag) and low accuracy (rms magnitude uncertainties of ± 0.5 mag are typical). To offset the low accuracy, the photometric data are very numerous: about 73,000,000 individual, largely independent magnitude estimates exist. Here we present our preliminary estimates of absolute magnitudes using the H, G1, G2 magnitude phase functions and compare them with the values derived using the H, G12 two-parameter phase functions. A Java applet version of the fitting procedures will soon be available on the web. In the future, we plan to combine the derived absolute magnitudes with thermal data from the "Spitzer Asteroid Catalog". The Spitzer catalog includes observations of several hundred near-Earth asteroids and about 100,000 main-belt asteroids, most of them observed at a wavelength of 24 micrometers. The catalog contains fluxes, derived albedos, and diameters. By far the biggest source of error in the catalog stems from inaccurate visible magnitudes. Improved asteroid absolute magnitudes will also lead to better visible magnitude prediction and hence better calibrated asteroid albedos derived from thermal models. Further, we plan to explore correlations among taxa, color indices, albedos, and phase curves. A significant data product will be a frequently updated list of asteroid photometric parameters and their uncertainties. Research supported by Magnus Ehrnrooth Foundation, Spitzer Science Center and Lowell Observatory.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some knowledge of the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is available a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines, and wavelet shrinkage techniques on two generated data sets.
FIT-MART: Quantum Magnetism with a Gentle Learning Curve
NASA Astrophysics Data System (ADS)
Engelhardt, Larry; Garland, Scott C.; Rainey, Cameron; Freeman, Ray A.
We present a new open-source software package, FIT-MART, that allows non-experts to quickly get started simulating quantum magnetism. FIT-MART can be downloaded as a platform-independent executable Java (JAR) file. It allows the user to define (Heisenberg) Hamiltonians by electronically drawing pictures that represent quantum spins and operators. Sliders are automatically generated to control the values of the parameters in the model, and when the values change, several plots are updated in real time to display both the resulting energy spectra and the equilibrium magnetic properties. Several experimental data sets for real magnetic molecules are included in FIT-MART to allow easy comparison between simulated and experimental data, and FIT-MART users can also import their own data for analysis and compare the goodness of fit for different models.
Code System for Data Plotting and Curve Fitting.
Energy Science and Technology Software Center (ESTSC)
2001-07-23
Version 00 PLOTnFIT is used for plotting and analyzing data by fitting nth-degree polynomials of basis functions to the data interactively and printing graphs of the data and the polynomial functions. It can be used to generate linear, semilog, and log-log graphs and can automatically scale the coordinate axes to suit the data. Multiple data sets may be plotted on a single graph. An auxiliary program, READ1ST, is included which produces an on-line summary of the information contained in the PLOTnFIT reference report.
Item Characteristic Curves: A New Theoretical Approach.
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.; Frary, Robert B.
A new approach to the development of the item characteristic curve (ICC), which expresses the functional relationship between the level of performance on a given task and an independent variable that is relevant to the task, is presented. The approach focuses on knowledge states, decision processes, and other circumstances underlying responses to…
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
Fitting photosynthetic carbon dioxide response curves for C(3) leaves.
Sharkey, Thomas D; Bernacchi, Carl J; Farquhar, Graham D; Singsaas, Eric L
2007-09-01
Photosynthetic responses to carbon dioxide concentration can provide data on a number of important parameters related to leaf physiology. Methods for fitting a model to such data are briefly described. The method will fit the following parameters: V(cmax), J, TPU, R(d) and g(m) [maximum carboxylation rate allowed by ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco), rate of photosynthetic electron transport (based on NADPH requirement), triose phosphate use, day respiration and mesophyll conductance, respectively]. The method requires at least five data pairs of net CO(2) assimilation (A) and [CO(2)] in the intercellular airspaces of the leaf (C(i)) and requires users to indicate the presumed limiting factor. The output is (1) calculated CO(2) partial pressure at the sites of carboxylation, C(c), (2) values for the five parameters at the measurement temperature and (3) values adjusted to 25 degrees C to facilitate comparisons. Fitting this model is a way of exploring leaf-level photosynthesis. However, interpreting leaf-level photosynthesis in terms of underlying biochemistry and biophysics is subject to assumptions that hold to a greater or lesser degree, a major assumption being that all parts of the leaf are behaving in the same way at each instant. PMID:17661745
Lee, Hyun-Wook; Park, Hyoung-Jun; Lee, June-Ho; Song, Minho
2007-04-20
To improve the measurement accuracy of spectrally distorted fiber Bragg grating temperature sensors, reflection profiles were curve-fitted to Gaussian shapes, whose center positions were transformed into temperature information. By applying the Gaussian curve-fitting algorithm in a tunable bandpass filter demodulation scheme, approximately 0.3 °C temperature resolution was obtained with a severely distorted grating sensor, much better than that obtained using the highest-peak search algorithm. A binary search was also used to retrieve the optimal fitting curves with the least amount of processing time.
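The core idea, taking the fitted Gaussian center rather than the highest sample as the peak position, can be sketched with SciPy. This is an illustrative sketch on synthetic data, not the authors' demodulation code, and the wavelength-to-temperature calibration step is omitted.

```python
# A minimal sketch: fit a Gaussian to a noisy reflection profile so the
# fitted center, not the highest sample, carries the measurement.
# Synthetic data; the temperature calibration step is not shown.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, base):
    return base + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

wl = np.linspace(1549.0, 1551.0, 101)   # wavelength grid, nm
rng = np.random.default_rng(2)
profile = gaussian(wl, 1.0, 1550.03, 0.15, 0.02) + rng.normal(0.0, 0.02, wl.size)

# Seed the center at the highest sample, then refine it by fitting.
popt, pcov = curve_fit(gaussian, wl, profile,
                       p0=[1.0, wl[np.argmax(profile)], 0.2, 0.0])
center = popt[1]                        # Gaussian center, in nm
```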
Curve fitting of aeroelastic transient response data with exponential functions
NASA Technical Reports Server (NTRS)
Bennett, R. M.; Desmarais, R. N.
1976-01-01
The extraction of frequency, damping, amplitude, and phase information from unforced transient response data is considered. These quantities are obtained from the parameters determined by fitting the digitized time-history data in a least-squares sense with complex exponential functions. The highlights of the method are described, and the results of several test cases are presented. The effects of noise are considered both by using analytical examples with random noise and by estimating the standard deviation of the parameters from maximum-likelihood theory.
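A single-mode version of this exponential fit can be sketched in Python on synthetic data; this generic least-squares sketch is not the authors' complex-exponential code, and, echoing the abstract's noise discussion, it needs meaningful initial estimates (e.g., a frequency guess from an FFT peak).

```python
# A minimal sketch: fit one damped mode A*exp(s*t)*cos(w*t + phi) to a
# noisy transient and read off frequency and damping. Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def damped(t, amp, s, w, phi):
    return amp * np.exp(s * t) * np.cos(w * t + phi)

t = np.linspace(0.0, 2.0, 400)
rng = np.random.default_rng(3)
y = damped(t, 1.0, -1.5, 20.0, 0.3) + rng.normal(0.0, 0.02, t.size)

# Initial estimates would normally come from, e.g., an FFT peak.
popt, pcov = curve_fit(damped, t, y, p0=[1.0, -1.0, 20.0, 0.0])
amp, s, w, phi = popt
freq_hz = w / (2.0 * np.pi)   # modal frequency
damping_rate = -s             # exponential decay rate
```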
A QUMOND galactic N-body code - I. Poisson solver and rotation curve fitting
NASA Astrophysics Data System (ADS)
Angus, G. W.; van der Heyden, K. J.; Famaey, B.; Gentile, G.; McGaugh, S. S.; de Blok, W. J. G.
2012-04-01
Here we present a new particle-mesh galactic N-body code that uses the full multigrid algorithm for solving the modified Poisson equation of the quasi-linear formulation of modified Newtonian dynamics (QUMOND). A novel approach for handling the boundary conditions using a refinement strategy is implemented and the accuracy of the code is compared with analytical solutions of Kuzmin discs. We then employ the code to compute the predicted rotation curves for a sample of five spiral galaxies from the THINGS sample. We generated static N-body realizations of the galaxies according to their stellar and gaseous surface densities and allowed their distances, mass-to-light ratios (M/L values) and both the stellar and gas scale-heights to vary in order to estimate the best-fitting parameters. We found that NGC 3621, NGC 3521 and DDO 154 are well fitted by MOND using expected values of the distance and M/L. NGC 2403 required a moderately larger M/L than expected and NGC 2903 required a substantially larger value. The surprising result was that the scale-height of the dominant baryonic component was well constrained by the rotation curves: the gas scale-height for DDO 154 and the stellar scale-height for the others. In fact, if the suggested stellar scale-height (one-fifth the stellar scale-length) was used in the case of NGC 3621 and NGC 3521 it would not be possible to produce a good fit to the inner rotation curve. For each of the four stellar dominated galaxies, we calculated the vertical velocity dispersions which we found to be, on the whole, quite typical compared with observed stellar vertical velocity dispersions of face-on spirals. We conclude that modelling the gas scale-heights of the gas-rich dwarf spiral galaxies will be vital in order to make precise conclusions about MOND.
Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python
NASA Astrophysics Data System (ADS)
Newville, Matthew; Stensitzki, Till; Allen, Daniel B.; Rawlik, Michal; Ingargiola, Antonino; Nelson, Andrew
2016-06-01
Lmfit provides a high-level interface to non-linear optimization and curve-fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data-fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
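The Levenberg-Marquardt core that lmfit builds on can be sketched against scipy.optimize directly. This is plain SciPy, not lmfit's Parameter/Model API, with synthetic Gaussian data for illustration.

```python
# A minimal sketch of the Levenberg-Marquardt fitting that lmfit wraps,
# written against scipy.optimize directly (not lmfit's own API).
import numpy as np
from scipy.optimize import least_squares

def residual(params, x, y):
    amp, cen, wid = params   # plain floats, unlike lmfit's named Parameters
    return y - amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

x = np.linspace(-5.0, 5.0, 200)
rng = np.random.default_rng(4)
y = 3.0 * np.exp(-0.5 * ((x - 0.4) / 1.2) ** 2) + rng.normal(0.0, 0.05, x.size)

fit = least_squares(residual, x0=[1.0, 0.0, 1.0], args=(x, y), method="lm")
amp, cen, wid = fit.x
```

lmfit wraps this loop so that each variable becomes a named Parameter with optional bounds and constraints, and its Model class packages the residual function together with the confidence-interval estimation.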
TTF HOM Data Analysis with Curve Fitting Method
Pei, S.; Adolphsen, C.; Li, Z.; Bane, K.; Smith, J.; /SLAC
2009-07-14
To investigate the possibility of using HOM signals induced in SC cavities as beam and cavity diagnostics, narrow band (20 MHz) data was recorded around the strong TE111-6(6{pi}/9-like) dipole modes (1.7 GHz) in the 40 L-band (1.3 GHz) cavities at the DESY TTF facility. The analyses of these data have so far focused on using a Singular Value Decomposition (SVD) technique to correlate the signals with each other and with data from conventional BPMs, showing that the dipole signals provide an alternate means of measuring the beam trajectory. However, these analyses do not extract the modal information (i.e., frequencies and Q's of the nearly degenerate horizontal and vertical modes). In this paper, we describe a method to fit the signal frequency spectrum to obtain this information, and then use the resulting mode amplitudes and phases together with conventional BPM data to determine the mode polarizations and relative centers and tilts. Compared with the SVD analysis, this method is more physical, and can also be used to obtain the beam position and trajectory angle.
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
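MULTIVAR's central step, minimizing a sum of squared residuals with a BFGS engine, can be sketched in Python rather than FORTRAN 77. The exponential model and data below are invented for illustration and are not one of MULTIVAR's six built-in models.

```python
# A minimal sketch of the MULTIVAR approach: treat the sum of squared
# residuals of a nonlinear model as an objective and minimize it with
# BFGS. Invented model and data, not MULTIVAR's own model set.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 4.0, 50)
rng = np.random.default_rng(5)
y = 2.0 * np.exp(-0.7 * x) + rng.normal(0.0, 0.02, x.size)

def sse(p):
    """Sum of squared residuals for the model y = a * exp(-k * x)."""
    a, k = p
    return np.sum((y - a * np.exp(-k * x)) ** 2)

res = minimize(sse, x0=[1.0, 1.0], method="BFGS")
a_hat, k_hat = res.x
```

A dedicated least-squares engine such as Levenberg-Marquardt (MULTIVAR's third engine) exploits the residual structure and usually converges faster, but the generic-minimizer route shown here works for any scalar objective.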
"asymptotic Parabola" FITS for Smoothing Generally Asymmetric Light Curves
NASA Astrophysics Data System (ADS)
Andrych, K. D.; Andronov, I. L.; Chinarova, L. L.; Marsakova, V. I.
A computer program is introduced which determines a statistically optimal approximation using the "Asymptotic Parabola" fit, or, in other words, a spline consisting of polynomials of order 1, 2, 1: two lines ("asymptotes") connected by a parabola. The function and its derivative are continuous. There are 5 parameters: the two points where a line switches to the parabola and vice versa, the slopes of the lines, and the curvature of the parabola. Extreme cases are the parabola without lines (i.e. a parabola spanning the whole interval), lines without a parabola (zero width of the parabola), or "line + parabola" without a second line. Such an approximation is especially effective for pulsating variables, for which the slopes of the ascending and descending branches are generally different, so the maxima and minima have asymmetric shapes. The method was initially introduced by Marsakova and Andronov (1996OAP.....9...127M) and realized as a computer program written in QBasic under DOS. It was used for dozens of variable stars, particularly for the catalogs of the individual pulsation characteristics of the Mira (1998OAP....11...79M) and semi-regular (200OAP....13..116C) pulsating variables. For eclipsing variables with nearly symmetric shapes of the minima, we use a "symmetric" version of the "Asymptotic Parabola". Here we introduce a Windows-based program which does not have the DOS limitations on memory (number of observations) and screen resolution. The program has a user-friendly interface and is illustrated by application to a test signal and to the pulsating variable AC Her.
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
NASA Astrophysics Data System (ADS)
Seki, K.
2007-02-01
The soil hydraulic parameters for analyzing soil water movement can be determined by fitting a soil water retention curve to a certain function, i.e., a soil hydraulic model. For this purpose, the program "SWRC Fit," which performs nonlinear fitting of soil water retention curves to 5 models by the Levenberg-Marquardt method, was developed. The five models are the Brooks and Corey model, the van Genuchten model, Kosugi's log-normal pore-size distribution model, Durner's bimodal pore-size distribution model, and a bimodal log-normal pore-size distribution model proposed in this study. This program automatically determines all the necessary conditions for the nonlinear fitting, such as the initial estimates of the parameters, and, therefore, users can simply input the soil water retention data to obtain the necessary parameters. The program can be executed directly from a web page at http://purl.org/net/swrc/; a client version of the software, written in the numerical-computation language GNU Octave, is included in the electronic supplement of this paper. The program was used for determining the soil hydraulic parameters of 420 soils in the UNSODA database. After comparing the root mean square errors of the unimodal models, the van Genuchten and Kosugi models were better than the Brooks and Corey model. The bimodal log-normal pore-size distribution model had fitting performance similar to Durner's bimodal pore-size distribution model.
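The van Genuchten model, one of SWRC Fit's five options, can be sketched with SciPy; this is an illustrative sketch on synthetic retention data, not the SWRC Fit source (which is written in GNU Octave and also automates the initial estimates).

```python
# A minimal sketch of fitting the van Genuchten retention model
#   theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*h)^n)^(1 - 1/n)
# to synthetic water-retention data (not the SWRC Fit implementation).
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])  # suction (cm)
rng = np.random.default_rng(6)
theta = van_genuchten(h, 0.10, 0.42, 0.03, 1.6) + rng.normal(0.0, 0.005, h.size)

popt, pcov = curve_fit(van_genuchten, h, theta,
                       p0=[0.05, 0.40, 0.05, 1.5],
                       bounds=([0.0, 0.2, 1e-4, 1.01], [0.3, 0.6, 1.0, 4.0]))
theta_r, theta_s, alpha, n = popt
rmse = float(np.sqrt(np.mean((van_genuchten(h, *popt) - theta) ** 2)))
```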
A Predictive Study of Dirichlet Process Mixture Models for Curve Fitting
WADE, SARA; WALKER, STEPHEN G.; PETRONE, SONIA
2013-01-01
This paper examines the use of Dirichlet process (DP) mixtures for curve fitting. An important modelling aspect in this setting is the choice between constant or covariate-dependent weights. By examining the problem of curve fitting from a predictive perspective, we show the advantages of using covariate-dependent weights. These advantages are a result of the incorporation of covariate proximity in the latent partition. However, closer examination of the partition yields further complications, which arise from the vast number of total partitions. To overcome this, we propose to modify the probability law of the random partition to strictly enforce the notion of covariate proximity, while still maintaining certain properties of the DP. This allows the distribution of the partition to depend on the covariate in a simple manner and greatly reduces the total number of possible partitions, resulting in improved curve fitting and faster computations. Numerical illustrations are presented. PMID:25395718
Ionization constants by curve fitting: application to the determination of partition coefficients.
Clarke, F H
1984-02-01
A multiparametric curve-fitting technique for pKa calculation has been adapted for use with a programmable calculator or microcomputer. This method provides for the convenient and accurate determination of the ionization constant in aqueous solution and of the apparent ionization constant in the presence of octanol. From these parameters, partition coefficients and apparent partition coefficients are easily calculated and agree with data reported using the shaker technique or HPLC. The curve-fitting method has been applied to the differential titration technique in which the solvent curve is subtracted from the solution curve before calculations are begun. This method has been applied to the potentiometric titration of aqueous solutions of the salts of bases with a very low solubility in water. PMID:6707889
A two-component model for fitting light curves of core-collapse supernovae
NASA Astrophysics Data System (ADS)
Nagy, A. P.; Vinkó, J.
2016-05-01
We present an improved version of a light curve model that is able to estimate the physical properties of different types of core-collapse supernovae that have double-peaked light curves and do so in a quick and efficient way. The model is based on a two-component configuration consisting of a dense inner region and an extended low-mass envelope. Using this configuration, we estimate the initial parameters of the progenitor by fitting the shape of the quasi-bolometric light curves of 10 SNe, including Type IIP and IIb events, with model light curves. In each case we compare the fitting results with available hydrodynamic calculations and also match the derived expansion velocities with the observed ones. Furthermore, we compare our calculations with hydrodynamic models derived by the SNEC code and examine the uncertainties of the estimated physical parameters caused by the assumption of constant opacity and the inaccurate knowledge of the moment of explosion.
[A fitting power curve equation for the accumulative inhaling volume in Chinese under 19 years old].
Shang, Q; Zhou, H
2000-03-30
A power-curve equation based on breath frequency and body weight was established for the cumulative inhaled volume of Chinese subjects under 19 years old. The equation is Y = 754.37 + 258.34 X^1.9038, and the fit is good (R^2 = 0.9974). It is useful for estimating the degree of exposure to air pollutants of young people. PMID:12725097
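The reported equation can be reproduced and re-fitted in a few lines. A sketch with hypothetical age/volume pairs (the study's raw data are not given here, and the meaning of X as age is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, a, b, c):
    return a + b * X**c                      # the abstract's functional form

X = np.arange(1.0, 20.0)                     # e.g. age in years (assumed)
Y_true = 754.37 + 258.34 * X**1.9038         # the reported equation
rng = np.random.default_rng(8)
Y = Y_true * rng.normal(1.0, 0.02, X.size)   # 2% multiplicative scatter

popt, _ = curve_fit(model, X, Y, p0=[700.0, 250.0, 2.0])
a_hat, b_hat, c_hat = popt
```

With low-noise data of this form, the exponent c is recovered close to the published 1.9038.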
An Algorithm for Obtaining Reliable Priors for Constrained-Curve Fits
Terrence Draper; Shao-Jing Dong; Ivan Horvath; Frank Lee; Nilmani Mathur; Jianbo Zhang
2004-03-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-chi-square fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass, where a priori values are not otherwise known. We illustrate the efficacy of the method with data from overlap fermions on a quenched 16^3 x 28 lattice with spatial size La = 3.2 fm and pion mass as low as ~180 MeV.
A flight evaluation of curved landing approaches
NASA Technical Reports Server (NTRS)
Gee, S. W.; Barber, M. R.; Mcmurtry, T. C.
1972-01-01
The development of STOL technology for application to operational short-haul aircraft is accompanied by the requirement for solving problems in many areas. One of the most obvious problems is STOL aircraft operations in the terminal area. The increased number of terminal operations needed for an economically viable STOL system as compared with the current CTOL system and the incompatibility of STOL and CTOL aircraft speeds are positive indicators of an imminent problem. The high cost of aircraft operations, noise pollution, and poor short-haul service are areas that need improvement. A potential solution to some of the operational problems lies in the capability of making curved landing approaches under both visual and instrument flight conditions.
Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis
ERIC Educational Resources Information Center
Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.
2010-01-01
The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…
On Fitting Nonlinear Latent Curve Models to Multiple Variables Measured Longitudinally
ERIC Educational Resources Information Center
Blozis, Shelley A.
2007-01-01
This article shows how nonlinear latent curve models may be fitted for simultaneous analysis of multiple variables measured longitudinally using Mx statistical software. Longitudinal studies often involve observation of several variables across time with interest in the associations between change characteristics of different variables measured…
Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.
Mohammed, Goran Abdulrahman; Hou, Ming
2016-03-01
The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model along with several other existing models by using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit reasonably well with the simulated and experimental data, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function outperform the other functions on the statistical measures root mean squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6% for simulated data, 0.2-5% for experimental data). The proposed asymmetric Gaussian model and the method of parametrization of this and the other force-length models mentioned above can be used in studies on the active force-length relationships of skeletal muscles that generate forces to cause movements of human and animal bodies. PMID:26276984
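A minimal sketch of the idea, assuming an asymmetric Gaussian with a unit peak at the optimal length and separate left/right widths (a hypothetical parametrization, not necessarily the paper's), fitted by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_gauss(L, L0, s_left, s_right):
    """Unit-peak asymmetric Gaussian: width s_left below the optimal
    length L0 and s_right above it (hypothetical form)."""
    s = np.where(L < L0, s_left, s_right)
    return np.exp(-0.5 * ((L - L0) / s) ** 2)

rng = np.random.default_rng(0)
L = np.linspace(1.5, 4.0, 60)                 # sarcomere length, um
F = asym_gauss(L, 2.7, 0.35, 0.55) + rng.normal(0, 0.01, L.size)

popt, _ = curve_fit(asym_gauss, L, F, p0=[2.5, 0.3, 0.5])
rmse = float(np.sqrt(np.mean((asym_gauss(L, *popt) - F) ** 2)))
```

Only three coefficients are needed, which is the kind of minimal parameter set the paper identifies for stable least-squares fitting.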
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo, Urbano
2000-01-01
Describes a program for computing different person-fit measures under different parametric item response models for binary items. The indexes can be computed for the Rasch model and the two- and three-parameter logistic models. The program can plot person response curves to allow the researchers to investigate the nonfitting response behavior of…
Baushke, Samuel W; Stedtfeld, Robert D; Tourlousse, Dieter M; Ahmad, Farhan; Wick, Lukas M; Gulari, Erdogan; Tiedje, James M; Hashsham, Syed A
2012-07-01
Non-equilibrium dissociation curves (NEDCs) have the potential to identify non-specific hybridizations on high throughput, diagnostic microarrays. We report a simple method for the identification of non-specific signals by using a new parameter that does not rely on comparison of perfect match and mismatch dissociations. The parameter is the ratio of specific dissociation temperature (T_d-w) to theoretical melting temperature (T_m) and can be obtained by automated fitting of a four-parameter, sigmoid, empirical equation to the thousands of curves generated in a typical experiment. The curves fit perfect match NEDCs from an initial experiment with an R^2 of 0.998±0.006 and root mean square of 108±91 fluorescent units. Receiver operating characteristic curve analysis showed low temperature hybridization signals (20-48°C) to be as effective as area under the curve as primary data filters. Evaluation of three datasets that target 16S rRNA and functional genes with varying degrees of target sequence similarity showed that filtering out hybridizations with T_d-w/T_m < 0.78 greatly reduced false positive results. In conclusion, T_d-w/T_m successfully screened many non-specific hybridizations that could not be identified using single temperature signal intensities alone, while the empirical modeling allowed a simplified approach to the high throughput analysis of thousands of NEDCs. PMID:22537822
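The automated fitting step might look like the following sketch, using a generic four-parameter sigmoid and an assumed theoretical melting temperature; the paper's exact empirical equation and constants are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(T, base, amp, Td, w):
    """Generic four-parameter sigmoid for a dissociation curve; the
    paper's empirical equation may be parametrized differently."""
    return base + amp / (1.0 + np.exp((T - Td) / w))

rng = np.random.default_rng(1)
T = np.linspace(20, 80, 40)                 # wash temperatures, deg C
signal = sigmoid(T, 100.0, 4000.0, 52.0, 4.0) + rng.normal(0, 40, T.size)

popt, _ = curve_fit(sigmoid, T, signal, p0=[0.0, 3000.0, 50.0, 5.0])
Td_w = popt[2]                              # fitted dissociation temperature
Tm = 65.0                                   # theoretical Tm (assumed value)
ratio = Td_w / Tm                           # filter, e.g. keep ratio >= 0.78
```

In a real experiment this fit would be repeated over thousands of probe curves and the ratio thresholded as the abstract describes.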
Conversion of infrared grey-level image into temperature field by polynomial curve fitting
NASA Astrophysics Data System (ADS)
Chen, Terry Y.; Kuo, Ming-Hsuan
2015-02-01
A simple method to convert an infrared gray-level image into a temperature field is developed using least squares polynomial curve fitting. In this method, the correspondence between the infrared gray-level image and the associated temperature field for various emissivity values and temperature ranges is analyzed first. Then a second-order polynomial is applied to fit the correspondence between the gray-level image and the associated temperature field as a function of emissivity. For multiple conversions of temperature ranges, the constants of the fitted polynomial in multiple ranges can be further fitted as a function of emissivity and temperature range. The method was tested on a cup of hot water; an average error of less than 1% was achieved between the proposed method and commercial ones.
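The core conversion step can be sketched with hypothetical calibration pairs (not the paper's data) for a single emissivity and temperature range:

```python
import numpy as np

# Hypothetical calibration pairs: grey level vs reference temperature
grey = np.array([40.0, 80.0, 120.0, 160.0, 200.0, 240.0])
temp = np.array([25.0, 41.0, 55.5, 68.5, 80.0, 90.0])   # deg C

# Second-order polynomial conversion for one emissivity/temperature range
coeffs = np.polyfit(grey, temp, 2)
to_temperature = np.poly1d(coeffs)

# The conversion is vectorized, so a whole image maps in one call
image = np.array([[40.0, 120.0], [200.0, 240.0]])
T_field = to_temperature(image)
```

Repeating the fit for several emissivities and ranges, and then fitting the resulting coefficients themselves, gives the multi-range scheme the abstract describes.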
STRITERFIT, a least-squares pharmacokinetic curve-fitting package using a programmable calculator.
Thornhill, D P; Schwerzel, E
1985-05-01
A program is described that permits iterative least-squares nonlinear regression fitting of polyexponential curves using the Hewlett Packard HP 41 CV programmable calculator. The program enables the analysis of pharmacokinetic drug level profiles with a high degree of precision. Up to 15 data pairs can be used, and initial estimates of curve parameters are obtained with a stripping procedure. Up to four exponential terms can be accommodated by the program, and there is the option of weighting data according to their reciprocals. Initial slopes cannot be forced through zero. The program may be interrupted at any time in order to examine convergence. PMID:3839530
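STRITERFIT itself is a calculator program, but the stripping-then-refinement idea can be sketched for a biexponential profile (an illustrative reimplementation, not the original code):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, a, B, b):
    return A * np.exp(-a * t) + B * np.exp(-b * t)

t = np.array([0.25, 0.5, 1, 2, 3, 4, 6, 8, 10, 12], dtype=float)
C = biexp(t, 8.0, 1.2, 2.0, 0.15)        # noise-free synthetic drug levels

# Stripping: the terminal phase (here t >= 6) is nearly log-linear
b0, logB0 = np.polyfit(t[t >= 6], np.log(C[t >= 6]), 1)
B0, b0 = np.exp(logB0), -b0

# Subtract the terminal exponential; the early residual gives the fast term
resid = C - B0 * np.exp(-b0 * t)
head = (resid > 0) & (t < 6)
a0, logA0 = np.polyfit(t[head], np.log(resid[head]), 1)
A0, a0 = np.exp(logA0), -a0

# Iterative least-squares refinement from the stripped initial estimates
popt, _ = curve_fit(biexp, t, C, p0=[A0, a0, B0, b0])
```

As in the program, the stripped values serve only as starting estimates; the nonlinear least-squares pass does the actual fit.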
An approach to fast fits of the unintegrated gluon density
Knutsson, Albert; Bacchetta, Alessandro; Kutak, Krzysztof; Jung, Hannes
2009-01-01
An approach to fast fits of the unintegrated gluon density has been developed and used to determine the unintegrated gluon density from fits to deep inelastic scattering di-jet data from HERA. The fitting method is based on determining the parameter dependence by interpolating between grid points in the parameter-observable space before the actual fit is performed.
Evaluation of fatigue-crack growth rates by polynomial curve fitting. [Ti alloy plate
NASA Technical Reports Server (NTRS)
Davies, K. B.; Feddersen, C. E.
1973-01-01
Fundamental characterization of constant-amplitude fatigue crack propagation is achieved by analyzing the rate of change of crack length with the number of applied loading cycles, defining the rate values such that they are consistent with the basic assumption of smoothness and continuity in the fatigue crack growth process. The technique used to satisfy the analytical conditions and minimize the effects of local material anomalies and experimental errors is that of fitting a smooth curve to the entire set of basic data by least-squares regression. This yields a well-behaved function relating the number of cycles to the crack length. By taking the first derivative of the function, the crack growth rate is obtained at each point. The class of curve fitting functions used in the analysis is the polynomial of degree n.
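The procedure, a polynomial least-squares fit of crack length a(N) followed by analytic differentiation, can be sketched on synthetic data (degree and numbers chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
N = np.linspace(0, 1e5, 20)                    # applied load cycles
a_true = 2.0 + 4e-5 * N + 1.5e-9 * N**2        # crack length, mm
a_meas = a_true + rng.normal(0, 0.05, N.size)  # experimental scatter

# Least-squares polynomial of degree n fitted to the whole data set
p = np.polynomial.Polynomial.fit(N, a_meas, deg=3)
dp = p.deriv()                                 # da/dN, the growth rate
rate = dp(N)
```

Because the fitted polynomial is smooth, the derivative gives growth rates consistent with the smoothness assumption, unaffected by point-to-point scatter.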
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined, and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
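Retaining the quadratic term of the chi-square expansion and solving the resulting linear system each iteration is essentially a Gauss-Newton scheme; a sketch (not Bevington's or the report's code) for a weighted exponential fit:

```python
import numpy as np

def gauss_newton(f, jac, p0, x, y, w, iters=50):
    """Minimize chi2 = sum(w * (y - f(x, p))**2) by repeatedly solving
    the n simultaneous linear equations from the chi2 expansion."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(x, p)                    # residuals at current parameters
        J = jac(x, p)                      # n_points x n_params Jacobian
        A = J.T @ (w[:, None] * J)         # curvature matrix
        g = J.T @ (w * r)                  # gradient term
        p = p + np.linalg.solve(A, g)      # one matrix-algebra step
    return p

# Example: weighted fit of an exponential decay y = p0 * exp(-p1 * x)
def f(x, p):
    return p[0] * np.exp(-p[1] * x)

def jac(x, p):
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])

x = np.linspace(0, 4, 30)
y = f(x, np.array([5.0, 0.8]))             # noise-free synthetic data
w = np.ones_like(x)
p_fit = gauss_newton(f, jac, [4.0, 0.6], x, y, w)
```

As the report notes for NLINEAR, convergence of this kind of iteration requires meaningful initial estimates of the parameters.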
Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers
NASA Astrophysics Data System (ADS)
Tananaev, N. I.
2015-03-01
Published suspended sediment data for Arctic rivers is scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as an optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.
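A sketch of a non-linear power rating-curve fit in original units (scipy's default solver for unconstrained problems is Levenberg-Marquardt, the algorithm the study identifies as optimal); the discharge/concentration numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def power(Q, a, b):
    return a * Q**b                          # sediment rating curve C = a*Q^b

rng = np.random.default_rng(3)
Q = np.linspace(50, 2000, 80)                # discharge (invented values)
C = power(Q, 0.05, 1.4) * rng.lognormal(0.0, 0.1, Q.size)

# Fit in original units rather than via a log-transformed linear fit,
# which is restricted when the data are not log-normally distributed
popt, _ = curve_fit(power, Q, C, p0=[0.1, 1.3])
a_hat, b_hat = popt
```

Fitting in original units avoids the bias a log-transformed linear fit would introduce for a biased, non-log-normal sample.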
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with Matlab. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.
Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.
Franco, R; Aran, J M; Canela, E I
1991-01-01
A method is presented for fitting the pairs of values product formed-time taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914
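When a full weighting matrix is available and the model is linear in its parameters, the weighted fit reduces to a generalized least-squares solve; a sketch with an assumed AR(1)-like covariance between successive points on a progress curve (the paper's integrated enzyme rate equation is nonlinear, so this only illustrates the weighting-matrix step):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 25)                    # sampling times on the curve
X = np.column_stack([np.ones_like(t), t])     # model linear in parameters
beta_true = np.array([0.5, 2.0])

# Correlated noise between neighbouring samples (AR(1)-like covariance)
Cov = 0.2 * 0.7 ** np.abs(np.subtract.outer(t, t) / (t[1] - t[0]))
y = X @ beta_true + rng.multivariate_normal(np.zeros(t.size), Cov)

# Generalized least squares with weighting matrix W = inv(Cov)
W = np.linalg.inv(Cov)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Using the full weighting matrix rather than a diagonal one accounts for the correlation between successive product-time pairs.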
Methodology for fast curve fitting to modulated Voigt dispersion lineshape functions
NASA Astrophysics Data System (ADS)
Westberg, Jonas; Wang, Junyang; Axner, Ove
2014-01-01
Faraday rotation spectroscopy (FAMOS) as well as other modulated techniques that rely on dispersion produce lock-in signals that are proportional to various Fourier coefficients of modulated dispersion lineshape functions of the molecular transition targeted. In order to enable real-time curve fitting to such signals, a fast methodology for calculating the Fourier coefficients of modulated lineshape functions is needed. Although analytical expressions exist for the Fourier coefficients of modulated Lorentzian absorption and dispersion lineshape functions, there is no corresponding expression for a modulated Voigt dispersion function. The conventional computational route to such Fourier coefficients has therefore so far consisted of either using various approximations to the modulated Voigt lineshape function or solving time-consuming integrals, which has precluded accurate real-time curve fitting. Here we present a new methodology to calculate Fourier coefficients of modulated Voigt dispersion lineshape functions that is significantly faster (by several orders of magnitude) and more accurate than previous approximate calculation procedures, which allows for real-time curve fitting to FAMOS signals also in the Voigt regime.
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
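The idea of weighting points by the scatter among their neighbours before a second-order polynomial fit can be sketched as follows; the statistics used here (second differences as a trend-insensitive noise proxy) are an assumption, not the report's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(5)
f = np.linspace(1, 50, 40)                   # frequency axis
trend = 1e-6 * (2.0 + 0.004 * f**2)          # smooth residual function
sigma = np.where(f > 35, 3e-7, 1e-8)         # ragged data at high frequency
y = trend + rng.normal(0.0, sigma)

# Local noise estimate from second differences (insensitive to the trend);
# for white noise, Var(second difference) = 6 * sigma^2
d2 = np.diff(y, 2)
var = np.empty_like(y)
for i in range(y.size):
    lo, hi = max(0, i - 3), min(d2.size, i + 3)
    var[i] = np.mean(d2[lo:hi] ** 2) / 6.0

# Weighted second-order polynomial fit; the constant term approximates
# the low-frequency asymptote (the residual flexibility)
coef = np.polynomial.polynomial.polyfit(f, y, 2, w=1.0 / np.sqrt(var))
residual_flexibility = coef[0]
```

The ragged high-frequency region gets much larger variance estimates and is therefore down-weighted, so the fitted residual flexibility is controlled by the well-behaved low-frequency data.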
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-01
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. PMID:24530825
Reducing respirator fit test errors: a multi-donning approach.
Campbell, D L; Coffey, C C; Jensen, P A; Zhuang, Z
2005-08-01
As a continuation of recent studies to assess the accuracy of existing fit test methods, a multi-donning approach to fit testing is presented. As an example of that approach, a multi-donning quantitative fit test for filtering-facepiece respirators is presented and analyzed by comparing its error rates with those of the single-donning approach of current fit test methods. That analysis indicates the multi-donning fit test has the potential to reduce both the alpha error and the beta error to half that of single-donning fit tests. The alpha error is the error of failing a respirator that should pass; the beta error is the error of passing a respirator that should fail. Lowering fit test error rates for filtering-facepiece respirators is important because fit testing is an essential means of helping assure that an individual has selected an adequately fitting respirator. To reduce the alpha and beta error inherent in current fit test methods, the proposed fit test for filtering-facepiece respirators incorporates five donnings of the facepiece, unlike the single donning of existing fit test methods. The analysis presented here indicates that the multiple-donning approach reduces the element of chance in the fit test result and thereby increases the consistency and accuracy of the fit tests. The time to conduct the multi-donning test can approximate the time for current, single-donning tests by shortening the time the respirator is worn after each donning to about 10 sec. And, unlike current fit tests for filtering-facepieces that measure only faceseal leakage, the example multiple-donning fit test considered here is based on a measurement of total leakage (faceseal plus filter). Utilizing total respirator leakage can result in simpler quantitative fit test instrumentation and a fit test that is more relevant to the workplace. Further trials with human subjects are recommended in order to validate the proposed multi-donning approach. PMID:16080261
A Healthy Approach to Fitness Center Security.
ERIC Educational Resources Information Center
Sturgeon, Julie
2000-01-01
Examines techniques for keeping college fitness centers secure while maintaining an inviting atmosphere. Building access control, preventing locker room theft, and suppressing causes for physical violence are discussed. (GR)
Ying Chen; Shao-Jing Dong; Terrence Draper; Ivan Horvath; Keh-Fei Liu; Nilmani Mathur; Sonali Tamhankar; Cidambi Srinivasan; Frank X. Lee; Jianbo Zhang
2004-05-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-chi-square fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass where a priori values are not otherwise known. Lessons learned (including caveats limiting the scope of the method) from studying artificial data are presented. As an illustration, from local-local two-point correlation functions, we obtain masses and spectral weights for ground and first-excited states of the pion, give preliminary fits for the a_0 where ghost states (a quenched artifact) must be dealt with, and elaborate on the details of fits of the Roper resonance and S_11(N^(1/2-)) previously presented elsewhere. The data are from overlap fermions on a quenched 16^3 x 28 lattice with spatial size La = 3.2 fm and pion mass as low as ~180 MeV.
Measurement of focused ultrasonic fields based on colour edge detection and curve fitting
NASA Astrophysics Data System (ADS)
Zhu, H.; Chang, S.; Yang, P.; He, L.
2016-03-01
This paper first establishes a measurement system using a scanning device and an optical fiber hydrophone, and then proposes parameter measurement of the focused transducer based on edge detection of the visualized acoustic data and curve fitting. The measurement system consists of a water tank with wedge absorbers, stepper motor drivers, a system controller, a focused transducer, an optical fiber hydrophone and data processing software. On the basis of the visualized processing of the original scanned data, the -3 dB beam width of the focused transducer is calculated using edge detection on the acoustic visualized image and a circle fitting method that minimizes algebraic distance. Experiments on the visualized ultrasound data are implemented to verify the feasibility of the proposed method. The data obtained from the scanning device are utilized to reconstruct acoustic fields, and it is found that the -3 dB beam width of the focused transducer can be predicted accurately.
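Circle fitting by minimizing algebraic distance is a linear least-squares problem (the Kasa fit); a sketch on synthetic contour points, with units assumed:

```python
import numpy as np

def fit_circle(x, y):
    """Circle fit minimizing algebraic distance (Kasa fit): the circle
    x^2 + y^2 + D*x + E*y + F = 0 is linear in (D, E, F)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - F)

# Noisy points on a detected -3 dB contour (synthetic, mm units assumed)
rng = np.random.default_rng(6)
th = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 1.5 + 0.8 * np.cos(th) + rng.normal(0, 0.01, th.size)
y = -0.3 + 0.8 * np.sin(th) + rng.normal(0, 0.01, th.size)

cx, cy, r = fit_circle(x, y)
beam_width = 2.0 * r                     # -3 dB beam width from the radius
```

Because the objective is linear in (D, E, F), no iteration or starting guess is needed, which suits automated processing of many scanned fields.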
[Aging Process of Puer Black Tea Studied by FTIR Spectroscopy Combined with Curve-Fitting Analysis].
Li, Dong-yu; Shi, You-ming; Yi, Shi Lai
2015-07-01
For better determination of the chemical components in Puer black tea, Fourier transform infrared (FTIR) spectroscopy was used to obtain vibrational spectra of Puer black tea at different aging times. The FTIR spectra indicated that the chemical components of Puer black tea changed with aging time. The leaf of Puer black tea is a complex system, and its Fourier transform infrared spectrum shows a total overlap of the absorption spectra of its various components. Each band represents an overall overlap of characteristic absorption peaks of functional groups in the Puer black tea. In order to explore the change of the characteristic absorption peaks of functional groups with aging time, the predicted positions and the number of component peaks in the range of 1900-900 cm(-1) were first determined by Fourier self-deconvolution, and curve-fitting analysis was then performed on this overlap band. At different aging times of Puer black tea, the wavenumbers of the component peaks of amide II, tea polyphenols, pectin and polysaccharides in the overlap band were assigned by curve-fitting analysis. The component peak at 1520 cm(-1) is the characteristic absorption band of amide II, and the component peaks of tea polyphenols and pectin appear at 1278 and 1103 cm(-1), respectively. Two component peaks at 1063 and 1037 cm(-1) correspond mainly to glucomannan and arabinan. The relative areas of these component peaks indicate the content of protein, tea polyphenols, pectin and polysaccharides in the Puer black tea. The results of curve-fitting analysis showed that the relative area of amide II increased first and then decreased, indicating the change of protein in Puer black tea. At the same time, the content of tea polyphenols and pectin decreased with increasing aging time, while glucomannan and arabinan increased in reverse. This explains why the bitter taste weakened and a sweet taste appeared in the tea with the increase of
Curve fitting toxicity test data: Which comes first, the dose response or the model?
Gully, J.; Baird, R.; Bottomley, J.
1995-12-31
The probit model frequently does not fit the concentration-response curve of NPDES toxicity test data, and non-parametric models must be used instead. The non-parametric models, trimmed Spearman-Karber, IC_p, and linear interpolation, all require a monotonic concentration-response. Any deviation from a monotonic response is smoothed to obtain the desired concentration-response characteristics. Inaccurate point estimates may result from such procedures and can contribute to imprecision in replicate tests. The following study analyzed reference toxicant and effluent data from giant kelp (Macrocystis pyrifera), purple sea urchin (Strongylocentrotus purpuratus), red abalone (Haliotis rufescens), and fathead minnow (Pimephales promelas) bioassays using commercially available curve fitting software. The purpose was to search for alternative parametric models which would reduce the use of non-parametric models for point estimate analysis of toxicity data. Two non-linear models, power and logistic dose-response, were selected as possible alternatives to the probit model based upon their toxicological plausibility and ability to model most data sets examined. Unlike non-parametric procedures, these and all parametric models can be statistically evaluated for fit and significance. The use of the power or logistic dose-response models increased the percentage of parametric model fits for each protocol and toxicant combination examined. The precision of the selected non-linear models was also compared with the EPA-recommended point estimation models at several effect levels. In general, precision of the alternative models was equal to or better than that of the traditional methods. Finally, use of the alternative models usually produced more plausible point estimates in data sets where the effects of smoothing and non-parametric modeling made the point estimate results suspect.
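A logistic dose-response fit and a derived point estimate can be sketched as follows (invented concentration-response data; this three-parameter form is one common parametrization, not necessarily the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_dr(conc, top, ec50, slope):
    """Three-parameter logistic dose-response, bottom fixed at zero."""
    return top / (1.0 + (conc / ec50) ** slope)

# Invented toxicity-test data: response as a fraction of the control
conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])
resp = np.array([0.98, 0.90, 0.62, 0.25, 0.06])

popt, _ = curve_fit(logistic_dr, conc, resp, p0=[1.0, 30.0, 2.0])
top, ec50, slope = popt

# Point estimate at an arbitrary effect level, e.g. EC25 (25% effect)
ec25 = ec50 * (1.0 / 0.75 - 1.0) ** (1.0 / slope)
```

Unlike the non-parametric procedures, the fitted parametric model yields residuals that can be tested statistically, and it needs no monotonicity smoothing of the raw data.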
Calculations and curve fits of thermodynamic and transport properties for equilibrium air to 30000 K
NASA Technical Reports Server (NTRS)
Gupta, Roop N.; Lee, Kam-Pui; Thompson, Richard A.; Yos, Jerrold M.
1991-01-01
A self-consistent set of equilibrium air values was computed for enthalpy, total specific heat at constant pressure, compressibility factor, viscosity, total thermal conductivity, and total Prandtl number from 500 to 30,000 K over a pressure range of 10^-4 to 10^2 atm. The mixture values are calculated from the transport and thermodynamic properties of the individual species provided in a recent study by the authors. The concentrations of the individual species, required in the mixture relations, are obtained from a free energy minimization calculation procedure. Present calculations are based on an 11-species air model. For pressures less than 10^-2 atm and temperatures of about 15,000 K and greater, the concentrations of N++ and O++ become important, and consequently, they are included in the calculations determining the various properties. The computed properties are curve fitted as a function of temperature at a constant value of pressure. These curve fits reproduce the computed values within 5 percent for the entire temperature range considered here at specific pressures and provide an efficient means for computing the flowfield properties of equilibrium air, provided the elemental composition remains constant at 0.24 for oxygen and 0.76 for nitrogen by mass.
Multiperiodicity, modulations and flip-flops in variable star light curves. I. Carrier fit method
NASA Astrophysics Data System (ADS)
Pelt, J.; Olspert, N.; Mantere, M. J.; Tuominen, I.
2011-11-01
Context. The light curves of variable stars are commonly described using simple trigonometric models that assume the model parameters are constant in time. This assumption, however, is often violated, and consequently, time series models with components that vary slowly in time are of great interest. Aims: In this paper we introduce a class of data analysis and visualization methods which can be applied in many different contexts of variable star research, for example spotted stars, variables showing the Blazhko effect, and the spin-down of rapid rotators. The methods proposed are of explorative type, and can be of significant aid when performing a more thorough data analysis and interpretation with a more conventional method. Methods: Our methods are based on a straightforward decomposition of the input time series into a fast "clocking" periodicity and smooth modulating curves. The fast frequency, referred to as the carrier frequency, can be obtained from earlier observations (for instance, in the case of photometric data the period can be obtained from independently measured radial velocities), postulated using some simple physical principles (Keplerian rotation laws in accretion disks), or estimated from the data as a certain mean frequency. The smooth modulating curves are described by trigonometric polynomials or splines. The data approximation procedures are based on standard computational packages implementing simple or constrained least-squares fit-type algorithms. Results: Using both artificially generated data sets and observed data, we demonstrate the utility of the proposed methods. Our interest is mainly focused on cases where multiperiodicity, trends or abrupt changes take place in the variable star light curves. Conclusions: The presented examples show that the proposed methods significantly enrich the traditional toolbox for variable star researchers. Applications of the methods to solve problems of astrophysical interest
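Once the carrier frequency is fixed, the carrier-fit decomposition is linear in the modulating-curve coefficients; a sketch with polynomial modulators on the two quadratures (the paper also uses trigonometric polynomials and splines):

```python
import numpy as np

# Carrier-fit sketch: a fixed "clocking" carrier frequency multiplied by
# slowly varying modulating curves (low-order polynomials here)
rng = np.random.default_rng(7)
t = np.linspace(0, 20, 600)                  # time, e.g. days
f0 = 0.9                                     # carrier frequency, assumed known

amp = 1.0 + 0.04 * t                         # slowly growing amplitude
drift = 0.3 * np.sin(np.pi * t / 20)         # slow phase drift
y = amp * np.cos(2 * np.pi * f0 * t + drift) + rng.normal(0, 0.05, t.size)

# Linear least squares: y ~ a(t)*cos(w t) + b(t)*sin(w t), a and b cubic
T = np.vander(t / t.max(), 4)                # polynomial basis in scaled time
X = np.hstack([T * np.cos(2 * np.pi * f0 * t)[:, None],
               T * np.sin(2 * np.pi * f0 * t)[:, None]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

a_t, b_t = T @ coef[:4], T @ coef[4:]
envelope = np.hypot(a_t, b_t)                # recovered modulation envelope
y_fit = X @ coef
```

The recovered envelope and phase of (a(t), b(t)) visualize the slow modulation, which is the exploratory output the paper is after.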
NASA Astrophysics Data System (ADS)
Fu, W.; Gu, L.; Hoffman, F. M.
2013-12-01
The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely curves of net CO2 assimilation against intercellular CO2 concentration (A-Ci curves), made under saturating light conditions. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). In order to reliably estimate photosynthetic parameters, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural overparameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately. Using subsaturating light intensities allows more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve fitting methods. Some fitted parameters, like the photocompensation point and day respiration, impose a significant limitation on modeling leaf CO2 exchange. The multiple A-Ci curve fitting can also improve on the so-called Laisk (1977) method, which has been shown by recent publications to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions to constrain measured A-Ci points to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of resistance of the chloroplast and cell wall-plasmalemma and explore the effect of variable mesophyll conductance. The variable mesophyll conductance
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
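The adaptive least-squares idea can be illustrated in miniature. The sketch below is an assumption-laden stand-in, not the paper's parametrized coil model: it fits a circle to noisy outline points by linear least squares, using the Kasa algebraic formulation.

```python
import numpy as np

# Hypothetical stand-in for the paper's model: fit a circle (center, radius)
# to noisy outline points by linear least squares (Kasa algebraic fit).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
cx, cy, r = 3.0, -1.0, 5.0                     # ground-truth circle
x = cx + r * np.cos(theta) + rng.normal(0.0, 0.01, theta.size)
y = cy + r * np.sin(theta) + rng.normal(0.0, 0.01, theta.size)

# x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)  ->  linear in (A, B, C)
M = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2
(A, B, C), *_ = np.linalg.lstsq(M, b, rcond=None)
r_hat = np.sqrt(C + A**2 + B**2)
print(A, B, r_hat)   # close to the ground truth (3, -1, 5)
```

The change of variables makes a nonlinear geometric fit solvable in one linear step, which is also why such fits are fast enough to run per-image.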
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRF's) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRF's are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. Data acquisition and FFT processing, curve fitting, and modal tuning phases are described and examples are given to illustrate and evaluate them.
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Open Versus Closed Hearing-Aid Fittings: A Literature Review of Both Fitting Approaches.
Winkler, Alexandra; Latzel, Matthias; Holube, Inga
2016-01-01
One of the main issues in hearing-aid fittings is the abnormal perception of the user's own voice as too loud, "boomy," or "hollow." This phenomenon, known as the occlusion effect, can be reduced by large vents in the earmolds or by open-fit hearing aids. This review provides an overview of publications related to open and closed hearing-aid fittings. First, the occlusion effect and its consequences for perception while using hearing aids are described. Then, the advantages and disadvantages of open compared with closed fittings and their impact on the fitting process are addressed. The advantages include less occlusion, improved own-voice perception and sound quality, and increased localization performance. The disadvantages associated with open-fit hearing aids include reduced benefits of directional microphones and noise reduction, as well as less compression and less available gain before feedback. The final part of this review addresses the need for new approaches to combine the advantages of open and closed hearing-aid fittings. PMID:26879562
Chatzopoulos, E.; Wheeler, J. Craig; Vinko, J.; Horvath, Z. L.; Nagy, A.
2013-08-10
We present fits of generalized semi-analytic supernova (SN) light curve (LC) models for a variety of power inputs including ⁵⁶Ni and ⁵⁶Co radioactive decay, magnetar spin-down, and forward and reverse shock heating due to supernova ejecta-circumstellar matter (CSM) interaction. We apply our models to the observed LCs of the H-rich superluminous supernovae (SLSN-II) SN 2006gy, SN 2006tf, SN 2008am, SN 2008es, CSS100217, the H-poor SLSN-I SN 2005ap, SCP06F6, SN 2007bi, SN 2010gx, and SN 2010kd, as well as to the interacting SN 2008iy and PTF 09uj. Our goal is to determine the dominant mechanism that powers the LCs of these extraordinary events and the physical conditions involved in each case. We also present a comparison of our semi-analytical results with recent results from numerical radiation hydrodynamics calculations in the particular case of SN 2006gy in order to explore the strengths and weaknesses of our models. We find that CS shock heating produced by ejecta-CSM interaction provides a better fit to the LCs of most of the events we examine. We discuss the possibility that collision of supernova ejecta with hydrogen-deficient CSM accounts for some of the hydrogen-deficient SLSNe (SLSN-I) and may be a plausible explanation for the explosion mechanism of SN 2007bi, the pair-instability supernova candidate. We characterize and discuss issues of parameter degeneracy.
New Horizons approach photometry of Pluto and Charon: light curves and Solar phase curves
NASA Astrophysics Data System (ADS)
Zangari, A. M.; Buie, M. W.; Buratti, B. J.; Verbiscer, A.; Howett, C.; Weaver, H. A., Jr.; Olkin, C.; Ennico Smith, K.; Young, L. A.; Stern, S. A.
2015-12-01
While the most captivating images of Pluto and Charon were shot by NASA's New Horizons probe on July 14, 2015, the spacecraft also imaged Pluto with its LOng Range Reconnaissance Imager ("LORRI") during its Annual Checkouts and Approach Phases, with campaigns in July 2013, July 2014, January 2015, March 2015, April 2015, May 2015 and June 2015. All but the first campaign provided full coverage of Pluto's 6.4 day rotation. Even though many of these images were taken when surface features on Pluto and Charon were unresolved, these data provide a unique opportunity to study Pluto over a timescale of several months. Earth-based data from an entire apparition must be combined to create a single light curve, as Pluto is never otherwise continuously available for observing due to daylight, weather and scheduling. From the spacecraft, Pluto's sub-observer latitude remained constant to within 0.05 degrees of 43.15 degrees, comparable to a week's worth of change as seen from Earth near opposition. During the July 2013 to June 2015 period, Pluto's solar phase angle increased from 11 degrees to 15 degrees, a small range, but large compared to Earth's 2 degree limit. The slope of the solar phase curve hints at properties such as surface roughness. Using PSF photometry that takes into account the ever-increasing sizes of Pluto and Charon as seen from New Horizons, as well as surface features discovered at closest approach, we present rotational light curves and solar phase curves of Pluto and Charon. We will connect these observations to previous measurements of the system from Earth.
Neural network approach for modification and fitting of digitized data in reverse engineering.
Ju, Hua; Wang, Wen; Xie, Jin; Chen, Zi-chen
2004-01-01
Reverse engineering in the manufacturing field is a process in which the digitized data are obtained from an existing object model or a part of it, and then the CAD model is reconstructed. This paper presents an RBF neural network approach to modify and fit the digitized data. The centers for the RBF are selected by using the orthogonal least squares learning algorithm. A mathematically known surface is used for generating a number of samples for training the networks. The trained networks then generated a number of new points, which were compared with the points calculated from the equations. Moreover, a series of practice digitizing curves are used to test the approach. The results showed that this approach is effective in modifying and fitting digitized data and generating data points to reconstruct the surface model. PMID:14663856
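A minimal numpy-only sketch of RBF fitting of digitized data follows. It uses fixed, evenly spaced centers rather than the paper's orthogonal-least-squares center selection, and the test function and noise level are illustrative.

```python
import numpy as np

# Minimal RBF-fit sketch (not the paper's OLS center selection): Gaussian
# basis functions on fixed centers, weights found by linear least squares.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.02, x.size)   # "digitized" data

centers = np.linspace(0.0, 1.0, 12)    # assumed fixed; OLS would select these
width = 0.1
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_fit = Phi @ w
rmse = np.sqrt(np.mean((y_fit - np.sin(2 * np.pi * x)) ** 2))
print(f"RMSE vs. true curve: {rmse:.4f}")
```

The same design matrix idea extends to surfaces by replacing the 1-D distance with a Euclidean distance in (x, y).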
ERIC Educational Resources Information Center
Winsberg, Suzanne; And Others
In most item response theory models a particular mathematical form is assumed for all item characteristic curves, e.g., a logistic function. It could be desirable, however, to estimate the shape of the item characteristic curves without prior restrictive assumptions about their mathematical form. We have developed a practical method of estimating…
Havel, J; Meloun, M
1986-05-01
A chemical model (i.e., the number of complexes, their stoichiometry and stability constants with molar absorptivities) in solution equilibria may be established by (i) the trial-and-error method in which stability constants are estimated for an assumed set of complexes in the mixture and a fitness test is used to resolve a choice of plausible models to find the true one; (ii) the simultaneous estimation of the stoichiometry and stability constants for species divided into "certain" species for which the parameters beta(pqr), (p, q, r) are known and held constant, and "uncertain" species with unknown parameters which are determined by regression analysis. The interdependence of stability constants and particular sets of stoichiometric indices requires that the computational strategy should be chosen carefully for each particular case. The benefits and limitations of both approaches are compared by means of examples of potentiometric titration data analysis by the POLET(84) program and of spectrophotometric data analysis by the SQUAD(84) program. A strategy for efficient computation is suggested. PMID:18964117
A comparison of approaches in fitting continuum SEDs
NASA Astrophysics Data System (ADS)
Liu, Yao; Madlener, David; Wolf, Sebastian; Wang, Hong-Chi
2013-04-01
We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date employed SA due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demand on computational resources and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting to provide a guide to the practitioner to find a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable. While the database provides an approximate fit and overall probability distributions for all parameters deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
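The database-versus-annealing comparison can be caricatured on a one-dimensional multimodal objective (purely illustrative, not an SED model): a coarse pre-computed grid often lands near a local minimum, while a simple simulated-annealing loop can refine further.

```python
import numpy as np

# Toy comparison in the spirit of the abstract: a coarse pre-computed grid
# versus simulated annealing on a 1-D multimodal "fitting" objective.
def chi2(p):
    return (p - 0.7) ** 2 + 0.3 * np.sin(12 * p) ** 2

rng = np.random.default_rng(2)

grid = np.linspace(0.0, 1.0, 11)          # coarse "database" of models
best_grid = grid[np.argmin(chi2(grid))]

p, T = 0.0, 1.0                           # simulated annealing
best_sa, best_val = p, chi2(p)
for _ in range(5000):
    cand = np.clip(p + rng.normal(0.0, 0.1), 0.0, 1.0)
    # Metropolis acceptance: always take improvements, sometimes take worse
    if chi2(cand) < chi2(p) or rng.random() < np.exp((chi2(p) - chi2(cand)) / T):
        p = cand
    if chi2(p) < best_val:
        best_sa, best_val = p, chi2(p)
    T *= 0.999                            # geometric cooling schedule

print(best_grid, round(best_sa, 3))
```

The hybrid strategy the authors favor corresponds to seeding the SA loop at `best_grid` instead of a cold start.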
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved with ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least-squares polynomial fit up to seventh order.
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
Shaw, Jessica; Janulis, Patrick
2016-10-01
Recently, there has been a call for more advanced analytic techniques in violence against women research, particularly in community interventions that use longitudinal designs. The current study re-evaluates experimental evaluation data from a sexual violence bystander intervention program. Using an exploratory latent growth curve approach, we were able to model the longitudinal growth trajectories of individual participants over the course of the entire study. Although the results largely confirm the original evaluation findings, the latent growth curve approach better fits the demands of "messy" data (e.g., missing data, varying number of time points per participant, and unequal time spacing within and between participants) that are frequently obtained during a community-based intervention. The benefits of modern statistical techniques to practitioners and researchers in the field of sexual violence prevention, and violence against women more generally, are further discussed. PMID:25888503
NASA Astrophysics Data System (ADS)
Göran Blume, Niels; Ebert, Volker; Dreizler, Andreas; Wagner, Steven
2016-01-01
In this work, a novel broadband fitting approach for quantitative in-flame measurements using supercontinuum broadband laser absorption spectroscopy (SCLAS) is presented. The application and verification of this approach in an atmospheric, laminar, non-premixed CH4/air flame (Wolfhard-Parker burner, WHP) is discussed. The developed fitting scheme allows for automatic recognition and fitting of a B-spline reference-intensity curve for SCLAS broadband measurements while automatically removing the influence of absorption peaks. This approach improves the fitting residual locally (between absorption lines) and globally by 23% and 13%, respectively, while improving the in-flame SNR by a factor of 2. Additionally, the approach inherently improves the time-wavelength correlation based on the recorded in-flame measurements themselves in combination with a theoretical spectrum of the analyte. These improvements have allowed for the recording of complete spatially resolved methane concentration profiles in the WHP burner. Comparison of the measured absolute mole fraction profile for methane with previously measured reference data shows excellent agreement in position, shape and absolute values. These improvements are a prerequisite for the application of SCLAS in high-pressure combustion systems.
Perturbation approach to dispersion curves calculation for nonlinear Lamb waves
NASA Astrophysics Data System (ADS)
Packo, Pawel; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.
2015-05-01
Analysis of elastic wave propagation in nonlinear media has gained recent research attention due to the recognition of their amplitude-dependent behavior. This creates opportunities for increased accuracy of damage detection and localization, development of new structural monitoring strategies, and design of new structures with desirable acoustic behavior (e.g., amplitude-dependent frequency bandgaps, wave beaming, and filtering). This differs from more traditional nonlinear analysis approaches which target the prediction of higher harmonic growth. Of particular interest in this work is the analysis of amplitude-dependent shifts in Lamb wave dispersion curves. Typically, dispersion curves are calculated for nominally linear material parameters and geometrical features of a waveguide, even when the constitutive law is nonlinear. Instead, this work employs a Lindstedt - Poincare perturbation approach to calculate amplitude-dependent dispersion curves, and shifts thereof, for nonlinearly-elastic plates. As a result, a set of first order corrections to frequency (or equivalently wavenumber) are calculated. These corrections yield significant amplitude dependence in the spectral characteristics of the calculated waves, especially for high frequency waves, which differs fundamentally from linear analyses. Numerical simulations confirm the analytical shifts predicted. Recognition of this amplitude-dependence in Lamb wave dispersion may suggest, among other things, that the analysis of guided wave propagation phenomena within a fully nonlinear framework needs to revisit mode-mode energy flux and higher harmonics generation conditions.
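The flavor of a Lindstedt-Poincaré amplitude-dependent frequency correction can be shown on the simplest nonlinear stand-in, the Duffing oscillator x'' + x + εx³ = 0 (illustrative only; the paper treats Lamb-wave dispersion in plates). The first-order LP result ω ≈ 1 + 3εa²/8 is compared against the frequency obtained from the exact period integral.

```python
import numpy as np

# Lindstedt-Poincare in one dimension (illustrative; not the paper's
# Lamb-wave derivation): amplitude-dependent frequency of the Duffing
# oscillator x'' + x + eps*x^3 = 0, with first-order LP w ~ 1 + 3*eps*a^2/8.
eps, a = 0.1, 1.0

# "Exact" frequency from the period integral
#   T = 4 * Int_0^{pi/2} dtheta / sqrt(1 + 0.5*eps*a^2*(1 + sin^2 theta)),
# obtained from energy conservation via the substitution x = a*sin(theta).
theta = np.linspace(0.0, np.pi / 2, 20001)
f = 1.0 / np.sqrt(1.0 + 0.5 * eps * a**2 * (1.0 + np.sin(theta) ** 2))
T = 4.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))  # trapezoid rule
w_exact = 2.0 * np.pi / T

w_lp = 1.0 + 3.0 * eps * a**2 / 8.0
print(w_exact, w_lp)   # agree to O(eps^2)
```

The amplitude dependence w(a) is the scalar analogue of the amplitude-dependent dispersion-curve shifts computed in the paper.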
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Mugge, P. H.
1974-01-01
Simplified curve fits for the thermodynamic properties of equilibrium air were devised for use in either the time-dependent or shock-capturing computational methods. For the time-dependent method, curve fits were developed for p = p(e, rho), a = a(e, rho), and T = T(e, rho). For the shock-capturing method, curve fits were developed for h = h(p, rho) and T = T(p, rho). The ranges of validity for these curve fits were temperatures up to 25,000 K and densities from 10⁻⁷ to 10³ amagats. These approximate curve fits are considered particularly useful when employed on advanced computers such as the Burroughs ILLIAC 4 or the CDC STAR.
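The curve-fit idea, replacing table lookups with closed-form fits, can be sketched as follows. The data below are synthetic (an ideal-gas law standing in for the report's equilibrium-air tables), and the quadratic log-log basis is an assumption for illustration.

```python
import numpy as np

# Sketch of the curve-fit idea (toy data, not the report's equilibrium-air
# tables): fit a bivariate polynomial for T(p, rho) in log variables by
# least squares, then evaluate it in place of a table lookup.
rng = np.random.default_rng(3)
p = 10.0 ** rng.uniform(3, 7, 500)          # Pa, synthetic samples
rho = 10.0 ** rng.uniform(-3, 1, 500)       # kg/m^3
R = 287.0
T = p / (rho * R)                           # ideal-gas stand-in for the tables

lp, lr = np.log10(p), np.log10(rho)
A = np.column_stack([np.ones_like(lp), lp, lr, lp * lr, lp**2, lr**2])
c, *_ = np.linalg.lstsq(A, np.log10(T), rcond=None)

def T_fit(p_, rho_):
    lp_, lr_ = np.log10(p_), np.log10(rho_)
    basis = np.array([1.0, lp_, lr_, lp_ * lr_, lp_**2, lr_**2])
    return 10.0 ** (basis @ c)

print(T_fit(1.0e5, 1.225))   # ~284 K for the ideal-gas stand-in
```

Real equilibrium-air fits are piecewise over temperature/density ranges; the single global polynomial here is only the simplest version of the idea.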
Prediction of an Epidemic Curve: A Supervised Classification Approach
Nsoesie, Elaine O.; Beckman, Richard; Marathe, Madhav; Lewis, Bryan
2012-01-01
Classification methods are widely used for identifying underlying groupings within datasets and predicting the class for new data objects given a trained classifier. This study introduces a project aimed at using a combination of simulations and classification techniques to predict epidemic curves and infer underlying disease parameters for an ongoing outbreak. Six supervised classification methods (random forest, support vector machines, nearest neighbor with three decision rules, linear and flexible discriminant analysis) were used in identifying partial epidemic curves from six agent-based stochastic simulations of influenza epidemics. The accuracy of the methods was compared using a performance metric based on the McNemar test. The findings showed that: (1) assumptions made by the methods regarding the structure of an epidemic curve influences their performance i.e. methods with fewer assumptions perform best, (2) the performance of most methods is consistent across different individual-based networks for Seattle, Los Angeles and New York and (3) combining classifiers using a weighting approach does not guarantee better prediction. PMID:22997545
Reliability of temperature determination from curve-fitting in multi-wavelength pyrometry
Ni, P. A.; More, R. M.; Bieniosek, F. M.
2013-08-04
This paper examines the reliability of a widely used method for temperature determination by multi-wavelength pyrometry. In recent WDM experiments with ion-beam heated metal foils, we found that the statistical quality of the fit to the measured data is not necessarily a measure of the accuracy of the inferred temperature. We found a specific example where a second-best fit leads to a more realistic temperature value. The physics issue is the wavelength-dependent emissivity of the hot surface. We discuss improvements of the multi-wavelength pyrometry technique, which will give a more reliable determination of the temperature from emission data.
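A minimal multi-wavelength pyrometry fit on synthetic data with a gray (wavelength-independent) emissivity shows the baseline procedure whose reliability the paper questions; a wavelength-dependent emissivity would bias exactly this kind of fit. All channel counts, wavelengths, and noise levels are illustrative.

```python
import numpy as np

# Minimal pyrometry sketch: recover T by least-squares fit of a scaled
# Planck curve; the free scale absorbs the (assumed gray) emissivity and
# optics constants, so only the spectral shape constrains T.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

lam = np.linspace(500e-9, 900e-9, 8)       # 8 pyrometer channels
T_true = 3000.0
rng = np.random.default_rng(4)
I = 0.3 * planck(lam, T_true) * (1 + rng.normal(0, 0.01, lam.size))

best_T, best_res = None, np.inf
for T in np.linspace(2000.0, 4000.0, 2001):   # 1 K scan
    B = planck(lam, T)
    s = (B @ I) / (B @ B)                     # optimal scale in closed form
    res = np.sum((I - s * B) ** 2)
    if res < best_res:
        best_T, best_res = T, res

print(best_T)   # close to 3000 K
```

Repeating the fit with an emissivity that varies across the channels reproduces the paper's point: the residual can stay small while the inferred temperature moves.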
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
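Backward knot elimination can be sketched in a few lines. This numpy-only version uses a truncated-power cubic basis rather than the report's B-spline FORTRAN programs, and the data, threshold, and stopping rule are illustrative.

```python
import numpy as np

# Backward knot elimination in miniature: drop the interior knot whose
# removal increases the residual sum of squares least, and stop when any
# removal makes the fit clearly worse (threshold is an assumption).
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)
y = np.abs(x - 0.4) + rng.normal(0.0, 0.01, x.size)    # one kink at x = 0.4

def sse(knots):
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]  # truncated powers
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((A @ coef - y) ** 2))

knots = list(np.linspace(0.1, 0.9, 9))
while len(knots) > 1:
    trials = [sse(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
    i = int(np.argmin(trials))
    if trials[i] > 1.5 * sse(knots):       # removing any knot hurts: stop
        break
    knots.pop(i)

print(len(knots), [round(k, 2) for k in knots])
```

The statistical programs the report describes replace the ad-hoc 1.5x threshold with a proper F-test on the change in residual sum of squares.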
Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sager, D.
1973-01-01
Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve flying performance; and that there is no performance difference between 60 deg. and 90 deg. turns. A tradeoff of curve path parameters and a paper analysis of wind compensation were also made.
Adaptive virulence evolution: the good old fitness-based approach.
Alizon, Samuel; Michalakis, Yannis
2015-05-01
Infectious diseases could be expected to evolve towards complete avirulence to their hosts if given enough time. However, this is not the case. Often, virulence is maintained because it is linked to adaptive advantages to the parasite, a situation that is often associated with the hypothesis known as the transmission-virulence trade-off hypothesis. Here, we argue that this hypothesis has three limitations, which are related to how virulence is defined, the possibility of multiple trade-offs, and the difficulty of testing the hypothesis empirically. By adopting a fitness-based approach, where the relation between virulence and the fitness of the parasite throughout its life cycle is directly assessed, it is possible to address these limitations and to determine directly whether virulence is adaptive. PMID:25837917
Ferreira, Abílio G T; Henrique, Douglas S; Vieira, Ricardo A M; Maeda, Emilyn M; Valotto, Altair A
2015-03-01
The objective of this study was to evaluate four mathematical models with regard to their fit to lactation curves of Holstein cows from herds raised in the southwestern region of the state of Parana, Brazil. Initially, 42,281 milk production records from 2005 to 2011 were obtained from "Associação Paranaense de Criadores de Bovinos da Raça Holandesa (APCBRH)". Data lacking dates of drying and total milk production at 305 days of lactation were excluded, resulting in a remaining 15,142 records corresponding to 2,441 Holstein cows. Data were sorted according to the parity order (ranging from one to six), and within each parity order the animals were divided into quartiles (Q25%, Q50%, Q75% and Q100%) corresponding to 305-day lactation yield. Within each parity order, for each quartile, four mathematical models were adjusted, two of which were predominantly empirical (Brody and Wood) whereas the other two presented more mechanistic characteristics (models Dijkstra and Pollott). The quality of fit was evaluated by the corrected Akaike information criterion. The Wood model showed the best fit in almost all evaluated situations and, therefore, may be considered as the most suitable model to describe, at least empirically, the lactation curves of Holstein cows raised in Southwestern Parana. PMID:25806994
Learning Curves: Making Quality Online Health Information Available at a Fitness Center
Dobbins, Montie T.; Tarver, Talicia; Adams, Mararia; Jones, Dixie A.
2012-01-01
Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center – Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges. PMID:22545020
A covariance fitting approach for correlated acoustic source mapping.
Yardibi, Tarik; Li, Jian; Stoica, Petre; Zawodny, Nikolas S; Cattafesta, Louis N
2010-05-01
Microphone arrays are commonly used for noise source localization and power estimation in aeroacoustic measurements. The delay-and-sum (DAS) beamformer, which is the most widely used beamforming algorithm in practice, suffers from low resolution and high sidelobe level problems. Therefore, deconvolution approaches, such as the deconvolution approach for the mapping of acoustic sources (DAMAS), are often used for extracting the actual source powers from the contaminated DAS results. However, most deconvolution approaches assume that the sources are uncorrelated. Although deconvolution algorithms that can deal with correlated sources, such as DAMAS for correlated sources, do exist, these algorithms are computationally impractical even for small scanning grid sizes. This paper presents a covariance fitting approach for the mapping of acoustic correlated sources (MACS), which can work with uncorrelated, partially correlated or even coherent sources with a reasonably low computational complexity. MACS minimizes a quadratic cost function in a cyclic manner by making use of convex optimization and sparsity, and is guaranteed to converge at least locally. Simulations and experimental data acquired at the University of Florida Aeroacoustic Flow Facility with a 63-element logarithmic spiral microphone array in the absence of flow are used to demonstrate the performance of MACS. PMID:21117743
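For context, a delay-and-sum (DAS) power map for a single far-field narrowband source on a uniform linear array, the baseline that deconvolution methods such as DAMAS and MACS start from, can be sketched as follows (geometry, frequency, and noise levels are assumptions).

```python
import numpy as np

# Delay-and-sum (DAS) sketch for one narrowband far-field source on a
# 16-element uniform linear array; beamformer power is w^H R w per angle.
c, f = 343.0, 2000.0                     # sound speed (m/s), frequency (Hz)
m = np.arange(16) * 0.05                 # mic positions, 5 cm spacing
theta_src = np.deg2rad(25.0)             # true arrival angle

k = 2 * np.pi * f / c
a_src = np.exp(1j * k * m * np.sin(theta_src))     # steering vector
rng = np.random.default_rng(6)
snap = a_src[:, None] * (rng.normal(size=64) + 1j * rng.normal(size=64))
snap += 0.1 * (rng.normal(size=snap.shape) + 1j * rng.normal(size=snap.shape))
R = snap @ snap.conj().T / snap.shape[1]           # sample covariance matrix

scan = np.deg2rad(np.linspace(-90, 90, 361))
A = np.exp(1j * k * np.outer(m, np.sin(scan)))     # steering per scan angle
power = np.real(np.einsum('im,ij,jm->m', A.conj(), R, A)) / m.size**2
print(np.rad2deg(scan[np.argmax(power)]))          # peak near 25 degrees
```

The wide mainlobe and sidelobes of this `power` map are exactly what DAMAS-style deconvolution, and the covariance-fitting formulation of MACS, are designed to clean up.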
Computer user's manual for a generalized curve fit and plotting program
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.
1973-01-01
A FORTRAN coded program has been developed for generating plotted output graphs on 8-1/2 by 11-inch paper. The program is designed to be used by engineers, scientists, and non-programming personnel on any IBM 1130 system that includes a 1627 plotter. The program has been written to provide a fast and efficient method of displaying plotted data without having to generate any additional software. Various output options are available to the program user for displaying data in four different types of formatted plots. These options include discrete linear, continuous, and histogram graphical outputs. The manual contains information about the use and operation of this program. A mathematical description of the least squares goodness of fit test is presented. A program listing is also included.
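The least-squares goodness-of-fit test the manual describes can be sketched as a weighted polynomial fit with known measurement error, where a reduced chi-square near 1 indicates the model matches the data within its errors. The data below are synthetic.

```python
import numpy as np

# Least-squares fit plus goodness-of-fit check: fit a quadratic to noisy
# data with known sigma and report reduced chi-square (expected ~1 when
# the model and the stated errors are both correct).
rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.2
y = 1.5 + 0.8 * x - 0.05 * x**2 + rng.normal(0.0, sigma, x.size)

deg = 2
coef = np.polyfit(x, y, deg, w=np.full_like(x, 1.0 / sigma))
resid = y - np.polyval(coef, x)
chi2_red = np.sum((resid / sigma) ** 2) / (x.size - (deg + 1))
print(round(chi2_red, 2))   # ~1 when model and errors are consistent
```

Values far above 1 signal a wrong model or underestimated errors; values far below 1 usually signal overestimated errors.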
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discussed the fit data of potentiometric titration curves using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots with ionization constants characteristic of carboxylic acids, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g⁻¹, with approximately 65% (∼2.9 mmol g⁻¹) originating from groups with pKa < 6. The methodology showed good reproducibility and stability with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these intriguing nanoparticles.
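A one-pKa sketch of this kind of titration-curve regression follows: a hand-rolled Levenberg-Marquardt loop with a numeric Jacobian. The paper fits five acid groups; the model, data, and starting guess below are illustrative.

```python
import numpy as np

# One-pKa stand-in for the paper's five-group fit: Levenberg-Marquardt
# regression of a Henderson-Hasselbalch titration curve (numpy only).
def model(pH, n_tot, pKa):
    return n_tot / (1.0 + 10.0 ** (pKa - pH))   # mmol/g deprotonated sites

rng = np.random.default_rng(8)
pH = np.linspace(2.0, 10.0, 40)
y = model(pH, 2.9, 4.5) + rng.normal(0.0, 0.02, pH.size)  # synthetic data

p = np.array([1.0, 6.0])                         # rough initial guess
lam = 1e-3                                       # LM damping parameter
for _ in range(100):
    r = y - model(pH, *p)
    J = np.empty((pH.size, p.size))
    for j in range(p.size):                      # forward-difference Jacobian
        dp = np.zeros_like(p)
        dp[j] = 1e-6
        J[:, j] = (model(pH, *(p + dp)) - model(pH, *p)) / 1e-6
    H = J.T @ J + lam * np.eye(p.size)           # damped normal equations
    step = np.linalg.solve(H, J.T @ r)
    if np.sum((y - model(pH, *(p + step))) ** 2) < np.sum(r**2):
        p, lam = p + step, lam * 0.5             # accept step, trust more
    else:
        lam *= 2.0                               # reject step, damp harder
print(p.round(2))   # near the true (2.9, 4.5)
```

Extending the model to five groups just sums five such terms; the LM machinery is unchanged, which is why the algorithm is the standard choice for this problem.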
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. The low resolution neuronal EM images contain noise and generally few features are available for segmentation, therefore application of the conventional approaches to identify the neuron structure from EM images is not successful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we generate an EM image Gaussian pyramid first, then at each level of the pyramid, we utilize Laplacian of Gaussian function (LoG) to attain structure boundary, we finally assemble the detected boundaries by using fusion algorithm to attain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image where the decisions are supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. The test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements during structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and the recent developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and therefore useful for the identification of imaging
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation, so conventional approaches to identifying neuron structures in EM images are unsuccessful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to detect structure boundaries; finally, we assemble the detected boundaries using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of an incomplete curve and walks through the image, with decisions supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements for structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging
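The pyramid-plus-LoG stage described above can be sketched in a few lines. This is a minimal illustration of the general idea (Gaussian pyramid, per-level LoG response, max-fusion back at full resolution), not the authors' implementation; the level count and sigma are arbitrary choices.

```python
import numpy as np
from scipy import ndimage

def multiscale_log_boundaries(image, levels=3, sigma=1.0):
    """Max-fuse absolute LoG responses across a Gaussian pyramid."""
    fused = np.zeros(image.shape, dtype=float)
    current = np.asarray(image, dtype=float)
    for _ in range(levels):
        # Boundary response at this pyramid level
        response = np.abs(ndimage.gaussian_laplace(current, sigma=sigma))
        # Bring the response back to full resolution and fuse by maximum
        zoom = [o / c for o, c in zip(image.shape, current.shape)]
        fused = np.maximum(fused, ndimage.zoom(response, zoom, order=1))
        # Smooth and subsample to form the next (coarser) level
        current = ndimage.gaussian_filter(current, sigma)[::2, ::2]
    return fused

# Toy image: one bright square; its boundary should respond strongly
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
edges = multiscale_log_boundaries(img)
```

The gap-closing SARSA(λ) stage is omitted here; this sketch only produces the fused boundary map that the amendment step would then repair.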
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Traditional African Dance: An Excellent Approach to Fitness and Health.
ERIC Educational Resources Information Center
Thompson, Iola
This paper argues that traditional African dance can develop fitness and health particularly for those interested in both health and African culture. A discussion of fitness concludes that this quality enables the body to perform physical activities with greater efficiency and that all the qualities commonly found in notions of fitness are found…
U-Shaped Curves in Development: A PDP Approach
ERIC Educational Resources Information Center
Rogers, Timothy T.; Rakison, David H.; McClelland, James L.
2004-01-01
As the articles in this issue attest, U-shaped curves in development have stimulated a wide spectrum of research across disparate task domains and age groups and have provoked a variety of ideas about their origins and theoretical significance. In the authors' view, the ubiquity of the general pattern suggests that U-shaped curves can arise from…
A Comprehensive Approach for Assessing Person Fit with Test-Retest Data
ERIC Educational Resources Information Center
Ferrando, Pere J.
2014-01-01
Item response theory (IRT) models allow model-data fit to be assessed at the individual level by using person-fit indices. This assessment is also feasible when IRT is used to model test-retest data. However, person-fit developments for this type of modeling are virtually nonexistent. This article proposes a general person-fit approach for…
Curved spacetimes and curved graphene: A status report of the Weyl symmetry approach
NASA Astrophysics Data System (ADS)
Iorio, Alfredo
2015-02-01
This is a status report about the ongoing work on the realization of quantum field theory on curved graphene spacetimes that uses Weyl symmetry. The programme is actively pursued from many different perspectives. Here we point to what has been done, and to what needs to be done.
Havel, J; Meloun, M
1986-06-01
The FORTRAN computer program POLET(84) analyses a set of normalized potentiometric titration curves to find a chemical model, i.e., the number of species present and their stoichiometry, and to determine the corresponding stability constants log beta(pqrs) and unknown stoichiometric indices p, q, r, and s of up to quaternary M(p)L(q)Y(r)H(s) complexes. The program belongs to the ABLET family, based on the LETAG subroutine, and can use an algorithmic and/or heuristic minimization strategy, or a beneficial combination of both. The data, a set of potentiometric titration curves plotted as volume and potential, are converted into normalized variables (formation function, pH) and then a computer-assisted search for a chemical model by POLET(84) is applied. The procedure for efficient application of POLET(84) in an equilibrium analysis is described and the program is validated by use of literature and simulated data. The reliability of the chemical model and its parameters is established by the degree-of-fit achieved, and the closeness of the stoichiometric indices to integral values. PMID:18964133
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation shows that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
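The contrast the abstract draws can be reproduced with a hand-rolled Gaussian kernel density estimate. The bimodal sample below is synthetic (a mixture of two Beta components standing in for low- and high-recovery loans), and Silverman's bandwidth rule is one common default, not necessarily what the authors used.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth=None):
    """Evaluate a Gaussian kernel density estimate on a grid of points."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        # Silverman's rule of thumb for the kernel bandwidth
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-0.2)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z * z).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

# Synthetic bimodal recovery rates: a low-recovery and a high-recovery mode
rng = np.random.default_rng(0)
rates = np.concatenate([rng.beta(2, 8, 700), rng.beta(8, 2, 300)])
grid = np.linspace(0.0, 1.0, 101)
pdf = gaussian_kde_pdf(rates, grid)
```

A single Beta fit to such a sample is necessarily unimodal, which is exactly the defect the abstract describes; the KDE recovers both modes.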
NASA Astrophysics Data System (ADS)
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
An approach for ensuring accuracy in pediatric hearing instrument fitting.
Seewald, Richard C; Scollie, Susan D
2003-01-01
Hearing instrument fitting with infants and young children differs in several important ways relative to the fitting process with adults. In developing the Desired Sensation Level method, we have attempted to account for those factors that are uniquely associated with pediatric hearing instrument fitting. Within this article we describe how the external ear acoustics of infants and young children have been systematically accounted for in developing the Desired Sensation Level method. Specific evidence-based procedures that can be applied with infants and young children for the purposes of audiometric assessment, electroacoustic selection, and verification of hearing instrument performance are described. PMID:15004646
MAPCLUS: A Mathematical Programming Approach to Fitting the ADCLUS Model.
ERIC Educational Resources Information Center
Arabie, Phipps
1980-01-01
A new computing algorithm, MAPCLUS (Mathematical Programming Clustering), for fitting the Shephard-Arabie ADCLUS (Additive Clustering) model is presented. Details and benefits of the algorithm are discussed. (Author/JKS)
Fitting of m*/m with Divergence Curve for He3 Fluid Monolayer using Hole-driven Mott Transition
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tak
2012-02-01
The electron-electron interaction in strongly correlated systems plays an important role in the formation of an energy gap in solids. The breakdown of the energy gap is called the Mott metal-insulator transition (MIT), which differs from the Peierls MIT induced by the breakdown of the electron-phonon interaction generated by a change of the periodic lattice. It is known that correlated systems are inhomogeneous. In particular, the He3 fluid monolayer [1] and La1-xSrxTiO3 [2] are representative strongly correlated systems. The doping dependence of their effective carrier mass in the metal, m*/m, which indicates the magnitude of the correlation (Coulomb interaction) between electrons, shows a divergence. However, this divergence has not yet been fitted by a Mott-transition theory. In the case of He3, regarded as a Fermi system with one positive charge (2 electrons + 3 protons), the interaction between He3 atoms is regarded as the correlation in a strongly correlated system. In this presentation, we introduce a Hole-driven MIT with a divergence near the Mott transition [3] and fit the m*/m curve in the He3 [1] and La1-xSrxTiO3 systems with the Hole-driven MIT using m*/m = 1/(1-ρ^4), where ρ is the band filling. Moreover, it is shown that the physical meaning of the effective mass with the divergence is percolation, in which m*/m increases with increasing doping concentration, and that the magnitude of m*/m is constant. [1] Phys. Rev. Lett. 90, 115301 (2003). [2] Phys. Rev. Lett. 70, 2126 (1993). [3] Physica C 341-348, 259 (2000); Physica C 460-462, 1076 (2007).
Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.
ERIC Educational Resources Information Center
Buchanan, Trey
Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…
NASA Astrophysics Data System (ADS)
Hanafiah, Hazlenah; Jemain, Abdul Aziz
2013-11-01
In recent years, the study of fertility has received considerable attention among researchers abroad, following fears of fertility decline driven by rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied in this study, as it is an established and widely used model for analysing demographic aspects. A singular value decomposition approach is incorporated with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to measure the goodness of fit of the model. A fertility index forecast using a random walk with drift is then utilised to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable fit to the data. In addition, there is an apparent and continuous decline in age-specific fertility curves over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of related facilities and resources.
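A minimal sketch of the Lee-Carter decomposition and the random-walk-with-drift forecast of its period index, run on a synthetic rank-one fertility surface rather than the Malaysian data; the normalisation (b summing to one) follows common convention, and all numbers below are invented.

```python
import numpy as np

def lee_carter_fit(log_rates):
    """Lee-Carter decomposition: log m(x,t) ≈ a_x + b_x * k_t via SVD."""
    a = log_rates.mean(axis=1)                     # average age profile
    U, s, Vt = np.linalg.svd(log_rates - a[:, None], full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]
    scale = b.sum()                                # convention: sum(b) = 1
    return a, b / scale, k * scale

def forecast_k(k, horizon):
    """Random-walk-with-drift forecast of the period index k_t."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, horizon + 1)

# Synthetic rank-one surface: 7 age groups x 30 years, declining index
ages, years = 7, 30
true_a = np.linspace(-4.0, -2.0, ages)
true_b = np.full(ages, 1.0 / ages)
true_k = -0.5 * np.arange(years, dtype=float)
log_m = true_a[:, None] + np.outer(true_b, true_k)

a, b, k = lee_carter_fit(log_m)
k_future = forecast_k(k, 10)                       # 10-year forecast
```

On real data the index k would be forecast with a full ARIMA model; the drift-only forecast shown here is the special case the abstract names.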
2013-01-01
Background This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. Findings The model-fitting methods of Arrhenius and Savata are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore, it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Conclusions Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed. PMID:23383684
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2002-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
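The core of the second step, obtaining instantaneous frequency from the analytic signal, can be illustrated without the EMD stage. The FFT construction below is the standard one (zero the negative frequencies, double the positive ones), applied here to a plain test tone rather than an extracted IMF.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero negative frequencies, double positive."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin kept once
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Test signal: a 5 Hz tone sampled at 100 Hz for 2 s (an IMF would go here)
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.cos(2 * np.pi * 5.0 * t)
z = analytic_signal(x)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
```

For a single IMF the instantaneous frequency is well behaved, which is why the method decomposes the signal first; applied to a raw multicomponent signal the same formula can give physically meaningless values.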
Sorrell, Steve; Speirs, Jamie
2014-01-13
There is growing concern about the depletion of hydrocarbon resources and the risk of near-term peaks in production. These concerns hinge upon contested estimates of the recoverable resources of different regions and the associated forecasts of regional production. Beginning with Hubbert, an influential group of analysts have used growth curves both to estimate recoverable resources and to forecast future production. Despite widespread use, these 'curve-fitting' techniques remain a focus of misunderstanding and dispute. The aim of this paper is to classify and explain these techniques and to identify both their relative suitability in different circumstances and the expected level of confidence in their results. The paper develops a mathematical framework that maps curve-fitting techniques onto the available data for conventional oil and highlights the critical importance of the so-called 'reserve growth'. It then summarizes the historical origins, contemporary application and strengths and weaknesses of each group of curve-fitting techniques and uses illustrative data from a number of oil-producing regions to explore the extent to which these techniques provide consistent estimates of recoverable resources. The paper argues that the applicability of curve-fitting techniques is more limited than adherents claim, the confidence bounds on the results are wider than commonly assumed and the techniques have a tendency to underestimate recoverable resources. PMID:24298074
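One of the classic curve-fitting techniques the paper classifies, Hubbert linearization, can be sketched on synthetic logistic data; the production profile, parameter values, and early-data cutoff below are all invented for illustration.

```python
import numpy as np

# Synthetic logistic (Hubbert) production history
URR, r, t_peak = 2000.0, 0.15, 50.0     # URR = ultimately recoverable resource
t = np.arange(1, 80, dtype=float)
Q = URR / (1.0 + np.exp(-r * (t - t_peak)))    # cumulative production
P = np.diff(np.concatenate([[0.0], Q]))        # yearly production

# Hubbert linearization: P/Q = r * (1 - Q/URR) is a straight line in Q,
# so the Q-intercept of the fitted line estimates the recoverable resource.
mask = Q > 0.05 * URR                          # drop the noisy early tail
slope, intercept = np.polyfit(Q[mask], (P / Q)[mask], 1)
urr_est = -intercept / slope
```

With clean logistic data the intercept recovers the true URR closely; the paper's point is that on real regional data, where reserve growth and non-logistic behaviour intrude, the confidence bounds on such estimates are much wider than this idealized case suggests.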
The conical fit approach to modeling ionospheric total electron content
NASA Technical Reports Server (NTRS)
Sparks, L.; Komjathy, A.; Mannucci, A. J.
2002-01-01
The Global Positioning System (GPS) can be used to measure the integrated electron density along raypaths between satellites and receivers. Such measurements may, in turn, be used to construct regional and global maps of the ionospheric total electron content (TEC). Maps are generated by fitting measurements to an assumed ionospheric model.
Determination of a melting curve using the one-phase approach
NASA Astrophysics Data System (ADS)
Fuchizaki, Kazuhiro; Okamoto, Kazuma
2016-01-01
The melting curve of the modified Lennard-Jones solid is derived using a one-phase approach. The Padé approximation employed for solving the melting-curve equation converges at the middle stage, giving rise to the well-known Simon curve that satisfactorily captures the actual melting curve found from a molecular dynamics simulation over a pressure range of four orders of magnitude. This situation is justified because the solid under consideration was shown to satisfy the thermodynamic condition under which Simon's curve becomes exact.
A new approach to the analysis of Mira light curves
NASA Technical Reports Server (NTRS)
Mennessier, M. O.; Barthes, D.; Mattei, J. A.
1990-01-01
Two different but complementary methods for predicting Mira luminosities are presented. One method is derived from a Fourier analysis; it requires performing deconvolution, and its results are uncertain due to the inherent instability of deconvolution problems. The other method is a learning method utilizing artificial intelligence techniques, in which a light curve is presented as an ordered sequence of pseudocycles, and rules are learned by linking the characteristics of several consecutive pseudocycles to one characteristic of the future cycle. It is observed that agreement between these methods is obtainable when it is possible to eliminate similar false frequencies from the preliminary power spectrum and to improve the degree of confidence in the rules.
Guo, Lianping; Tian, Shulin; Jiang, Jun
2015-03-01
This paper proposes an algorithm to estimate the channel mismatches in a time-interleaved analog-to-digital converter (TIADC) based on fractional delay (FD) and sine curve fitting. One channel is chosen as the reference channel, and FD is applied to its output samples to obtain the ideal, mismatch-free samples of the non-reference channels. Based on the least-squares method, sine curves are fitted to the ideal and the actual samples of the non-reference channels, and the mismatch parameters can then be estimated by comparing the ideal sine curves with the actual ones. The principle of this algorithm is simple and easily understood. Moreover, its implementation needs no extra circuits, lowering the hardware cost. Simulation results show that the estimation accuracy of this algorithm can be controlled within 2%. Finally, the practicability of this algorithm is verified by measurements of the channel mismatch errors of a two-channel TIADC prototype. PMID:25832264
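When the test-tone frequency is known, the sine-fitting step reduces to linear least squares. The sketch below assumes a known input frequency and invents a gain and offset mismatch for one channel, so it illustrates the fitting idea rather than the paper's exact FD-based procedure.

```python
import numpy as np

def sine_fit(t, y, freq):
    """Least-squares fit of y ≈ A*sin(wt) + B*cos(wt) + C for known freq."""
    w = 2 * np.pi * freq
    M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    amplitude = np.hypot(A, B)
    phase = np.arctan2(B, A)
    return amplitude, phase, C

# Hypothetical two-channel scenario: channel 1 has gain and offset mismatch
fs, f0 = 1000.0, 37.0
t = np.arange(256) / fs
ref = np.sin(2 * np.pi * f0 * t)                  # reference channel
ch1 = 0.95 * np.sin(2 * np.pi * f0 * t) + 0.02    # 5% gain error, 0.02 offset
amp_ref, _, off_ref = sine_fit(t, ref, f0)
amp_ch1, _, off_ch1 = sine_fit(t, ch1, f0)
gain_mismatch = amp_ch1 / amp_ref - 1.0
```

Comparing the fitted amplitude, phase, and offset of each channel against the reference yields the gain, timing, and offset mismatch estimates respectively.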
Perrella, F W
1988-11-01
EZ-FIT, an interactive microcomputer software package, has been developed for the analysis of enzyme kinetic and equilibrium binding data. EZ-FIT was designed as a user-friendly, menu-driven package with facilities for data entry, editing, and filing. Data input permits the conversion of cpm, dpm, or optical density to molar per minute per milligram protein. Data can be fit to any of 14 model equations, including Michaelis-Menten, Hill, isoenzyme, inhibition, dual substrate, agonist, antagonist, and modified integrated Michaelis-Menten. The program uses the Nelder-Mead simplex and Marquardt nonlinear regression algorithms sequentially. A report of the results includes the parameter estimates with standard errors, a Student t test to determine the accuracy of the parameter values, a runs test of the residuals, identification of outlying data, an Akaike information criterion test for goodness of fit, and, when the experimental variance is included, a chi-square test for goodness of fit. Several different graphs can be displayed: an X-Y, a Scatchard, an Eadie-Hofstee, a Lineweaver-Burk, a semilogarithmic, and a residual plot. The data analysis report and graphs are designed to evaluate the goodness of fit of the data to a particular model. PMID:3239747
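For noiseless data the Michaelis-Menten parameters can even be recovered by the Lineweaver-Burk linearization that EZ-FIT plots; the substrate concentrations and true parameters below are invented.

```python
import numpy as np

# Hypothetical noiseless Michaelis-Menten data: v = Vmax * S / (Km + S)
Vmax_true, Km_true = 10.0, 2.5
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # substrate concentrations
v = Vmax_true * S / (Km_true + S)

# Lineweaver-Burk linearization: 1/v = (Km/Vmax) * (1/S) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_est = 1.0 / intercept
Km_est = slope * Vmax_est
```

With real (noisy) data this linearization distorts the error structure, which is precisely why EZ-FIT fits the nonlinear model directly, using the simplex algorithm to find a good starting region and Marquardt refinement for the final estimates.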
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least-squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
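The order-increase step can be illustrated with a synthetic force-coefficient curve: fitting successively higher polynomial orders and watching the residual sum of squares collapse once the quadratic effect is captured. The data, coefficients, and noise level are invented.

```python
import numpy as np

# Hypothetical force-coefficient data: quadratic trend plus small noise
rng = np.random.default_rng(1)
alpha = np.linspace(-4, 12, 33)                    # angle of attack, deg
cf = 0.02 + 0.11 * alpha + 0.004 * alpha**2 + rng.normal(0, 0.01, alpha.size)

def fit_rss(order):
    """Residual sum of squares for a least-squares polynomial fit."""
    coeffs = np.polyfit(alpha, cf, order)
    resid = cf - np.polyval(coeffs, alpha)
    return np.sum(resid**2)

rss = {k: fit_rss(k) for k in (1, 2, 3)}
```

The large drop from order 1 to order 2, and the negligible further drop at order 3, is the pattern an analysis of variance would formalize when deciding where to stop raising the fit order.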
A computational approach to the twin paradox in curved spacetime
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng
2016-09-01
Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
Riggs, H.C.
1968-01-01
This manual describes graphical and mathematical procedures for preparing frequency curves from samples of hydrologic data. It also discusses the theory of frequency curves, compares advantages of graphical and mathematical fitting, suggests methods of describing graphically defined frequency curves analytically, and emphasizes the correct interpretations of a frequency curve.
Sakurai-Yageta, Mika; Maruyama, Tomoko; Suzuki, Takashi; Ichikawa, Kazuhisa; Murakami, Yoshinori
2015-01-01
Protein components of cell adhesion machinery show continuous renewal even in the static state of epithelial cells and participate in the formation and maintenance of normal epithelial architecture and tumor suppression. CADM1 is a tumor suppressor belonging to the immunoglobulin superfamily of cell adhesion molecules and forms a cell adhesion complex with an actin-binding protein, 4.1B, and a scaffold protein, MPP3, in the cytoplasm. Here, we investigate dynamic regulation of the CADM1-4.1B-MPP3 complex in mature cell adhesion by fluorescence recovery after photobleaching (FRAP) analysis. Traditional FRAP analyses were performed for a relatively short period of around 10 min. Here, thanks to recent advances in sensitive laser detector systems, we examine FRAP of the CADM1 complex for a longer period of 60 min and analyze the recovery with exponential curve-fitting to distinguish fractions with different diffusion constants. This approach reveals that the fluorescence recovery of CADM1 is fitted by a single exponential function with a time constant (τ) of approximately 16 min, whereas 4.1B and MPP3 are fitted by a double exponential function with two τs of approximately 40-60 sec and 16 min. The longer τ is similar to that of CADM1, suggesting that 4.1B and MPP3 have two distinct fractions, one forming a complex with CADM1 and the other present as a free pool. Fluorescence loss in photobleaching analysis supports the presence of a free pool of these proteins near the plasma membrane. Furthermore, double exponential fitting makes it possible to estimate the ratios of 4.1B and MPP3 present as a free pool and as a complex with CADM1 as approximately 3:2 and 3:1, respectively. Our analyses reveal a central role of CADM1 in stabilizing the complex with 4.1B and MPP3 and provide insight into the dynamics of adhesion complex formation. PMID:25780926
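The single- versus double-exponential comparison can be sketched as follows, using time constants suggested by the abstract (roughly 1 min for a fast free pool, 16 min for the slow CADM1-bound pool) on synthetic recovery data; the amplitudes and starting guesses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, A, tau):
    """One-component FRAP recovery."""
    return A * (1 - np.exp(-t / tau))

def double_exp(t, A1, tau1, A2, tau2):
    """Two-component FRAP recovery: fast free pool + slow bound pool."""
    return A1 * (1 - np.exp(-t / tau1)) + A2 * (1 - np.exp(-t / tau2))

# Synthetic recovery: 60% fast pool (tau ~1 min), 40% slow pool (tau ~16 min)
t = np.linspace(0, 60, 121)                      # minutes
y = double_exp(t, 0.6, 1.0, 0.4, 16.0)

p1, _ = curve_fit(single_exp, t, y, p0=[1.0, 5.0])
p2, _ = curve_fit(double_exp, t, y, p0=[0.5, 2.0, 0.5, 20.0])
rss1 = np.sum((y - single_exp(t, *p1)) ** 2)
rss2 = np.sum((y - double_exp(t, *p2)) ** 2)
```

The fitted amplitude ratio A1:A2 is what lets the authors estimate the free-pool versus complex-bound fractions; comparing rss1 and rss2 is the basis for preferring the double-exponential model.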
An interactive user-friendly approach to surface-fitting three-dimensional geometries
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1988-01-01
A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
Physical fitness: An operator's approach to coping with shift work
Hanks, D.H.
1989-01-01
There is a strong correlation between a shift worker's ability to remain alert and the physical fitness of the individual. Alertness is a key element of a nuclear plant operator's ability to effectively monitor and control plant status. The constant changes in one's metabolism caused by the rotation of work (and sleep) hours can be devastating to his or her health. Many workers with longevity in the field, however, have found it beneficial to maintain some sort of workout or sport activity, feeling that this activity offsets the physical burden of backshift. The author's experience working shifts for 10 years and his reported increase in alertness through exercise and diet manipulation are described in this paper.
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed, and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
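The fitting strategy described above (power-law formulas with constants evaluated by a least-squares logarithmic fit) can be sketched as an ordinary linear least-squares problem in log space. The variables and data below are illustrative placeholders, not the report's flow-field quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(1.0, 5.0, 50)   # e.g. a nose-radius-like predictor (assumed)
x2 = rng.uniform(1.0, 2.0, 50)   # e.g. a velocity-like predictor (assumed)
q = 2.0 * x1**1.5 * x2**3.0      # synthetic noiseless "data" with known constants

# Assume q = C * x1^a * x2^b; then log q = log C + a log x1 + b log x2,
# which is linear in (log C, a, b) and solvable by least squares.
A = np.column_stack([np.ones_like(x1), np.log(x1), np.log(x2)])
coef, *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
```

On noiseless synthetic data the fit recovers the constants essentially exactly; with real data the residuals give the average and maximum deviations quoted in the abstract.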
Neitzel, Anne-Christin; Stamer, Eckhard; Junge, Wolfgang; Thaller, Georg
2015-05-01
Laboratory somatic cell count (LSCC) records are usually recorded monthly and provide an important information source for breeding and herd management. Daily milk viscosity detection in composite milking (expressed as drain time) with an automated on-line California Mastitis Test (CMT) could serve immediately as an early predictor of udder diseases and might be used as a selection criterion to improve udder health. The aim of the present study was to clarify the relationship between the well-established LSCS and the new trait, 'drain time', and to estimate their correlations with important production traits. Data were recorded on the dairy research farm Karkendamm in Germany. Viscosity sensors were installed on every fourth milking stall in the rotary parlour to measure daily drain time records. Weekly LSCC and milk composition data were available. Two data sets were created containing records of 187,692 milkings from 320 cows (D1) and 25,887 drain time records from 311 cows (D2). Different fixed effect models, describing the log-transformed drain time (logDT), were fitted to achieve applicable models for further analysis. Lactation curves were modelled with standard parametric functions (Ali and Schaeffer, Legendre polynomials of second and third degree) of days in milk (DIM). Random regression models were further applied to estimate the correlations of cow effects between logDT and LSCS and with further important production traits. LogDT and LSCS were most strongly correlated in mid-lactation (r = 0.78). Correlations between logDT and production traits were low to medium. The highest correlations were reached in late lactation between logDT and milk yield (r = -0.31), between logDT and protein content (r = 0.30), and in early as well as late lactation between logDT and lactose content (r = -0.28). The results of the present study show that drain time could be used as a new trait for daily mastitis control. PMID:25731191
A Global Fitting Approach For Doppler Broadening Thermometry
NASA Astrophysics Data System (ADS)
Amodio, Pasquale; Moretti, Luigi; De Vizia, Maria Domenica; Gianfrani, Livio
2014-06-01
Very recently, a spectroscopic determination of the Boltzmann constant, kB, has been performed at the Second University of Naples by means of a rather sophisticated implementation of Doppler Broadening Thermometry (DBT) [1]. Performed on an 18O-enriched water sample, at a wavelength of 1.39 µm, the experiment has provided a value for kB with a combined uncertainty of 24 parts over 10^6, which is the best result obtained so far by using an optical method. In the spectral analysis procedure, the partially correlated speed-dependent hard-collision (pC-SDHC) model was adopted. The uncertainty budget has clearly revealed that the major contributions come from the statistical uncertainty (type A) and from the uncertainty associated with the line-shape model (type B) [2]. In the present work, we present the first results of a theoretical and numerical effort aimed at reducing these uncertainty components. It is well known that molecular line shapes exhibit clear deviations from the time-honoured Voigt profile. Even in the case of a well-isolated spectral line, under the influence of binary collisions, in the Doppler regime, the shape can be quite complicated by the joint occurrence of velocity-changing collisions and speed-dependent effects. The partially correlated speed-dependent Keilson-Storer profile (pC-SDKS) has recently been proposed as a very realistic model, capable of reproducing very accurately the absorption spectra of self-colliding water molecules in the near infrared [3]. Unfortunately, the model is so complex that it cannot be implemented into a fitting routine for the analysis of experimental spectra. Therefore, we have developed a MATLAB code to simulate a variety of H2(18)O spectra in thermodynamic conditions identical to those of our DBT experiment, using the pC-SDKS model. The numerical calculations to determine such a profile have a very large computational cost, resulting from a very sophisticated iterative procedure. Hence, the numerically simulated spectra
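Setting aside the line-shape subtleties the abstract discusses, the core relation DBT exploits can be sketched in a few lines: the 1/e half-width of the Gaussian (Doppler) component of a line at known gas temperature T satisfies dnu = (nu0/c)·sqrt(2·kB·T/m), so a measured width yields kB. The constants below (approximate H2(18)O molecular mass, a 1.39 µm transition, room temperature) are illustrative assumptions, not the experiment's actual data, and the sketch is in Python rather than the authors' MATLAB.

```python
import numpy as np

c = 299792458.0                 # speed of light, m/s
u = 1.66053906660e-27           # atomic mass unit, kg
m = 20.0 * u                    # approximate H2(18)O molecular mass (assumed)
nu0 = c / 1.39e-6               # line-centre frequency for a 1.39 um transition, Hz
T = 296.0                       # gas temperature, K (assumed known)
kB_true = 1.380649e-23          # exact SI value, J/K

# forward model: Doppler 1/e half-width for this line and temperature
dnu = nu0 / c * np.sqrt(2 * kB_true * T / m)

# DBT inversion: recover kB from the "measured" width
kB_est = m * c**2 * (dnu / nu0)**2 / (2 * T)
```

The experimental difficulty, as the abstract explains, lies entirely in extracting the Gaussian width from a line shape that is not a pure Voigt profile.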
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2016-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this article we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from two popular econometric approaches:…
Wavelet transform approach for fitting financial time series data
NASA Astrophysics Data System (ADS)
Ahmed, Amel Abdoullah; Ismail, Mohd Tahir
2015-10-01
This study investigates a newly developed technique, a combined wavelet filtering and vector error correction (VEC) model, to study the dynamic relationship among financial time series. A wavelet filter is used to remove noise from the daily data sets of the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the wavelet-filtered series and the original series are then analyzed by a cointegration test and a VEC model. The cointegration test affirms the existence of cointegration between the studied series, indicating a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model (DWT with VEC model) and the traditional model (VEC model alone) demonstrates that the proposed model fits the financial stock market series better and reveals more reliable information about the relationships among the stock markets.
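The VEC modelling stage is not reproduced here; as a sketch of the wavelet filtering stage only, the following applies a one-level Haar transform with soft thresholding of the detail coefficients to a noisy series. The series, noise level, and threshold are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
clean = np.sin(np.linspace(0, 6 * np.pi, n))   # stand-in for an underlying signal
noisy = clean + rng.normal(0, 0.4, n)          # observed noisy series

# one-level Haar analysis: approximation and detail coefficients
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# universal soft threshold on the detail band (noise sigma assumed known)
thr = 0.4 * np.sqrt(2 * np.log(n))
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

# Haar synthesis back to the time domain
denoised = np.empty(n)
denoised[0::2] = (a + d) / np.sqrt(2)
denoised[1::2] = (a - d) / np.sqrt(2)
```

In practice a multi-level transform with a smoother wavelet would be used; the filtered series would then feed the cointegration test and VEC model.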
Mac, Amy; Rhodes, Gillian; Webster, Michael A.
2015-01-01
Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged by arguing that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, then the two analyses both agree in pointing to a pattern of renormalization in age aftereffects. PMID:27551353
ERIC Educational Resources Information Center
Chernyshenko, Oleksandr S.; Stark, Stephen; Williams, Alex
2009-01-01
The purpose of this article is to offer a new approach to measuring person-organization (P-O) fit, referred to here as "Latent fit." Respondents were administered unidimensional forced choice items and were asked to choose the statement in each pair that better reflected the correspondence between their values and those of the organization;…
Birchler, W.D.; Schilling, S.A.
2001-02-01
The purpose of this report is to demonstrate that modern computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems can be used in the Department of Energy (DOE) Nuclear Weapons Complex (NWC) to design new and remodel old products, fabricate old and new parts, and reproduce legacy data within the inspection uncertainty limits. In this study, two two-dimensional splines are compared with several modern CAD curve-fitting modeling algorithms. The first curve-fitting algorithm is called the Wilson-Fowler Spline (WFS), and the second is called a parametric cubic spline (PCS). Modern CAD systems usually utilize either parametric cubic and/or B-splines.
Taubman, Hadas; Vaadia, Eilon; Paz, Rony; Chechik, Gal
2013-06-01
Neural responses are commonly studied in terms of "tuning curves," characterizing changes in neuronal response as a function of a continuous stimulus parameter. In the motor system, neural responses to movement direction often follow a bell-shaped tuning curve for which the exact shape determines the properties of neuronal movement coding. Estimating the shape of that tuning curve robustly is hard, especially when directions are sampled unevenly and at a coarse resolution. Here, we describe a Bayesian estimation procedure that improves the accuracy of curve-shape estimation even when the curve is sampled unevenly and at a very coarse resolution. Using this approach, we characterize the movement direction tuning curves in the supplementary motor area (SMA) of behaving monkeys. We compare the SMA tuning curves to tuning curves of neurons from the primary motor cortex (M1) of the same monkeys, showing that the tuning curves of the SMA neurons tend to be narrower and shallower. We also show that these characteristics do not depend on the specific location in each region. PMID:23468391
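The abstract does not specify the authors' Bayesian estimator; as a minimal stand-in for the same idea (regularized estimation of a direction tuning curve from few, unevenly spaced samples), the following computes a MAP fit of a cosine tuning model under a Gaussian prior on the coefficients, which reduces to ridge regression. All directions, rates, and hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# coarse, uneven direction sampling, as in the motivating problem
theta = np.deg2rad(np.array([0.0, 45.0, 90.0, 200.0, 315.0]))
pd_true = np.deg2rad(120.0)                          # true preferred direction
rates = 10.0 + 8.0 * np.cos(theta - pd_true) + rng.normal(0, 0.2, theta.size)

# cosine tuning: r(theta) = b0 + b1*cos(theta) + b2*sin(theta)
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
lam = 0.05   # Gaussian prior precision -> ridge penalty (MAP estimate)
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ rates)

pd_hat = np.arctan2(w[2], w[1])    # recovered preferred direction
depth_hat = np.hypot(w[1], w[2])   # recovered modulation depth
```

The prior term stabilizes the estimate precisely in the coarse, uneven sampling regime the abstract describes; tuning width and depth comparisons (e.g. SMA vs M1) would be read off the fitted parameters.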
An optimization approach for fitting canonical tensor decompositions.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2009-02-01
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
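For context, the ALS baseline the abstract compares against can be sketched as follows; the authors' gradient-based method is not reproduced, and the tensor sizes, rank, and iteration count are assumptions.

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding with the remaining axes in original order
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # column-wise Kronecker product: rows indexed by (i, j), j fastest
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(T, rank, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(iters):
        for n in range(T.ndim):
            others = [f for m, f in enumerate(factors) if m != n]
            kr = others[0]
            for f in others[1:]:
                kr = khatri_rao(kr, f)
            gram = np.ones((rank, rank))
            for f in others:
                gram *= f.T @ f            # Hadamard product of Gram matrices
            factors[n] = unfold(T, n) @ kr @ np.linalg.pinv(gram)
    return factors

# exact rank-2 tensor; ALS should recover it up to scaling/permutation
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
Ah, Bh, Ch = cp_als(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
rel_err = np.linalg.norm(That - T) / np.linalg.norm(T)
```

Each inner update is a linear least-squares solve, which is why ALS is fast per iteration; the accuracy issues the abstract mentions arise on harder, noisy, or ill-conditioned problems than this toy case.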
Consensus among flexible fitting approaches improves the interpretation of cryo-EM data.
Ahmed, Aqeel; Whitford, Paul C; Sanbonmatsu, Karissa Y; Tama, Florence
2012-02-01
Cryo-electron microscopy (cryo-EM) can provide important structural information on large macromolecular assemblies in different conformational states. Recent years have seen an increase in structures deposited in the Protein Data Bank (PDB) by fitting a high-resolution structure into its low-resolution cryo-EM map. A commonly used protocol for accommodating the conformational changes between the X-ray structure and the cryo-EM map is rigid body fitting of individual domains. With the emergence of different flexible fitting approaches, there is a need to compare and revise these different protocols for the fitting. We have applied three diverse automated flexible fitting approaches on a protein dataset for which rigid domain fitting (RDF) models have been deposited in the PDB. In general, a consensus is observed in the conformations, which indicates a convergence from these theoretically different approaches to the most probable solution corresponding to the cryo-EM map. However, the results show that convergence might not be observed for proteins with complex conformational changes or with missing densities in the cryo-EM map. In contrast, RDF structures deposited in the PDB can represent conformations that differ not only from the consensus obtained by flexible fitting but also from X-ray crystallography. Thus, this study emphasizes that a "consensus" achieved by the use of several automated flexible fitting approaches can provide a higher level of confidence in the modeled configurations. Following this protocol not only increases the confidence level of fitting, but also highlights protein regions with uncertain fitting. Hence, this protocol can lead to better interpretation of cryo-EM data. PMID:22019767
A new approach for magnetic curves in 3D Riemannian manifolds
Bozkurt, Zehra; Gök, Ismail; Yaylı, Yusuf; Ekmekci, F. Nejat
2014-05-15
A magnetic field is defined by the property that its divergence is zero in a three-dimensional oriented Riemannian manifold. Each magnetic field generates a magnetic flow whose trajectories are curves, called magnetic curves. In this paper, we give a new variational approach to study the magnetic flow associated with the Killing magnetic field in a three-dimensional oriented Riemannian manifold, (M^3, g). We then investigate the trajectories of these magnetic fields, called N-magnetic and B-magnetic curves.
ERIC Educational Resources Information Center
Rousseau, Ronald
1994-01-01
Discussion of informetric distributions shows that generalized Leimkuhler functions give proper fits to a large variety of Bradford curves, including those exhibiting a Groos droop or a rising tail. The Kolmogorov-Smirnov test is used to test goodness of fit, and least-square fits are compared with Egghe's method. (Contains 53 references.) (LRW)
NASA Astrophysics Data System (ADS)
Bhattacharya, Kolahal; Banerjee, Sudeshna; Mondal, Naba K.
2016-07-01
In the context of track fitting problems by a Kalman filter, the appropriate functional forms of the elements of the random process noise matrix are derived for tracking through thick layers of dense materials and magnetic field. This work complements the form of the process noise matrix obtained by Mankel [1].
Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Chotard, N.; Copin, Y.; Gangler, E.; and others
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
Palumbo, Letizia; Ruta, Nicole; Bertamini, Marco
2015-01-01
Most people prefer smoothly curved shapes over more angular shapes. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task we tested with a stick figure (i.e., the manikin) approach and avoidance reactions to curved and angular polygons. We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference. PMID:26460610
An Intuitive Approach to Geometric Continuity for Parametric Curves and Surfaces (Extended Abstract)
NASA Technical Reports Server (NTRS)
Derose, T. D.; Barsky, B. A.
1985-01-01
The notion of geometric continuity is extended to arbitrary order for curves and surfaces, and an intuitive development of the constraint equations necessary for it is presented. The constraints result from a direct application of the univariate chain rule for curves and the bivariate chain rule for surfaces. The constraints provide for the introduction of quantities known as shape parameters. The approach taken is important for several reasons: First, it generalizes geometric continuity to arbitrary order for both curves and surfaces. Second, it shows the fundamental connection between geometric continuity of curves and that of surfaces. Third, owing to the chain rule derivation, constraints of any order can be determined more easily than with derivations based exclusively on geometric measures.
Ristanović, D; Ristanović, D; Malesević, J; Milutinović, B
1983-01-01
Plasma kinetics of bromsulphalein (BSP) after a single injection into the bloodstream of the rat with total obstruction of the common bile duct was examined. The concentrations of BSP were determined colorimetrically. A monoexponential plus a general first-degree function in time with four unknown parameters was fitted. Two programs were developed for the Texas Instruments 59 programmable calculator to estimate the values of all the parameters by an iteration procedure. The programs executed at about twice normal speed. PMID:6617168
Mougabure-Cueto, G; Sfara, V
2016-04-25
Dose-response relations can be obtained from systems at any structural level of biological matter, from the molecular to the organismic level. There are two types of approaches for analyzing dose-response curves: a deterministic approach, based on the law of mass action, and a statistical approach, based on the assumed probability distribution of phenotypic characters. Models based on the law of mass action have been proposed to analyze dose-response relations across the entire range of biological systems. The purpose of this paper is to discuss the principles that determine the dose-response relations. Dose-response curves of simple systems are the result of chemical interactions between reacting molecules, and therefore are supported by the law of mass action. In consequence, the shape of these curves is fully explained by physicochemical features. However, dose-response curves of bioassays with quantal response are not explained by the simple collision of molecules but by phenotypic variations among individuals, and can be interpreted in terms of individual tolerances. The expression of tolerance is the result of many genetic and environmental factors and thus can be considered a random variable. In consequence, the shape of its associated dose-response curve has no physicochemical bearing; instead, it originates from random biological variation. Due to the randomness of tolerance, there is no reason to use deterministic equations for its analysis; on the contrary, statistical models are the appropriate tools for analyzing these dose-response relations. PMID:26952004
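The statistical approach the abstract advocates, a tolerance distribution fitted to quantal data, can be sketched with a log-logistic dose-response curve whose location parameter is the LD50. The doses, true parameters, and noiseless proportions below are synthetic assumptions, not data from any bioassay.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ld50, slope):
    # cumulative log-logistic tolerance distribution: P(response | dose)
    return 1.0 / (1.0 + np.exp(-slope * (np.log(dose) - np.log(ld50))))

doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
ld50_true, slope_true = 6.0, 1.8
mortality = log_logistic(doses, ld50_true, slope_true)  # observed proportions (noiseless)

# nonlinear least-squares fit of the tolerance distribution
(ld50_hat, slope_hat), _ = curve_fit(log_logistic, doses, mortality, p0=[4.0, 1.0])
```

With real quantal counts one would weight by binomial variance or use probit/logit maximum likelihood; the point here is only that the fitted object is a probability distribution of tolerances, not a mass-action rate law.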
NASA Technical Reports Server (NTRS)
Degnan, J. J.; Walker, H. E.; Mcelroy, J. H.; Mcavoy, N.; Zagwodski, T.
1972-01-01
A least squares curve-fitting algorithm is derived which allows the simultaneous estimation of the small signal gain and the saturation intensity from an arbitrary number of data points relating power output to the incidence angle of an internal coupling plate. The method is used to study the dependence of the two parameters on tube pressure and discharge current in a waveguide CO2 laser having a 2 mm diameter capillary. It is found that, at pressures greater than 28 torr, rising CO2 temperature degrades the small signal gain at current levels as low as three milliamperes.
Schiff, J D
1983-05-01
Equations of the Michaelis-Menten form are frequently encountered in a number of areas of biochemical and pharmacological research. A program is presented for use on the programmable TI-59 calculator with added printer which performs an iterative least-squares fit of up to 80 data pairs to this equation and estimates the standard deviations and standard errors of the determined parameters. The program assigns equal weights to errors over the entire data range and is thus appropriate for situations in which data precision is independent of amplitude. PMID:6874133
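The TI-59 program itself is not reproduced here; as a minimal sketch of the same kind of unweighted iterative least-squares fit of the Michaelis-Menten equation v = Vmax·S/(Km + S), the following runs a plain Gauss-Newton loop on synthetic noiseless data. Starting values and data are illustrative assumptions.

```python
import numpy as np

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # substrate concentrations
v = 10.0 * S / (2.0 + S)                         # noiseless data: Vmax = 10, Km = 2

Vmax, Km = 5.0, 1.0                              # rough starting values
for _ in range(50):
    pred = Vmax * S / (Km + S)
    # Jacobian of the model with respect to (Vmax, Km)
    J = np.column_stack([S / (Km + S), -Vmax * S / (Km + S) ** 2])
    # Gauss-Newton step: linear least squares on the residuals
    step, *_ = np.linalg.lstsq(J, v - pred, rcond=None)
    Vmax, Km = Vmax + step[0], Km + step[1]
```

Equal weighting of all residuals, as in the program described above, is appropriate when data precision is independent of amplitude; otherwise each row of J and the residual would be scaled by the reciprocal standard deviation.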
Devereux, Mike; Gresh, Nohad; Piquemal, Jean-Philip; Meuwly, Markus
2014-08-01
A supervised, semiautomated approach to force field parameter fitting is described and applied to the SIBFA polarizable force field. The I-NoLLS interactive, nonlinear least squares fitting program is used as an engine for parameter refinement while keeping parameter values within a physical range. Interactive fitting is shown to avoid many of the stability problems that frequently afflict highly correlated, nonlinear fitting problems occurring in force field parametrizations. The method is used to obtain parameters for the H2O, formamide, and imidazole molecular fragments and their complexes with the Mg(2+) cation. Reference data obtained from ab initio calculations using an aug-cc-pVTZ basis set exploit advances in modern computer hardware to provide a more accurate parametrization of SIBFA than has previously been available. PMID:24965869
Understanding the distribution of fitness effects of mutations by a biophysical-organismal approach
NASA Astrophysics Data System (ADS)
Bershtein, Shimon
2011-03-01
The distribution of fitness effects of mutations is central to many questions in evolutionary biology. However, it remains poorly understood, primarily because a fundamental connection that exists between the fitness of organisms and the molecular properties of the proteins encoded by their genomes is largely overlooked by traditional research approaches. Past efforts to bridge this gap followed the ``evolution first'' paradigm, whereby populations were subjected to selection under certain conditions, and mutations which emerged in adapted populations were analyzed using genomic approaches. The results obtained in the framework of this approach, while often useful, are not easily interpretable because mutations get fixed due to a convolution of multiple causes. We have undertaken a conceptually opposite strategy: mutations with known biophysical and biochemical effects on E. coli's essential proteins (based on computational analysis and in vitro measurements) were introduced into the organism's chromosome, and the resulting fitness effects were monitored. Studying the distribution of fitness effects of such fully controlled replacements revealed a very complex fitness landscape, where the impact of the microscopic properties of the mutated proteins (folding, stability, and function) is modulated on a macroscopic, whole-genome level. Furthermore, the magnitude of the cellular response to the introduced mutations seems to depend on the thermodynamic status of the mutant.
A Selective Refinement Approach for Computing the Distance Functions of Curves
Laney, D A; Duchaineau, M A; Max, N L
2000-12-01
We present an adaptive signed distance transform algorithm for curves in the plane. A hierarchy of bounding boxes is required for the input curves. We demonstrate the algorithm on the isocontours of a turbulence simulation. The algorithm provides guaranteed error bounds with a selective refinement approach. The domain over which the signed distance function is desired is adaptively triangulated, and piecewise discontinuous linear approximations are constructed within each triangle. The resulting transform performs work only where requested and does not rely on a preset sampling rate or other constraints.
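The adaptive triangulation and error bounds of the abstract above are not reproduced; as the brute-force reference computation that such an algorithm accelerates, the following evaluates the signed distance from a point to a closed polyline, with the sign taken from an even-odd ray-crossing test. The geometry is an illustrative assumption.

```python
import numpy as np

def seg_dist(p, a, b):
    # unsigned distance from point p to segment ab
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def inside(p, poly):
    # even-odd ray-crossing test for a closed polygon
    c = False
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        if (a[1] > p[1]) != (b[1] > p[1]):
            x_cross = a[0] + (p[1] - a[1]) * (b[0] - a[0]) / (b[1] - a[1])
            if p[0] < x_cross:
                c = not c
    return c

def signed_distance(p, poly):
    p = np.asarray(p, float)
    poly = [np.asarray(q, float) for q in poly]
    d = min(seg_dist(p, poly[i], poly[(i + 1) % len(poly)])
            for i in range(len(poly)))
    return -d if inside(p, poly) else d   # negative inside, positive outside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

This costs O(segments) per query; the paper's bounding-box hierarchy and selective refinement exist precisely to avoid paying that cost everywhere.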
NASA Astrophysics Data System (ADS)
Westberg, Jonas; Wang, Junyang; Axner, Ove
2012-11-01
Wavelength modulation (WM) produces lock-in signals that are proportional to various Fourier coefficients of the modulated lineshape function of the molecular transition targeted. Unlike the case for the Lorentzian lineshape function, there is no known analytical expression for the Fourier coefficients of a modulated Voigt lineshape function; they consist of nested integrals that have to be solved numerically, which is often time-consuming and prevents real-time curve fitting. Previous attempts to overcome these limitations have so far consisted of approximations of the Voigt lineshape function, which brings in inaccuracies. In this paper we demonstrate a new means to calculate the lineshape of nf-WM absorption signals from a transition with a Voigt profile. It is shown that the signal can conveniently be expressed as a convolution of one or several Fourier coefficients of a modulated Lorentzian lineshape function, for which there are analytical expressions, and the Maxwell-Boltzmann velocity distribution for the system under study. Mathematically, the procedure involves no approximations, so its accuracy is limited only by the numerical precision of the software used (in this case ~10^-16), while the calculation time is reduced by roughly three orders of magnitude (a factor of ~10^3) as compared to the conventional methodology, i.e. typically from the second to the millisecond range. This makes feasible real-time curve fitting to lock-in output signals from modulated Voigt profiles.
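The identity underlying the speed-up, that the Fourier-coefficient integral commutes with the Lorentzian-Gaussian convolution, can be checked numerically. The sketch below compares the 2f coefficient computed directly from a discretized Voigt against the 2f coefficient of the bare Lorentzian convolved with the Gaussian; the widths and modulation amplitude are arbitrary dimensionless assumptions, not the authors' settings.

```python
import numpy as np

dx = 0.02
x = np.arange(-40.0, 40.0 + dx / 2, dx)        # symmetric detuning grid
gamma, sigma, a = 1.0, 1.0, 2.0                # Lorentz HWHM, Gaussian std, mod. amplitude

L = (gamma / np.pi) / (x**2 + gamma**2)        # Lorentzian lineshape
G = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
V = np.convolve(L, G, mode="same") * dx        # Voigt profile on the grid

def harmonic(profile, n, amp, grid, centers):
    # nth Fourier coefficient of the modulated lineshape at the given centres
    theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    return np.array([
        2.0 * np.mean(np.interp(c + amp * np.cos(theta), grid, profile)
                      * np.cos(n * theta))
        for c in centers
    ])

centers = np.linspace(-5.0, 5.0, 21)
h2_direct = harmonic(V, 2, a, x, centers)      # 2f signal from the Voigt itself
h2_lor = harmonic(L, 2, a, x, x)               # 2f signal of the bare Lorentzian
h2_conv = np.interp(centers, x, np.convolve(h2_lor, G, mode="same") * dx)
```

The two routes agree to discretization error; in the paper's scheme the Lorentzian harmonic is available analytically, so only the final one-dimensional convolution remains to be done numerically.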
ATWS Analysis with an Advanced Boiling Curve Approach within COBRA 3-CP
Gensler, A.; Knoll, A.; Kuehnel, K.
2007-07-01
In 2005 the German Reactor Safety Commission issued specific requirements on core coolability demonstration for PWR ATWS (anticipated transients without scram). Thereupon AREVA NP performed detailed analyses for all German PWRs. For a German KONVOI plant the results of an ATWS licensing analysis are presented. The plant dynamic behavior is calculated with NLOOP, while the hot channel analysis is performed with the thermal hydraulic computer code COBRA 3-CP. The application of the fuel rod model included in COBRA 3-CP is essential for this type of analysis. Since DNB (departure from nucleate boiling) occurs, the advanced post-DNB model (advanced boiling curve approach) of COBRA 3-CP is used. The results are compared with those gained with the standard BEEST model. The analyzed ATWS case is the emergency power case 'loss of main heat sink with station service power supply unavailable'. Due to the decreasing coolant flow rate during the transient the core attains film boiling conditions. The results of the hot channel analysis strongly depend on the performance of the boiling curve model. The BEEST model is based on pool boiling conditions, whereas typical PWR conditions - even in most transients - are characterized by forced flow, for which the advanced boiling curve approach is particularly suitable. Compared with the BEEST model, the advanced boiling curve approach in COBRA 3-CP yields earlier rewetting, i.e. a shorter period in film boiling. Consequently, the fuel rod cladding temperatures, which increase significantly due to film boiling, drop back earlier, and the high-temperature oxidation is significantly diminished. The Baker-Just correlation was used to calculate the value of equivalent cladding reacted (ECR), i.e. the reduction of cladding thickness due to corrosion throughout the transient. Based on the BEEST model the ECR value amounts to 0.4%, whereas the advanced boiling curve only leads to an ECR value of 0.2%. Both values provide large margins to the 17
Hill, K.
1988-06-01
The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources, which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.
ERIC Educational Resources Information Center
Chen, Zheng; Powell, Gary N.; Greenhaus, Jeffrey H.
2009-01-01
This study adopted a person-environment fit approach to examine whether greater congruence between employees' preferences for segmenting their work domain from their family domain (i.e., keeping work matters at work) and what their employers' work environment allowed would be associated with lower work-to-family conflict and higher work-to-family…
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.
2014-01-01
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
A flight investigation with a STOL airplane flying curved, descending instrument approach paths
NASA Technical Reports Server (NTRS)
Benner, M. S.; Mclaughlin, M. D.; Sawyer, R. H.; Vangunst, R.; Ryan, J. L.
1974-01-01
A flight investigation using a De Havilland Twin Otter airplane was conducted to determine the configurations of curved, 6 deg descending approach paths which would provide minimum airspace usage within the requirements for acceptable commercial STOL airplane operations. Path configurations with turns of 90 deg, 135 deg, and 180 deg were studied; the approach airspeed was 75 knots. The length of the segment prior to turn, the turn radius, and the length of the final approach segment were varied. The relationship of the acceptable path configurations to the proposed microwave landing system azimuth coverage requirements was examined.
NASA Astrophysics Data System (ADS)
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured data point cloud, acquired e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously degraded by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence our approach combines Monte-Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
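A minimal sketch of data-adaptive B-spline fitting in the spirit of the abstract above: interior knots are placed at quantiles of the measured abscissae so that no knot falls where there are no data. This is a simplified stand-in for the authors' Monte-Carlo knot optimization and robust M-estimation; the data, knot count, and noise level are illustrative.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))          # measured abscissae
y = np.sin(x) + rng.normal(0.0, 0.05, x.size)     # noisy observations

# Data-adaptive knot placement: interior knots at quantiles of x, so every
# knot interval contains data (a crude surrogate for knot optimization).
knots = np.quantile(x, np.linspace(0.1, 0.9, 7))
spl = LSQUnivariateSpline(x, y, knots)            # cubic least-squares B-spline

rmse = np.sqrt(np.mean((spl(x) - y) ** 2))
```

With well-placed knots the residual RMSE stays near the noise level; a knot dropped inside a data gap would violate the Schoenberg-Whitney conditions and destabilize the fit, which is what the quantile placement avoids.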
Duval, M; Guilarte Moreno, V; Grün, R
2013-12-01
This work deals with three main sources of uncertainty in electron spin resonance (ESR) dosimetry/dating of fossil tooth enamel: (1) the precision of the ESR measurements, (2) the long-term signal fading and (3) the selection of the fitting function. They have a different influence on the equivalent dose (D(E)) estimates. Repeated ESR measurements were performed on 17 different samples: results show a mean coefficient of variation of the ESR intensities of 1.20 ± 0.23 %, inducing a mean relative variability of 3.05 ± 2.29 % in the D(E) values. ESR signal fading over 5 y was also observed: its magnitude seems to be quite sample dependent but is nevertheless especially important for the most irradiated aliquots. This fading has an apparently random effect on the D(E) estimates. Finally, the authors provide new insights and recommendations about the fitting of ESR dose-response curves of fossil enamel with a double saturating exponential (DSE) function. The potential of a new variation of the DSE was also explored. Results of this study also show that the choice of the fitting function is of major importance, perhaps more than the other sources mentioned above, in order to obtain accurate final D(E) values. PMID:23832975
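The DSE fitting discussed above can be illustrated with a least-squares sketch. The functional form (two saturating exponentials sharing the equivalent dose), the parameter values, and the use of scipy's `curve_fit` are assumptions of this example, not details from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Double saturating exponential (DSE) dose-response model; the shared
# equivalent dose `de` shifts the additive-dose axis.
def dse(dose, de, i1, d1, i2, d2):
    return (i1 * (1.0 - np.exp(-(dose + de) / d1))
            + i2 * (1.0 - np.exp(-(dose + de) / d2)))

doses = np.linspace(0.0, 2000.0, 12)            # added laboratory doses (Gy)
truth = (250.0, 10.0, 400.0, 5.0, 3000.0)       # De, I1, D1, I2, D2 (illustrative)
intensities = dse(doses, *truth)                # noise-free synthetic signal

popt, _ = curve_fit(dse, doses, intensities,
                    p0=(200.0, 8.0, 500.0, 4.0, 2500.0), maxfev=20000)
de_est = popt[0]                                # recovered equivalent dose (Gy)
```

The point of the exercise matches the paper's conclusion: with a multi-parameter saturating model, the recovered D(E) is sensitive to the chosen functional form, so the fitting function matters as much as measurement precision.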
Aerobic fitness ecological validity in elite soccer players: a metabolic power approach.
Manzi, Vincenzo; Impellizzeri, Franco; Castagna, Carlo
2014-04-01
The aim of this study was to examine the association between match metabolic power (MP) categories and aerobic fitness in elite-level male soccer players. Seventeen male professional soccer players were tested for VO2max, maximal aerobic speed (MAS), VO2 at ventilatory threshold (VO2VT and %VO2VT), and speed at a selected blood lactate concentration (4 mmol·L(-1), V(L4)). Aerobic fitness tests were performed at the end of preseason and after 12 and 24 weeks during the championship. For all calculations, aerobic fitness and MP variables were taken as the means of all seasonal testing and of 16 championship home matches, respectively. Results showed that VO2max (from 0.55 to 0.68), MAS (from 0.52 to 0.72), VO2VT (from 0.72 to 0.83), %VO2maxVT (from 0.62 to 0.65), and V(L4) (from 0.56 to 0.73) had significant (p < 0.05 to 0.001) large to very large associations with MP variables. These results provide evidence for the ecological validity of aerobic fitness in male professional soccer. Strength and conditioning professionals should consider aerobic fitness in their training programs when dealing with professional male soccer players. The MP method proved an interesting approach for tracking external load in male professional soccer players. PMID:24345968
B-737 flight test of curved-path and steep-angle approaches using MLS guidance
NASA Technical Reports Server (NTRS)
Branstetter, J. R.; White, W. F.
1989-01-01
A series of flight tests was conducted to collect data for jet transport aircraft flying curved-path and steep-angle approaches using Microwave Landing System (MLS) guidance. During the test, 432 approaches comprising seven different curved paths and four glidepath angles varying from 3 to 4 degrees were flown in NASA Langley's Boeing 737 aircraft (Transport Systems Research Vehicle) using an MLS ground station at the NASA Wallops Flight Facility. Subject pilots from Piedmont Airlines flew the approaches using conventional cockpit instrumentation (flight director and Horizontal Situation Indicator (HSI)). The data collected will be used by FAA procedures specialists to develop standards and criteria for designing MLS terminal approach procedures (TERPS). The use of flight simulation techniques greatly aided the preliminary stages of approach development work and saved a significant amount of costly flight time. This report is intended to complement a data report to be issued by the FAA Office of Aviation Standards which will contain all detailed data analysis and statistics.
CADAVERIC STUDY ON THE LEARNING CURVE OF THE TWO-APPROACH GANZ PERIACETABULAR OSTEOTOMY
Ferro, Fernando Portilho; Ejnisman, Leandro; Miyahara, Helder Souza; Trindade, Christiano Augusto de Castro; Faga, Antônio; Vicente, José Ricardo Negreiros
2016-01-01
Objective: The Bernese periacetabular osteotomy (PAO) is a widely used technique for the treatment of non-arthritic, dysplastic, painful hips. It is considered a highly complex procedure with a steep learning curve. In an attempt to minimize complications, a double anterior-posterior approach has been described. We report on our experience while performing this technique on cadaveric hips followed by meticulous dissection to verify possible complications. Methods: We operated on 15 fresh cadaveric hips using a combined posterior Kocher-Langenbeck and an anterior Smith-Petersen approach, without fluoroscopic control. The PAO cuts were performed and the acetabular fragment was mobilized. A meticulous dissection was carried out to verify the precision of the cuts. Results: Complications were observed in seven specimens (46%). They included a posterior column fracture, and posterior and anterior articular fractures. The incidence of complications decreased over time, from 60% in the first five procedures to 20% in the last five procedures. Conclusions: We concluded that PAO using a combined anterior-posterior approach is a reproducible technique that allows all cuts to be done under direct visualization. The steep learning curve described in the classic single incision approach was also observed when using two approaches. Evidence Level: IV, Cadaveric Study. PMID:26981046
Bouabidi, A; Talbi, M; Bourichi, H; Bouklouze, A; El Karbane, M; Boulanger, B; Brik, Y; Hubert, Ph; Rozet, E
2012-12-01
An innovative, versatile strategy using Total Error has been proposed to decide on a method's validity while controlling the risk of accepting an unsuitable assay, together with the ability to predict the reliability of future results. This strategy is based on the simultaneous combination of the systematic (bias) and random (imprecision) errors of analytical methods. Using validation standards, both types of error are combined through the use of a prediction interval or β-expectation tolerance interval. Finally, an accuracy profile is built by connecting, on the one hand, all the upper tolerance limits and, on the other hand, all the lower tolerance limits. This profile, combined with pre-specified acceptance limits, allows the evaluation of the validity of any quantitative analytical method and thus its fitness for its intended purpose. In this work, the accuracy profile approach was evaluated on several types of analytical methods encountered in the pharmaceutical industrial field, covering different pharmaceutical matrices. The four studied examples illustrate the flexibility and applicability of this approach for matrices ranging from tablets to syrups, for techniques such as liquid chromatography and UV spectrophotometry, and for the categories of assay commonly encountered in the pharmaceutical industry, i.e. content assays, dissolution assays, and quantitative impurity assays. The accuracy profile approach assesses the fitness for purpose of these methods for their future routine application. It also allows the selection of the most suitable calibration curve, the adequate evaluation of a potential matrix effect with the proposal of an efficient solution, and the correct definition of the limits of quantification of the studied analytical procedures. PMID:22615163
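A one-level sketch of the β-expectation tolerance interval underlying the accuracy profile. The single-variance formula and the ±5 % acceptance limits are simplifying assumptions of this example; the full validation model separates within-run and between-run variance components.

```python
import numpy as np
from scipy import stats

def beta_expectation_interval(measured, nominal, beta=0.95):
    """Beta-expectation tolerance interval for the relative error (%) of an
    assay at one concentration level (single-variance sketch)."""
    rel_err = 100.0 * (np.asarray(measured, float) - nominal) / nominal
    n = rel_err.size
    m, s = rel_err.mean(), rel_err.std(ddof=1)
    # Expectation tolerance factor: Student quantile widened for prediction.
    k = stats.t.ppf((1.0 + beta) / 2.0, n - 1) * np.sqrt(1.0 + 1.0 / n)
    return m - k * s, m + k * s

# Hypothetical recoveries of a validation standard with nominal value 100.
lo, hi = beta_expectation_interval([99.1, 100.4, 100.9, 99.6, 100.2, 99.8], 100.0)
valid = (lo > -5.0) and (hi < 5.0)   # accuracy profile vs ±5 % acceptance limits
```

Repeating this at each concentration level and connecting the limits produces the accuracy profile described in the abstract; the method is declared fit for purpose wherever the tolerance band stays inside the acceptance limits.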
A projection-based approach to diffraction tomography on curved boundaries
Clement, Gregory T.
2014-01-01
An approach to diffraction tomography is investigated for two-dimensional image reconstruction of objects surrounded by an arbitrarily-shaped curve of sources and receivers. Based on the integral theorem of Helmholtz and Kirchhoff, the approach relies upon a valid choice of the Green’s functions for selected conditions along the (possibly-irregular) boundary. This allows field projections from the receivers to an arbitrary external location. When performed over all source locations, it will be shown that the field caused by a hypothetical source at this external location is also known along the boundary. This field can then be projected to new external points that may serve as a virtual receiver. Under such a reformation, data may be put in a form suitable for image construction by synthetic aperture methods. Foundations of the approach are shown, followed by a mapping technique optimized for the approach. Examples formed from synthetic data are provided. PMID:25598570
The New Vector Fitting Approach to Multiple Convex Obstacles Modeling for UWB Propagation Channels
NASA Astrophysics Data System (ADS)
Górniak, P.; Bandurski, W.
This chapter presents a new approach to time-domain modeling of UWB channels containing multiple convex obstacles. The vector fitting (VF) algorithm (rational approximation) was used to derive the closed-form impulse response of a multiple-diffraction ray creeping on a cascade of convex obstacles. The VF algorithm was performed with respect to new generalized variables that are proportional to frequency but also include the geometrical parameters of the obstacles. The limits of the approximation domain for the vector fitting algorithm follow the range of ultra-wideband (UWB) channel parameters met in practical UWB channel scenarios. Finally, the closed-form impulse response of a creeping UTD ray was obtained. As a result we obtained the impulse response of the channel as a function of time normalized with respect to the geometrical parameters of the obstacles. This permits calculation of channel responses for various objects without changing the body of the rational function. In this way the presented approach is general, simple, and effective.
Lin, Shan-Yang; Lin, Hong-Liang; Chi, Ying-Ting; Huang, Yu-Ting; Kao, Chi-Yu; Hsieh, Wei-Hsien
2015-12-30
The amorphous form of a drug has higher water solubility and a faster dissolution rate than its crystalline form. However, the amorphous form is less thermodynamically stable and may recrystallize during manufacturing and storage. Maintaining the amorphous state of a drug in a solid dosage form is extremely important to ensure product quality. The purpose of this study was to quantitatively determine the amount of amorphous indomethacin (INDO) formed in Soluplus® solid dispersions using thermoanalytical and Fourier transform infrared (FTIR) spectral curve-fitting techniques. INDO/Soluplus® solid dispersions with various weight ratios of both components were prepared by air-drying and heat-drying processes. A predominant IR peak at 1683cm(-1) for amorphous INDO was selected as a marker for monitoring the solid state of INDO in the INDO/Soluplus® solid dispersions. The physical stability of amorphous INDO in the INDO/Soluplus® solid dispersions prepared by both drying processes was also studied under accelerated conditions. A typical endothermic peak at 161°C for the γ-form of INDO (γ-INDO) disappeared from all the differential scanning calorimetry (DSC) curves of the INDO/Soluplus® solid dispersions, suggesting amorphization of INDO caused by Soluplus® after drying. In addition, two unique IR peaks at 1682 (1681) and 1593 (1591)cm(-1) corresponding to the amorphous form of INDO were observed in the FTIR spectra of all the INDO/Soluplus® solid dispersions. The quantitative amounts of amorphous INDO formed in all the INDO/Soluplus® solid dispersions increased with the amount of γ-INDO loaded into the INDO/Soluplus® solid dispersions, as determined by the curve-fitting technique. However, the intermolecular hydrogen bonding interaction between Soluplus® and INDO was only observed in the samples prepared by the heat-drying process, due to a marked spectral shift from 1636 to 1628cm(-1) in the INDO/Soluplus® solid dispersions. The INDO/Soluplus® solid
An evolutionary approach to modelling the soil-water characteristic curve in unsaturated soils
NASA Astrophysics Data System (ADS)
Ahangar-Asr, A.; Johari, A.; Javadi, A. A.
2012-06-01
In this paper a new approach is presented based on evolutionary polynomial regression (EPR) for modelling of soil-water characteristic curve in unsaturated soils. EPR is an evolutionary data mining technique that generates a transparent and structured representation of the behaviour of a system directly from data. This method can operate on large quantities of data in order to capture nonlinear and complex relationships between variables of the system. It also has the additional advantage that it allows the user to gain insight into the behaviour of the system. Results from pressure plate tests carried out on clay, silty clay, sandy loam, and loam are used for developing and validating the EPR model. The model inputs are the initial void ratio, initial gravimetric water content, logarithm of suction normalised with respect to atmospheric air pressure, clay content, and silt content. The model output is the gravimetric water content corresponding to the assigned input suction. The EPR model predictions are compared with the experimental results as well as the models proposed by previous researches. The results show that the proposed approach is very effective and robust in modelling the soil-water characteristic curve in unsaturated soils. The merits and advantages of the proposed approach are highlighted.
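As a simplified stand-in for EPR, which evolves the polynomial structure itself, an ordinary least-squares polynomial in normalised log-suction illustrates the kind of input-output model being fitted. The quadratic form, the coefficients, and the data below are purely illustrative, not taken from the paper.

```python
import numpy as np

# Synthetic soil-water characteristic data: gravimetric water content as a
# quadratic in log10(suction / atmospheric pressure). Illustrative only;
# EPR would also search over candidate polynomial structures.
log_suction = np.linspace(0.0, 3.0, 15)                   # log10(psi / p_atm)
w = 0.35 - 0.08 * log_suction - 0.01 * log_suction ** 2   # water content (g/g)

coeffs = np.polyfit(log_suction, w, deg=2)                # highest power first
w_hat = np.polyval(coeffs, log_suction)                   # model predictions
```

Because the synthetic data are exactly quadratic, the fit recovers the generating coefficients; with real pressure-plate data the same machinery yields the best quadratic in the least-squares sense, and EPR's contribution is choosing which terms (and which inputs, such as void ratio or clay content) enter the polynomial.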
NASA Technical Reports Server (NTRS)
Benner, M. S.; Sawyer, R. H.; Mclaughlin, M. D.
1973-01-01
A real-time, fixed-base simulation study has been conducted to determine the curved, descending approach paths (within passenger-comfort limits) that would be acceptable to pilots, the flight-director-system logic requirements for curved-flight-path guidance, and the paths which can be flown within proposed microwave landing system (MLS) coverage angles. Two STOL aircraft configurations were used in the study. Generally, no differences in the results between the two STOL configurations were found. The investigation showed that paths with a 1828.8 meter turn radius and a 1828.8 meter final-approach distance were acceptable without winds and with winds up to at least 15 knots for airspeeds from 75 to 100 knots. The altitude at roll-out from the final turn determined which final-approach distances were acceptable. Pilots preferred to have an initial straight leg of about 1 n. mi. after MLS guidance acquisition before turn intercept. The size of the azimuth coverage angle necessary to meet passenger and pilot criteria depends on the size of the turn angle: plus or minus 60 deg was adequate to cover all paths except ones with a 180 deg turn.
Corvettes, Curve Fitting, and Calculus
ERIC Educational Resources Information Center
Murawska, Jaclyn M.; Nabb, Keith A.
2015-01-01
Sometimes the best mathematics problems come from the most unexpected situations. Last summer, a Corvette raced down a local quarter-mile drag strip. The driver, a family member, provided the spectators with time and distance-traveled data from his time slip and asked "Can you calculate how many seconds it took me to go from 0 to 60…
Age-Infusion Approach to Derive Injury Risk Curves for Dummies from Human Cadaver Tests
Yoganandan, Narayan; Banerjee, Anjishnu; Pintar, Frank A.
2015-01-01
Injury criteria and risk curves are needed for anthropomorphic test devices (dummies) to assess injuries for improving human safety. The present state of knowledge is based on using injury outcomes and biomechanical metrics from post-mortem human subject (PMHS) and mechanical records from dummy tests. Data from these models are combined to develop dummy injury assessment risk curves (IARCs)/dummy injury assessment risk values (IARVs). This simple substitution approach involves duplicating dummy metrics for PMHS tested under similar conditions and pairing with PMHS injury outcomes. It does not directly account for the age of each specimen tested in the PMHS group. Current substitution methods for injury risk assessments use age as a covariate and dummy metrics (e.g., accelerations) are not modified so that age can be directly included in the model. The age-infusion methodology presented in this perspective article accommodates for an annual rate factor that modifies the dummy injury risk assessment responses to account for the age of the PMHS that the injury data were based on. The annual rate factor is determined using human injury risk curves. The dummy metrics are modulated based on individual PMHS age and rate factor, thus “infusing” age into the dummy data. Using PMHS injuries and accelerations from side-impact experiments, matched-pair dummy tests, and logistic regression techniques, the methodology demonstrates the process of age-infusion to derive the IARCs and IARVs. PMID:26697422
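A toy sketch of the age-infusion idea described above: each dummy metric is modulated by an annual rate factor to a common reference age before a logistic risk curve is fitted. All data, the rate factor, and the reference age below are hypothetical; in the article the rate factor is derived from human injury risk curves and the regression is applied to matched-pair side-impact data.

```python
import numpy as np

# Hypothetical matched-pair data: dummy acceleration metric (g), PMHS age
# (years), and injury outcome (0/1).
accel  = np.array([40., 55., 60., 72., 80., 95., 105., 120.])
age    = np.array([45., 62., 50., 70., 55., 68., 60., 75.])
injury = np.array([0., 0., 0., 1., 0., 1., 1., 1.])

RATE, REF_AGE = 0.02, 45.0        # assumed annual rate factor and reference age
# "Age-infusion": scale each metric so all observations refer to REF_AGE.
x = accel * (1.0 + RATE) ** (age - REF_AGE)

# Plain logistic regression (gradient descent) on the age-infused metric.
xm, xs = x.mean(), x.std()
z = (x - xm) / xs
b0 = b1 = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * z)))
    b0 -= 0.1 * np.mean(p - injury)
    b1 -= 0.1 * np.mean((p - injury) * z)

def risk(metric):
    """Injury risk for an age-infused metric value (the resulting IARC)."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * (metric - xm) / xs)))
```

The resulting curve plays the role of an IARC: risk increases with the age-adjusted metric, and an IARV can be read off as the metric giving a chosen risk level.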
Real time refractivity from clutter using a best fit approach improved with physical information
NASA Astrophysics Data System (ADS)
Douvenot, RéMi; Fabbro, Vincent; Gerstoft, Peter; Bourlier, Christophe; Saillard, Joseph
2010-02-01
Refractivity from clutter (RFC) retrieves the radio frequency refractive conditions along a propagation path by inverting the measured radar sea clutter return. In this paper, a real-time RFC technique called "Improved Best Fit" (IBF) is proposed. It is based on finding the environment with the best fit to one of many precomputed, modeled radar returns for different environments in a database. The method is improved by considering the mean slope of the propagation factor, and physical considerations are added: smooth variations of refractive conditions with azimuth and smooth variations of duct height with range. The approach is tested on data from the 1998 Wallops Island, Virginia, measurement campaign with good results on most of the data; questionable results are detected with a confidence criterion. A comparison between the refractivity structures measured during the measurement campaign and those retrieved by inversion shows a good match. Radar coverage simulations obtained from these inverted refractivity structures demonstrate the potential utility of IBF.
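A toy version of the database-matching step with the mean-slope improvement: the candidate environment minimizing an RMSE-plus-slope-mismatch score is selected. The linear propagation-factor models, duct heights, noise level, and weighting below are illustrative assumptions, not the paper's database.

```python
import numpy as np

rng = np.random.default_rng(1)
ranges = np.linspace(5.0, 60.0, 50)                   # range axis (km)
# Toy database of modeled propagation factors (dB) vs range, one entry per
# candidate duct height (m). The linear forms are purely illustrative.
database = {h: -0.30 * ranges - 0.002 * h * ranges for h in range(5, 41, 5)}
measured = database[20] + rng.normal(0.0, 0.3, ranges.size)   # noisy "clutter"

def misfit(modeled, observed, w_slope=10.0):
    """'Improved best fit' score: RMSE plus a penalty on the mismatch of the
    mean slope of the propagation factor."""
    rmse = np.sqrt(np.mean((modeled - observed) ** 2))
    slope_m = np.polyfit(ranges, modeled, 1)[0]
    slope_d = np.polyfit(ranges, observed, 1)[0]
    return rmse + w_slope * abs(slope_m - slope_d)

best_h = min(database, key=lambda h: misfit(database[h], measured))
```

Because the database is precomputed, the per-look cost at run time is one score evaluation per candidate environment, which is what makes the approach real-time.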
Transfer function approach based on simulation results for the determination of pod curves
NASA Astrophysics Data System (ADS)
Demeyer, S.; Jenson, F.; Dominguez, N.; Iakovleva, E.
2012-05-01
POD curves estimations are based on statistical studies of empirical data which are obtained thru costly and time consuming experimental campaigns. Currently, cost reduction of POD trials is a major issue. A proposed solution is to replace some of the experimental data required to determine the POD with model based results. Following this idea, the concept of Model Assisted POD (MAPOD) has been introduced first in the US in 2004 through the constitution of the MAPOD working group. One approach to Model Assisted POD is based on a transfer function which uses empirical data and models to transfer POD measured for one specific application to another related application. The objective of this paper is to show how numerical simulations could help to determine such transfer functions. A practical implementation of the approach to a high frequency eddy current inspection for fatigue cracks is presented. Empirical data is available for the titanium alloy plates. A model based transfer function is used to assess a POD curve for the inspection of aluminum components.
NASA Astrophysics Data System (ADS)
Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.
2015-12-01
Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling, such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use, and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy- and water-efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime-adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
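The cost abatement curve construction can be sketched as ordering measures by cost per unit saved and accumulating savings. The measures and figures below are hypothetical, not results from the study.

```python
# Hypothetical residential measures: (name, annualized net cost in $,
# water saved in m3/yr). Negative cost means the measure pays for itself.
measures = [
    ("low-flow showerhead", -15.0, 20.0),
    ("efficient clothes washer", 12.0, 15.0),
    ("solar water heater", 80.0, 5.0),
    ("dual-flush toilet", 4.0, 18.0),
]

# Marginal abatement cost curve: order measures by cost per unit saved,
# then accumulate savings along the curve.
ordered = sorted(measures, key=lambda m: m[1] / m[2])
curve, total = [], 0.0
for name, cost, saved in ordered:
    total += saved
    curve.append((name, cost / saved, total))   # (measure, $/m3, cumulative m3)
```

Reading the curve left to right gives the cheapest path to any target reduction: implement measures in order until the cumulative-savings column reaches the target.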
A universal approach to the calculation of the transit light curves
NASA Astrophysics Data System (ADS)
Abubekerov, M. K.; Gostev, N. Yu.
2013-07-01
We have developed a universal approach to compute accurately the brightness of eclipsing binary systems during the transit of a planet in front of the stellar disc. This approach is uniform for all values of the system parameters and applicable to most limb-darkening laws used in astrophysics. In the cases of the linear and quadratic limb-darkening laws, we obtained analytical expressions for the light curve and its derivatives in terms of elementary functions, elliptic integrals and a piecewise-defined function of one variable. In the cases of the logarithmic and square-root laws of limb darkening, the flux and its derivatives were expressed in terms of integrals which can be efficiently computed using the Gaussian quadrature formula, taking into account singularities of the integrand.
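A brute-force numerical check of a transit light curve with quadratic limb darkening. The paper derives analytic expressions in terms of elliptic integrals; this grid integration is only an independent sketch, with illustrative limb-darkening coefficients and planet size.

```python
import numpy as np

def transit_flux(d, r_p, u1=0.3, u2=0.2, n=400):
    """Relative flux when an opaque planet of radius r_p (in stellar radii)
    is centred a projected distance d from the star's centre.
    Quadratic limb darkening: I(mu) = 1 - u1*(1-mu) - u2*(1-mu)**2.
    Grid integration over the stellar disc; a sketch, not the analytic result."""
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    r2 = x ** 2 + y ** 2
    on_star = r2 <= 1.0
    mu = np.sqrt(np.clip(1.0 - r2, 0.0, None))            # cosine of emergent angle
    inten = np.where(on_star, 1.0 - u1 * (1 - mu) - u2 * (1 - mu) ** 2, 0.0)
    blocked = (x - d) ** 2 + y ** 2 <= r_p ** 2           # pixels behind the planet
    return (inten * ~blocked).sum() / inten.sum()

f_out = transit_flux(2.0, 0.1)    # planet off the disc: no dimming
f_mid = transit_flux(0.0, 0.1)    # mid-transit: deepest point
```

The mid-transit depth exceeds the geometric ratio r_p**2 = 0.01 because the occulted centre of the disc is brighter than the disc average, which is exactly the limb-darkening effect the analytic formulas capture.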
Permutation invariant polynomial neural network approach to fitting potential energy surfaces
NASA Astrophysics Data System (ADS)
Jiang, Bin; Guo, Hua
2013-08-01
A simple, general, and rigorous scheme for adapting permutation symmetry in molecular systems is proposed and tested for fitting global potential energy surfaces using neural networks (NNs). The symmetry adaptation is realized by using low-order permutation invariant polynomials (PIPs) as inputs for the NNs. This so-called PIP-NN approach is applied to the H + H2 and Cl + H2 systems and the analytical potential energy surfaces for these two systems were accurately reproduced by PIP-NN. The accuracy of the NN potential energy surfaces was confirmed by quantum scattering calculations.
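The symmetry adaptation can be illustrated for an A3 system such as H + H2: low-order permutation-invariant polynomials of Morse variables serve as the NN inputs, and any relabeling of the identical atoms leaves them unchanged. The Morse range parameter and distances below are illustrative.

```python
import numpy as np
from itertools import permutations

def pip_inputs(r12, r13, r23, a=1.0):
    """Low-order permutation-invariant polynomials (elementary symmetric
    polynomials) of Morse variables y_ij = exp(-r_ij / a) for an A3 system.
    These symmetrized quantities are what would be fed to the neural net."""
    y = np.exp(-np.array([r12, r13, r23]) / a)
    return np.array([y.sum(),
                     y[0] * y[1] + y[0] * y[2] + y[1] * y[2],
                     y.prod()])

# Invariance check: relabeling the three identical atoms permutes the
# internuclear distances but must leave the PIP vector unchanged.
base = pip_inputs(1.0, 1.5, 2.2)
```

Because the inputs are already invariant, the NN fitted on top of them is automatically invariant too, with no need to symmetrize the training data by replicating permuted geometries.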
Burnham, A K
2006-05-17
Chemical kinetic modeling has been used for many years in process optimization, estimating real-time material performance, and lifetime prediction. Chemists have tended towards developing detailed mechanistic models, while engineers have tended towards global or lumped models. Many, if not most, applications use global models by necessity, since it is impractical or impossible to develop a rigorous mechanistic model. Model fitting acquired a bad name in the thermal analysis community after that community realized, a decade after other disciplines, that deriving kinetic parameters for an assumed model from a single heating rate produced unreliable and sometimes nonsensical results. In its place, advanced isoconversional methods (1), which have their roots in the Friedman (2) and Ozawa-Flynn-Wall (3) methods of the 1960s, have become increasingly popular. In fact, as pointed out by the ICTAC kinetics project in 2000 (4), valid kinetic parameters can be derived by both isoconversional and model-fitting methods as long as a diverse set of thermal histories is used to derive the kinetic parameters. The current paper extends the understanding from that project to give a better appreciation of the strengths and weaknesses of isoconversional and model-fitting approaches. Examples are given from a variety of sources, including the former and current ICTAC round-robin exercises, data sets for materials of interest, and simulated data sets.
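The Friedman isoconversional method mentioned above can be sketched on synthetic first-order data: at a fixed conversion, the slope of ln(dα/dt) versus 1/T across several heating rates gives -Ea/R. The kinetic parameters, heating rates, and integration step below are illustrative.

```python
import numpy as np

R, EA, A = 8.314, 150e3, 1e13   # gas constant (J/mol K), Ea (J/mol), A (1/s)

def temp_at_conversion(beta, alpha_target=0.5):
    """Euler-integrate dalpha/dT = (A/beta) exp(-Ea/RT) (1-alpha) for a
    constant heating rate beta (K/s) until alpha reaches alpha_target."""
    T, alpha, dT = 400.0, 0.0, 0.01
    while alpha < alpha_target:
        alpha += (A / beta) * np.exp(-EA / (R * T)) * (1.0 - alpha) * dT
        T += dT
    return T

betas = [0.033, 0.083, 0.167]                    # ~2, 5, 10 K/min in K/s
T_a = np.array([temp_at_conversion(b) for b in betas])
rates = A * np.exp(-EA / (R * T_a)) * 0.5        # dalpha/dt at alpha = 0.5

# Friedman plot: ln(rate) vs 1/T at fixed conversion; slope = -Ea/R.
slope = np.polyfit(1.0 / T_a, np.log(rates), 1)[0]
ea_est = -slope * R
```

The recovered activation energy matches the input because multiple thermal histories are used, which is exactly the ICTAC conclusion: either isoconversional or model-fitting analysis is valid given a diverse set of heating rates, while a single heating rate is not.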
The BestFIT trial: A SMART approach to developing individualized weight loss treatments.
Sherwood, Nancy E; Butryn, Meghan L; Forman, Evan M; Almirall, Daniel; Seburg, Elisabeth M; Lauren Crain, A; Kunin-Batson, Alicia S; Hayes, Marcia G; Levy, Rona L; Jeffery, Robert W
2016-03-01
Behavioral weight loss programs help people achieve clinically meaningful weight losses (8-10% of starting body weight). Despite data showing that only half of participants achieve this goal, a "one size fits all" approach is normative. This weight loss intervention science gap calls for adaptive interventions that provide the "right treatment at the right time for the right person." Sequential Multiple Assignment Randomized Trials (SMART), use experimental design principles to answer questions for building adaptive interventions including whether, how, or when to alter treatment intensity, type, or delivery. This paper describes the rationale and design of the BestFIT study, a SMART designed to evaluate the optimal timing for intervening with sub-optimal responders to weight loss treatment and relative efficacy of two treatments that address self-regulation challenges which impede weight loss: 1) augmenting treatment with portion-controlled meals (PCM) which decrease the need for self-regulation; and 2) switching to acceptance-based behavior treatment (ABT) which boosts capacity for self-regulation. The primary aim is to evaluate the benefit of changing treatment with PCM versus ABT. The secondary aim is to evaluate the best time to intervene with sub-optimal responders. BestFIT results will lead to the empirically-supported construction of an adaptive intervention that will optimize weight loss outcomes and associated health benefits. PMID:26825020
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2015-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…
Beyond the SCS curve number: A new stochastic spatial runoff approach
NASA Astrophysics Data System (ADS)
Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.
2015-12-01
The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
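The traditional SCS-CN event runoff relation that the theory above generalizes can be written in a few lines; this is the standard formulation with retention S in mm and the usual initial abstraction ratio of 0.2.

```python
def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """Classic SCS-CN event runoff depth Q (mm) for storm depth P (mm).
    S = 25400/CN - 254 is the potential maximum retention (mm);
    Ia = ia_ratio * S is the initial abstraction."""
    S = 25400.0 / CN - 254.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0          # all rainfall abstracted, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)
```

At CN = 100 (fully impervious, S = 0) every millimetre of rain runs off, while below the initial abstraction the curve returns zero; the paper's extension replaces this single deterministic curve with a PDF of runoff over the watershed.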
A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2015-01-01
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…
Baghani, Ali; Salcudean, Septimiu; Honarvar, Mohammad; Sahebjavaher, Ramin S; Rohling, Robert; Sinkus, Ralph
2011-08-01
In this paper, a novel approach to the problem of elasticity reconstruction is introduced. In this approach, the solution of the wave equation is expanded as a sum of waves travelling in different directions sharing a common wave number. In particular, the solutions for the scalar and vector potentials which are related to the dilatational and shear components of the displacement respectively are expanded as sums of travelling waves. This solution is then used as a model and fitted to the measured displacements. The value of the shear wave number which yields the best fit is then used to find the elasticity at each spatial point. The main advantage of this method over direct inversion methods is that, instead of taking the derivatives of noisy measurement data, the derivatives are taken on the analytical model. This improves the results of the inversion. The dilatational and shear components of the displacement can also be computed as a byproduct of the method, without taking any derivatives. Experimental results show the effectiveness of this technique in magnetic resonance elastography. Comparisons are made with other state-of-the-art techniques. PMID:21813354
Fit for purpose? Introducing a rational priority setting approach into a community care setting.
Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale
2016-06-20
Purpose - Program budgeting and marginal analysis (PBMA) is a priority setting approach that assists decision makers with allocating resources. Previous PBMA work establishes its efficacy and indicates that contextual factors complicate priority setting, which can hamper PBMA effectiveness. The purpose of this paper is to gain qualitative insight into PBMA effectiveness. Design/methodology/approach - A Canadian case study of PBMA implementation. Data consist of decision-maker interviews pre (n=20), post year-1 (n=12) and post year-2 (n=9) of PBMA to examine perceptions of baseline priority setting practice vis-à-vis desired practice, and perceptions of PBMA usability and acceptability. Findings - Fit emerged as a key theme in determining PBMA effectiveness. Fit herein refers to being of suitable quality and form to meet the intended purposes and needs of the end-users, and includes desirability, acceptability, and usability dimensions. Results confirm decision-maker desire for rational approaches like PBMA. However, most participants indicated that the timing of the exercise and the form in which PBMA was applied were not well-suited for this case study. Participant acceptance of and buy-in to PBMA changed during the study: a leadership change, limited organizational commitment, and concerns with organizational capacity were key barriers to PBMA adoption and thereby effectiveness. Practical implications - These findings suggest that a potential way forward includes adding a contextual readiness/capacity assessment stage to PBMA, recognizing organizational complexity, and considering incremental adoption of PBMA's approach. Originality/value - These insights help us to better understand and work with priority setting conditions to advance evidence-informed decision making. PMID:27296887
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Eghbalnia, Hamid
2000-06-01
In standard differential geometry, the Fundamental Theorem of Space Curves states that two differential invariants of a curve, namely curvature and torsion, determine its geometry, or equivalently, the isometry class of the curve up to rigid motions in the Euclidean three-dimensional space. Consider a physical model of a space curve made from a sufficiently thin, yet visible rigid wire, and the problem of perceptual identification (by a human observer or a robot) of two given physical model curves. In a previous paper (perceptual geometry) we have emphasized a learning theoretic approach to construct a perceptual geometry of the surfaces in the environment. In particular, we have described a computational method for mathematical representation of objects in the perceptual geometry inspired by the ecological theory of Gibson, and adhering to the principles of Gestalt in perceptual organization of vision. In this paper, we continue our learning theoretic treatment of perceptual geometry of objects, focusing on the case of physical models of space curves. In particular, we address the question of perceptually distinguishing two possibly novel space curves based on the observer's prior visual experience of physical models of curves in the environment. The Fundamental Theorem of Space Curves inspires an analogous result in perceptual geometry as follows. We apply learning theory to the statistics of a sufficiently rich collection of physical models of curves, to derive two statistically independent local functions, which we call, by analogy, the curvature and torsion. This pair of invariants distinguishes physical models of curves in the sense of perceptual geometry. That is, in an appropriate resolution, an observer can identify two perceptually identical physical models in different locations. If these pairs of functions are approximately the same for two given space curves, then after possibly some changes of viewing planes, the observer confirms the two are the same.
NASA Technical Reports Server (NTRS)
White, W. F. (Compiler)
1978-01-01
The Terminal Configured Vehicle (TCV) program operates a Boeing 737 modified to include a second cockpit and a large amount of experimental navigation, guidance and control equipment for research on advanced avionics systems. Demonstration flights, which included curved approaches and automatic landings, were tracked by a phototheodolite system. For 50 approaches during the demonstration flights, the following results were obtained: the navigation system, using TRSB guidance, delivered the aircraft onto the 3 nautical mile final approach leg with an average overshoot of 25 feet past centerline, subject to a 2-sigma dispersion of 90 feet. Lateral tracking data showed a mean error of 4.6 feet left of centerline at the category 1 decision height (200 feet) and 2.7 feet left of centerline at the category 2 decision height (100 feet). These values were subject to a sigma dispersion of about 10 feet. Finally, the glidepath tracking errors were 2.5 feet and 3.0 feet high at the category 1 and 2 decision heights, respectively, with a 2-sigma value of 6 feet.
Aldridge, C.L.; Boyce, M.S.
2007-01-01
Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Secondly, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automate a number of tasks including contour recognition, optimization of the contour fit via hill-climbing, derivation of the path curves, computation of Lambda, and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
Ab initio yield curve dynamics [rapid communication]
NASA Astrophysics Data System (ADS)
Hawkins, Raymond J.; Roy Frieden, B.; D'Anna, Joseph L.
2005-09-01
We derive an equation of motion for interest-rate yield curves by applying a minimum Fisher information variational approach to the implied probability density. By construction, solutions to the equation of motion recover observed bond prices. More significantly, the form of the resulting equation explains the success of the Nelson-Siegel approach to fitting static yield curves and the empirically observed modal structure of yield curves. A practical numerical implementation of this equation of motion is found by using the Karhunen-Loève expansion and Galerkin's method to formulate a reduced-order model of yield curve dynamics.
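The Nelson-Siegel form mentioned above is a three-factor parametric curve; with the decay parameter held fixed, fitting it to observed yields reduces to ordinary least squares on the factor loadings. A minimal sketch on synthetic yields (hypothetical parameter values throughout, and none of the paper's Fisher-information machinery):

```python
import numpy as np

def ns_basis(tau, lam=1.5):
    """Nelson-Siegel factor loadings (level, slope, curvature) at maturities tau,
    for a fixed decay parameter lam."""
    x = np.asarray(tau, dtype=float) / lam
    slope = (1.0 - np.exp(-x)) / x
    curv = slope - np.exp(-x)
    return np.column_stack([np.ones_like(x), slope, curv])

# Synthetic yields generated from known betas, then recovered by OLS
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])   # years
betas_true = np.array([0.045, -0.020, 0.010])               # level, slope, curvature
y = ns_basis(maturities) @ betas_true

betas_hat, *_ = np.linalg.lstsq(ns_basis(maturities), y, rcond=None)
print(betas_hat)
```

Because the model is linear in the betas once lam is fixed, the noiseless synthetic yields are recovered exactly; in practice lam is either fixed by convention or profiled over a small grid.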
Introducing a Bayesian Approach to Determining Degree of Fit With Existing Rorschach Norms.
Giromini, Luciano; Viglione, Donald J; McCullaugh, Joseph
2015-01-01
This article offers a new methodological approach to investigate the degree of fit between an independent sample and 2 existing sets of norms. Specifically, with a new adaptation of a Bayesian method, we developed a user-friendly procedure to compare the mean values of a given sample to those of 2 different sets of Rorschach norms. To illustrate our technique, we used a small, U.S. community sample of 80 adults and tested whether it resembled more closely the standard Comprehensive System norms (CS 600; Exner, 2003), or a recently introduced, internationally based set of Rorschach norms (Meyer, Erdberg, & Shaffer, 2007). Strengths and limitations of this new statistical technique are discussed. PMID:25257792
Fitting additive hazards models for case-cohort studies: a multiple imputation approach.
Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook
2016-07-30
In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26194861
Lifting a veil on diversity: a Bayesian approach to fitting relative-abundance models.
Golicher, Duncan J; O'Hara, Robert B; Ruíz-Montoya, Lorena; Cayuela, Luis
2006-02-01
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation. PMID:16705973
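As a small concrete companion to the log-series model discussed above: Fisher's alpha can be recovered from observed richness S and total abundance N by solving S = alpha * ln(1 + N/alpha). This bisection sketch illustrates only the classic sampling-distribution side of the story, not the authors' hierarchical MCMC fit:

```python
import math

def fisher_alpha(S, N, tol=1e-10):
    """Solve S = alpha * ln(1 + N/alpha) for Fisher's log-series alpha.

    S: observed species richness; N: total individuals. The left-hand side
    is increasing in alpha, so simple bisection suffices.
    """
    lo, hi = 1e-6, float(S) * 100.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if mid * math.log(1.0 + N / mid) > S:
            hi = mid          # predicted richness too high -> lower alpha
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical sample: 120 species among 5000 individuals
alpha = fisher_alpha(120, 5000)
print(alpha)
```

A higher alpha for one site than another (at comparable N) indicates higher log-series diversity, which is the kind of between-site comparison the abstract frames in Bayesian terms.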
Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach
NASA Astrophysics Data System (ADS)
Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic
2015-04-01
Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are commonly used for calibrating hydrological models, managing water quality, and classifying catchments, among other applications. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions are methods that estimate each quantile separately from catchment attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemeš, 1986) to a set of 600 French catchments. Half of the catchments are considered as gauged and used to calibrate the regression and compute its residuals. The QS approach consists of a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters, and finally exploits the spatial correlation of the residuals. The innovation is the use of parameter continuity across the quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24.
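The monotonicity problem described above, where independent per-quantile regressions need not produce increasing quantiles, can be made concrete with a toy reconstruction. This sketch uses a single hypothetical descriptor (catchment area) and a crude cumulative-maximum repair; the actual QS scheme instead imposes parameter continuity across the quantiles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "gauged" half: one descriptor (area, km^2) and noisy
# empirical flow quantiles at 19 probability levels per catchment.
area = rng.uniform(10.0, 1000.0, size=50)
probs = np.linspace(0.05, 0.95, 19)
quantiles = np.outer(area, probs) * 0.01 + rng.normal(0.0, 0.5, (50, 19))

# Independent per-quantile regressions -- the step that can yield
# non-monotonic reconstructions when each quantile is fitted alone.
coefs = np.array([np.polyfit(area, quantiles[:, j], 1)
                  for j in range(len(probs))])

def reconstruct_fdc(a):
    """Predict the FDC of an 'ungauged' catchment from its area alone."""
    q = coefs[:, 0] * a + coefs[:, 1]     # one prediction per quantile
    return np.maximum.accumulate(q)       # crude monotonicity repair

print(reconstruct_fdc(250.0)[:3])
```

The cumulative maximum guarantees a non-decreasing curve but only papers over the artifact after the fact; linking the regression parameters across quantiles, as QS does, removes its cause.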
A new approach to the analysis of vessel residence time distribution curves
NASA Astrophysics Data System (ADS)
Ferro, Sergio P.; Principe, R. Javier; Goldschmit, Marcela B.
2001-12-01
Mathematical models for the evaluation of residence time distribution (RTD) curves on a large variety of vessels are presented. These models have been constructed by combination of different tanks or volumes. In order to obtain a good representation of RTD curves, a new volume (called the convection-diffusion volume) is introduced. The convection-diffusion volume allows the approximation of different experimental or numerical RTD curves with very simple models. An algorithm has been developed to calculate the parameters of the models for any given set of RTD curve experimental points. Validation of the models is carried out by comparison with experimental RTD curves taken from the literature and with a numerical RTD curve obtained by three-dimensional simulation of the flow inside a tundish.
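A standard building block for such tank-combination models is the tanks-in-series RTD. This sketch shows the classic density (not the authors' convection-diffusion volume, which is their own addition) and checks numerically that it integrates to one with the nominal mean residence time:

```python
import numpy as np
from math import factorial

def rtd_tanks_in_series(t, n, tau):
    """E(t) for n ideal stirred tanks in series, total mean residence time tau."""
    ti = tau / n                          # mean residence time per tank
    t = np.asarray(t, dtype=float)
    return t ** (n - 1) * np.exp(-t / ti) / (factorial(n - 1) * ti ** n)

t = np.linspace(0.0, 50.0, 20001)         # integrate far into the tail
dt = t[1] - t[0]
E = rtd_tanks_in_series(t, n=4, tau=5.0)
print(E.sum() * dt, (t * E).sum() * dt)   # zeroth and first moments
```

Fitting such a model to a measured RTD curve amounts to adjusting n and tau (and, in richer models, the parameters of extra volumes) until the predicted moments and shape match the experiment.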
A Practical Approach of Curved Ray Prestack Kirchhoff Time Migration on GPGPU
NASA Astrophysics Data System (ADS)
Shi, Xiaohua; Li, Chuang; Wang, Xu; Li, Kang
We introduced four prototypes of general-purpose GPU solutions, built with the Compute Unified Device Architecture (CUDA) on an NVidia GeForce 8800GT and a Tesla C870, for a practical Curved Ray Prestack Kirchhoff Time Migration program, one of the most widely adopted imaging methods in the seismic data processing industry. We presented how to re-design and re-implement the original CPU code as efficient GPU code, step by step. We demonstrated optimization methods to improve runtime performance on GPUs, such as reducing the overhead of memory transfers over the PCI-E bus, significantly increasing the number of kernel threads on the GPU cores, buffering the inputs and outputs of CUDA kernel modules, and using memory streams to overlap GPU kernel execution. We analyzed the floating-point errors between CPUs and GPUs. We presented the images generated by the CPU and GPU programs for the same real-world seismic data inputs. Our final approach, Prototype-IV, on an NVidia GeForce 8800GT is 16.3 times faster than its CPU version on Intel's P4 3.0G.
Pluto and Charon Color Light Curves from New Horizons on Approach
NASA Astrophysics Data System (ADS)
Ennico, Kimberly; Howett, C. J. A.; Olkin, C. B.; Reuter, D. C.; Buratti, B. J.; Buie, M. W.; Grundy, W. M.; Parker, A. H.; Zangari, A. M.; Binzel, R. P.; Cook, J. C.; Cruikshank, D. P.; Dalle Ore, C. M.; Earle, A. M.; Jennings, D. E.; Linscott, I. R.; Parker, J. Wm.; Protopapa, S.; Singer, K. N.; Spencer, J. R.; Stern, S. A.; Tsang, C. C. C.; Verbiscer, A. J.; Weaver, H. A.; Young, L. A.
2015-11-01
On approach to the Pluto system, New Horizons’ Ralph Instrument’s Multicolor Visible Imaging Camera (MVIC) observed Pluto and Charon, spatially separated, between April 9 and June 23, 2015. In this period, Pluto and Charon were observed to transition from unresolved objects to resolved and their integrated disk intensities were measured in four MVIC filters: blue (400-550 nm), red (540-700 nm), near-infrared (780-975 nm), and methane (860-910 nm). The measurement suite sampled the bodies over all longitudes. We will present the color rotational light curves for Pluto and Charon and compare them to previous (Buie, M. et al. 2010 AJ 139, 1117; Buratti, B.J. et al 2015 ApJ 804, L6) and concurrent ground-based BVR monitoring. We will also compare these data to color images of the encounter hemisphere taken during New Horizons’ July 14, 2015 Pluto and Charon flyby, as this data set provides a unique bridge between Pluto & Charon as viewed as astronomical targets versus the complex worlds that early data from New Horizons has revealed them to be. This work was supported by NASA’s New Horizons project.
Naegelen, Isabelle; Beaume, Nicolas; Plançon, Sébastien; Schenten, Véronique; Tschirhart, Eric J.; Bréchard, Sabrina
2015-01-01
Neutrophils participate in the maintenance of host integrity by releasing various cytotoxic proteins during degranulation. Due to recent advances, a major role has been attributed to neutrophil-derived cytokine secretion in the initiation, exacerbation, and resolution of inflammatory responses. Because the release of neutrophil-derived products orchestrates the action of other immune cells at the infection site and, thus, can contribute to the development of chronic inflammatory diseases, we aimed to investigate in more detail the spatiotemporal regulation of neutrophil-mediated release mechanisms of proinflammatory mediators. Purified human neutrophils were stimulated for different time points with lipopolysaccharide. Cells and supernatants were analyzed by flow cytometry techniques and used to establish secretion profiles of granules and cytokines. To analyze the link between cytokine release and degranulation time series, we propose an original strategy based on linear fitting, which may be used as a guideline, to (i) define the relationship of granule proteins and cytokines secreted to the inflammatory site and (ii) investigate the spatial regulation of neutrophil cytokine release. The model approach presented here aims to predict the correlation between neutrophil-derived cytokine secretion and degranulation and may easily be extrapolated to investigate the relationship between other types of time series of functional processes. PMID:26579547
Johann, C; Garidel, P; Mennicke, L; Blume, A
1996-01-01
A simulation program using least-squares minimization was developed to calculate and fit heat capacity (cp) curves to experimental thermograms of dilute aqueous dispersions of phospholipid mixtures determined by high-sensitivity differential scanning calorimetry. We analyzed cp curves and phase diagrams of the pseudobinary aqueous lipid systems 1,2-dimyristoyl-sn-glycero-3-phosphatidylglycerol/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPG/DPPC) and 1,2-dimyristoyl-sn-glycero-3-phosphatidic acid/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPA/DPPC) at pH 7. The simulation of the cp curves is based on regular solution theory using two nonideality parameters rho g and rho l for symmetric nonideal mixing in the gel and the liquid-crystalline phases. The broadening of the cp curves owing to limited cooperativity is incorporated into the simulation by convolution of the cp curves calculated for infinite cooperativity with a broadening function derived from a simple two-state transition model with the cooperative unit size n = delta HVH/delta Hcal as an adjustable parameter. The nonideality parameters and the cooperative unit size turn out to be functions of composition. In a second step, phase diagrams were calculated and fitted to the experimental data by use of regular solution theory with four different model assumptions. The best fits were obtained with a four-parameter model based on nonsymmetric, nonideal mixing in both phases. The simulations of the phase diagrams show that the absolute values of the nonideality parameters can be changed in a certain range without large effects on the shape of the phase diagram as long as the difference of the nonideality parameters rho g for the gel and rho l for the liquid-crystalline phase remains constant. The miscibility in DMPG/DPPC and DMPA/DPPC mixtures differs remarkably because, for DMPG/DPPC, delta rho = rho l - rho g is negative, whereas for DMPA/DPPC this difference is positive. For DMPA/DPPC, this
ERIC Educational Resources Information Center
Pargament, Kenneth I.; Sweeney, Patrick J.
2011-01-01
This article describes the development of the spiritual fitness component of the Army's Comprehensive Soldier Fitness (CSF) program. Spirituality is defined in the human sense as the journey people take to discover and realize their essential selves and higher order aspirations. Several theoretically and empirically based reasons are articulated…
Approach-Avoidance Motivational Profiles in Early Adolescents to the PACER Fitness Test
ERIC Educational Resources Information Center
Garn, Alex; Sun, Haichun
2009-01-01
The use of fitness testing is a practical means for measuring components of health-related fitness, but there is currently substantial debate over the motivating effects of these tests. Therefore, the purpose of this study was to examine the cross-fertilization of achievement and friendship goal profiles for early adolescents involved in the…
NASA Astrophysics Data System (ADS)
Harrington, Seán T.; Harrington, Joseph R.
2013-03-01
This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstones, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are derived and compared to the turbidity-based loads for each river. Loads are also estimated using stage- and seasonally-separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage-separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% found on the River Owenabue. The average monthly error on the River Bandon is -12%, with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on
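The log-log regression form of the rating curve assessed above can be sketched on synthetic data. The exp(s^2/2) back-transform bias correction for the log-regression variant is a standard fix; all parameter values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic flows (m^3/s) and suspended sediment concentrations (mg/L)
# following a power law SSC = a * Q^b with lognormal scatter.
Q = rng.lognormal(mean=1.0, sigma=0.8, size=200)
ssc = 12.0 * Q ** 1.4 * rng.lognormal(mean=0.0, sigma=0.3, size=200)

# Rating curve: ordinary least squares on the log-transformed data
b, log_a = np.polyfit(np.log(Q), np.log(ssc), 1)
resid = np.log(ssc) - (b * np.log(Q) + log_a)
correction = np.exp(resid.var() / 2.0)   # back-transform bias correction

def rating_curve(q):
    """Predicted SSC (mg/L) at flow q, bias-corrected after exponentiation."""
    return correction * np.exp(log_a) * q ** b

print(b, np.exp(log_a))
```

Without the correction factor, naive exponentiation of the log-space fit systematically underpredicts concentration, which compounds into exactly the kind of monthly load errors the paper quantifies.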
Chylla, R A; Volkman, B F; Markley, J L
1998-08-01
A maximum likelihood (ML)-based approach has been established for the direct extraction of NMR parameters (e.g., frequency, amplitude, phase, and decay rate) simultaneously from all dimensions of a D-dimensional NMR spectrum. The approach, referred to here as HTFD-ML (hybrid time-frequency domain maximum likelihood), constructs a time-domain model composed of a sum of exponentially decaying sinusoidal signals. The apodized Fourier transform of this time-domain signal is a model spectrum that represents the 'best fit' to the equivalent frequency-domain data spectrum. The desired amplitude and frequency parameters can be extracted directly from the signal model constructed by the HTFD-ML algorithm. The HTFD-ML approach presented here, as embodied in the software package CHIFIT, is designed to meet the challenges posed by model fitting of D-dimensional NMR data sets, where each consists of many data points (10^8 is not uncommon) encoding information about numerous signals (up to 10^5 for a protein of moderate size) that exhibit spectral overlap. The suitability of the approach is demonstrated by its application to the concerted analysis of a series of ten 2D 1H-15N HSQC experiments measuring 15N T1 relaxation. In addition to demonstrating the practicality of performing maximum likelihood analysis on large, multidimensional NMR spectra, the results demonstrate that this parametric model-fitting approach provides more accurate amplitude and frequency estimates than those obtained from conventional peak-based analysis of the FT spectrum. The improved performance of the model fitting approach derives from its ability to take into account the simultaneous contributions of all signals in a crowded spectral region (deconvolution) as well as to incorporate prior knowledge in constructing models to fit the data. PMID:9751999
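The building block of the HTFD-ML signal model is an exponentially decaying sinusoid. A minimal single-signal version of the model-fitting idea, using generic nonlinear least squares rather than the CHIFIT maximum-likelihood machinery, might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def decaying_sinusoid(t, amp, freq, phase, rate):
    """One exponentially decaying sinusoid, the model's building block."""
    return amp * np.exp(-rate * t) * np.cos(2 * np.pi * freq * t + phase)

t = np.arange(512) / 512.0                       # a 512-point, 1 s "FID"
signal = decaying_sinusoid(t, 2.0, 35.0, 0.4, 3.0)

# Nonlinear least squares from rough initial estimates (e.g. a peak pick)
p0 = [1.5, 34.8, 0.2, 2.5]
popt, _ = curve_fit(decaying_sinusoid, t, signal, p0=p0)
print(popt)
```

The full method fits a *sum* of such terms simultaneously to all dimensions, which is what lets overlapping signals be deconvolved: each signal's contribution to a crowded region is modeled explicitly instead of being apportioned by peak picking.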
BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies
NASA Astrophysics Data System (ADS)
Han, Yunkun; Han, Zhanwen
2014-11-01
We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
Blocker, Alexander W.; Protopapas, Pavlos; Alcock, Charles R.
2009-08-20
We present a new approach to the analysis of time symmetry in light curves, such as the X-ray light curves at the center of the Scorpius X-1 occultation debate. Our method uses a new parameterization for such events (the bilogistic event profile) and provides a clear, physically relevant characterization of each event's key features. We also demonstrate a Markov chain Monte Carlo algorithm to carry out this analysis, including a novel independence-chain configuration for estimating each event's location in the light curve. These tools are applied to the Scorpius X-1 light curves presented in Chang et al., providing additional evidence, based on the time series, that the events detected thus far are most likely not occultations by trans-Neptunian objects.
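An independence-chain MCMC configuration like the one mentioned above draws every proposal from a fixed distribution rather than perturbing the current state. The sketch below is a generic illustration under assumed toy choices (a Gaussian posterior for the event location and a flat proposal over the observation window), not the authors' algorithm:

```python
import math
import random

def independence_mh(log_post, prop_sample, prop_logpdf, n_steps, x0, seed=0):
    """Independence-chain Metropolis-Hastings: proposals ignore the current
    state, so the acceptance ratio also includes the proposal density
    evaluated at both the current and the proposed points."""
    rng = random.Random(seed)
    x, lp_x, chain = x0, log_post(x0), []
    for _ in range(n_steps):
        y = prop_sample(rng)
        lp_y = log_post(y)
        # log acceptance ratio: target ratio times reversed proposal ratio
        log_a = (lp_y - lp_x) + (prop_logpdf(x) - prop_logpdf(y))
        if rng.random() < math.exp(min(log_a, 0.0)):
            x, lp_x = y, lp_y
        chain.append(x)
    return chain

# Toy posterior for an event's location: Gaussian around t = 5.0
log_post = lambda t: -0.5 * ((t - 5.0) / 0.5) ** 2
prop_sample = lambda rng: rng.uniform(0.0, 10.0)  # flat proposal over the window
prop_logpdf = lambda t: 0.0                       # constant density on [0, 10]

chain = independence_mh(log_post, prop_sample, prop_logpdf, 5000, x0=2.0)
estimate = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

Because the proposal never depends on the current state, the chain cannot get trapped near a bad starting guess for the event location, at the cost of a lower acceptance rate when the proposal is much wider than the posterior.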
A Person-Centered Approach to P-E Fit Questions Using a Multiple-Trait Model.
ERIC Educational Resources Information Center
De Fruyt, Filip
2002-01-01
Employed college students (n=401) completed the Self-Directed Search and NEO Personality Inventory-Revised. Person-environment fit across Holland's six personality types predicted job satisfaction and skill development. Five-Factor Model traits significantly predicted intrinsic career outcomes. Use of the five-factor, person-centered approach to…
Technological growth curves. A competition of forecasting models
Young, P.
1993-12-01
In order to determine procedures for appropriate model selection of technological growth curves, numerous time series that were representative of growth behavior were collected and categorized according to data characteristics. Nine different growth curve models were each fitted onto the various data sets in an attempt to determine which growth curve models achieved the best forecasts for differing types of growth data. The analysis of the results gives rise to a new approach for selecting appropriate growth curve models for a given set of data, prior to fitting the models, based on the characteristics of the data sets. 58 refs., 9 tabs.
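Fitting a candidate growth curve to a series and scoring it by residual error can be sketched as follows. This is a minimal illustration under stated assumptions: a single logistic (Pearl) curve, synthetic data, and a coarse grid search standing in for a proper nonlinear least-squares routine; the study itself compared nine models on real series:

```python
import math

def logistic(t, L, k, t0):
    """Pearl logistic growth curve: ceiling L, growth rate k, midpoint t0."""
    return L / (1.0 + math.exp(-k * (t - t0)))

def sse(params, data):
    """Sum of squared errors of a candidate curve against the series."""
    L, k, t0 = params
    return sum((y - logistic(t, L, k, t0)) ** 2 for t, y in data)

# Synthetic growth series generated from a known logistic curve
data = [(t, logistic(t, 100.0, 0.8, 5.0)) for t in range(11)]

# A coarse grid search stands in for a real optimizer; in practice each
# competing growth model would be fitted like this and the model with the
# best out-of-sample forecast retained
candidates = [(L, k, t0) for L in (80.0, 100.0, 120.0)
                         for k in (0.4, 0.8, 1.2)
                         for t0 in (3.0, 5.0, 7.0)]
best = min(candidates, key=lambda p: sse(p, data))
```

Selecting among models *before* fitting, as the abstract proposes, would replace the error comparison with rules keyed to data characteristics (e.g. symmetry about the inflection point), but the per-model fitting step remains the same.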
R-Curve Approach to Describe the Fracture Resistance of Tool Steels
NASA Astrophysics Data System (ADS)
Picas, Ingrid; Casellas, Daniel; Llanes, Luis
2016-04-01
This work addresses the events involved in the fracture of tool steels, aiming to understand the effect of primary carbides, inclusions, and the metallic matrix on their effective fracture toughness and strength. Microstructurally different steels were investigated. It is found that cracks nucleate on carbides or inclusions at stress values lower than the fracture resistance. It is experimentally evidenced that such cracks exhibit an increasing growth resistance as they progressively extend, i.e., R-curve behavior. Ingot cast steels present a rising R-curve, which implies that the effective toughness developed by small cracks is lower than that determined with long artificial cracks. On the other hand, cracks grow steadily in the powder metallurgy tool steel, yielding as a result a flat R-curve. Accordingly, effective toughness for this material is mostly independent of the crack size. Thus, differences in fracture toughness values measured using short and long cracks must be considered when assessing fracture resistance of tool steels, especially when tool performance is controlled by short cracks. Hence, material selection for tools or development of new steel grades should take into consideration R-curve concepts, in order to avoid unexpected tool failures or to optimize microstructural design of tool steels, respectively.
Fabijańska, Anna
2016-06-01
This paper considers the problem of automatic quantification of DCE-MRI curve shape patterns. In particular, a semi-quantitative approach is proposed that classifies DCE time-intensity curves into clusters representing the three main shape patterns. The approach combines heuristic rules with the naive Bayes classifier. Descriptive parameters are first derived from pixel-by-pixel analysis of the DCE time-intensity curves and then used to recognise the curves that unambiguously represent the three main shape patterns. These curves are next used to train the naive Bayes classifier, which classifies the remaining curves within the dataset. Results of applying the proposed approach to DCE-MRI scans of patients with prostate cancer are presented and discussed. Additionally, the overall performance of the approach is estimated through comparison with ground truth provided by an expert. PMID:27107675
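The rule-seeded naive Bayes scheme can be sketched as below. Everything concrete here is an assumption for illustration: the two curve descriptors, the rule thresholds, and the toy curves are invented, not the paper's parameters; only the overall pattern (heuristic rules label the clear-cut curves, a Gaussian naive Bayes trained on those labels classifies the rest) follows the abstract:

```python
import math
from collections import defaultdict

def descriptors(curve):
    """Two toy semi-quantitative features of a time-intensity curve:
    wash-in enhancement and late-phase slope (both invented here)."""
    return (curve[3] - curve[0], curve[-1] - curve[3])

def rule_label(curve):
    """Heuristic seed rules; return None for ambiguous curves."""
    _, late = descriptors(curve)
    if late > 0.2:
        return "rise"      # type I: persistent enhancement
    if late < -0.2:
        return "washout"   # type III: wash-out
    if abs(late) < 0.05:
        return "plateau"   # type II
    return None

def train_gnb(samples):
    """Per-class feature means and variances for a tiny Gaussian naive Bayes."""
    by_cls = defaultdict(list)
    for feats, cls in samples:
        by_cls[cls].append(feats)
    return {cls: [(sum(d) / len(d),
                   sum((v - sum(d) / len(d)) ** 2 for v in d) / len(d) + 1e-6)
                  for d in zip(*rows)]
            for cls, rows in by_cls.items()}

def classify(model, feats):
    def loglik(stats):
        return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
                   for x, (mu, var) in zip(feats, stats))
    return max(model, key=lambda cls: loglik(model[cls]))

curves = [
    [0.0, 0.5, 0.9, 1.0, 1.15, 1.30],  # clear type I (rise)
    [0.0, 0.5, 0.9, 1.0, 1.20, 1.40],  # clear type I
    [0.0, 0.5, 0.9, 1.0, 1.00, 1.00],  # clear type II (plateau)
    [0.0, 0.5, 0.9, 1.0, 1.01, 1.02],  # clear type II
    [0.0, 0.5, 0.9, 1.0, 0.85, 0.70],  # clear type III (washout)
    [0.0, 0.5, 0.9, 1.0, 0.80, 0.60],  # clear type III
]
seeds = [(descriptors(c), rule_label(c)) for c in curves]
model = train_gnb(seeds)

ambiguous = [0.0, 0.5, 0.9, 1.0, 1.07, 1.15]   # the rules give no label
prediction = classify(model, descriptors(ambiguous))
```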
A European approach to categorizing medicines for fitness to drive: outcomes of the DRUID project
Ravera, Silvia; Monteiro, Susana P; de Gier, Johan Jacob; van der Linden, Trudy; Gómez-Talegón, Trinidad; Álvarez, F Javier
2012-01-01
AIMS To illustrate (i) the criteria and the development of the DRUID categorization system, (ii) the number of medicines that have currently been categorized, (iii) the added value of the DRUID categorization system and (iv) the next steps in its implementation. METHODS The development of the DRUID categorization system was based on several criteria, considered in the following steps: (i) conditions of use of the medicine, (ii) pharmacodynamic and pharmacokinetic data, (iii) pharmacovigilance data, including the prevalence of undesirable effects, (iv) experimental and epidemiological data, (v) additional data derived from the patient information leaflet and existing categorization systems, and (vi) final categorization. DRUID proposed four tiered categories for medicines and driving. RESULTS In total, 3054 medicines were reviewed and 1541 were categorized (the rest were no longer on the EU market). Nearly half of the 1541 medicines were placed in category 0 (no or negligible influence on fitness to drive), about 26% in category I (minor influence) and 17% in category II or III (moderate or severe influence). CONCLUSIONS The DRUID categorization system established standardized and harmonized criteria for categorizing commonly used medications based on their influence on fitness to drive. Further efforts are needed to implement the system at a European level, and further activities should be undertaken to reinforce the awareness of health care professionals and patients of the effects of medicines on fitness to drive. PMID:22452358
SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH
Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook
2012-04-10
The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (~90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
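The Fisher-matrix forecast described above can be sketched in a few lines. The sketch assumes a Gaussian likelihood with known per-band errors and uses an invented two-parameter straight-line "spectrum" in place of a real SED model; the recipe (numerical model derivatives, F = J C⁻¹ Jᵀ, covariance = F⁻¹) is the standard FM construction rather than the paper's specific code:

```python
def fisher_matrix(model, params, xs, sigma, eps=1e-6):
    """F_ij = sum_k (dmu_k/dp_i)(dmu_k/dp_j)/sigma_k^2 for a Gaussian
    likelihood, with model derivatives taken by central differences."""
    n = len(params)
    J = []
    for i in range(n):
        up = list(params); up[i] += eps
        dn = list(params); dn[i] -= eps
        J.append([(model(x, up) - model(x, dn)) / (2 * eps) for x in xs])
    return [[sum(J[i][k] * J[j][k] / sigma[k] ** 2 for k in range(len(xs)))
             for j in range(n)] for i in range(n)]

def invert_2x2(F):
    """Parameter covariance is the inverse of the Fisher matrix."""
    (a, b), (c, d) = F
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Toy "SED": a straight-line spectrum with two parameters (offset, slope),
# observed in four bands with equal photometric errors
model = lambda x, p: p[0] + p[1] * x
xs = [0.0, 1.0, 2.0, 3.0]
sigma = [0.1, 0.1, 0.1, 0.1]

cov = invert_2x2(fisher_matrix(model, [1.0, 0.5], xs, sigma))
forecast_err = (cov[0][0] ** 0.5, cov[1][1] ** 0.5)  # 1-sigma forecasts
```

Changing `xs` or `sigma` and re-running instantly shows how a different band set or depth tightens or loosens the forecast constraints, which is exactly the survey-design use case.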
Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach
Hawkley, Louise C.; Capitanio, John P.
2015-01-01
Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationship—as indexed by perceived social isolation (i.e. loneliness)—with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic pituitary–adrenocortical activity, a pro-inflammatory gene expression profile and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and its physiological and psychological underpinnings, in non-human species may be a valuable direction to better understand the persistence of a ‘lonely’ phenotype in social species, and its consequences for health and fitness. PMID:25870400
NASA Astrophysics Data System (ADS)
Gianneo, A.; Carboni, M.; Giglio, M.
2016-02-01
Although there is an extensive literature on guided-wave propagation, interaction and numerical simulation in plate-like structures made of metallic and composite materials, little information is available on the reliability of guided waves in structural health monitoring approaches. Typically, because of uncertainties in the inspection process, the capability of non-destructive testing systems is expressed by means of suitable probability of detection (POD) curves. Based on Berens' model, a linear relationship is established between probability of detection and flaw size. Although the uncertain factors differ between a non-destructive inspection technique and a structural health monitoring approach, the same mathematical framework can be assumed. Hence, the authors investigated the application of a recently developed non-destructive testing multi-parameter POD approach to a guided-wave-based SHM one: numerical simulations as well as experimental data from flawed plates were combined to produce a "master" POD curve. Once established, it can be used to build the POD curves of the single key factors, such as flaw size, orientation and structural attenuation.
Curve aligning approach for gait authentication based on a wearable accelerometer.
Sun, Hu; Yuao, Tao
2012-06-01
Gait authentication based on a wearable accelerometer is a novel biometric that can be used for identity verification, medical rehabilitation and early detection of neurological disorders. The method used for matching gait patterns weighs heavily on authentication performance. In this paper, curve aligning is introduced as a new method for matching gait patterns, and it is compared with correlation and dynamic time warping (DTW). A support vector machine (SVM) is proposed to fuse pattern-matching methods at the decision level. Accelerations collected from the ankles of 22 walking subjects are processed for authentication in our experiments. The fusion of curve aligning with backward-forward accelerations and DTW with vertical accelerations improves authentication performance substantially and consistently. This fusion algorithm was tested repeatedly; the mean and standard deviation of its equal error rates (EER) are 0.794% and 0.696%, respectively, whereas the best of the presented non-fusion algorithms shows an EER of 3.03%. PMID:22621972
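DTW, one of the pattern-matching baselines compared above, can be sketched compactly. This is the textbook dynamic-programming formulation with toy one-dimensional acceleration traces invented for illustration, not the paper's data or its curve-aligning method:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences: an elastic
    alignment cost that tolerates gait cycles walked at different paces."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: step in a, step in b, or both
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# The same acceleration bump sampled at two different walking paces
template = [0, 1, 2, 3, 2, 1, 0]
probe    = [0, 1, 1, 2, 3, 3, 2, 1, 0]
genuine  = dtw(template, probe)
impostor = dtw(template, [0, 0, 1, 0, 0, 1, 0])
```

A genuine probe that merely walks the same pattern more slowly aligns at near-zero cost, while a differently shaped trace accumulates cost at every step; thresholding (or fusing) such scores yields the accept/reject decision.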
3D Modeling of Spectra and Light Curves of Hot Jupiters with PHOENIX; a First Approach
NASA Astrophysics Data System (ADS)
Jiménez-Torres, J. J.
2016-04-01
A detailed global circulation model was used to feed the PHOENIX code and calculate 3D spectra and light curves of hot Jupiters. Cloud free and dusty radiative fluxes for the planet HD179949b were modeled to show differences between them. The PHOENIX simulations can explain the broad features of the observed 8 μm light curves, including the fact that the planet-star flux ratio peaks before the secondary eclipse. The PHOENIX reflection spectrum matches the Spitzer secondary-eclipse depth at 3.6 μm and underpredicts eclipse depths at 4.5, 5.8 and 8.0 μm. These discrepancies result from the chemical composition and suggest the incorporation of different metallicities in future studies.
Souza, Michele; Eisenmann, Joey; Chaves, Raquel; Santos, Daniel; Pereira, Sara; Forjaz, Cláudia; Maia, José
2016-10-01
In this paper, three different statistical approaches were used to investigate short-term tracking of cardiorespiratory and performance-related physical fitness among adolescents. Data were obtained from the Oporto Growth, Health and Performance Study and comprised 1203 adolescents (549 girls) divided into two age cohorts (10-12 and 12-14 years) followed for three consecutive years, with annual assessment. Cardiorespiratory fitness was assessed with the 1-mile run/walk test; the 50-yard dash, standing long jump, handgrip, and shuttle run tests were used to rate performance-related physical fitness. Tracking was expressed in three different ways: auto-correlations, multilevel modelling with crude and adjusted models (for biological maturation, body mass index, and physical activity), and Cohen's kappa (κ), computed in IBM SPSS 20.0, HLM 7.01 and Longitudinal Data Analysis software, respectively. Tracking of physical fitness components was (1) moderate-to-high when described by auto-correlations; (2) low-to-moderate when crude and adjusted models were used; and (3) low according to Cohen's kappa (κ). These results demonstrate that different methods should be considered when describing tracking, since they provide distinct and more comprehensive views of physical fitness stability patterns. PMID:26890706
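Cohen's kappa, the third tracking measure above, corrects raw agreement for agreement expected by chance. A minimal sketch with invented tertile assignments (the study's actual data and category scheme are not reproduced here):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two categorical ratings, here a
    fitness tertile assigned at baseline versus at follow-up."""
    n = len(rater_a)
    cats = sorted(set(rater_a) | set(rater_b))
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented tertile assignments for six adolescents at two time points
baseline  = ["low", "low", "mid", "mid", "high", "high"]
follow_up = ["low", "mid", "mid", "high", "high", "high"]
kappa = cohens_kappa(baseline, follow_up)
```

Because kappa counts only exact category agreement, it is typically the most conservative of the three tracking statistics, which is consistent with the "low" tracking it reported here.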
NASA Astrophysics Data System (ADS)
Gaulme, P.; Appourchaux, T.; Boumier, P.
2009-10-01
Aims: We investigate the asteroseismology of two solar-like targets as observed with the CoRoT satellite, with particular attention paid to the mode fitting. HD 181420 and HD 49933 are typical CoRoT solar-like targets (156 and 60-day runs). The low signal-to-noise ratio (SNR) of about 3-10 prevents us from unambiguously identifying the individual oscillation modes. In particular, convergence problems appear at the edges of the oscillation spectrum. Methods: We apply a Bayesian approach to the analysis of these data. We compare the global fitting of the power spectra obtained by the classical maximum likelihood (MLE) and the maximum a posteriori (MAP) estimators. Results: We examine the impact of the choice of the priors upon the fitted parameters. We also propose to reduce the number of free parameters in the fitting, by replacing the individual estimate of mode height associated with each overtone by a continuous function of frequency (Gaussian profile). Conclusions: The MAP appears as a powerful tool to constrain the global fits, but it must be used carefully and only with reliable priors. The mode width of the stars increases with the frequency over all the oscillation spectrum. The CoRoT space mission, launched on 2006 December 27, was developed and is operated by the CNES, with participation of the Science Programs of ESA, ESA's RSSD, Austria, Belgium, Brazil, Germany and Spain.
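The MLE-versus-MAP contrast above comes down to adding a log-prior penalty to the likelihood objective. The sketch below illustrates this on a deliberately simple invented problem (two noisy "measurements" of a single mode width with a Gaussian prior); it is not the power-spectrum fitting of the paper:

```python
def neg_log_likelihood(width, data):
    """Toy Gaussian likelihood for a mode-width estimate (sigma = 0.5)."""
    return sum(0.5 * ((d - width) / 0.5) ** 2 for d in data)

def neg_log_posterior(width, data, prior_mu=2.0, prior_sigma=0.2):
    """MAP objective: the MLE objective plus a Gaussian prior penalty."""
    return neg_log_likelihood(width, data) + 0.5 * ((width - prior_mu) / prior_sigma) ** 2

# Two noisy, low-SNR measurements pulling away from the prior mean
data = [2.9, 3.1]
grid = [i / 100 for i in range(100, 400)]  # candidate widths 1.00..3.99

mle = min(grid, key=lambda w: neg_log_likelihood(w, data))      # data only
map_est = min(grid, key=lambda w: neg_log_posterior(w, data))   # data + prior
```

With few noisy data, the MAP estimate is shrunk strongly toward the prior mean; this is exactly why the paper cautions that the MAP constrains fits powerfully but only when the priors are reliable.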
Pellicer-Chenoll, Maite; Garcia-Massó, Xavier; Morales, Jose; Serra-Añó, Pilar; Solana-Tramunt, Mònica; González, Luis-Millán; Toca-Herrera, José-Luis
2015-06-01
The relationship among physical activity, physical fitness and academic achievement in adolescents has been widely studied; however, controversy concerning this topic persists. The methods used thus far to analyse the relationship between these variables have included mostly traditional lineal analysis according to the available literature. The aim of this study was to perform a visual analysis of this relationship with self-organizing maps and to monitor the subject's evolution during the 4 years of secondary school. Four hundred and forty-four students participated in the study. The physical activity and physical fitness of the participants were measured, and the participants' grade point averages were obtained from the five participant institutions. Four main clusters representing two primary student profiles with few differences between boys and girls were observed. The clustering demonstrated that students with higher energy expenditure and better physical fitness exhibited lower body mass index (BMI) and higher academic performance, whereas those adolescents with lower energy expenditure exhibited worse physical fitness, higher BMI and lower academic performance. With respect to the evolution of the students during the 4 years, ∼25% of the students originally clustered in a negative profile moved to a positive profile, and there was no movement in the opposite direction. PMID:25953972
Computerized detection of retina blood vessel using a piecewise line fitting approach
NASA Astrophysics Data System (ADS)
Gu, Suicheng; Zhen, Yi; Wang, Ningli; Pu, Jiantao
2013-03-01
Retina vessels are important landmarks in fundus images, and an accurate segmentation of the vessels may be useful for automated screening for several eye diseases or systemic diseases, such as diabetes. A new method is presented for automated segmentation of blood vessels in two-dimensional color fundus images. First, a coherence filter followed by a mean filter is applied to the green channel of the image; the green channel is selected because the vessels have maximal contrast there. The coherence filter enhances the line strength of the original image, and the mean filter discards the intensity variance among different regions. Since the vessels are darker than the surrounding tissues depicted in the image, pixels with small intensity are then retained as points of interest (POI). A new line fitting algorithm is proposed to identify line-like structures in each local circle of the POI. The proposed line fitting method is less sensitive to noise than least-squares fitting. The fitted lines with higher scores are regarded as vessels. To evaluate the performance of the proposed method, the publicly available DRIVE database with 20 test images was selected for experiments. The mean accuracy on these images is 95.7%, which is comparable to the state of the art.
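Scoring how line-like a local cluster of points of interest is can be sketched with an orthogonal-regression fit; the anisotropy score and the toy point sets below are assumptions for illustration, not the paper's proprietary line-fitting algorithm:

```python
def line_score(points):
    """Fit the principal axis of a 2-D point set (orthogonal regression) via
    the covariance matrix; the score is the fraction of total variance lying
    along that axis (1.0 = perfect line, 0.5 = isotropic blob)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = max(tr * tr / 4.0 - det, 0.0) ** 0.5   # eigenvalue discriminant
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    return lam1 / (lam1 + lam2) if lam1 + lam2 > 0 else 0.0

# A noisy straight segment (vessel-like) versus a 3x3 square of points (blob)
vessel_like = [(i, 0.5 * i + (0.1 if i % 2 else -0.1)) for i in range(10)]
blob_like = [(i % 3, i // 3) for i in range(9)]
scores = (line_score(vessel_like), line_score(blob_like))
```

Unlike ordinary least squares, the principal-axis fit measures scatter perpendicular to the line, so isolated outlier pixels perturb the score less, mirroring the noise robustness the abstract claims for its fitting step.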
Hydrothermal germination models: comparison of two data-fitting approaches with probit optimization
Technology Transfer Automated Retrieval System (TEKTRAN)
Probit models for estimating hydrothermal germination rate yield model parameters that have been associated with specific physiological processes. The desirability of linking germination response to seed physiology must be weighed against expectations of model fit and the relative accuracy of predi...
ERIC Educational Resources Information Center
Beheshti, Behzad; Desmarais, Michel C.
2015-01-01
This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…
Lu, Yehu; Song, Guowen; Li, Jun
2014-11-01
Garment fit plays an important role in protective performance, comfort and mobility. The purpose of this study is to quantify the air gap between clothing and the human body, characterizing three-dimensional (3-D) garment fit using a 3-D body scanning technique. A method for processing the scanned data was developed to investigate the air gap size and distribution between the clothing and the human body. Mesh models formed from the nude and clothed body were aligned, superimposed and sectioned using Rapidform software, and the air gap size and distribution over the body surface were analyzed; the total air volume was also calculated. The effects of fabric properties and garment size on air gap distribution were explored. The results indicated that the average air gap of well-fitting clothing was around 25-30 mm and the overall air gap distribution was similar across garments. The air gap was unevenly distributed over the body and was strongly associated with body parts, fabric properties and garment size. The research will help in understanding overall clothing fit and its association with protection, thermal comfort and movement comfort, and it provides guidelines for clothing engineers to improve thermal performance and reduce physiological burden. PMID:24793820
On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis
ERIC Educational Resources Information Center
Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas
2011-01-01
The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…
Pre-Service Music Teachers' Satisfaction: Person-Environment Fit Approach
ERIC Educational Resources Information Center
Perkmen, Serkan; Cevik, Beste; Alkan, Mahir
2012-01-01
Guided by three theoretical frameworks in vocational psychology, (i) theory of work adjustment, (ii) two factor theory, and (iii) value discrepancy theory, the purpose of this study was to investigate Turkish pre-service music teachers' values and the role of fit between person and environment in understanding vocational satisfaction. Participants…
Health and Fitness Courses in Higher Education: A Historical Perspective and Contemporary Approach
ERIC Educational Resources Information Center
Bjerke, Wendy
2013-01-01
The prevalence of obesity among 18- to 24-year-olds has steadily increased. Given that the majority of young American adults are enrolled in colleges and universities, the higher education setting could be an appropriate environment for health promotion programs. Historically, health and fitness in higher education have been provided via…
NASA Astrophysics Data System (ADS)
Zenzerovic, I.; Kropp, W.; Pieringer, A.
2016-08-01
Curve squeal is a strong tonal sound that may arise when a railway vehicle negotiates a tight curve. In contrast to frequency-domain models, time-domain models are able to capture the nonlinear and transient nature of curve squeal. However, these models are computationally expensive due to requirements for fine spatial and time discretization. In this paper, a computationally efficient engineering model for curve squeal in the time-domain is proposed. It is based on a steady-state point-contact model for the tangential wheel/rail contact and a Green's functions approach for wheel and rail dynamics. The squeal model also includes a simple model of sound radiation from the railway wheel from the literature. A validation of the tangential point-contact model against Kalker's transient variational contact model reveals that the point-contact model performs well within the squeal model up to at least 5 kHz. The proposed squeal model is applied to investigate the influence of lateral creepage, friction and wheel/rail contact position on squeal occurrence and amplitude. The study indicates a significant influence of the wheel/rail contact position on squeal frequencies and amplitudes. Friction and lateral creepage show an influence on squeal occurrence and amplitudes, but this is only secondary to the influence of the contact position.