Sample records for curve fitting approach

  1. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of χ² statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function and χ² minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.

  2. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
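
    The numerical core described above (expand chi-square to second order, then solve n simultaneous linear equations by matrix algebra) is a Gauss-Newton-type step. A minimal Python sketch under that reading follows; the model, data, and undamped iteration are illustrative assumptions, not NLINEAR's FORTRAN 77 code:

```python
import numpy as np

def model(t, p):
    """Hypothetical fitting function y = p0 * exp(-p1 * t) + p2."""
    return p[0] * np.exp(-p[1] * t) + p[2]

def jacobian(t, p):
    """Analytic partial derivatives of the model w.r.t. each parameter."""
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e, np.ones_like(t)])

def gauss_newton_step(t, y, sigma, p):
    """One iteration from the quadratic expansion of chi-square:
    solve the n simultaneous linear equations (J^T J) dp = J^T r."""
    r = (y - model(t, p)) / sigma           # statistically weighted residuals
    J = jacobian(t, p) / sigma[:, None]     # weighted Jacobian
    dp = np.linalg.solve(J.T @ J, J.T @ r)  # normal equations
    return p + dp

# Meaningful initial estimates are required for convergence, as the abstract notes.
t = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(0)
sigma = np.full_like(t, 0.05)
y = model(t, [2.0, 1.3, 0.5]) + rng.normal(0.0, 0.05, t.size)
p = np.array([1.0, 1.0, 0.0])
for _ in range(20):
    p = gauss_newton_step(t, y, sigma, p)
chi2 = np.sum(((y - model(t, p)) / sigma) ** 2)
print(p, chi2)
```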

  3. Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.

    PubMed

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-02-01

    The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (eg, PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, bias of the estimated AUC, and coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% of severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, ie, in the presence of a sufficient number of well-distributed points, does not imply

  4. Fitting the curve in Excel®: Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NASA Astrophysics Data System (ADS)

    McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.

    2017-03-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework gives a comprehensive record of the curve-fitting parameters used and the derived metrics, and is intended as an example format for dissemination when curve fitting data.

  5. Fitting milk production curves through nonlinear mixed models.

    PubMed

    Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica

    2017-05-01

    The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for modelling lactation curves from two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making in a dairy farm. Knowledge of the model of milk production progress along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made in a group of high-production and high-reproduction dairy farms, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remaining ones third lactations (19 382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest the selection of MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, the three models fitted better when the subject (cow) random effect was considered, which is related to the magnitude of production. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
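
    A fixed-effects-only sketch of the Wood model is below (SciPy in place of SAS PROC NLMIXED; the test-day data are invented, and the cow random effect is omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y(t) = a * t**b * exp(-c * t)."""
    return a * np.power(t, b) * np.exp(-c * t)

# Hypothetical test-day records (days in milk, kg/day); not the study data.
dim = np.array([10, 30, 60, 90, 120, 180, 240, 305], dtype=float)
milk = np.array([24.0, 30.5, 32.0, 31.0, 29.5, 26.0, 22.5, 18.0])

params, _ = curve_fit(wood, dim, milk, p0=[15.0, 0.2, 0.003])
a, b, c = params
peak_dim = b / c                      # days in milk at peak yield
peak_yield = wood(peak_dim, a, b, c)  # milk yield at peak
print(params, peak_dim, peak_yield)
```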

  6. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.

  7. Curve fitting air sample filter decay curves to estimate transuranic content.

    PubMed

    Hayes, Robert B; Chiou, Hung Cheng

    2004-01-01

    By testing industry standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to look not for specific radon progeny values but rather for transuranic activity. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity, collected over a 10-mo period, an optimized fitting function and quality tests for this purpose were attained.

  8. Fitting Richards' curve to data of diverse origins

    USGS Publications Warehouse

    Johnson, D.H.; Sargeant, A.B.; Allen, S.H.

    1975-01-01

    Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
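
    A sketch of fitting one common parameterization of Richards' curve, with the shape parameter estimated from the data as the abstract describes, is below; the age/foot-length values are invented, not the fox data:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, nu):
    """One common parameterization of Richards' (1959) flexible growth
    curve; nu is the shape parameter estimated from the data."""
    return A * (1.0 + nu * np.exp(-k * (t - t0))) ** (-1.0 / nu)

# Hypothetical age (days) vs. hind-foot length (mm) data.
age = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
foot = np.array([28, 47, 70, 95, 115, 128, 136, 140, 143], dtype=float)

p, _ = curve_fit(richards, age, foot, p0=[145.0, 0.07, 25.0, 1.0],
                 maxfev=10000)
print(p)   # estimated (A, k, t0, nu)
```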

  9. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1987-01-01

    New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10⁻⁷ to 10³ amagats.

  10. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial that provides a least-squares fit to uniformly spaced data. The program allows the user either to specify the tolerable least squares error in the fitting or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit with up to a 100-degree polynomial. All computations in the program are carried out in double precision for real numbers and long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
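
    The degree-escalation loop described above is easy to mimic; the sketch below uses NumPy's ordinary polynomial fit rather than AKLSQF's orthogonal factorial polynomials and Stirling-number reduction:

```python
import numpy as np

def aklsqf_like_fit(y, tol):
    """Increase the polynomial degree until the least-squares error meets
    `tol`, mirroring the escalation loop described for AKLSQF (a sketch,
    not the original Quick Basic code; x is the uniform sample index)."""
    x = np.arange(y.size, dtype=float)
    for degree in range(1, min(100, y.size - 1)):  # AKLSQF caps at degree 100
        coeffs = np.polyfit(x, y, degree)
        err = np.sum((np.polyval(coeffs, x) - y) ** 2)
        print(f"degree {degree}: least-squares error {err:.4g}")
        if err <= tol:
            return coeffs, err
    return coeffs, err

y = np.sin(np.linspace(0.0, 3.0, 40)) \
    + 0.01 * np.random.default_rng(1).normal(size=40)
coeffs, err = aklsqf_like_fit(y, tol=0.02)
```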

  11. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1986-01-01

    New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,ρ), a = a(e,ρ), T = T(e,ρ), s = s(e,ρ), T = T(p,ρ), h = h(p,ρ), ρ = ρ(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10⁻⁷ to 100 amagats (ρ/ρ₀).

  12. Curve fitting methods for solar radiation data modeling

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using each curve fitting method, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method is used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
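
    A sketch of the two-term Gaussian fit evaluated with the same goodness-of-fit statistics (RMSE and R²) is below; the irradiance samples are invented, not the UTP data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model (the form curve-fitting toolboxes
    commonly call 'gauss2')."""
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

# Hypothetical hourly global irradiance samples (W/m^2).
hour = np.arange(7, 20, dtype=float)
ghi = np.array([50, 180, 380, 560, 700, 790, 810, 760,
                640, 470, 290, 120, 30], dtype=float)

p, _ = curve_fit(gauss2, hour, ghi, p0=[800, 13, 3, 100, 10, 2],
                 maxfev=20000)
fit = gauss2(hour, *p)
rmse = np.sqrt(np.mean((fit - ghi) ** 2))
r2 = 1 - np.sum((ghi - fit) ** 2) / np.sum((ghi - ghi.mean()) ** 2)
print(rmse, r2)
```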

  13. Curve fitting methods for solar radiation data modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using each curve fitting method, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method is used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  14. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
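
    The paper's derivation uses discrete calculus and the Mean Value Theorem; the sketch below shows the same non-iterative spirit via a standard successive-sample linearization for uniformly spaced data, which is an assumed stand-in rather than the paper's exact algorithm:

```python
import numpy as np

def fit_exponential_noniter(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C for uniformly spaced t.
    Uses the identity y[i+1] = a*y[i] + C*(1 - a) with a = exp(B*dt),
    so only two linear least-squares fits are needed (no iteration)."""
    dt = t[1] - t[0]
    # Linear fit of y[i+1] against y[i] gives slope a and intercept b.
    a, b = np.polyfit(y[:-1], y[1:], 1)
    B = np.log(a) / dt
    C = b / (1.0 - a)
    # With B and C fixed, A follows from one more linear fit.
    A = np.linalg.lstsq(np.exp(B * t)[:, None], y - C, rcond=None)[0][0]
    return A, B, C

t = np.linspace(0.0, 2.0, 200)
y = 3.0 * np.exp(-1.5 * t) + 0.7
print(fit_exponential_noniter(t, y))  # ~ (3.0, -1.5, 0.7)
```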

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
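
    The localized straight-line fit itself is simple; the sketch below recovers Isc and its regression-based standard uncertainty from invented I-V points near short circuit. The paper's contribution, choosing the data window by Bayesian evidence, is not reproduced here:

```python
import numpy as np

# Hypothetical I-V points near short circuit (V in volts, I in amperes).
V = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
I = np.array([5.210, 5.208, 5.205, 5.204, 5.201, 5.199])

# Straight-line regression I = m*V + Isc; cov quantifies the fit uncertainty.
(m, Isc), cov = np.polyfit(V, I, 1, cov=True)
u_Isc = np.sqrt(cov[1, 1])   # standard uncertainty of the intercept
print(f"Isc = {Isc:.4f} A, u(Isc) = {u_Isc:.4f} A")
# Caveat from the abstract: widening the window shrinks u_Isc toward zero
# while model discrepancy, not captured here, comes to dominate.
```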

  16. Focusing of light through turbid media by curve fitting optimization

    NASA Astrophysics Data System (ADS)

    Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi

    2016-12-01

    The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
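
    The abstract does not spell out the fitted curve; a common realization of "three measurements fix the optimal phase" is the cosine response of a single phase segment, sketched below under that assumption:

```python
import numpy as np

def optimal_phase(i1, i2, i3):
    """Fit I(phi) = I0 + A*cos(phi - psi) through feedback measurements
    taken at phi = 0, 2*pi/3, 4*pi/3 and return psi, the phase that
    maximizes I. Exactly three measurements determine the cosine curve."""
    c = (2.0 * i1 - i2 - i3) / 3.0     # A*cos(psi)
    s = (i2 - i3) / np.sqrt(3.0)       # A*sin(psi)
    return np.arctan2(s, c) % (2.0 * np.pi)

# Synthetic check: true optimum at 1.0 rad, with measurement noise.
rng = np.random.default_rng(2)
phis = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
meas = 5.0 + 2.0 * np.cos(phis - 1.0) + rng.normal(0, 0.05, 3)
print(optimal_phase(*meas))   # close to 1.0
```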

  17. [Comparison among various software for LMS growth curve fitting methods].

    PubMed

    Han, Lin; Wu, Wenhong; Wei, Qiuxia

    2015-03-01

    To explore methods of realizing skewness-median-coefficient of variation (LMS) growth curve fitting in different software packages, and to optimize the growth curve statistical method for grass-roots child and adolescent staff. Regular physical examination data on head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. Statistical software packages SAS, R, STATA and SPSS were used to fit the LMS growth curves, and the results were evaluated on user convenience, learning effort, user interface, display of results, software updates and maintenance, and so on. The growth curve fitting results showed the same calculated outcome, and each statistical package had its own advantages and disadvantages. With all the evaluation aspects in consideration, R excelled the others in LMS growth curve fitting; R has the advantage over the other packages for grass-roots child and adolescent staff.

  18. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  19. A curve fitting method for solving the flutter equation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cooper, J. L.

    1972-01-01

    A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.

  20. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  1. Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    2016-12-08

    In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²ₙ. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tri-diagonal matrix. The closed form and tri-diagonal nature allow for a simpler expression of the objective function χ²ₙ. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
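
    The practical payoff of a tri-diagonal W⁻¹ is that χ²ₙ = rᵀW⁻¹r reduces to two short sums instead of a dense matrix product. A sketch with illustrative values (the report's closed form for W⁻¹ is not reproduced here):

```python
import numpy as np

def chi2_tridiagonal(residual, diag, off):
    """chi^2 = r^T W^{-1} r when W^{-1} is tri-diagonal, computed in O(n).
    `diag` holds the main diagonal of W^{-1}, `off` the sub/super diagonal
    (illustrative values, not the report's closed-form entries)."""
    r = np.asarray(residual, dtype=float)
    return np.sum(diag * r * r) + 2.0 * np.sum(off * r[:-1] * r[1:])

r = np.array([0.3, -0.1, 0.2, 0.05])
diag = np.array([2.0, 3.0, 3.0, 2.0])
off = np.array([-1.0, -1.0, -1.0])

# Cross-check against the dense computation.
W_inv = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
assert np.isclose(chi2_tridiagonal(r, diag, off), r @ W_inv @ r)
print(chi2_tridiagonal(r, diag, off))
```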

  2. Testing Modified Newtonian Dynamics with Low Surface Brightness Galaxies: Rotation Curve FITS

    NASA Astrophysics Data System (ADS)

    de Blok, W. J. G.; McGaugh, S. S.

    1998-11-01

    We present modified Newtonian dynamics (MOND) fits to 15 rotation curves of low surface brightness (LSB) galaxies. Good fits are readily found, although for a few galaxies minor adjustments to the inclination are needed. Reasonable values for the stellar mass-to-light ratios are found, as well as an approximately constant value for the total (gas and stars) mass-to-light ratio. We show that the LSB galaxies investigated here lie on the one, unique Tully-Fisher relation, as predicted by MOND. The scatter on the Tully-Fisher relation can be completely explained by the observed scatter in the total mass-to-light ratio. We address the question of whether MOND can fit any arbitrary rotation curve by constructing a plausible fake model galaxy. While MOND is unable to fit this hypothetical galaxy, a normal dark-halo fit is readily found, showing that dark-matter fits are much less selective. The good fits to rotation curves of LSB galaxies support MOND, especially because these are galaxies with large mass discrepancies deep in the MOND regime.
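
    For the standard interpolation function μ(x) = x/√(1 + x²) commonly used in such fits, the MOND acceleration follows from the Newtonian one in closed form. A toy sketch under that assumption (not the authors' fitting code; units and values are illustrative):

```python
import numpy as np

A0 = 1.2e-10  # MOND acceleration scale a0 in m/s^2 (commonly adopted value)

def v_mond(r, v_newton):
    """Rotation speed implied by MOND for a Newtonian (stars + gas) curve,
    using mu(x) = x / sqrt(1 + x^2), for which the true acceleration g
    solves g_N = g^2 / sqrt(a0^2 + g^2) in closed form.
    r in meters, velocities in m/s."""
    g_n = v_newton ** 2 / r
    g = g_n * np.sqrt(0.5 * (1.0 + np.sqrt(1.0 + (2.0 * A0 / g_n) ** 2)))
    return np.sqrt(g * r)

kpc = 3.086e19
r = np.linspace(0.5, 10.0, 5) * kpc
v_n = np.full_like(r, 3.0e4)   # flat 30 km/s Newtonian curve (toy values)
print(v_mond(r, v_n) / 1e3)    # km/s; boosted in the deep-MOND regime
```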

  3. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175

  4. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
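
    A bare-bones cuckoo search with Lévy flights is sketched below on a toy least-squares objective standing in for the weighted Bayesian energy functional; the step size and iteration settings are illustrative assumptions:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(3)

def levy(size, beta=1.5):
    """Levy-flight step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, dim, n_nests=25, pa=0.25, iters=500, scale=1.0):
    """Bare-bones cuckoo search: Levy-flight moves plus abandoning a
    fraction pa of nests -- the two-parameter simplicity the paper
    highlights."""
    nests = rng.normal(0, scale, (n_nests, dim))
    fit = np.apply_along_axis(objective, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        # Levy flight around the current best nest.
        new = nests + 0.01 * levy((n_nests, dim)) * (nests - best)
        new_fit = np.apply_along_axis(objective, 1, new)
        improve = new_fit < fit
        nests[improve], fit[improve] = new[improve], new_fit[improve]
        # Abandon a fraction pa of nests and rebuild them at random.
        abandon = rng.random(n_nests) < pa
        if abandon.any():
            nests[abandon] = rng.normal(0, scale, (abandon.sum(), dim))
            fit[abandon] = np.apply_along_axis(objective, 1, nests[abandon])
    return nests[np.argmin(fit)]

# Toy objective: least-squares error of a cubic's coefficients.
x = np.linspace(-1, 1, 50)
y = 2.0 - x + 0.5 * x ** 3
design = np.vander(x, 4)
obj = lambda c: np.sum((design @ c - y) ** 2)
print(cuckoo_search(obj, dim=4, scale=2.0))  # approaches [0.5, 0, -1, 2]
```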

  5. Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.; Taylor, Aaron B.

    2009-01-01

    Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…

  6. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our concern is to apply the methodology to smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  7. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  8. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    NASA Astrophysics Data System (ADS)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
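
    The core geometric step, fitting a circle to fk-spectrum points in the Nyquist plane, can be done with the algebraic (Kasa) least-squares fit sketched below; the angular-sweep extraction of dispersion and attenuation curves is not reproduced here:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2*a*x + 2*b*y + c for the center (a, b) and radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Noisy synthetic Nyquist-plane points on a circular arc
# (center (1, -2), radius 3), standing in for fk-spectrum samples.
rng = np.random.default_rng(4)
theta = np.linspace(0.2, 2.8, 40)
x = 1 + 3 * np.cos(theta) + rng.normal(0, 0.02, theta.size)
y = -2 + 3 * np.sin(theta) + rng.normal(0, 0.02, theta.size)
print(fit_circle(x, y))   # ~ (1, -2, 3)
```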

  9. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  10. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO₄·5H₂O solution were decomposed and corrected using curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
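
    One plausible reading of the residual-feedback idea is alternating refits in which each peak is fitted to the data minus the other peak's current model; the sketch below uses that interpretation with invented line parameters, not the authors' exact scheme:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, m, s):
    """Single Gaussian line profile (stand-in for an emission line)."""
    return a * np.exp(-0.5 * ((x - m) / s) ** 2)

# Synthetic overlapping Cu/Fe-like lines near 321-327 nm (toy values).
x = np.linspace(321.0, 327.0, 300)
rng = np.random.default_rng(5)
y = (gauss(x, 1.0, 323.5, 0.40) + gauss(x, 0.6, 324.4, 0.35)
     + rng.normal(0, 0.01, x.size))

p1, p2 = [0.9, 323.3, 0.5], [0.5, 324.6, 0.5]
for _ in range(5):
    # Feed the residual back: each peak is refit to the data with the
    # other peak's current model subtracted, lowering the total residual.
    p1, _ = curve_fit(gauss, x, y - gauss(x, *p2), p0=p1)
    p2, _ = curve_fit(gauss, x, y - gauss(x, *p1), p0=p2)
    print(np.sum((y - gauss(x, *p1) - gauss(x, *p2)) ** 2))
```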

  11. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tao; Li, Cheng; Huang, Can

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  12. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE PAGES

    Ding, Tao; Li, Cheng; Huang, Can; ...

    2017-01-09

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  13. Reference Curves for Field Tests of Musculoskeletal Fitness in U.S. Children and Adolescents: The 2012 NHANES National Youth Fitness Survey.

    PubMed

    Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C

    2017-08-01

    Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017. The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys and girls curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over

  14. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  15. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. Due to the increase of experimental time-series from microbiology and oncology, the need for software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing users to efficiently manage the results in a structured and hierarchical way. The data managing system allows users to organize projects, experiments and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software, with existing tools and methods allowing users to compare and evaluate different data modeling techniques. The application is described in the context of bacterial and tumor cell growth data fitting, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, being fully scalable to a high number of projects, data and model complexity.
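
    As a minimal example of the kind of model BGFit automates, the sketch below fits the Gompertz model (in the common Zwietering parameterization, an assumption) to an invented growth series:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Gompertz growth in the Zwietering parameterization: A is the
    asymptote, mu the maximum growth rate, lam the lag time."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# Hypothetical OD600 time series for a bacterial culture (toy values).
t = np.array([0, 1, 2, 3, 4, 5, 6, 8, 10, 12], dtype=float)
od = np.array([0.02, 0.03, 0.06, 0.15, 0.35, 0.60, 0.80, 0.95, 1.00, 1.01])

p, _ = curve_fit(gompertz, t, od, p0=[1.0, 0.3, 1.5], maxfev=10000)
print(p)   # estimated (A, mu, lam)
```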

  16. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.

  17. An alternative approach to the Army Physical Fitness Test two-mile run using critical velocity and isoperformance curves.

    PubMed

    Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R

    2012-02-01

    The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean ± SE; age: 22.1 ± 0.34 years; VO₂max: 46.1 ± 0.82 mL/kg/min) volunteered to participate in this study. A VO₂max test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO₂max prediction equation (coefficient of determination: 0.805; standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to the time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
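
    The CV model is a straight line, distance = ARC + CV × time, so the fit and an isoperformance curve for a target two-mile time take only a few lines; the run data and the pass standard below are illustrative, not the study's values:

```python
import numpy as np

# Four exhaustive runs at different intensities (hypothetical values).
t_exh = np.array([150.0, 240.0, 420.0, 780.0])    # time to exhaustion (s)
dist = np.array([680.0, 1010.0, 1650.0, 2900.0])  # total distance (m)

# distance = ARC + CV * time: CV is the slope, ARC the intercept.
cv, arc = np.polyfit(t_exh, dist, 1)
print(f"CV = {cv:.2f} m/s, ARC = {arc:.0f} m")

# Two-mile (3218.7 m) isoperformance curve: (CV, ARC) pairs that give the
# same target time T satisfy ARC = 3218.7 - CV * T.
T_pass = 15 * 60 + 54                 # e.g., a 15:54 standard (illustrative)
cv_grid = np.linspace(2.5, 4.5, 5)
arc_needed = 3218.7 - cv_grid * T_pass
print(np.column_stack([cv_grid, arc_needed]))
```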

  18. Statistically generated weighted curve fit of residual functions for modal analysis of structures

    NASA Technical Reports Server (NTRS)

    Bookout, P. S.

    1995-01-01

    A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
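
    A sketch of the idea, weights generated from the variance between neighboring data points and then used in a second-order polynomial fit, is below with invented residual-function data:

```python
import numpy as np

def weighted_quadratic_fit(freq, resid):
    """Second-order polynomial fit of a residual function, with weights
    generated from the variance between neighboring data points, in the
    spirit of the statistically generated weighting described above."""
    local_var = np.empty_like(resid)
    local_var[1:-1] = np.var(
        np.column_stack([resid[:-2], resid[1:-1], resid[2:]]), axis=1)
    local_var[0], local_var[-1] = local_var[1], local_var[-2]
    w = 1.0 / (local_var + 1e-12)      # ragged regions get small weight
    return np.polyfit(freq, resid, 2, w=np.sqrt(w))

freq = np.linspace(5.0, 50.0, 60)
rng = np.random.default_rng(6)
# Flat-then-curving residual function, ragged only at high frequency.
resid = 1e-6 + 4e-10 * freq ** 2 \
        + rng.normal(0, 2e-8, freq.size) * (freq > 35)
coeffs = weighted_quadratic_fit(freq, resid)
print(coeffs)   # the constant term approximates the residual flexibility
```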

  19. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 sec.
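
    A modern stand-in for fitting with uncertainty in both the standard masses and the instrument response is orthogonal distance regression; the sketch below uses scipy.odr rather than the report's VA02A-based program, with invented calibration data:

```python
import numpy as np
from scipy import odr

# Hypothetical calibration data: standard masses (mg) with 0.2% uncertainty
# and XRF detector response with its own measurement error.
mass = np.array([0.10, 0.25, 0.50, 0.75, 1.00])
resp = np.array([1010.0, 2540.0, 5080.0, 7600.0, 10150.0])

model = odr.Model(lambda beta, m: beta[0] * m + beta[1])  # linear calibration
data = odr.RealData(mass, resp, sx=0.002 * mass, sy=15.0)
fit = odr.ODR(data, model, beta0=[10000.0, 0.0]).run()
print(fit.beta, fit.sd_beta)   # parameters and their standard errors
```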

  20. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration

    2014-03-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle-swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for the most optimal signal fit and can be easily applied to similar problems.
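
    The escalation strategy is easy to prototype; the sketch below tries SciPy's Newton-CG first and falls back to differential evolution (one of the three NI methods named above; the others would slot in the same way). The data and thresholds are illustrative:

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def gaussian(x, a, mu, sigma, c):
    """Beam-profile model fitted to wire scanner signals."""
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

def num_grad(f, p, eps=1e-6):
    """Central-difference gradient so Newton-CG has a Jacobian."""
    g = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = eps
        g[i] = (f(p + step) - f(p - step)) / (2 * eps)
    return g

def hybrid_fit(x, y, p0, bounds, tol=1e-3):
    """Fast locally convergent Newton-CG first; escalate to a globally
    convergent method only if the local fit fails."""
    chi2 = lambda p: np.sum((gaussian(x, *p) - y) ** 2)
    res = minimize(chi2, p0, method="Newton-CG",
                   jac=lambda p: num_grad(chi2, p))
    if res.success and chi2(res.x) < tol * y.size:
        return res.x
    return differential_evolution(chi2, bounds, seed=0).x

rng = np.random.default_rng(7)
x = np.linspace(-5, 5, 100)
y = gaussian(x, 2.0, 0.5, 1.2, 0.1) + rng.normal(0, 0.02, x.size)
p0 = [y.max() - y.min(), x[np.argmax(y)], 1.0, y.min()]  # data-derived guess
bounds = [(0, 5), (-5, 5), (0.1, 5), (-1, 1)]
print(hybrid_fit(x, y, p0, bounds))
```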

  1. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    PubMed

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic ¹⁸F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MRglc) estimation, and therapy response monitoring (ΔMRglc). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of ¹⁸F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic ¹⁸F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MRglc estimation and with ΔMRglc for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold
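
    A sketch of the proposed APTAC shape, a linear rise to the peak followed by a tri-exponential decay, is below; the parameter names and values are illustrative, not the fitted population medians:

```python
import numpy as np

def aptac(t, t_peak, a1, a2, a3, l1, l2, l3):
    """Linear rise to the peak followed by a tri-exponential decay; the
    peak value a1 + a2 + a3 makes the two pieces meet continuously.
    (Parameter names are illustrative, not the paper's notation.)"""
    peak = a1 + a2 + a3
    decay = (a1 * np.exp(-l1 * (t - t_peak))
             + a2 * np.exp(-l2 * (t - t_peak))
             + a3 * np.exp(-l3 * (t - t_peak)))
    return np.where(t <= t_peak, peak * t / t_peak, decay)

t = np.linspace(0.0, 60.0, 300)                       # minutes
curve = aptac(t, 0.75, 40.0, 8.0, 4.0, 4.0, 0.3, 0.01)
# Trapezoidal AUC, the quantity used to compare the calibration methods.
auc = np.sum(0.5 * (curve[1:] + curve[:-1]) * np.diff(t))
print(auc)
```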

  2. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented along with the limited number of changes that are required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.

  3. On the Use of Nonparametric Item Characteristic Curve Estimation Techniques for Checking Parametric Model Fit

    ERIC Educational Resources Information Center

    Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey

    2009-01-01

    The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…

  4. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration

    2013-10-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle-swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for the most optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.

  5. Clarifications regarding the use of model-fitting methods of kinetic analysis for determining the activation energy from a single non-isothermal curve.

    PubMed

    Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M

    2013-02-05

    This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Savata are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore, it is not possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.

  6. A Curve Fitting Approach Using ANN for Converting CT Number to Linear Attenuation Coefficient for CT-based PET Attenuation Correction

    NASA Astrophysics Data System (ADS)

    Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng

    2015-02-01

    Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step using the curve-fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by the Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used in this study to generate energy-mapping curves by acquiring the average CT values and linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by the ANN and by bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms are 5.46% and 1.28% based on the ANN, and 19.21% and 1.87% based on the bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve are 0.91%, 0.70% and 3.70%, and the PRD values of the bilinear transformation are 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve can achieve better performance than the bilinear transformation. The semi-quantitative analysis of the rat PET images showed that the ANN curve can reduce the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue. On the other hand, the inaccuracy remained 6.47% and 11.51% in brain and heart tissue when the bilinear transformation
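
    The energy-mapping idea, learning the curve from CT number to μ at 511 keV, can be sketched with a small regression network. The study used MATLAB's ANN toolbox; scikit-learn stands in below, and the calibration pairs are made-up placeholders rather than the paper's phantom data:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical calibration data: CT numbers (HU) and mu(511 keV) in 1/cm
    hu = np.array([-1000, -500, -100, 0, 100, 300, 600, 1000, 1500]).reshape(-1, 1)
    mu = np.array([0.000, 0.048, 0.087, 0.096, 0.101, 0.110, 0.123, 0.140, 0.162])

    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(hu / 1000.0, mu)                 # scale inputs for stable training

    print(net.predict(np.array([[50], [800]]) / 1000.0))  # mu for HU = 50, 800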

  7. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest possible computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both location and continuity level, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving an ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
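
    A simplified flavor of the two-step idea (not the authors' algorithm) is sketched below: interior knots are inserted where the least-squares B-spline error is largest, and SciPy solves the ordinary least-squares spline for each candidate knot set. The test function and tolerance are assumptions:

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 400)
    y = np.sin(8 * np.pi * x**2) + rng.normal(0, 0.02, x.size)

    knots, tol = [], 0.1
    for _ in range(30):
        spline = LSQUnivariateSpline(x, y, sorted(knots), k=3)
        err = np.abs(spline(x) - y)
        if err.max() < tol:
            break
        worst = float(x[np.argmax(err)])     # coarse knot near the worst misfit
        if 0.0 < worst < 1.0 and all(abs(worst - k) > 0.01 for k in knots):
            knots.append(worst)
        else:
            break                            # no admissible knot left to insert
    print(len(knots), "interior knots, max error", round(float(err.max()), 3))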

  8. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for judging the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection to judge assembly quality, and misjudgments occurred. For this reason, research on the standard was carried out, and the automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.

  9. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sager, D.

    1973-01-01

    Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve flying performance; and that there is no performance difference between 60 deg. and 90 deg. turns. A tradeoff of curve path parameters and a paper analysis of wind compensation were also made.

  11. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm, and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectrum of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm, and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.

  12. Advantages of estimating parameters of photosynthesis model by fitting A-Ci curves at multiple subsaturating light intensities

    NASA Astrophysics Data System (ADS)

    Fu, W.; Gu, L.; Hoffman, F. M.

    2013-12-01

    The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely the net assimilation of CO2 against intercellular CO2 concentration (A-Ci) curves, made at saturating light conditions. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). In order to reliably estimate photosynthetic parameters, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural over-parameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately. Using subsaturating light intensities allows more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve fitting methods. Inaccurate estimates of fitted parameters like the photocompensation point and day respiration impose a significant limitation on modeling leaf CO2 exchange. The multiple A-Ci curve fitting can also improve on the so-called Laisk (1977) method, which was shown by some recent publications to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions to constrain measured A-Ci points to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of resistance of the chloroplast and cell wall-plasmalemma and explore the effect of variable mesophyll conductance. The variable mesophyll conductance

  13. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    PubMed

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics

  14. On the convexity of ROC curves estimated from radiological test results.

    PubMed

    Pesce, Lorenzo L; Metz, Charles E; Berbaum, Kevin S

    2010-08-01

    Although an ideal observer's receiver operating characteristic (ROC) curve must be convex (i.e., its slope must decrease monotonically), published fits to empirical data often display "hooks." Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This article aims to identify the practical implications of nonconvex ROC curves and the conditions that can lead to empirical or fitted ROC curves that are not convex. This article views nonconvex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any nonconvex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. In general, ROC curve fits that show hooks should be looked on with suspicion unless other arguments justify their presence. 2010 AUR. Published by Elsevier Inc. All rights reserved.

  15. Curve fits of predicted inviscid stagnation-point radiative heating rates, cooling factors, and shock standoff distances for hyperbolic earth entry

    NASA Technical Reports Server (NTRS)

    Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.

    1974-01-01

    Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
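
    The power-law/least-squares-logarithm recipe is compact enough to show directly. The sketch below fits an assumed form q = C * R^a * V^b by linear least squares on logarithms; the data values are synthetic placeholders, not the report's flow-field solutions:

    import numpy as np

    R = np.array([30, 60, 120, 240, 450], dtype=float)   # nose radius, cm
    V = np.array([11, 12, 13, 14.5, 16], dtype=float)    # velocity, km/sec
    q = 5e-9 * R**0.6 * V**8.0                           # synthetic heating data

    # log q = log C + a*log R + b*log V is linear in the unknowns
    A = np.column_stack([np.ones_like(R), np.log(R), np.log(V)])
    coef, *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
    C, a, b = np.exp(coef[0]), coef[1], coef[2]
    print(C, a, b)                                       # recovers 5e-9, 0.6, 8.0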

  16. On the convexity of ROC curves estimated from radiological test results

    PubMed Central

    Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.

    2010-01-01

    Rationale and Objectives Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155

  17. Using a constrained formulation based on probability summation to fit receiver operating characteristic (ROC) curves

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David

    2000-04-01

    A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes that the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is the greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (runs on a PC with Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook' that falls below the constrained ROC.

  18. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    NASA Astrophysics Data System (ADS)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A "preventing increase" (retarded growth) model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the model is confirmed through a numerical experiment. The experimental results show that the precision of the model is satisfactory.

  19. Decomposition of mineral absorption bands using nonlinear least squares curve fitting: Application to Martian meteorites and CRISM data

    NASA Astrophysics Data System (ADS)

    Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.

    2011-04-01

    This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions, and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve results comparable to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally, we use Gaussian modeling to fit CRISM spectra of pyroxene- and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
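
    The decomposition step, continuum removal followed by a nonlinear least-squares fit of a sum of Gaussians, can be sketched as follows. Band centers, widths, and the linear continuum are invented for illustration; SciPy's Levenberg-Marquardt implementation stands in for the authors' code:

    import numpy as np
    from scipy.optimize import least_squares

    def bands(p, wl):
        """Sum of Gaussian absorption bands; p holds (amp, center, width) triples."""
        g = np.zeros_like(wl)
        for amp, cen, wid in p.reshape(-1, 3):
            g += amp * np.exp(-0.5 * ((wl - cen) / wid) ** 2)
        return g

    wl = np.linspace(1.0, 2.6, 300)                   # wavelength, microns
    true = np.array([-0.20, 1.9, 0.12, -0.15, 2.3, 0.10])
    refl = 0.9 - 0.05 * wl + bands(true, wl)          # continuum + absorptions

    # Simple linear continuum through the spectrum endpoints
    continuum = np.polyval(np.polyfit(wl[[0, -1]], refl[[0, -1]], 1), wl)
    resid = lambda p: (continuum + bands(p, wl)) - refl

    p0 = np.array([-0.1, 1.85, 0.1, -0.1, 2.35, 0.1]) # initial band guesses
    fit = least_squares(resid, p0, method="lm")       # Levenberg-Marquardt
    print(fit.x.reshape(-1, 3))                       # amp, center, width per band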

  20. An Approach of Estimating Individual Growth Curves for Young Thoroughbred Horses Based on Their Birthdays

    PubMed Central

    ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro

    2014-01-01

    We propose an approach for estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on these equations, a parameter denoting the birthday information was added for the modeling of the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of predicted error difference and Akaike Information Criterion showed that the individual growth curves using birthday information fit the body weight and withers height data better than those without it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds in compensatory growth periods. PMID:25013356

  1. An interactive user-friendly approach to surface-fitting three-dimensional geometries

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1988-01-01

    A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.

  2. Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.

    PubMed Central

    Franco, R; Aran, J M; Canela, E I

    1991-01-01

    A method is presented for fitting the (product formed, time) value pairs taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914

  3. Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.

    1981-01-01

    The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRFs) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRFs are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. Data acquisition and FFT processing, curve fitting, and modal tuning phases are described, and examples are given to illustrate and evaluate them.

  4. Significantly Reduced Blood Pressure Measurement Variability for Both Normotensive and Hypertensive Subjects: Effect of Polynomial Curve Fitting of Oscillometric Pulses

    PubMed Central

    Zhu, Mingping; Chen, Aiqing

    2017-01-01

    This study aimed to compare within-subject blood pressure (BP) variabilities from different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the above thresholds on a polynomial curve fitted to the oscillometric pulse peaks. Finally, the standard deviation (SD) of the three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced the SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP and reduced the CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, the SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and the CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses had the ability to reduce automatic BP measurement variability. PMID:28785580
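
    The threshold logic with polynomial smoothing might look like the sketch below: a polynomial is fitted to the (cuff pressure, pulse amplitude) peaks, and SBP/DBP are read off where the normalized envelope crosses the 0.5 and 0.7 ratios quoted in the abstract. The simulated envelope and the polynomial order are assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    cuff = np.linspace(160, 40, 60)                   # cuff deflation, mmHg
    amp = np.exp(-0.5 * ((cuff - 95) / 25) ** 2) + rng.normal(0, 0.03, cuff.size)

    coef = np.polyfit(cuff, amp, 6)                   # polynomial envelope
    env = np.polyval(coef, cuff)
    env_norm = env / env.max()

    map_est = cuff[np.argmax(env)]                    # MAP at the envelope peak
    sbp = cuff[(cuff > map_est) & (env_norm >= 0.5)].max()  # 0.5 ratio above MAP
    dbp = cuff[(cuff < map_est) & (env_norm >= 0.7)].min()  # 0.7 ratio below MAP
    print(f"SBP ~ {sbp:.0f}, MAP ~ {map_est:.0f}, DBP ~ {dbp:.0f} mmHg")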

  5. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data.

    PubMed

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or other bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus

  6. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data

    PubMed Central

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or other bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus

  7. Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods

    NASA Astrophysics Data System (ADS)

    Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.

    2018-06-01

    We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g′r′i′z′ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.

  8. Modeling and Maximum Likelihood Fitting of Gamma-Ray and Radio Light Curves of Millisecond Pulsars Detected with Fermi

    NASA Technical Reports Server (NTRS)

    Johnson, T. J.; Harding, A. K.; Venter, C.

    2012-01-01

    Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.

  9. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    ERIC Educational Resources Information Center

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  10. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…

  11. A Survey of Xenon Ion Sputter Yield Data and Fits Relevant to Electric Propulsion Spacecraft Integration

    NASA Technical Reports Server (NTRS)

    Yim, John T.

    2017-01-01

    A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
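
    A hand-rolled Metropolis sampler illustrates the Bayesian/MCMC fitting approach. The threshold yield form Y(E) = a*(sqrt(E) - sqrt(E_th)) is only an assumed stand-in for the fit formulas assessed in the survey, and the data points are invented:

    import numpy as np

    E = np.array([50, 100, 200, 300, 500, 750, 1000], dtype=float)  # ion energy, eV
    Y = np.array([0.05, 0.22, 0.48, 0.67, 0.95, 1.20, 1.40])        # atoms/ion
    sigma = 0.1 * Y + 0.02                                          # assumed errors

    def log_post(theta):
        """Log-posterior: flat priors within bounds plus Gaussian likelihood."""
        a, eth = theta
        if a <= 0 or not 0 < eth < E.min():
            return -np.inf
        model = a * (np.sqrt(E) - np.sqrt(eth))
        return -0.5 * np.sum(((Y - model) / sigma) ** 2)

    rng = np.random.default_rng(4)
    theta = np.array([0.05, 20.0])
    lp = log_post(theta)
    chain = []
    for _ in range(20000):                          # Metropolis random walk
        prop = theta + rng.normal(0, [0.002, 1.0])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta)
    chain = np.array(chain[5000:])                  # drop burn-in samples
    print(chain.mean(axis=0), chain.std(axis=0))    # estimates and uncertainties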

  12. An interactive graphics program to retrieve, display, compare, manipulate, curve fit, difference and cross plot wind tunnel data

    NASA Technical Reports Server (NTRS)

    Elliott, R. D.; Werner, N. M.; Baker, W. M.

    1975-01-01

    The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved with ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.

  13. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these, a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.

  14. The Application of Curve Fitting on the Voltammograms of Various Isoforms of Metallothioneins–Metal Complexes

    PubMed Central

    Merlos Rodrigo, Miguel Angel; Molina-López, Jorge; Jimenez Jimenez, Ana Maria; Planells Del Pozo, Elena; Adam, Pavlina; Eckschlager, Tomas; Zitka, Ondrej; Richtera, Lukas; Adam, Vojtech

    2017-01-01

    The translation of metallothioneins (MTs) is one of the defense strategies by which organisms protect themselves from metal-induced toxicity. MTs belong to a family of proteins comprising MT-1, MT-2, MT-3, and MT-4 classes, with multiple isoforms within each class. The main aim of this study was to determine the behavior of MT in dependence on various externally modelled environments, using electrochemistry. In our study, the mass distribution of MTs was characterized using MALDI-TOF. After that, the adsorptive transfer stripping technique with differential pulse voltammetry was selected for optimization of the electrochemical detection of MTs with regard to accumulation time and pH effects. Our results show that utilization of 0.5 M NaCl, pH 6.4, as the supporting electrolyte provides a highly complicated fingerprint, showing a number of non-resolved voltammograms. Hence, we further resolved the voltammograms exhibiting the broad and overlapping signals using curve fitting. The separated signals were assigned to the electrochemical responses of several MT complexes with zinc(II), cadmium(II), and copper(II), respectively. Our results show that electrochemistry could serve as a great tool for metalloproteomic applications to determine the ratio of metal ion bonds within the target protein structure; however, it provides highly complicated signals, which require further resolution using a proper statistical method, such as curve fitting. PMID:28287470

  15. A mathematical function for the description of nutrient-response curve

    PubMed Central

    Ahmadi, Hamed

    2017-01-01

    Several mathematical equations have been proposed for modeling the nutrient-response curve in animals and humans, justified by goodness of fit and/or by biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for the description of nutrient-response phenomena is derived. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrients and physiological responses. The new function was successfully applied to fit the nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done based on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements. PMID:29161271

  16. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    PubMed

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because the serial images have a low signal-to-noise ratio. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal-means-filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal-means-filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal-means-filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
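
    The regularization idea can be shown for a single pixel: the data-fidelity term of a monoexponential decay fit is augmented with a penalty that pulls the pixel's R2* toward its neighbors, weighted by signal similarity. This is a schematic sketch, not the published PCANR code; the weights, lambda, and data are assumptions:

    import numpy as np
    from scipy.optimize import minimize

    TE = np.array([1.0, 2.2, 3.4, 4.6, 5.8, 7.0])       # echo times, ms
    rng = np.random.default_rng(5)
    signal = 100 * np.exp(-0.25 * TE) + rng.normal(0, 3, TE.size)

    r2_neighbors = np.array([0.22, 0.27, 0.24, 0.26])   # neighbor R2* fits, 1/ms
    w = np.array([0.9, 0.6, 0.8, 0.7])                  # signal-similarity weights
    lam = 50.0                                          # regularization strength

    def cost(p):
        s0, r2 = p
        data_term = np.sum((signal - s0 * np.exp(-r2 * TE)) ** 2)
        reg_term = lam * np.sum(w * (r2 - r2_neighbors) ** 2)
        return data_term + reg_term

    fit = minimize(cost, x0=[signal[0], 0.2], method="Nelder-Mead")
    print(fit.x)                                        # [S0, R2*]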

  17. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    ERIC Educational Resources Information Center

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  18. Longitudinal Models of Reliability and Validity: A Latent Curve Approach.

    ERIC Educational Resources Information Center

    Tisak, John; Tisak, Marie S.

    1996-01-01

    Dynamic generalizations of reliability and validity that will incorporate longitudinal or developmental models, using latent curve analysis, are discussed. A latent curve model formulated to depict change is incorporated into the classical definitions of reliability and validity. The approach is illustrated with sociological and psychological…

  19. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
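
    The same statistical idea, letting the standards' masses vary within their known errors while fitting the calibration curve, can be sketched with SciPy's orthogonal distance regression, which treats errors in both variables explicitly. The quadratic calibration form and all numbers are assumptions:

    import numpy as np
    from scipy.odr import ODR, Model, RealData

    mass = np.array([0.1, 0.25, 0.4, 0.6, 0.8, 1.0])     # mg, gravimetric standards
    resp = np.array([102, 251, 398, 601, 795, 1003.0])   # XRF counts (synthetic)

    data = RealData(mass, resp, sx=0.002 * mass,         # 0.2% mass accuracy
                    sy=0.01 * resp)                      # assumed system error
    model = Model(lambda b, x: b[0] + b[1] * x + b[2] * x**2)

    out = ODR(data, model, beta0=[0.0, 1000.0, 0.0]).run()
    print(out.beta)                                      # calibration parameters
    print(out.sd_beta)                                   # parameter error estimates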

  20. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    PubMed Central

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860

  1. Assessment of Person Fit Using Resampling-Based Approaches

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2016-01-01

    De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…

  2. Edge detection and mathematic fitting for corneal surface with Matlab software.

    PubMed

    Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na

    2017-01-01

    To select the optimal edge detection methods to identify the corneal surface, and to compare three fitting curve equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identification methods (ginput and getpts) were applied to identify the edge coordinates respectively. The differences among these methods were compared. A binomial curve (y = Ax^2 + Bx + C), a polynomial curve (p(x) = p1*x^n + p2*x^(n-1) + ... + pn*x + pn+1) and a conic section (Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0) were used to fit the corneal surface respectively. The relative merits of the three fitting curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The coordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could calculate the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between 'e' values from corneal topography and the conic section (t = 0.9143, P = 0.3760 > 0.05). It is feasible to simulate the corneal surface with a mathematical curve with Matlab software. Edge detection has better repeatability and higher efficiency. The manual identification approach is an indispensable complement for detection. Polynomial and conic section are both alternative methods for corneal curve fitting. The conic curve was the optimal choice based on its specific geometrical properties.
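
    Conic-section fitting to detected edge points reduces to a small linear-algebra exercise: build the design matrix for Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 and take the smallest singular vector as the least-squares conic. The synthetic "corneal" edge points below are assumptions:

    import numpy as np

    theta = np.linspace(0.2, np.pi - 0.2, 100)
    x = 7.8 * np.cos(theta)                     # ellipse-like corneal edge
    y = 6.5 * np.sin(theta) + np.random.default_rng(6).normal(0, 0.02, theta.size)

    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    A, B, C, Dc, E, F = Vt[-1]                  # null-space (least-squares) conic

    # Eccentricity for the (assumed axis-aligned) conic, scale- and sign-invariant
    r = min(abs(A), abs(C)) / max(abs(A), abs(C))
    e = np.sqrt(1 - r)
    print(e)                                    # ~ sqrt(1 - (6.5/7.8)**2)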

  3. Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach

    PubMed Central

    Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy

    2014-01-01

    We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901

  4. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    NASA Astrophysics Data System (ADS)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss fitting the potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.

  5. Turbine blade profile design method based on Bezier curves

    NASA Astrophysics Data System (ADS)

    Alexeev, R. A.; Tishchenko, V. A.; Gribin, V. G.; Gavrilov, I. Yu.

    2017-11-01

    In this paper, a technique for two-dimensional parametric blade profile design is presented. Bezier curves are used to create the profile geometry. The main feature of the proposed method is an adaptive approach to fitting the curves to given geometric conditions. The profile shape is calculated by a multi-dimensional minimization method with a number of restrictions imposed on the blade geometry. The proposed method has been used to describe the parametric geometry of a known blade profile; the baseline geometry was then modified by varying some parameters of the blade. Numerical calculations of the resulting designs were carried out, and the results demonstrate the efficiency of the chosen approach.
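
    For a fixed parameterization, fitting the control points of a Bezier curve to sampled geometry is a linear least-squares problem; the constrained multi-dimensional minimization the paper describes would wrap around a step like this. A sketch in Python/NumPy with a stand-in contour:

      import numpy as np
      from math import comb

      def bernstein_matrix(t, n):
          # Row k holds the Bernstein basis values B_{i,n}(t_k) of a
          # degree-n Bezier curve evaluated at parameter t_k.
          return np.array([[comb(n, i) * ti**i * (1.0 - ti)**(n - i)
                            for i in range(n + 1)] for ti in t])

      t = np.linspace(0.0, 1.0, 50)                      # assumed parameters
      target = np.column_stack([t, np.sin(np.pi * t)])   # stand-in blade contour
      B = bernstein_matrix(t, 3)                         # cubic Bezier basis
      ctrl, *_ = np.linalg.lstsq(B, target, rcond=None)  # control points
      fitted = B @ ctrl                                  # points on the curve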

  6. Improvements in Spectrum's fit to program data tool.

    PubMed

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived either by fitting to seroprevalence surveillance and survey data or by generating curves consistent with program and vital registration data, such as historical trends in the numbers of newly diagnosed infections, people living with HIV, and AIDS-related deaths. This article describes the development and application of the fit to program data (FPD) tool in the Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for the proportion undiagnosed or for misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best-fitting curve, and the asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries, where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
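
    Spectrum's exact parameterization is not given in the abstract; the sketch below fits one common double-logistic form to a hypothetical series of newly reported cases by least squares (Python/SciPy), a simplified stand-in for the FPD tool's maximum-likelihood or minimum chi-squared fits.

      import numpy as np
      from scipy.optimize import curve_fit

      def double_logistic(t, a1, r1, t1, a2, r2, t2):
          # Rising logistic minus a second logistic, so incidence can rise
          # and then decline (one common form; not necessarily Spectrum's).
          return (a1 / (1.0 + np.exp(-r1 * (t - t1)))
                  - a2 / (1.0 + np.exp(-r2 * (t - t2))))

      years = np.arange(1990, 2016)
      cases = double_logistic(years, 9000, 0.6, 1997, 6000, 0.4, 2006)
      cases += np.random.normal(0, 150, years.size)   # synthetic reporting noise
      p0 = [8000, 0.5, 1998, 5000, 0.5, 2005]
      popt, pcov = curve_fit(double_logistic, years, cases, p0=p0)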

  7. Comparison of the A-Cc curve fitting methods in determining maximum ribulose 1,5-bisphosphate carboxylase/oxygenase carboxylation rate, potential light saturated electron transport rate and leaf dark respiration.

    PubMed

    Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei

    2009-02-01

    A review of the literature revealed that a variety of methods are currently used for fitting curves of net CO2 assimilation against chloroplastic CO2 concentration (A-Cc curves), resulting in considerable differences in the estimated A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimates of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggest that the fitting methods significantly affected the predictions of the Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and of leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of the fitting procedure and sample size requirements, we recommend combining grid search and non-linear techniques to fit Vcmax, Jmax, TPU, Rd and gm directly and simultaneously with the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for the remaining parameters with the first-fitted values held constant.
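
    For readers unfamiliar with A-Cc fitting, the sketch below fits Vcmax, Jmax and Rd directly and simultaneously to a whole synthetic A-Cc curve with the standard Farquhar-type limiting-rate expressions (Python/SciPy; the kinetic constants are illustrative, and the paper additionally estimates gm and TPU):

      import numpy as np
      from scipy.optimize import curve_fit

      KC, KO, O2, GSTAR = 270.0, 165.0, 210.0, 37.0   # illustrative constants

      def a_cc(Cc, Vcmax, Jmax, Rd):
          # Net assimilation as the minimum of the Rubisco-limited (Ac) and
          # RuBP-regeneration-limited (Aj) rates, minus day respiration.
          Ac = Vcmax * (Cc - GSTAR) / (Cc + KC * (1.0 + O2 / KO))
          Aj = Jmax * (Cc - GSTAR) / (4.0 * Cc + 8.0 * GSTAR)
          return np.minimum(Ac, Aj) - Rd

      Cc = np.linspace(50.0, 1500.0, 15)               # umol mol^-1
      A_obs = a_cc(Cc, 60.0, 120.0, 1.5) + 0.3 * np.random.randn(Cc.size)
      (Vcmax, Jmax, Rd), _ = curve_fit(a_cc, Cc, A_obs, p0=[50, 100, 1])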

  8. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han's model for rubber vulcanization

    NASA Astrophysics Data System (ADS)

    Milani, G.; Milani, F.

    GUI software (GURU) for fitting experimental rheometer curves of Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded into GURU from an Excel spreadsheet produced by the experimental machine (a moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed-form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from the computations. Three kinetic constants must be determined in such a way as to minimize the absolute error between the normalized experimental data and the numerical prediction. Usually, this is achieved by standard least-squares data fitting. GURU instead works interactively through a Graphical User Interface (GUI) and allows an interactive calibration of the kinetic constants by means of sliders: a simple mouse click on a slider assigns a value to each kinetic constant and gives a visual comparison between the numerical and experimental curves. Users thus find optimal values of the constants by a classic trial-and-error strategy. An experimental case of technical relevance is shown as a benchmark.
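
    The slider-driven calibration GURU implements can be mimicked in a few lines of Python/Matplotlib. In the sketch below a generic two-exponential kinetic surrogate stands in for Han's closed-form crosslink-density expression, which the abstract does not reproduce; the slider wiring is the point of the example.

      import numpy as np
      import matplotlib.pyplot as plt
      from matplotlib.widgets import Slider

      t = np.linspace(0.0, 30.0, 300)                           # curing time, min
      data = 1.0 - np.exp(-0.25 * np.clip(t - 2.0, 0.0, None))  # stand-in curve

      def model(t, k1, k2, k3):
          # Generic consecutive-reaction response; Han's closed-form
          # crosslink density would replace this surrogate.
          return k3 * (1.0 - (k1 * np.exp(-k2 * t) - k2 * np.exp(-k1 * t))
                       / (k1 - k2 + 1e-9))

      fig, ax = plt.subplots()
      plt.subplots_adjust(bottom=0.32)
      ax.plot(t, data, "k.", ms=3, label="experiment")
      line, = ax.plot(t, model(t, 0.3, 0.1, 1.0), label="model")
      ax.legend()

      sliders = [Slider(plt.axes([0.15, 0.05 + 0.07 * i, 0.7, 0.04]),
                        name, 0.01, 2.0, valinit=v)
                 for i, (name, v) in enumerate([("k1", 0.3), ("k2", 0.1),
                                                ("k3", 1.0)])]

      def update(_):
          # Redraw the model curve whenever any kinetic constant changes
          line.set_ydata(model(t, *(s.val for s in sliders)))
          fig.canvas.draw_idle()

      for s in sliders:
          s.on_changed(update)
      plt.show()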

  9. FIT-MART: Quantum Magnetism with a Gentle Learning Curve

    NASA Astrophysics Data System (ADS)

    Engelhardt, Larry; Garland, Scott C.; Rainey, Cameron; Freeman, Ray A.

    We present a new open-source software package, FIT-MART, that allows non-experts to quickly get started simulating quantum magnetism. FIT-MART can be downloaded as a platform-independent executable Java (JAR) file. It allows the user to define (Heisenberg) Hamiltonians by electronically drawing pictures that represent quantum spins and operators. Sliders are automatically generated to control the values of the parameters in the model; when the values change, several plots are updated in real time to display both the resulting energy spectra and the equilibrium magnetic properties. Several experimental data sets for real magnetic molecules are included in FIT-MART to allow easy comparison between simulated and experimental data, and FIT-MART users can also import their own data for analysis and compare the goodness of fit for different models.
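
    Behind FIT-MART's drawing interface is ordinary exact diagonalization of Heisenberg Hamiltonians. A minimal sketch for the smallest case, a spin-1/2 dimer with H = J S1·S2 (Python/NumPy here; FIT-MART itself is Java and handles larger spin systems):

      import numpy as np

      # spin-1/2 operators (hbar = 1)
      sx = np.array([[0, 1], [1, 0]], complex) / 2
      sy = np.array([[0, -1j], [1j, 0]], complex) / 2
      sz = np.array([[1, 0], [0, -1]], complex) / 2

      J = 1.0                                            # exchange constant
      H = J * sum(np.kron(s, s) for s in (sx, sy, sz))   # H = J S1 . S2

      # Eigenvalues: a singlet at -3J/4 and a threefold-degenerate
      # triplet at +J/4, i.e. the familiar dimer spectrum.
      E = np.linalg.eigvalsh(H)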

  10. Theoretical derivation of anodizing current and comparison between fitted curves and measured curves under different conditions.

    PubMed

    Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei

    2015-04-10

    Anodic TiO2 nanotubes have been studied extensively for many years, but their growth kinetics remains unclear; a systematic study of the current transient under constant anodizing voltage has not appeared in the original literature. Here, a derivation and the corresponding theoretical formula are proposed to overcome this challenge. In this paper, theoretical expressions for the time-dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic currents and temperature, are proposed. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, this is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, the ionic and electronic currents both increase, with temperature having a larger effect on the electronic current than on the ionic current. These results can advance the study of growth kinetics from a qualitative to a quantitative level.
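
    The exponential law quoted above is linear in log space, so once (E, j_ion) pairs are in hand, A and B follow from a one-line regression. A sketch with made-up numbers (Python/NumPy; the field values are purely illustrative):

      import numpy as np

      # High-field ionic conduction: j_ion = A * exp(B * E),
      # hence ln(j_ion) = ln(A) + B * E, a straight line in E.
      E = np.linspace(2e8, 6e8, 10)                       # field strength, V/m
      j = 2e-4 * np.exp(1.5e-8 * E) * np.exp(0.02 * np.random.randn(E.size))
      B, lnA = np.polyfit(E, np.log(j), 1)                # slope B, intercept ln A
      A = np.exp(lnA)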

  11. Theoretical derivation of anodizing current and comparison between fitted curves and measured curves under different conditions

    NASA Astrophysics Data System (ADS)

    Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei

    2015-04-01

    Anodic TiO2 nanotubes have been studied extensively for many years, but their growth kinetics remains unclear; a systematic study of the current transient under constant anodizing voltage has not appeared in the original literature. Here, a derivation and the corresponding theoretical formula are proposed to overcome this challenge. In this paper, theoretical expressions for the time-dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic currents and temperature, are proposed. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, this is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, the ionic and electronic currents both increase, with temperature having a larger effect on the electronic current than on the ionic current. These results can advance the study of growth kinetics from a qualitative to a quantitative level.

  12. Analysis of Learning Curve Fitting Techniques.

    DTIC Science & Technology

    1987-09-01

    1986. 15. Neter, John, and others. Applied Linear Regression Models. Homewood, IL: Irwin, 19-33. 16. SAS User's Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston... lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter, et al., Applied Linear Regression Models.

  13. Testing goodness of fit in regression: a general approach for specified alternatives.

    PubMed

    Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J

    2012-12-10

    When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper treats different types of lack of fit within a unified general framework and covers many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.

  14. GERMINATOR: a software package for high-throughput scoring and curve fitting of Arabidopsis seed germination.

    PubMed

    Joosen, Ronny V L; Kodde, Jan; Willems, Leo A J; Ligterink, Wilco; van der Plas, Linus H W; Hilhorst, Henk W M

    2010-04-01

    Over the past few decades, seed physiology research has contributed to many important scientific discoveries and has provided valuable tools for the production of high-quality seeds. An important instrument for this type of research is the accurate quantification of germination; however, gathering cumulative germination data is a very laborious task that often prohibits the execution of large experiments. In this paper we present the germinator package: a simple, highly cost-efficient and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The germinator package contains three modules: (i) design of the experimental setup, with various options to replicate and randomize samples; (ii) automatic scoring of germination based on the color contrast between the protruding radicle and the seed coat in a single image; and (iii) curve fitting of cumulative germination data and the extraction, summary and visualization of the various germination parameters. The curve-fitting module enables analysis of general cumulative germination data and can be used for all plant species. We show that the automatic scoring system works for Arabidopsis thaliana and Brassica spp. seeds, but it is likely to be applicable to other species as well. In this paper we show the accuracy, reproducibility and flexibility of the germinator package. We have successfully applied it to evaluate natural variation for salt tolerance in a large population of recombinant inbred lines and were able to identify several quantitative trait loci for salt tolerance. Germinator is a low-cost package that allows the monitoring of several thousand germination tests, several times a day, by a single person.

  15. Computational tools for fitting the Hill equation to dose-response curves.

    PubMed

    Gadagkar, Sudhindra R; Call, Gerald B

    2015-01-01

    Many biological response curves commonly assume a sigmoidal shape that can be approximated well by the 4-parameter nonlinear logistic equation, also called the Hill equation. However, estimating the Hill equation parameters ordinarily requires access to commercial software or the ability to write computer code. Here we present two user-friendly and freely available computer programs to fit the Hill equation: a Solver-based Microsoft Excel template and a stand-alone GUI-based "point and click" program called HEPB. Both programs iteratively estimate two of the Hill equation parameters (EC50 and the Hill slope) while constraining the values of the other two (the minimum and maximum asymptotes of the response variable) to fit the Hill equation to the data. In addition, HEPB draws the prediction band at a user-defined confidence level and determines the EC50 value at each of the limits of this band to give boundary values that help objectively delineate sensitive, normal and resistant responses to the drug being tested. Both programs were tested by analyzing twelve datasets that varied widely in data values, sample size and slope, and were found to yield estimates of the Hill equation parameters essentially identical to those from commercial software such as GraphPad Prism and from nls, the nonlinear least-squares function in R. The Excel template provides a means to estimate the parameters of the Hill equation and plot the regression line in a familiar Microsoft Office environment. HEPB, in addition to providing the above results, also computes the prediction band for the data at a user-defined level of confidence, and determines objective cut-off values to distinguish among response types (sensitive, normal and resistant). Furthermore, HEPB also has
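
    The fitting strategy described, estimating EC50 and the Hill slope while holding the two asymptotes fixed, is straightforward to reproduce; neither program's internals are shown in the abstract, so the following is just a minimal Python/SciPy sketch with simulated data.

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(x, bottom, top, ec50, n):
          # 4-parameter logistic (Hill) dose-response curve
          return bottom + (top - bottom) / (1.0 + (ec50 / x) ** n)

      dose = np.logspace(-9, -4, 12)
      resp = hill(dose, 5.0, 95.0, 1e-6, 1.2) + np.random.normal(0, 2, dose.size)

      # Constrain the asymptotes (as the two programs do), fit EC50 and slope
      constrained = lambda x, ec50, n: hill(x, 5.0, 95.0, ec50, n)
      (ec50, slope), pcov = curve_fit(constrained, dose, resp, p0=[1e-6, 1.0])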

  16. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

    ERIC Educational Resources Information Center

    Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

    Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

  17. A Monte Carlo Study of the Effect of Item Characteristic Curve Estimation on the Accuracy of Three Person-Fit Statistics

    ERIC Educational Resources Information Center

    St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane

    2009-01-01

    To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…

  18. Random Initialisation of the Spectral Variables: an Alternate Approach for Initiating Multivariate Curve Resolution Alternating Least Square (MCR-ALS) Analysis.

    PubMed

    Kumar, Keshav

    2017-11-01

    Multivariate curve resolution alternating least squares (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least squares (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; in principle, however, this approach requires that the data arise from a sequential process. Initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm, which demands that each factor have pure variables in the data set. Despite these limitations, the existing approaches have been quite successful for initiating MCR-ALS analysis. The present work proposes an alternative: initialising the spectral variables by generating random values within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach requires neither pure variables for each component of the multicomponent system nor a concentration direction that follows a sequential process. The approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores correlate well with the experimental results. In summary, the present work proposes an alternative way to initiate MCR-ALS analysis.
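
    A bare-bones version of the proposed initialisation, random spectra drawn within each variable's observed range followed by non-negative ALS refinement, might look as follows (Python/NumPy; practical MCR-ALS adds normalization and convergence checks, omitted here):

      import numpy as np

      rng = np.random.default_rng(0)

      def mcr_als(D, k, n_iter=200):
          # Random initial spectra within the limits spanned by the maxima
          # and minima of each spectral variable of the data set D (m x n).
          S = rng.uniform(D.min(axis=0), D.max(axis=0), size=(k, D.shape[1]))
          for _ in range(n_iter):
              C = np.clip(D @ np.linalg.pinv(S), 0.0, None)  # contributions
              S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)  # spectra
          return C, S

      # Two heavily overlapped Gaussian "fluorophores", randomly mixed
      x = np.linspace(0.0, 1.0, 120)
      S_true = np.vstack([np.exp(-(x - 0.45) ** 2 / 0.01),
                          np.exp(-(x - 0.55) ** 2 / 0.01)])
      C_true = rng.random((30, 2))
      C_est, S_est = mcr_als(C_true @ S_true, k=2)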

  19. Radio emission of SN1993J: the complete picture. II. Simultaneous fit of expansion and radio light curves

    NASA Astrophysics Data System (ADS)

    Martí-Vidal, I.; Marcaide, J. M.; Alberdi, A.; Guirado, J. C.; Pérez-Torres, M. A.; Ros, E.

    2011-02-01

    We report on a simultaneous modelling of the expansion and radio light curves of the supernova SN1993J. We developed a simulation code capable of generating synthetic expansion and radio light curves of supernovae by taking into consideration the evolution of the expanding shock, magnetic fields, and relativistic electrons, as well as the finite sensitivity of the interferometric arrays used in the observations. Our software successfully fits all the available radio data of SN 1993J with a standard emission model for supernovae, which is extended with some physical considerations, such as an evolution in the opacity of the ejecta material, a radial decline in the magnetic fields within the radiating region, and a changing radial density profile for the circumstellar medium starting from day 3100 after the explosion.

  20. Segmentation of neuronal structures using SARSA (λ)-based boundary amendment with reinforced gradient-descent curve shape fitting.

    PubMed

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation; conventional approaches to identifying neuronal structure in EM images are therefore unsuccessful. We present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid we apply the Laplacian of Gaussian (LoG) function to obtain structure boundaries, and we finally assemble the detected boundaries with a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements for structure segmentation, were reduced to very low values. Comparison with the ISBI 2012 benchmark method and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and therefore useful for the identification of imaging
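
    The multi-scale LoG stage is simple to sketch; the pyramid fusion and the SARSA (λ) gap-closing stage are much more involved and are omitted. A hedged Python/SciPy fragment on a stand-in image:

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def log_boundary_response(img, sigmas=(1.0, 2.0, 4.0)):
          # Scale-normalized |LoG| responses, fused across scales by a max
          responses = [np.abs(gaussian_laplace(img.astype(float), s)) * s**2
                       for s in sigmas]
          return np.max(responses, axis=0)

      img = np.random.rand(128, 128)                 # stand-in for an EM image
      resp = log_boundary_response(img)
      boundary = resp > resp.mean() + resp.std()     # crude threshold, sketch only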

  1. Segmentation of Neuronal Structures Using SARSA (λ)-Based Boundary Amendment with Reinforced Gradient-Descent Curve Shape Fitting

    PubMed Central

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation; conventional approaches to identifying neuronal structure in EM images are therefore unsuccessful. We present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid we apply the Laplacian of Gaussian (LoG) function to obtain structure boundaries, and we finally assemble the detected boundaries with a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements for structure segmentation, were reduced to very low values. Comparison with the ISBI 2012 benchmark method and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and therefore useful for the identification of imaging

  2. Visualization and curve-parameter estimation strategies for efficient exploration of phenotype microarray kinetics.

    PubMed

    Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM
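
    The spline route the authors favour can be approximated in a few lines; the sketch below (Python/SciPy with a synthetic respiration curve; the authors themselves work in R) fits a smoothing spline and extracts the usual growth-curve-style parameters.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      t = np.linspace(0.0, 48.0, 97)                          # hours
      y = (250.0 / (1.0 + np.exp(-0.3 * (t - 18.0)))
           + 5.0 * np.random.randn(t.size))                   # respiration units

      spl = UnivariateSpline(t, y, s=t.size * 25.0)           # smoothing spline
      slope = spl.derivative()(t)
      i_max = int(np.argmax(slope))
      mu = slope[i_max]                            # maximum slope ("growth rate")
      A = float(spl(t).max())                      # maximum curve height
      lag = t[i_max] - float(spl(t[i_max])) / mu   # tangent-intercept lag phase
      auc = spl.integral(t[0], t[-1])              # area under the curve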

  3. Visualization and Curve-Parameter Estimation Strategies for Efficient Exploration of Phenotype Microarray Kinetics

    PubMed Central

    Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    Background The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely

  4. Catmull-Rom Curve Fitting and Interpolation Equations

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2010-01-01

    Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
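
    For reference, the uniform Catmull-Rom segment has the compact closed form below (a Python/NumPy transcription of the standard equations; the article derives them with basic algebra):

      import numpy as np

      def catmull_rom(p0, p1, p2, p3, t):
          # Uniform Catmull-Rom segment: interpolates p1 (t=0) through p2 (t=1),
          # with p0 and p3 shaping the tangents.
          t = np.asarray(t)[:, None]
          return 0.5 * (2 * p1
                        + (-p0 + p2) * t
                        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                        + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

      pts = np.array([[0, 0], [1, 2], [3, 3], [4, 1]], float)
      segment = catmull_rom(*pts, np.linspace(0.0, 1.0, 20))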

  5. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

    Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments 'plasma' and 'interstitial volume' and their exchange in terms of plasma flow and capillary permeability. The model function can be defined either by a system of two coupled differential equations or by a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, with the sum of squared residuals calculated either by numerical integration with the Runge-Kutta method or by convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed to investigate the validity of our findings in real patient data. The convolution approach yields improved precision and robustness in the determined parameters. Precision and stability are limited for curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach for several curve types, and the convolution approach excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by use of the differential equations. Fitting with the convolution
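
    The convolution representation the study recommends can be sketched generically. Below, a mono-exponential impulse response stands in for the 2CXM's bi-exponential closed form (which has four parameters), so this is a toy, not the study's model (Python/SciPy):

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0.0, 5.0, 150)             # minutes, ~2 s sampling
      aif = 5.0 * t * np.exp(-t / 0.25)          # toy arterial input function

      def tissue(t, ktrans, ve):
          # Tissue curve = AIF convolved with an exponential impulse response
          dt = t[1] - t[0]
          irf = ktrans * np.exp(-ktrans * t / ve)
          return np.convolve(aif, irf)[: t.size] * dt

      obs = tissue(t, 0.4, 0.3) + 0.01 * np.random.randn(t.size)
      (ktrans, ve), _ = curve_fit(tissue, t, obs, p0=[0.2, 0.2])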

  6. Compression of contour data through exploiting curve-to-curve dependence

    NASA Technical Reports Server (NTRS)

    Yalabik, N.; Cooper, D. B.

    1975-01-01

    An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression through cubic spline approximation is taken and extended by investigating the additional compressibility achievable through exploiting curve-to-curve structure. One of the models under investigation is reported on.

  7. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis

    NASA Astrophysics Data System (ADS)

    Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo

    2016-03-01

    In many geodetic engineering applications it is necessary to describe a measured point cloud, acquired e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously distorted by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, our approach combines Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. To minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.

  8. Structural modeling of age specific fertility curves in Peninsular Malaysia: An approach of Lee Carter method

    NASA Astrophysics Data System (ADS)

    Hanafiah, Hazlenah; Jemain, Abdul Aziz

    2013-11-01

    In recent years, the study of fertility has attracted considerable research attention amid fears of fertility decline accompanying rapid economic development. This study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992), an established and widely used model for analysing demographic data, is applied. A singular value decomposition approach is combined with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007, with residual plots used to assess the goodness of fit of the model. The fertility index, forecast with a random walk with drift, is then used to predict future age-specific fertility. Results indicate that the proposed model fits the data reasonably well. In addition, the age-specific fertility curves show an apparent and continuous decline over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital for maintaining a balance between population growth and the provision of related facilities and resources.
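
    The Lee-Carter decomposition and the random-walk-with-drift forecast are compact enough to sketch with synthetic rates (Python/NumPy; the study applies the same steps to Peninsular Malaysia rates for 1958-2007):

      import numpy as np

      rng = np.random.default_rng(1)
      n_ages, n_years = 7, 50
      a_true = np.linspace(-4.5, -2.5, n_ages)            # age pattern
      k_true = -0.05 * np.arange(n_years)                 # declining index
      ln_f = (a_true[:, None] + np.outer(np.full(n_ages, 1.0 / n_ages), k_true)
              + rng.normal(0, 0.01, (n_ages, n_years)))   # log fertility rates

      # Lee-Carter: ln f(x, t) = a_x + b_x * k_t, fitted by SVD of the
      # centred log rates (first singular triplet only).
      a = ln_f.mean(axis=1)
      U, s, Vt = np.linalg.svd(ln_f - a[:, None], full_matrices=False)
      b = U[:, 0] / U[:, 0].sum()                         # so that sum(b_x) = 1
      k = s[0] * Vt[0] * U[:, 0].sum()

      # Forecast k_t as a random walk with drift, then rebuild future rates
      drift = np.diff(k).mean()
      k_future = k[-1] + drift * np.arange(1, 11)
      ln_f_future = a[:, None] + np.outer(b, k_future)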

  9. Goodness of Fit: A Relational Approach to Field Instruction

    ERIC Educational Resources Information Center

    Ornstein, Eric D.; Moses, Helene

    2010-01-01

    This article uses the metaphor of "goodness of fit" to highlight the core features of a relational approach to field instruction. The distinctive attributes of this approach are contrasted with a traditional model of field instruction. The "teach or treat" dilemma is discussed to demonstrate the necessity for field instructors to maintain the…

  10. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The method was compared with an established spectrophotometric method and was found equivalent in accuracy and repeatability (t-test, F-test). It is applicable to the analysis of natural or synthetic mixtures and/or crude substances, and is simple, rapid, and nondestructive to the samples.
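
    The deconvolution step amounts to resolving overlapped bands into individual components whose areas are then summed. A hedged sketch with synthetic Gaussian bands at the three wavenumbers named above (Python/SciPy; the real method's band shapes and baseline handling may differ):

      import numpy as np
      from scipy.optimize import curve_fit

      def bands(x, *p):
          # Sum of Gaussian bands; p = (area, centre, width) triplets
          y = np.zeros_like(x)
          for A, c, w in zip(p[0::3], p[1::3], p[2::3]):
              y += A / (w * np.sqrt(2 * np.pi)) * np.exp(-(x - c)**2 / (2 * w**2))
          return y

      wn = np.linspace(1500.0, 1850.0, 350)               # wavenumber, cm^-1
      spec = bands(wn, 8, 1745, 12, 6, 1715, 14, 10, 1600, 18)
      spec += 0.02 * np.random.randn(wn.size)
      p0 = [5, 1745, 10, 5, 1715, 10, 5, 1600, 15]        # three assumed bands
      popt, _ = curve_fit(bands, wn, spec, p0=p0)
      uronic_signal = popt[0::3].sum()   # summed areas at 1745, 1715, 1600 cm^-1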

  11. Cyclic Voltammetry Probe Approach Curves with Alkali Amalgams at Mercury Sphere-Cap Scanning Electrochemical Microscopy Probes.

    PubMed

    Barton, Zachary J; Rodríguez-López, Joaquín

    2017-03-07

    We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.

  12. Microlensing results toward the galactic bulge, theory of fitting blended light curves, and discussion of weak lensing corrections

    NASA Astrophysics Data System (ADS)

    Thomas, Christian L.

    2006-06-01

    Analysis and results (Chapters 2-5) of the full 7-year Macho Project dataset toward the Galactic bulge are presented. A total of 450 high-quality, relatively large signal-to-noise ratio events are found, including several events exhibiting exotic effects and lensing events on possible Sagittarius dwarf galaxy stars. We examine the problem of blending in our sample and conclude that the subset of red clump giants is minimally blended. Using 42 red clump giant events near the Galactic center, we calculate the optical depth toward the Galactic bulge to be τ = [special characters omitted] × 10^-6 at (l, b) = ([special characters omitted]), with a gradient of (1.06 ± 0.71) × 10^-6 deg^-1 in latitude and (0.29 ± 0.43) × 10^-6 deg^-1 in longitude, bringing measurements into consistency with the models for the first time. In Chapter 6 we reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We agree with previous workers (e.g. Wozniak & Paczynski) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can cause problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The bias and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth. In Chapter 7 we present work in progress on the possibility of correcting standard candle luminosities for the magnification due to weak lensing. We consider the importance of lenses in different mass ranges and look at the contribution

  13. A global goodness-of-fit test for receiver operating characteristic curve analysis via the bootstrap method.

    PubMed

    Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila

    2005-10-01

    Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is the receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) for parametric ROC curves via the bootstrap. A simple log (or logit) and a more flexible Box-Cox normality transformations were applied to untransformed or transformed data from two clinical studies to predict complications following percutaneous coronary interventions (PCIs) and for image-guided neurosurgical resection results predicted by tumor volume, respectively. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03), respectively. In both studies the p values suggested that transformations were important to consider before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive values of different classifiers for determining the interventional complications following PCIs and resection outcomes in image-guided neurosurgery.

  14. Probing exoplanet clouds with optical phase curves.

    PubMed

    Muñoz, Antonio García; Isaak, Kate G

    2015-11-03

    Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve--from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4-0.5.

  15. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    NASA Astrophysics Data System (ADS)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve from its microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels; the modeling approach developed here attempts to overcome specific limitations of both by combining a dislocation-based strain-hardening method with the rule of mixtures. In the first step of the modeling, the dislocation-based strain-hardening method is employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves are combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures with different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model, and the results matched the actual tensile tests closely. This micromechanical modeling approach can thus be used to predict and optimize the tensile flow behavior of dual-phase steels.
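
    The second step, the rule of mixtures, is a one-liner once the phase flow curves are known. In the sketch below a simple Hollomon law stands in for the paper's dislocation-based strain-hardening expressions, and all constants are illustrative (Python/NumPy):

      import numpy as np

      def hollomon(strain, K, n):
          # Stand-in hardening law sigma = K * eps^n; the paper instead uses
          # a dislocation-based expression for each phase.
          return K * strain**n

      eps = np.linspace(1e-3, 0.12, 100)          # true plastic strain
      sigma_f = hollomon(eps, 800.0, 0.25)        # ferrite flow curve
      sigma_m = hollomon(eps, 2200.0, 0.08)       # martensite flow curve

      f_m = 0.35                                  # martensite volume fraction
      sigma_dp = f_m * sigma_m + (1.0 - f_m) * sigma_f   # iso-strain mixture rule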

  16. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
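
    Pareto filtering of calibration results is a few lines of array logic. A minimal sketch (Python/NumPy), where errors holds one goodness-of-fit error per calibration target and lower is better:

      import numpy as np

      def pareto_front(errors):
          # errors: (n_sets, n_targets). A set is dominated if some other set
          # fits every target at least as well and one target strictly better.
          n = errors.shape[0]
          on_front = np.ones(n, dtype=bool)
          for i in range(n):
              others = np.delete(np.arange(n), i)
              dominates_i = (np.all(errors[others] <= errors[i], axis=1)
                             & np.any(errors[others] < errors[i], axis=1))
              on_front[i] = not dominates_i.any()
          return on_front

      errs = np.random.rand(500, 3)               # hypothetical fits to 3 targets
      frontier = np.flatnonzero(pareto_front(errs))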

  17. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, N. E.; Soderberg, A. M.; Betancourt, M., E-mail: nsanders@cfa.harvard.edu

    2015-02-10

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.

  18. Assessment of Shape Changes of Mistletoe Berries: A New Software Approach to Automatize the Parameterization of Path Curve Shaped Contours

    PubMed Central

    Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan

    2013-01-01

    Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automatize a number of tasks including contour recognition, optimization of fitting the contour via hill-climbing, derivation of the path curves, computation of Lambda and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255

  19. Probing exoplanet clouds with optical phase curves

    PubMed Central

    Muñoz, Antonio García; Isaak, Kate G.

    2015-01-01

    Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve—from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4–0.5. PMID:26489652

  20. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456

  1. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    NASA Astrophysics Data System (ADS)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

    This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, the time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure waveform similarity; the sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
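
    The first half of the pipeline, spline fit, peak detection and FWHM/amplitude extraction, maps onto a few SciPy calls; the DTW similarity stage is omitted. A hedged sketch on a synthetic two-return waveform (make_smoothing_spline needs SciPy 1.10 or later):

      import numpy as np
      from scipy.interpolate import make_smoothing_spline
      from scipy.signal import find_peaks, peak_widths

      t = np.arange(0.0, 60.0, 1.0)               # sample times, ns
      wave = (120 * np.exp(-(t - 20)**2 / 18) + 80 * np.exp(-(t - 35)**2 / 30)
              + 3 * np.random.randn(t.size))      # synthetic two-return waveform

      spline = make_smoothing_spline(t, wave)     # cubic smoothing spline fit
      tf = np.linspace(t[0], t[-1], 600)
      y = spline(tf)

      peaks, _ = find_peaks(y, height=20.0)       # local peaks of fitted curve
      fwhm = peak_widths(y, peaks, rel_height=0.5)[0] * (tf[1] - tf[0])
      features = {
          "n_peaks": peaks.size,
          "amplitudes": y[peaks],
          "fwhm_ns": fwhm,
          "dt_first_last": tf[peaks[-1]] - tf[peaks[0]],
          "mean_amplitude": y[peaks].mean(),
      }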

  2. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    PubMed

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulation and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results obtained from data with different degrees of scatter, it can be demonstrated that the error introduced by resampling is negligible and that the method is therefore feasible.
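
    SciPy offers non-rational B-splines rather than NURBS, so the following sketch uses them as a stand-in to illustrate the row-fit-and-resample idea: every row, whatever its point count, is fitted with a cubic spline curve and resampled at a common set of parameter values, yielding a regular grid for the surface-fitting step. Function names and sizes are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    def resample_row(points, n_samples=50, smooth=0.0):
        """Fit a cubic B-spline to one measured row (m x 3 array) and
        resample it at n_samples uniform parameter values."""
        tck, _ = splprep(list(points.T), s=smooth, k=3)
        u = np.linspace(0.0, 1.0, n_samples)
        return np.column_stack(splev(u, tck))

    # rows of random length become a regular (n_rows x n_samples x 3) grid,
    # ready for surface fitting in the row and column directions
    rows = [np.cumsum(np.random.rand(np.random.randint(20, 40), 3), axis=0)
            for _ in range(5)]
    grid = np.stack([resample_row(r) for r in rows])
    ```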

  3. Multiple curved descending approaches and the air traffic control problem

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Mcpherson, D.; Kreifeldt, J.; Wemple, T. E.

    1977-01-01

    A terminal area air traffic control simulation was designed to study ways of accommodating increased air traffic density. The concepts that were investigated assumed the availability of the microwave landing system and data link and included: (1) multiple curved descending final approaches; (2) parallel runways certified for independent and simultaneous operation under IFR conditions; (3) closer spacing between successive aircraft; and (4) a distributed management system between the air and ground. Three groups, each consisting of three pilots and two air traffic controllers, flew a combined total of 350 approaches. Piloted simulators were supplied with computer-generated traffic situation displays and flight instruments. The controllers were supplied with a terminal area map and digital status information. Pilots and controllers reported that the distributed management procedure was somewhat safer and more orderly than the centralized management procedure. Flying precision increased as the amount of turn required to intersect the outer marker decreased. Pilots reported that they preferred the alternative of multiple curved descending approaches with wider spacing between aircraft to closer spacing on single, straight-in finals, while controllers preferred the latter option. Both pilots and controllers felt that parallel runways are an acceptable way to accommodate increased traffic density safely and expeditiously.

  4. Surface fitting three-dimensional bodies

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.

    1974-01-01

    The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since the distribution of these points may not be smooth, they are divided into segments and general conic sections are fitted to each segment of a cross section in a least-squares sense. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes, and second partial derivatives are calculated. The method was applied to a blunted 70 deg delta wing and was found to generate the geometry very well.
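
    A generic way to reproduce the segment-wise conic fit is the algebraic least-squares formulation sketched below (the original program is FORTRAN and its exact routine is not given in the abstract): the coefficients of a x^2 + b xy + c y^2 + d x + e y + f = 0 are taken as the right singular vector with the smallest singular value of the design matrix. Data and names are illustrative.

    ```python
    import numpy as np

    def fit_conic(x, y):
        """Algebraic least-squares fit of a general conic
        a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to a segment of points."""
        D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
        _, _, Vt = np.linalg.svd(D)
        return Vt[-1]                    # (a, b, c, d, e, f), up to scale

    theta = np.linspace(0.2, 2.0, 40)    # noisy points on an elliptic segment
    x = 3*np.cos(theta) + np.random.normal(0, 0.01, theta.size)
    y = 2*np.sin(theta) + np.random.normal(0, 0.01, theta.size)
    coeffs = fit_conic(x, y)             # longitudinal blending would follow
    ```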

  5. Nongaussian distribution curve of heterophorias among children.

    PubMed

    Letourneau, J E; Giroux, R

    1991-02-01

    The purpose of this study was to measure the distribution curve of horizontal and vertical phorias among children. Kolmogorov-Smirnov goodness-of-fit tests showed that these distribution curves were not Gaussian among 6- to 13-year-old children (N = 2048). The distribution curves of horizontal phoria at far and of vertical phorias at far and at near were leptokurtic; the distribution curve of horizontal phoria at near was platykurtic. No variation of the distribution curve of heterophorias with age was observed. Comparisons of any individual findings with the general distribution curve should take the non-Gaussian distribution curve of heterophorias into account.

  6. Comparing Angular and Curved Shapes in Terms of Implicit Associations and Approach/Avoidance Responses.

    PubMed

    Palumbo, Letizia; Ruta, Nicole; Bertamini, Marco

    2015-01-01

    Most people prefer smoothly curved shapes over more angular shapes. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task we tested with a stick figure (i.e., the manikin) approach and avoidance reactions to curved and angular polygons. We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference.

  7. Comparing Angular and Curved Shapes in Terms of Implicit Associations and Approach/Avoidance Responses

    PubMed Central

    Palumbo, Letizia; Ruta, Nicole; Bertamini, Marco

    2015-01-01

    Most people prefer smoothly curved shapes over more angular shapes. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task we tested with a stick figure (i.e., the manikin) approach and avoidance reactions to curved and angular polygons. We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference. PMID:26460610

  8. Forgetting Curves: Implications for Connectionist Models

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2002-01-01

    Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…

  9. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public-domain, Java-based software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research-grade image calibration and analysis tools with a GUI-driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  10. Manual flying of curved precision approaches to landing with electromechanical instrumentation. A piloted simulation study

    NASA Technical Reports Server (NTRS)

    Knox, Charles E.

    1993-01-01

    A piloted simulation study was conducted to examine the requirements for using electromechanical flight instrumentation to provide situation information and flight guidance for manually controlled flight along curved precision approach paths to a landing. Six pilots were used as test subjects. The data from these tests indicated that flight director guidance is required for the manually controlled flight of a jet transport airplane on curved approach paths. Acceptable path tracking performance was attained with each of the three situation information algorithms tested. Approach paths with both multiple sequential turns and short final path segments were evaluated. Pilot comments indicated that all the approach paths tested could be used in normal airline operations.

  11. Risk factors for unsuccessful acetabular press-fit fixation at primary total hip arthroplasty.

    PubMed

    Brulc, U; Antolič, V; Mavčič, B

    2017-11-01

    The surgeon at primary total hip arthroplasty sometimes cannot achieve sufficient cementless acetabular press-fit fixation and must resort to other fixation methods. Despite a predominant use of cementless cups, this issue is not fully clarified; we therefore performed a large retrospective study to: (1) identify risk factors related to the patient, implant, or surgeon for unsuccessful intraoperative press-fit; (2) check for correlation between surgeons' volume of operated cases and the press-fit success rate. Unsuccessful intraoperative press-fit occurs more often in older female patients, with particular implants, during the learning curve, and with low-volume surgeons. A retrospective observational cohort of prospectively collected intraoperative data (2009-2016) included all primary total hip arthroplasty patients with implant brands that offered acetabular press-fit fixation only. Press-fit was considered successful if the acetabular component was of the same implant brand as the femoral component, without additional screws or cement. Logistic regression models for unsuccessful acetabular press-fit included patients' gender/age/operated side, implant, surgeon, approach (posterior n=1206, direct-lateral n=871) and surgery date (i.e. learning curve). In 2077 patients (mean 65.5 years, 1093 females, 1163 right hips), three different implant brands (973 ABG-II™-Stryker, 646 EcoFit™ Implantcast, 458 Procotyl™ L-Wright) were implanted by eight surgeons. Their unsuccessful press-fit fixation rates ranged from 3.5% to 23.7%. Older age (odds ratio 1.01 [95% CI: 0.99-1.02]), female gender (2.87 [95% CI: 2.11-3.91]), right side (1.44 [95% CI: 1.08-1.92]), surgery date (0.90 [95% CI: 1.08-1.92]) and particular implants were significant risk factors only in three surgeons with less successful surgical technique (higher rates of unsuccessful press-fit with Procotyl™-L and EcoFit™ [P=0.01]). The direct-lateral hip approach had a lower rate of unsuccessful press-fit than the posterior hip approach (P<0.01), but

  12. Fitting Photometry of Blended Microlensing Events

    NASA Astrophysics Data System (ADS)

    Thomas, Christian L.; Griest, Kim

    2006-03-01

    We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth.
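
    The blended point-lens model itself is standard and easy to state; the sketch below fits it with SciPy to synthetic data. The parameter names (t0, tE, u0, blend fraction fs, baseline Fb) follow common microlensing usage, and the data, starting values, and bounds are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def blended_lc(t, t0, tE, u0, fs, Fb):
        """Point-lens light curve with blend fraction fs: only a fraction fs
        of the baseline flux Fb is magnified by the standard A(u)."""
        u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
        A = (u**2 + 2) / (u * np.sqrt(u**2 + 4))
        return Fb * (fs * A + 1 - fs)

    t = np.linspace(-60, 60, 300)        # days, synthetic sampling
    F = blended_lc(t, 0.0, 20.0, 0.2, 0.6, 1.0) + np.random.normal(0, 0.01, t.size)

    popt, pcov = curve_fit(blended_lc, t, F, p0=(1.0, 15.0, 0.3, 0.9, 1.0),
                           bounds=([-50, 1, 0.01, 0, 0.5], [50, 100, 2, 1, 2]))
    # large off-diagonal terms of pcov between fs, u0, and tE expose the
    # degeneracy the paper describes
    ```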

  13. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares method validation and multifactor data analysis, is presented. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated from selected experimental titration curves using methods of the equivalence-point category, such as Gran or Fortuin, are also compared with those of the new method.
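
    The inverse parabolic interpolation step has a closed form: the abscissa of the vertex of the parabola through three points. The helper below, with hypothetical sample values, is the standard formula (the same step used in Brent-type optimizers), not the paper's code.

    ```python
    def parabolic_extremum(x1, y1, x2, y2, x3, y3):
        """Abscissa of the vertex of the parabola through three points,
        the closed-form 'inverse parabolic interpolation' step."""
        num = (x2 - x1)**2 * (y2 - y3) - (x2 - x3)**2 * (y2 - y1)
        den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
        return x2 - 0.5 * num / den

    # three first-derivative estimates (mL, dE/dV) bracketing the steepest
    # point of a titration curve; the end point is the parabola's maximum
    print(parabolic_extremum(9.8, 41.0, 10.0, 55.0, 10.2, 47.0))
    ```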

  14. An approach to bioassessment of water quality using diversity measures based on species accumulative curves.

    PubMed

    Xu, Guangjian; Zhang, Wei; Xu, Henglong

    2015-02-15

    Traditional community-based bioassessment is time-consuming because it relies on full species-abundance data of a community. To improve bioassessment efficiency, the feasibility of diversity measures based on species accumulative curves for bioassessment of water quality status was studied using a dataset of microperiphyton fauna. The results showed that: (1) the species accumulative curves were well fitted by the Michaelis-Menten equation; (2) the β- and γ-diversity, as well as the number of samples needed to reach 50% of the maximum species number (the Michaelis-Menten constant K), can be statistically estimated from the fitted equation; (3) the rarefied α-diversity showed a significant negative correlation with changes in the nutrient NH4-N; and (4) the estimated β-diversity and the K constant were significantly positively related to the concentration of NH4-N. The results suggest that diversity measures based on species accumulative curves might be used as a potential bioindicator of water quality in marine ecosystems. Copyright © 2014 Elsevier Ltd. All rights reserved.
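
    Fitting the Michaelis-Menten form to a species accumulative curve is a one-liner with SciPy; in the sketch below the data are synthetic, and the fitted s_max and K play the roles of the asymptotic richness and the half-saturation constant discussed in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mm(n, s_max, K):
        """Species accumulation: expected richness after n samples; K is the
        sample count at which half of the asymptotic richness is reached."""
        return s_max * n / (K + n)

    n = np.arange(1.0, 31.0)                                 # pooled samples
    S = mm(n, 80.0, 6.0) + np.random.normal(0, 1.5, n.size)  # synthetic curve

    (s_max, K), _ = curve_fit(mm, n, S, p0=(60.0, 5.0))
    gamma_diversity = s_max          # asymptotic richness estimate
    ```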

  15. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a fatal defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds as Moody's new data show. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
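
    The contrast the paper draws can be reproduced in a few lines: a single Beta fit is pulled toward a unimodal shape, while a Gaussian kernel density estimate follows both modes. The bimodal sample below is synthetic, not Moody's data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)       # synthetic bimodal recovery rates
    rates = np.concatenate([rng.beta(2, 8, 400), rng.beta(9, 2, 300)])

    a, b, loc, scale = stats.beta.fit(rates, floc=0, fscale=1)  # single Beta fit
    kde = stats.gaussian_kde(rates)      # Gaussian kernel density estimate

    x = np.linspace(0.01, 0.99, 99)
    beta_pdf = stats.beta.pdf(x, a, b)   # misses the second mode
    kde_pdf = kde(x)                     # tracks both modes
    ```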

  16. B-737 flight test of curved-path and steep-angle approaches using MLS guidance

    NASA Technical Reports Server (NTRS)

    Branstetter, J. R.; White, W. F.

    1989-01-01

    A series of flight tests was conducted to collect data for jet transport aircraft flying curved-path and steep-angle approaches using Microwave Landing System (MLS) guidance. During the test, 432 approaches comprising seven different curved paths and four glidepath angles varying from 3 to 4 degrees were flown in NASA Langley's Boeing 737 aircraft (Transport Systems Research Vehicle) using an MLS ground station at the NASA Wallops Flight Facility. Subject pilots from Piedmont Airlines flew the approaches using conventional cockpit instrumentation (flight director and Horizontal Situation Indicator (HSI)). The data collected will be used by FAA procedures specialists to develop standards and criteria for designing MLS terminal approach procedures (TERPS). The use of flight simulation techniques greatly aided the preliminary stages of approach development work and saved a significant amount of costly flight time. This report is intended to complement a data report to be issued by the FAA Office of Aviation Standards which will contain all detailed data analysis and statistics.

  17. On the reduction of occultation light curves. [stellar occultations by planets

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  18. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  19. Serial position curves in free recall.

    PubMed

    Laming, Donald

    2010-01-01

    The scenario for free recall set out in Laming (2009) is developed to provide models for the serial position curves from 5 selected sets of data, for final free recall, and for multitrial free recall. The 5 sets of data reflect the effects of rate of presentation, length of list, delay of recall, and suppression of rehearsal. Each model accommodates the serial position curve for first recalls (where those data are available) as well as that for total recalls. Both curves are fit with the same parameter values, as are (with 1 exception) all of the conditions compared within each experiment. The distributions of numbers of recalls are also examined and shown to have variances increased above what would be expected if successive recalls were independent. This is taken to signify that, in those experiments in which rehearsals were not recorded, the retrieval of words for possible recall follows the same pattern that is observed following overt rehearsal, namely, that retrieval consists of runs of consecutive elements from memory. Finally, 2 sets of data are examined that the present approach cannot accommodate. It is argued that the problem with these data derives from an interaction between the patterns of (covert) rehearsal and the parameters of list presentation.

  20. What is the learning curve for the anterior approach for total hip arthroplasty?

    PubMed

    de Steiger, Richard Noel; Lorimer, Michelle; Solomon, Michael

    2015-12-01

    There are many factors that may affect the learning curve for total hip arthroplasty (THA) and surgical approach is one of these. There has been renewed interest in the direct anterior approach for THA with variable outcomes reported, but few studies have documented a surgeon's individual learning curve when using this approach. (1) What was the revision rate for all surgeons adopting the anterior approach for placement of a particular implant? (2) What was the revision rate for surgeons who performed > 100 cases in this fashion? (3) Is there a minimum number of cases required to complete a learning curve for this procedure? The Australian Orthopaedic Association National Joint Replacement Registry prospectively collects data on all primary and revision joint arthroplasty surgery. We analyzed all conventional THAs performed up to December 31, 2013, with a primary diagnosis of osteoarthritis using a specific implant combination and secondarily those associated with surgeons performing more than 100 procedures. Ninety-five percent of these procedures were performed through the direct anterior approach. Procedures using this combination were ordered from earliest (first procedure date) to latest (last procedure date) for each individual surgeon. Using the order number for each surgeon, five operation groups were defined: one to 15 operations, 16 to 30 operations, 31 to 50 operations, 51 to 100 operations, and > 100 operations. The primary outcome measure was time to first revision using Kaplan-Meier estimates of survivorship. Sixty-eight surgeons performed 5499 THAs using the specified implant combination. The cumulative percent revision at 4 years for all 68 surgeons was 3% (95% confidence interval [CI], 2.5-3.8). For surgeons who had performed over 100 operations, the cumulative revision rate was 3% (95% CI, 2.0-3.5). It was not until surgeons had performed over 50 operations that there was no difference in the cumulative percent revision compared with over 100

  1. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction, and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise-ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. As a result, the mean gray-to-white matter CBF ratio from semi-quantitative analysis is 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and short hemodynamic measurement time.

  2. Examining the Earnings Trajectories of Community College Students Using a Piecewise Growth Curve Modeling Approach

    ERIC Educational Resources Information Center

    Jaggars, Shanna Smith; Xu, Di

    2016-01-01

    Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this article we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from two popular econometric approaches:…

  3. Fitting C2 Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures

    PubMed Central

    Bayer, Jason D.

    2014-01-01

    We present a technique to fit C2 continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such a mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C2 continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C2 continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner that ensures a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C2 continuous. PMID:24782911

  4. TYPE Ia SUPERNOVA DISTANCE MODULUS BIAS AND DISPERSION FROM K-CORRECTION ERRORS: A DIRECT MEASUREMENT USING LIGHT CURVE FITS TO OBSERVED SPECTRAL TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, C.; Aldering, G.; Aragon, C.

    2015-02-10

    We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.

  5. Molecular dynamics simulations of the melting curve of NiAl alloy under pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenjin; Peng, Yufeng; Liu, Zhongli, E-mail: zhongliliu@yeah.net

    2014-05-15

    The melting curve of B2-NiAl alloy under pressure has been investigated using the molecular dynamics technique and the embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first simulates a homogeneous melting, while the second involves a heterogeneous melting of materials. Both approaches reduce the superheating effectively and their results are close to each other at the applied pressures. By fitting the well-known Simon equation to our melting data, we obtained the melting curves for NiAl: T_m = 1783(1 + P/9.801)^0.298 (one-phase approach) and T_m = 1850(1 + P/12.806)^0.357 (two-phase approach). The good agreement of the resulting equation of state and the zero-pressure melting point (calc., 1850 ± 25 K; exp., 1911 K) with experiment supports the correctness of these results. These melting data fill the absence of experimental high-pressure melting data for NiAl. To check the transferability of this EAM potential, we have also predicted the melting curves of pure nickel and pure aluminum. Results show the calculated melting point of nickel agrees well with experiment at zero pressure, while the melting point of aluminum is slightly higher than experiment.
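
    The Simon fit is straightforward to reproduce in form; the sketch below uses the paper's two-phase constants as the synthetic truth and recovers them with SciPy. The pressure grid and noise level are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def simon(P, T0, a, c):
        """Simon equation for the melting curve, T_m(P) = T0*(1 + P/a)**c."""
        return T0 * (1.0 + P / a) ** c

    P = np.linspace(0.0, 100.0, 11)      # GPa; synthetic melting points below
    Tm = simon(P, 1850.0, 12.806, 0.357) + np.random.normal(0, 15.0, P.size)

    (T0, a, c), _ = curve_fit(simon, P, Tm, p0=(1800.0, 10.0, 0.3))
    ```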

  6. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody’s. However, they have a fatal defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds as Moody’s new data show. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  7. The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.

    PubMed

    van Battum, L J; Huizenga, H

    2006-07-01

    Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and maximum optical density as its parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as its parameters. The model has been applied to fit the sensitometric data. If the dataset is fitted for each sensitometric curve separately, a large variation is observed in both fit parameters. When the sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope. When each curve is fitted separately, measurement uncertainty apparently hides this relation. This relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
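
    The single-hit model referred to above is OD(D) = ODmax(1 - exp(-alpha*D)); the sketch below fits it to synthetic dose-density points, and the closing comment notes how its series expansion ties curvature to slope, which is the relation the paper exploits. Doses and parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_hit(D, od_max, alpha):
        """Single-hit sensitometric model: OD(D) = od_max*(1 - exp(-alpha*D))."""
        return od_max * (1.0 - np.exp(-alpha * D))

    dose = np.array([0, 10, 25, 50, 100, 200, 400], float)   # cGy, synthetic
    od = single_hit(dose, 3.6, 0.004) + np.random.normal(0, 0.02, dose.size)

    (od_max, alpha), _ = curve_fit(single_hit, dose, od, p0=(3.0, 0.01))
    # Expanding OD ~ od_max*(alpha*D - (alpha*D)**2/2) shows the low-dose
    # curvature is tied to the slope, so one dose point plus a known od_max
    # pins down the whole curve.
    ```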

  8. Craniofacial Reconstruction Using Rational Cubic Ball Curves

    PubMed Central

    Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan

    2015-01-01

    This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. The idea of choosing Ball curves is based on their computational efficiency compared with Bézier curves. The main steps are: conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates the applicability of this method. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
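
    For orientation, the non-rational cubic Ball basis can be evaluated as below; the paper's rational variant adds per-control-point weights, tuned here by a genetic algorithm, and is not reproduced in this sketch. Control points are invented.

    ```python
    import numpy as np

    def ball_cubic(t, P):
        """Evaluate a (non-rational) cubic Ball curve at parameters t for
        control points P (4 x dim). The basis functions
        (1-t)^2, 2t(1-t)^2, 2t^2(1-t), t^2 sum to one."""
        t = np.asarray(t, float)[:, None]
        B = np.hstack([(1 - t)**2, 2*t*(1 - t)**2, 2*t**2*(1 - t), t**2])
        return B @ P

    ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
    curve = ball_cubic(np.linspace(0.0, 1.0, 100), ctrl)  # 100 boundary points
    ```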

  9. A new methodology for free wake analysis using curved vortex elements

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Teske, Milton E.; Quackenbush, Todd R.

    1987-01-01

    A method using curved vortex elements was developed for helicopter rotor free wake calculations. The Basic Curve Vortex Element (BCVE) is derived from the approximate Biot-Savart integration for a parabolic arc filament. When used in conjunction with a scheme to fit the elements along a vortex filament contour, this method has a significant advantage in overall accuracy and efficiency when compared to the traditional straight-line element approach. A theoretical and numerical analysis shows that free wake flows involving close interactions between filaments should utilize curved vortex elements in order to guarantee a consistent level of accuracy. The curved element method was implemented into a forward flight free wake analysis, featuring an adaptive far wake model that utilizes free wake information to extend the vortex filaments beyond the free wake regions. The curved vortex element free wake, coupled with this far wake model, exhibited rapid convergence, even in regions where the free wake and far wake turns are interlaced. Sample calculations are presented for tip vortex motion at various advance ratios for single and multiple blade rotors. Cross-flow plots reveal that the overall downstream wake flow resembles a trailing vortex pair. A preliminary assessment shows that the rotor downwash field is insensitive to element size, even for relatively large curved elements.

  10. Latent Trait Theory Approach to Measuring Person-Organization Fit: Conceptual Rationale and Empirical Evaluation

    ERIC Educational Resources Information Center

    Chernyshenko, Oleksandr S.; Stark, Stephen; Williams, Alex

    2009-01-01

    The purpose of this article is to offer a new approach to measuring person-organization (P-O) fit, referred to here as "Latent fit." Respondents were administered unidimensional forced choice items and were asked to choose the statement in each pair that better reflected the correspondence between their values and those of the…

  11. Observational evidence of dust evolution in galactic extinction curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo

    Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves is compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.

  12. Modeling two strains of disease via aggregate-level infectivity curves.

    PubMed

    Romanescu, Razvan; Deardon, Rob

    2016-04-01

    Well-formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual-level models (ILMs) of disease transmission and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms using maximum likelihood estimation; then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics to assess the fitted model.

  13. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  14. On the cost of approximating and recognizing a noise perturbed straight line or a quadratic curve segment in the plane. [central processing units

    NASA Technical Reports Server (NTRS)

    Cooper, D. B.; Yalabik, N.

    1975-01-01

    Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.

  15. Biological growth functions describe published site index curves for Lake States timber species.

    Treesearch

    Allen L. Lundgren; William A. Dolid

    1970-01-01

    Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...

  16. Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece

    DOEpatents

    Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.

    2008-10-14

    Methods for manufacturing high-precision arrays of curved features (e.g., lenses) in the surface of a workpiece are described utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow precise, non-kinematic indexing of the workpiece to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be on-center machined to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides for precise repeatability in determining the relative locations of the centers of each of the curved features in an array of curved features.

  17. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regard to the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
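
    A Morse-type fit basis of the kind compared in the paper can be sketched with a standard nonlinear fit; here the grid of single-point energies is synthetic rather than ADGA-generated, and all parameter values are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def morse(r, D, a, r0, c):
        """Morse-type fit basis: D*(1 - exp(-a*(r - r0)))**2 + c, whose
        shape matches an unsymmetrical single-minimum potential cut."""
        return D * (1.0 - np.exp(-a * (r - r0)))**2 + c

    r = np.linspace(0.7, 3.0, 25)        # bond length, Angstrom (synthetic grid)
    E = morse(r, 0.18, 1.2, 1.1, 0.0) + np.random.normal(0, 1e-4, r.size)

    popt, _ = curve_fit(morse, r, E, p0=(0.2, 1.0, 1.0, 0.0))
    ```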

  18. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing

  19. Study on peak shape fitting method in radon progeny measurement.

    PubMed

    Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju

    2015-11-01

    Alpha spectrum measurement is one of the most important methods for measuring radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum curve, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which are used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve, and the influence of the peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentration based on this method and the measured values of some commercial radon monitors, such as EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurement. Especially for the (218)Po peak, after eliminating the peak tailing influence, the calculated result of (218)Po concentration was reduced by 21%. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
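
    One standard realization of a "Gaussian plus exponential" peak is the exponentially modified Gaussian with a low-energy tail, sketched below for two overlapping alpha peaks. The (218)Po and (214)Po energies are standard values; the areas, widths, and shared tail constant are synthetic assumptions, and this is not necessarily the paper's exact parameterization.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def alpha_peak(E, A, mu, sigma, tau):
        """Gaussian convolved with a low-energy-side exponential (EMG):
        one common 'Gaussian + exponential' alpha-peak shape. A is the
        peak area, i.e. the net count of the peak."""
        u = (E - mu) / sigma
        return (A / (2 * tau)) * np.exp((E - mu) / tau + sigma**2 / (2 * tau**2)) \
               * erfc((u + sigma / tau) / np.sqrt(2))

    def spectrum(E, A1, mu1, A2, mu2, sigma, tau):
        """Two overlapping peaks (e.g. 218Po near 6.00 MeV, 214Po near
        7.69 MeV) sharing one resolution and one tailing constant."""
        return alpha_peak(E, A1, mu1, sigma, tau) + alpha_peak(E, A2, mu2, sigma, tau)

    E = np.linspace(5.0, 8.5, 700)                       # MeV
    y = spectrum(E, 900, 6.00, 1200, 7.69, 0.05, 0.12)   # synthetic spectrum
    y = np.random.poisson(np.maximum(y, 0)).astype(float)

    popt, _ = curve_fit(spectrum, E, y, p0=(800, 6.0, 1000, 7.7, 0.06, 0.1))
    # popt[0] and popt[2] are the explicit net counts of the two peaks
    ```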

  20. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    PubMed

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
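
    As an illustration of the approach (with the Gompertz function, one of the six), the sketch below fits a synthetic cumulative-yield curve and recovers the daily-yield curve and peak day from the fitted parameters; the numbers are invented, not the registry data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, b, k):
        """Gompertz growth curve for cumulative milk yield: asymptote A,
        displacement b, rate k. Daily yield is its first derivative."""
        return A * np.exp(-b * np.exp(-k * t))

    dim = np.arange(1, 306)              # days in milk, 305-d lactation
    cum = gompertz(dim, 9500.0, 3.0, 0.012) + np.random.normal(0, 25, dim.size)

    (A, b, k), _ = curve_fit(gompertz, dim, cum, p0=(9000, 2.5, 0.01))
    daily = A * b * k * np.exp(-k * dim) * np.exp(-b * np.exp(-k * dim))
    peak_day = np.log(b) / k             # inflection = day of peak daily yield
    ```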

  1. Milky Way Kinematics. II. A Uniform Inner Galaxy H I Terminal Velocity Curve

    NASA Astrophysics Data System (ADS)

    McClure-Griffiths, N. M.; Dickey, John M.

    2016-11-01

    Using atomic hydrogen (H I) data from the VLA Galactic Plane Survey, we measure the H I terminal velocity as a function of longitude for the first quadrant of the Milky Way. We use these data, together with our previous work on the fourth Galactic quadrant, to produce a densely sampled, uniformly measured rotation curve of the northern and southern Milky Way between 3 kpc < R < 8 kpc. We determine a new joint rotation curve fit for the first and fourth quadrants, which is consistent with the fit we published in McClure-Griffiths & Dickey and can be used for estimating kinematic distances interior to the solar circle. Structure in the rotation curves is now exquisitely well defined, showing significant velocity structure on lengths of ~200 pc, which is much greater than the spatial resolution of the rotation curve. Furthermore, the shape of the rotation curves for the first and fourth quadrants, even after subtraction of a circular rotation fit, shows a surprising degree of correlation with a roughly sinusoidal pattern between 4.2 kpc < R < 7 kpc.

  2. A flight investigation with a STOL airplane flying curved, descending instrument approach paths

    NASA Technical Reports Server (NTRS)

    Benner, M. S.; Mclaughlin, M. D.; Sawyer, R. H.; Vangunst, R.; Ryan, J. L.

    1974-01-01

    A flight investigation using a De Havilland Twin Otter airplane was conducted to determine the configurations of curved, 6 deg descending approach paths which would provide minimum airspace usage within the requirements for acceptable commercial STOL airplane operations. Path configurations with turns of 90 deg, 135 deg, and 180 deg were studied; the approach airspeed was 75 knots. The length of the segment prior to turn, the turn radius, and the length of the final approach segment were varied. The relationship of the acceptable path configurations to the proposed microwave landing system azimuth coverage requirements was examined.

  3. [Keratoconus special soft contact lens fitting].

    PubMed

    Yamazaki, Ester Sakae; da Silva, Vanessa Cristina Batista; Morimitsu, Vagner; Sobrinho, Marcelo; Fukushima, Nelson; Lipener, César

    2006-01-01

    To evaluate the fitting and use of a soft contact lens in keratoconic patients. Retrospective study on 80 eyes of 66 patients fitted with a special soft contact lens for keratoconus at the Contact Lens Section of UNIFESP and private clinics. Keratoconus was classified according to degrees of disease severity by keratometric pattern. Age, gender, diagnosis, keratometry, visual acuity, spherical equivalent (SE), base curve, and clinical indication were recorded. Among the 66 patients (80 eyes) with keratoconus, the mean age was 29 years; 51.5% were men and 48.5% women. According to the groups: 15.0% were incipient, 53.7% moderate, 26.3% advanced, and 5.0% severe. The majority of the eyes of patients using contact lenses (91.25%) achieved visual acuity better than 20/40. Of the eyes, 58% were fitted with lenses of spherical power (mean -5.45 diopters) and 41% with spherocylindrical power (from -0.5 to -5.00 cylindrical diopters). The most frequent base curve was 7.6, in 61% of the eyes. The main reasons for fitting this special lens were reduced tolerance of, and poor fitting patterns achieved with, other lenses. The special soft contact lens is useful in fitting difficult keratoconic patients by offering comfort and improving visual rehabilitation, which may allow more patients to postpone the need for corneal transplant.

  4. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or over a range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model, as there is no readily available summary measure for evaluating the predictive performance. The key deterrent to using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need for additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches: the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a range of interest.
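
    As a rough illustration of the quantities involved, the sketch below computes a net benefit curve and a weighted area under it in Python. The weighting density over threshold probabilities (a Beta density here) and all data are illustrative stand-ins; the paper estimates that distribution from the data itself rather than assuming one.

      import numpy as np
      from scipy import stats

      def net_benefit(y, risk, pt):
          # net benefit of treating everyone whose predicted risk is >= pt
          n = len(y)
          treat = risk >= pt
          tp = np.sum(treat & (y == 1))
          fp = np.sum(treat & (y == 0))
          return tp / n - fp / n * pt / (1.0 - pt)

      def weighted_anbc(y, risk, lo, hi, weight_pdf):
          # weighted area under the net benefit curve over [lo, hi]
          grid = np.linspace(lo, hi, 200)
          nb = np.array([net_benefit(y, risk, pt) for pt in grid])
          w = weight_pdf(grid)
          w = w / np.trapz(w, grid)      # normalise the weight on [lo, hi]
          return np.trapz(nb * w, grid)

      rng = np.random.default_rng(0)
      y = rng.binomial(1, 0.2, 1000)                     # toy outcomes
      risk = np.clip(0.25 * y + rng.beta(2, 8, 1000), 0.0, 0.99)
      print(weighted_anbc(y, risk, 0.10, 0.40, stats.beta(2, 5).pdf))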

  5. A method for evaluating models that use galaxy rotation curves to derive the density profiles

    NASA Astrophysics Data System (ADS)

    de Almeida, Álefe O. F.; Piattella, Oliver F.; Rodrigues, Davi C.

    2016-11-01

    There are some approaches, based either on General Relativity (GR) or on modified gravity, that use galaxy rotation curves to derive the matter density of the corresponding galaxy, and this procedure would indicate either a partial or a complete elimination of dark matter in galaxies. Here we review these approaches, clarify the difficulties of this inverse procedure, present a method for evaluating them, and use it to test two specific approaches that are based on GR: the Cooperstock-Tieu (CT) and the Balasin-Grumiller (BG) approaches. Using this new method, we find that neither of the tested approaches can satisfactorily fit the observational data without dark matter. The results of the CT approach can be significantly improved if some dark matter is considered, while for the BG approach no usual dark matter halo can improve its results.

  6. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    ERIC Educational Resources Information Center

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  7. Estimation of median growth curves for children up to two years old based on biresponse local linear estimator

    NASA Astrophysics Data System (ADS)

    Chamidah, Nur; Rifada, Marisa

    2016-03-01

    There is a significant correlation between the weight and height of children; therefore, simultaneous model estimation is better than a partial single-response approach. In this study we investigate the pattern of sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected from a longitudinal, representative sample of healthy children in the Surabaya population and consist of two response variables, weight (kg) and height (cm), with age (months) as the predictor variable. Based on the generalized cross-validation criterion, the biresponse model estimated with the local linear estimator gives optimal bandwidths of 1.41 and 1.56 and determination coefficients (R2) of 99.99% and 99.98% for the boys' and girls' growth curves, respectively. Both curves satisfy the goodness-of-fit criterion, i.e., the determination coefficient tends to one. There is also a difference in the pattern of the growth curves: the boys' median growth curve is higher than the girls'.
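
    The following minimal Python sketch shows a local linear (kernel-weighted) curve estimate of the kind used here, applied to toy weight-for-age data; a true biresponse estimator fits weight and height jointly, which this single-response illustration, with an assumed Gaussian kernel, does not attempt.

      import numpy as np

      def local_linear(x, y, x0, h):
          # local linear estimate of E[y | x = x0] with bandwidth h
          w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
          X = np.column_stack([np.ones_like(x), x - x0])
          WX = X * w[:, None]
          beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
          return beta[0]                           # intercept = fit at x0

      age = np.arange(0.0, 25.0)                       # months 0..24
      weight = 3.2 + 7.5 * (1.0 - np.exp(-age / 8.0))  # toy weights (kg)
      curve = [local_linear(age, weight, a, h=1.41) for a in age]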

  8. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach

    PubMed Central

    Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-01-01

    Background Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. Objective To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Methods Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Results Analysis of the 5 behavior adoption stages showed that stage 1 (“unengaged”) was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). “Unengaged” nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already “acting” (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012=21.83, P<.001). Furthermore, participants differed widely in their readiness to adopt nutrition

  9. Highly curved image sensors: a practical approach for improved optical performance

    NASA Astrophysics Data System (ADS)

    Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff

    2017-06-01

    The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation and create a high-stress, stretch-dominated state. Our process creates a bridge between the high-precision, low-cost but planar CMOS process and ideal non-planar component shapes, such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state-of-the-art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high-performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.

  10. Methods for Performing Survival Curve Quality-of-Life Assessments.

    PubMed

    Sumner, Walton; Ding, Eric; Fischer, Irene D; Hagen, Michael D

    2014-08-01

    Many medical decisions involve an implied choice between alternative survival curves, typically with differing quality of life. Common preference assessment methods neglect this structure, creating some risk of distortions. Survival curve quality-of-life assessments (SQLA) were developed from Gompertz survival curves fitting the general population's survival. An algorithm was developed to generate relative discount rate-utility (DRU) functions from a standard survival curve and health state and an equally attractive alternative curve and state. A least-mean-squared-distance algorithm was developed to describe how nearly 3 or more DRU functions intersect. These techniques were implemented in a program called X-Trade and tested. SQLA scenarios can portray realistic treatment choices. A side effect scenario portrays one prototypical choice: to extend life while experiencing some loss, such as an amputation. A risky treatment scenario portrays procedures with an initial mortality risk. A time trade scenario mimics conventional time tradeoffs. Each SQLA scenario yields DRU functions with distinctive shapes, such as sigmoid curves or vertical lines. One SQLA can imply a discount rate or utility if the other value is known and both values are temporally stable. Two SQLA exercises imply a unique discount rate and utility if the inferred DRU functions intersect. Three or more SQLA results can quantify uncertainty or inconsistency in discount rate and utility estimates. Pilot studies suggested that many subjects could learn to interpret survival curves and do SQLA, although SQLA confuse some people. Compared with SQLA, standard gambles quantify very low utilities more easily, and time tradeoffs are simpler for high utilities. When discount rates approach zero, time tradeoffs are as informative as and easier to do than SQLA. SQLA may complement conventional utility assessment methods. © The Author(s) 2014.

  11. Stochastic simulation of soil particle-size curves in heterogeneous aquifer systems through a Bayes space approach

    NASA Astrophysics Data System (ADS)

    Menafoglio, A.; Guadagnini, A.; Secchi, P.

    2016-08-01

    We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curve and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) deal effectively with the data dimensionality and constraints and (b) develop a simulation method for PSCs based upon a suitable and well-defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.

  12. The relationship between offspring size and fitness: integrating theory and empiricism.

    PubMed

    Rollinson, Njal; Hutchings, Jeffrey A

    2013-02-01

    How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides

  13. Graphical approach to assess the soil fertility evaluation model validity for rice (case study: southern area of Merapi Mountain, Indonesia)

    NASA Astrophysics Data System (ADS)

    Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo

    2018-03-01

    Climate change has been reported to exacerbate the degradation of land resources, including declining soil fertility. Appropriate validation of soil fertility evaluation models could reduce the risk of climate change effects on plant cultivation. This study aims to assess the validity of soil fertility evaluation models using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version, the FAO Unesco version, and the Kyuma version. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model, along with the coefficient of determination (R2), can be tested to evaluate the quality and validity of the model. This research used the EViews 9 program with a graphical approach. The results yield three curves: actual, fitted, and residual. If the actual and fitted curves are widely separated or irregular, the quality of the model is not good, i.e., many factors are still not included in the model (large residuals); conversely, if the actual and fitted curves show exactly the same shape, all relevant factors have been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.

  14. Calibrating nadir striped artifacts in a multibeam backscatter image using the equal mean-variance fitting model

    NASA Astrophysics Data System (ADS)

    Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue

    2017-07-01

    Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and moved according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.

  15. Thermal analysis and FTIR spectral curve-fitting investigation of formation mechanism and stability of indomethacin-saccharin cocrystals via solid-state grinding process.

    PubMed

    Zhang, Gang-Chun; Lin, Hong-Liang; Lin, Shan-Yang

    2012-07-01

    The cocrystal formation of indomethacin (IMC) and saccharin (SAC) by mechanical cogrinding or thermal treatment was investigated. The formation mechanism and stability of the IMC-SAC cocrystal prepared by the cogrinding process were explored. A typical IMC-SAC cocrystal was also prepared by the solvent evaporation method. All the samples were identified and characterized by using differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) microspectroscopy with curve-fitting analysis. The physical stability of different IMC-SAC ground mixtures before and after storage for 7 months was examined. Stepwise measurements carried out at specific intervals over a continuous cogrinding process showed continuous growth of cocrystal formation between IMC and SAC. The main IR spectral shifts from 3371 to 3347 cm(-1) and 1693 to 1682 cm(-1) for IMC, as well as from 3094 to 3136 cm(-1) and 1718 to 1735 cm(-1) for SAC, suggested that the OH and NH groups in both chemical structures took part in hydrogen bonding, leading to the formation of the IMC-SAC cocrystal. The melting point of 184 °C for the 30-min IMC-SAC ground mixture was almost the same as that of the solvent-evaporated IMC-SAC cocrystal. The 30-min IMC-SAC ground mixture was also confirmed, by curve-fitting analysis of the IR spectra, to have components and contents similar to those of the solvent-evaporated IMC-SAC cocrystal. The thermally induced IMC-SAC cocrystal formation was also found to depend on the treatment temperature. Different IMC-SAC ground mixtures stored at 25 °C/40% RH for 7 months showed an increased tendency toward IMC-SAC cocrystallization. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Fitting the post-keratoplasty cornea with hydrogel lenses.

    PubMed

    Katsoulos, Costas; Nick, Vasileiou; Lefteris, Karageorgiadis; Theodore, Mousafeiropoulos

    2009-02-01

    We report two patients who had undergone penetrating keratoplasty (three eyes in total) and who were fitted with hydrogel lenses. In the first case, a 28-year-old male presented with an interest in contact lens fitting. He had undergone corneal transplantation in both eyes about 5 years earlier. After topographies and trial fitting were performed, he was fitted with reverse-geometry hydrogel lenses, owing to the globular geometry of the cornea, the resulting instability of rigid gas-permeable (RGP) lenses, and personal preference. In the second case, a 26-year-old female who had also undergone penetrating keratoplasty was fitted with a hydrogel toric lens of high cylinder in the right eye. The final hydrogel lenses for the first patient incorporated a custom tricurve design, in which the second curve was steeper than the base curve and the third curve flatter than the second but still steeper than the first. Visual acuity was 6/7.5 RE and a mediocre 6/15 LE (OU 6/7.5). The second patient achieved 6/4.5 acuity RE with the high-cylinder hydrogel toric lens. In corneas exhibiting extreme protrusion, such as keratoglobus and some cases after penetrating keratoplasty, curvatures are so extreme and the cornea so globular that specific fitting options are required: sclerals, small-diameter RGPs and reverse-geometry hydrogel lenses, in order to improve lens and optical stability. In selected cases such as the above, a large-diameter inverse-geometry RGP lens may be fitted only if the eyelid shape and tension permit. The first case demonstrates that the option of hydrogel lenses is viable when the patient has no interest in RGPs and in certain cases can improve vision to satisfactory levels. In other cases, graft toricity might be so high that the practitioner will need to employ hydrogel torics with large amounts of cylinder in order to correct vision. In such cases, the patient should be closely monitored in order to avoid complications from hypoxia.

  17. The behavioral economics of drug self-administration: A review and new analytical approach for within-session procedures

    PubMed Central

    Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary

    2012-01-01

    Rationale Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach to curve fitting has not been reported for the within-session threshold procedure. Objectives We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
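
    A minimal sketch of fitting an exponential demand curve is given below, assuming the Hursh and Silberberg (2008) log10 form with a fixed range constant k; the price points, consumption values, and the numerical search for Pmax are illustrative, and the paper's brain-concentration-based filtering of price points is not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      K = 2.0  # range constant in log10 units; assumed fixed here

      def log_demand(price, q0, alpha):
          # Hursh & Silberberg (2008) exponential demand, log10 form
          return np.log10(q0) + K * (np.exp(-alpha * q0 * price) - 1.0)

      price = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])   # responses/mg (toy)
      intake = np.array([1.0, .95, .88, .75, .55, .32, .14, .05])  # mg (toy)

      (q0, alpha), _ = curve_fit(log_demand, price, np.log10(intake),
                                 p0=[1.0, 0.01])
      # Pmax = unit price at which expenditure (price * consumption) peaks
      grid = np.logspace(-1, 3, 500)
      pmax = grid[np.argmax(grid * 10 ** log_demand(grid, q0, alpha))]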

  18. Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.

    PubMed

    Roman, A; Ahmed, K; Challacombe, B

    2016-05-01

    Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, the approach is technically demanding, and to date there are limited data on the nature and progression of the learning curve in RPN. The aims were to analyse the impact of case mix on the RPN learning curve and to model that curve. The records of the first 100 RPNs performed at our institution by a single surgeon (B.C.) were analysed (June 2010-December 2013). Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split-group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R(2) values were calculated for each group. Of 100 patients (F: 28, M: 72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7, respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case; 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. There is no single well-fitting model that can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  19. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  20. Characteristic overpressure-impulse-distance curves for vapour cloud explosions using the TNO Multi-Energy model.

    PubMed

    Díaz Alonso, Fernando; González Ferradás, Enrique; Sánchez Pérez, Juan Francisco; Miñana Aznar, Agustín; Ruiz Gimeno, José; Martínez Alonso, Jesús

    2006-09-21

    A number of models have been proposed to calculate overpressure and impulse from accidental industrial explosions. When the blast is produced by ignition of a vapour cloud, the TNO Multi-Energy model is widely used. From the curves given by this model, data are fitted to obtain equations showing the relationship between overpressure, impulse and distance. These equations, referred to herein as characteristic curves, can be fitted by means of power equations, which depend on explosion energy and charge strength. Characteristic curves allow the determination of overpressure and impulse at each distance.

  1. Meta-analysis of single-arm survival studies: a distribution-free approach for estimating summary survival curves with random effects.

    PubMed

    Combescure, Christophe; Foucher, Yohann; Jackson, Daniel

    2014-07-10

    In epidemiologic studies and clinical trials with time-dependent outcomes (for instance, death or disease progression), survival curves are used to describe the risk of the event over time. In meta-analyses of studies reporting a survival curve, the most informative finding is a summary survival curve. In this paper, we propose a method to obtain a distribution-free summary survival curve by expanding the product-limit estimator of survival for aggregated survival data. The extension of DerSimonian and Laird's methodology for multiple outcomes is applied to account for the between-study heterogeneity. The I(2) and H(2) statistics are used to quantify the impact of the heterogeneity in the published survival curves. A statistical test for between-strata comparison is proposed, with the aim of exploring study-level factors potentially associated with survival. The performance of the proposed approach is evaluated in a simulation study. Our approach is also applied to synthesize the survival of untreated patients with hepatocellular carcinoma from aggregate data of 27 studies and the graft survival of kidney transplant recipients from individual data from six hospitals. Copyright © 2014 John Wiley & Sons, Ltd.
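
    The product-limit estimator that the proposed summary curve expands is sketched below for a single study with toy data; the aggregation of survival data across studies and the DerSimonian-Laird random effects are beyond this illustration.

      import numpy as np

      def kaplan_meier(time, event):
          # product-limit (Kaplan-Meier) survival estimate
          order = np.argsort(time)
          time, event = time[order], event[order]
          surv, s = [], 1.0
          for t in np.unique(time[event == 1]):
              at_risk = np.sum(time >= t)            # still under observation
              deaths = np.sum((time == t) & (event == 1))
              s *= 1.0 - deaths / at_risk            # product-limit step
              surv.append((t, s))
          return np.array(surv)

      t = np.array([2., 3., 3., 5., 8., 9., 12., 15.])
      d = np.array([1, 1, 0, 1, 1, 0, 1, 0])         # 1 = event, 0 = censored
      print(kaplan_meier(t, d))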

  2. Numerical approach in defining milling force taking into account curved cutting-edge of applied mills

    NASA Astrophysics Data System (ADS)

    Bondarenko, I. R.

    2018-03-01

    The paper tackles the task of applying a numerical approach to determine the cutting forces when machining carbon steel with a curved-cutting-edge mill. To solve this task, the curved surface of the cutting edge was subjected to step approximation, and the chip section was split into discrete elements; the cutting force was then defined as the sum of the elementary forces observed during the cut of every element. Comparison of calculations from the proposed method with those from the Kienzle dependence showed sufficient accuracy, which makes it possible to apply the method in practice.

  3. An assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue, Ireland

    NASA Astrophysics Data System (ADS)

    Harrington, Seán T.; Harrington, Joseph R.

    2013-03-01

    This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the south of Ireland, are underlain by sandstones, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by fitting a power curve to the untransformed data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are obtained and compared to the turbidity-based loads for each river. Loads are also estimated using stage- and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage-separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% on the River Owenabue. The average monthly error on the River Bandon is -12%, with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river.
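
    The two rating-curve constructions compared above can be sketched as follows, with toy flow and concentration values; the variable names and data are illustrative only, and the stage and seasonal separation used in the paper is not reproduced.

      import numpy as np
      from scipy.optimize import curve_fit

      flow = np.array([1.2, 2.5, 4.0, 7.1, 12.0, 20.5, 33.0])    # m3/s (toy)
      ssc = np.array([5.0, 9.0, 16.0, 30.0, 48.0, 90.0, 160.0])  # mg/L (toy)

      # method 1: linear regression in log-log space, C = a * Q**b
      b, log_a = np.polyfit(np.log(flow), np.log(ssc), 1)
      a = np.exp(log_a)

      # method 2: power curve fitted directly to the untransformed data
      (a2, b2), _ = curve_fit(lambda q, a, b: a * q**b, flow, ssc, p0=[a, b])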

  4. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    PubMed

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012=21.83, P<.001). Furthermore, participants differed widely in their readiness to adopt nutrition and fitness apps, ranging from having "decided

  5. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
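
    A rough sketch of the ROUT idea follows: a robust fit under a Lorentzian-like error model (the 'cauchy' loss), outlier flagging by a Benjamini-Hochberg false-discovery-rate test on scaled residuals, then ordinary least squares on the cleaned data. The residual-scale estimate and test details here are simplifications for illustration, not the authors' exact adaptive procedure.

      import numpy as np
      from scipy import stats
      from scipy.optimize import least_squares, curve_fit

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 10.0, 50)
      y = 5.0 * np.exp(-0.4 * x) + rng.normal(0.0, 0.2, 50)
      y[10] += 3.0                               # plant one gross outlier

      def resid(p):                              # residuals of a*exp(-b*x)
          return p[0] * np.exp(-p[1] * x) - y

      fit = least_squares(resid, x0=[1.0, 1.0], loss="cauchy")  # robust fit
      r = resid(fit.x)
      scale = 1.4826 * np.median(np.abs(r))      # robust residual scale (MAD)
      pvals = 2.0 * stats.norm.sf(np.abs(r) / scale)

      n, q = r.size, 0.01                        # Benjamini-Hochberg, 1% FDR
      order = np.argsort(pvals)
      passed = pvals[order] <= q * np.arange(1, n + 1) / n
      cut = passed.nonzero()[0].max() + 1 if passed.any() else 0
      keep = np.setdiff1d(np.arange(n), order[:cut])

      # final ordinary least-squares fit on the cleaned data
      p_ols, _ = curve_fit(lambda x, a, b: a * np.exp(-b * x), x[keep], y[keep])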

  6. Mapping conduction velocity of early embryonic hearts with a robust fitting algorithm

    PubMed Central

    Gu, Shi; Wang, Yves T; Ma, Pei; Werdich, Andreas A; Rollins, Andrew M; Jenkins, Michael W

    2015-01-01

    Cardiac conduction maturation is an important and integral component of heart development. Optical mapping with voltage-sensitive dyes allows sensitive measurements of electrophysiological signals over the entire heart. However, accurate measurements of conduction velocity during early cardiac development are typically hindered by low signal-to-noise ratio (SNR) measurements of action potentials. Here, we present a novel image processing approach based on least squares optimizations, which enables high-resolution, low-noise conduction velocity mapping of smaller tubular hearts. First, the action potential trace measured at each pixel is fit to a curve consisting of two cumulative normal distribution functions. Then, the activation time at each pixel is determined based on the fit, and the spatial gradient of activation time is determined with a two-dimensional (2D) linear fit over a square-shaped window. The size of the window is adaptively enlarged until the gradients can be determined within a preset precision. Finally, the conduction velocity is calculated based on the activation time gradient, and further corrected for three-dimensional (3D) geometry that can be obtained by optical coherence tomography (OCT). We validated the approach using published action potential traces from computer simulations. We further validated the method by adding artificially generated noise to the signal to simulate various SNR conditions using a curved simulated image (digital phantom) that resembles a tubular heart. This method proved to be robust, even at very low SNR conditions (SNR = 2-5). We also established an empirical equation to estimate the maximum conduction velocity that can be accurately measured under different conditions (e.g. sampling rate, SNR, and pixel size). Finally, we demonstrated high-resolution conduction velocity maps of the quail embryonic heart at a looping stage of development. PMID:26114034
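
    The first step above can be sketched as follows, assuming the trace is parameterized as the difference of two normal CDFs (upstroke minus repolarization) plus a baseline; the exact parameterization, the adaptive gradient window, and the 3D geometric correction of the paper are not reproduced.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import curve_fit

      def ap_model(t, base, amp, t_up, s_up, t_dn, s_dn):
          # upstroke CDF minus repolarization CDF, plus baseline offset
          return base + amp * (norm.cdf(t, t_up, s_up) - norm.cdf(t, t_dn, s_dn))

      t = np.linspace(0.0, 400.0, 400)                    # time in ms
      clean = ap_model(t, 0.0, 1.0, 80, 5, 250, 30)       # synthetic trace
      noisy = clean + np.random.default_rng(2).normal(0, 0.2, t.size)  # low SNR

      p, _ = curve_fit(ap_model, t, noisy, p0=[0, 1, 100, 10, 200, 20])
      activation_time = p[2]       # upstroke midpoint; its spatial gradient
                                   # across pixels yields conduction velocity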

  7. Testing feedback-modified dark matter haloes with galaxy rotation curves: estimation of halo parameters and consistency with ΛCDM scaling relations

    NASA Astrophysics Data System (ADS)

    Katz, Harley; Lelli, Federico; McGaugh, Stacy S.; Di Cintio, Arianna; Brook, Chris B.; Schombert, James M.

    2017-04-01

    Cosmological N-body simulations predict dark matter (DM) haloes with steep central cusps (e.g. NFW). This contradicts observations of gas kinematics in low-mass galaxies that imply the existence of shallow DM cores. Baryonic processes such as adiabatic contraction and gas outflows can, in principle, alter the initial DM density profile, yet their relative contributions to the halo transformation remain uncertain. Recent high-resolution, cosmological hydrodynamic simulations by Di Cintio et al. (DC14) predict that inner density profiles depend systematically on the ratio of stellar-to-DM mass (M*/Mhalo). Using a Markov Chain Monte Carlo approach, we test the NFW and the M*/Mhalo-dependent DC14 halo models against a sample of 147 galaxy rotation curves from the new Spitzer Photometry and Accurate Rotation Curves data set. These galaxies all have extended H I rotation curves from radio interferometry as well as accurate stellar-mass-density profiles from near-infrared photometry. The DC14 halo profile provides markedly better fits to the data compared to the NFW profile. Unlike NFW, the DC14 halo parameters found in our rotation-curve fits naturally fall within two standard deviations of the mass-concentration relation predicted by Λ cold dark matter (ΛCDM) and the stellar mass-halo mass relation inferred from abundance matching with few outliers. Halo profiles modified by baryonic processes are therefore more consistent with expectations from ΛCDM cosmology and provide better fits to galaxy rotation curves across a wide range of galaxy properties than do halo models that neglect baryonic physics. Our results offer a solution to the decade-long cusp-core discrepancy.
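
    For reference, the sketch below evaluates the circular-velocity curve of the NFW halo that serves as the baseline model in such fits; the DC14 profile, whose inner slope varies with M*/Mhalo, is not reproduced, and the halo parameters shown are illustrative.

      import numpy as np

      G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

      def v_nfw(r_kpc, rho0, rs):
          # circular velocity (km/s) from the NFW enclosed mass at radius r
          x = r_kpc / rs
          m_enc = 4.0 * np.pi * rho0 * rs**3 * (np.log(1 + x) - x / (1 + x))
          return np.sqrt(G * m_enc / r_kpc)

      r = np.linspace(0.5, 30.0, 60)            # radii in kpc
      v = v_nfw(r, rho0=1.0e7, rs=10.0)         # illustrative halo parameters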

  8. Applying elliptic curve cryptography to a chaotic synchronisation system: neural-network-based approach

    NASA Astrophysics Data System (ADS)

    Hsiao, Feng-Hsiag

    2017-10-01

    In order to obtain double encryption via elliptic curve cryptography (ECC) and chaotic synchronisation, this study presents a design methodology for neural-network (NN)-based secure communications in multiple time-delay chaotic systems. ECC is an asymmetric encryption scheme whose strength rests on the difficulty of solving the elliptic curve discrete logarithm problem, a much harder problem than factoring integers. Because the problem is much harder, fewer bits suffice to provide the same level of security. To enhance the strength of the cryptosystem, we conduct double encryption that combines chaotic synchronisation with ECC. According to the improved genetic algorithm, a fuzzy controller is synthesised to realise exponential synchronisation and achieve optimal H∞ performance by minimising the disturbance attenuation level. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the proposed approach.
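
    The group operation underlying ECC can be sketched on a toy curve as follows; the tiny prime field and curve constants are illustrative only, and real ECC uses standardized curves with constant-time implementations.

      P = 97                        # toy field prime (illustrative only)
      A, B = 2, 3                   # curve: y^2 = x^3 + A*x + B (mod P)
      O = None                      # point at infinity, the group identity

      def ec_add(p1, p2):
          # group law on the curve; pow(v, -1, P) is the modular inverse
          if p1 is O:
              return p2
          if p2 is O:
              return p1
          (x1, y1), (x2, y2) = p1, p2
          if x1 == x2 and (y1 + y2) % P == 0:
              return O                            # p2 is the inverse of p1
          if p1 == p2:
              lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
          x3 = (lam * lam - x1 - x2) % P
          return (x3, (lam * (x1 - x3) - y1) % P)

      def ec_mul(k, pt):
          # double-and-add; inverting this map is the discrete log problem
          acc = O
          while k:
              if k & 1:
                  acc = ec_add(acc, pt)
              pt = ec_add(pt, pt)
              k >>= 1
          return acc

      g = (3, 6)      # on the curve: 6**2 % 97 == (3**3 + 2*3 + 3) % 97 == 36
      public_key = ec_mul(13, g)    # private scalar 13 -> public point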

  9. The VMC survey - XXIII. Model fitting of light and radial velocity curves of Small Magellanic Cloud classical Cepheids

    NASA Astrophysics Data System (ADS)

    Marconi, M.; Molinaro, R.; Ripepi, V.; Cioni, M.-R. L.; Clementini, G.; Moretti, M. I.; Ragosta, F.; de Grijs, R.; Groenewegen, M. A. T.; Ivanov, V. D.

    2017-04-01

    We present the results of the χ2 minimization model fitting technique applied to optical and near-infrared photometric and radial velocity data for a sample of nine fundamental and three first overtone classical Cepheids in the Small Magellanic Cloud (SMC). The near-infrared photometry (JK filters) was obtained by the European Southern Observatory (ESO) public survey 'VISTA near-infrared Y, J, Ks survey of the Magellanic Clouds system' (VMC). For each pulsator, isoperiodic model sequences have been computed by adopting a non-linear convective hydrodynamical code in order to reproduce the multifilter light and (when available) radial velocity curve amplitudes and morphological details. The inferred individual distances provide an intrinsic mean value for the SMC distance modulus of 19.01 mag and a standard deviation of 0.08 mag, in agreement with the literature. Moreover, the intrinsic masses and luminosities of the best-fitting models show that all these pulsators are brighter than the canonical evolutionary mass-luminosity relation (MLR), suggesting a significant efficiency of core overshooting and/or mass loss. Assuming that the inferred deviation from the canonical MLR is due only to mass loss, we derive the expected distribution of percentage mass loss as a function of both the pulsation period and the canonical stellar mass. Finally, a good agreement is found between the predicted mean radii and current period-radius (PR) relations in the SMC available in the literature. The results of this investigation support the predictive capabilities of the adopted theoretical scenario and pave the way for the application to other extensive databases at various chemical compositions, including the VMC Large Magellanic Cloud pulsators and Galactic Cepheids with Gaia parallaxes.

  10. Numerical integration of discontinuous functions: moment fitting and smart octree

    NASA Astrophysics Data System (ADS)

    Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander

    2017-11-01

    A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules do not perform well anymore. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in less integration points and in high accuracy.
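
    A one-dimensional illustration of the moment fitting idea follows: for an element [0, 1] cut at x = c, weights at fixed points are solved from a Vandermonde-type system so that monomial moments over the physical part [c, 1] are integrated exactly. The cut location, point set, and polynomial degree are arbitrary choices, and the optimization of point positions used in the paper is not attempted.

      import numpy as np

      c = 0.3                                 # interface cuts element [0, 1]
      pts = np.linspace(0.0, 1.0, 5)          # fixed quadrature points
      k = np.arange(pts.size)

      # solve sum_j w_j * x_j**k = integral of x**k over [c, 1], k = 0..4
      V = pts[None, :] ** k[:, None]          # Vandermonde-type matrix
      moments = (1.0 - c ** (k + 1)) / (k + 1)
      w = np.linalg.solve(V, moments)

      # the weights now integrate the smooth extension over the physical part
      approx = w @ np.cos(pts)                # ~ integral of cos(x) on [c, 1]
      exact = np.sin(1.0) - np.sin(c)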

  11. Can low-resolution airborne laser scanning data be used to model stream rating curves?

    USGS Publications Warehouse

    Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar

    2015-01-01

    This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge and are thus integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.

  12. Cautions regarding the fitting and interpretation of survival curves: examples from NICE single technology appraisals of drugs for cancer.

    PubMed

    Connock, Martin; Hyde, Chris; Moore, David

    2011-10-01

    The UK National Institute for Health and Clinical Excellence (NICE) has used its Single Technology Appraisal (STA) programme to assess several drugs for cancer. Typically, the evidence submitted by the manufacturer comes from one short-term randomized controlled trial (RCT) demonstrating improvement in overall survival and/or in delay of disease progression, and these are the pre-eminent drivers of cost effectiveness. We draw attention to key issues encountered in assessing the quality and rigour of the manufacturers' modelling of overall survival and disease progression. Our examples are two recent STAs: sorafenib (Nexavar®) for advanced hepatocellular carcinoma, and azacitidine (Vidaza®) for higher-risk myelodysplastic syndromes (MDS). The choice of parametric model had a large effect on the predicted treatment-dependent survival gain. Logarithmic models (log-Normal and log-logistic) delivered double the survival advantage that was derived from Weibull models. Both submissions selected the logarithmic fits for their base-case economic analyses and justified selection solely on Akaike Information Criterion (AIC) scores. AIC scores in the azacitidine submission failed to match the choice of the log-logistic over Weibull or exponential models, and the modelled survival in the intervention arm lacked face validity. AIC scores for sorafenib models favoured log-Normal fits; however, since there is no statistical method for comparing AIC scores, and differences may be trivial, it is generally advised that the plausibility of competing models should be tested against external data and explored in diagnostic plots. Function fitting to observed data should not be a mechanical process validated by a single crude indicator (AIC). Projective models should show clear plausibility for the patients concerned and should be consistent with other published information. Multiple rather than single parametric functions should be explored and tested with diagnostic plots. When
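
    The kind of AIC comparison discussed above can be sketched as follows on toy, uncensored data (real submissions must handle censoring; scipy's fisk distribution is the log-logistic); as the passage argues, the printed AIC values should be a starting point, to be checked against diagnostic plots and external data, not the sole justification for model choice.

      import numpy as np
      from scipy import stats

      # toy survival times drawn from a Weibull "truth"
      t = stats.weibull_min.rvs(1.4, scale=12.0, size=200,
                                random_state=np.random.default_rng(3))

      for dist in (stats.weibull_min, stats.lognorm, stats.fisk):
          shape, loc, scale = dist.fit(t, floc=0)    # location fixed at zero
          ll = np.sum(dist.logpdf(t, shape, loc, scale))
          print(f"{dist.name:12s} AIC = {2 * 2 - 2 * ll:8.1f}")  # k = 2 params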

  13. Navigation and flight director guidance for the NASA/FAA helicopter MLS curved approach flight test program

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Lee, M. G.

    1985-01-01

    The navigation and flight director guidance systems implemented in the NASA/FAA helicopter microwave landing system (MLS) curved approach flight test program are described. Flight tests were conducted at the U.S. Navy's Crows Landing facility, using the NASA Ames UH-1H helicopter equipped with the V/STOLAND avionics system. The purpose of these tests was to investigate the feasibility of flying complex, curved and descending approaches to a landing using MLS flight director guidance. A description of the navigation aids used, the avionics system, cockpit instrumentation and on-board navigation equipment used for the flight test is provided. Three generic reference flight paths were developed and flown during the test: U-turn, S-turn and straight-in flight profiles. These profiles and their geometries are described in detail. A 3-cue flight director was implemented on the helicopter, and the formulation and implementation of the flight director laws are also presented. Performance data and analysis are presented for one pilot conducting the flight director approaches.

  14. Examining the Earnings Trajectories of Community College Students Using a Piecewise Growth Curve Modeling Approach. A CAPSEE Working Paper

    ERIC Educational Resources Information Center

    Jaggars, Shanna Smith; Xu, Di

    2015-01-01

    Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…

  15. Maximum-likelihood curve-fitting scheme for experiments with pulsed lasers subject to intensity fluctuations.

    PubMed

    Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F

    2003-03-20

    Evaluation schemes, e.g., least-squares fitting, are not generally applicable to all types of experiments. If an evaluation scheme is not derived from a measurement model that properly describes the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data can result. We outline ways in which statistical data evaluation schemes should be derived for all types of experiment, and we demonstrate them for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation contains the following steps. First, one has to provide a measurement model that considers the statistical variation of all enclosed variables. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that it have accuracy and precision as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared to well-established schemes in terms of the fitting of power functions and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.
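
    The three-step recipe above can be illustrated with a deliberately simple measurement model in which the signal scales linearly with a measured per-shot pulse energy; this Gaussian toy reduces to least squares and is only meant to show the model-to-likelihood-to-fit workflow, not the correlated-fluctuation scheme derived in the paper. All names and data are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      E = rng.normal(1.0, 0.15, 300)             # measured per-shot energy
      S = 2.5 * E + rng.normal(0.0, 0.05, 300)   # measured signal, slope 2.5

      def nll(theta):
          # negative log-likelihood of the model S = b*E + Gaussian noise
          b, sigma = theta
          r = S - b * E
          return 0.5 * np.sum(r**2 / sigma**2 + np.log(2.0 * np.pi * sigma**2))

      fit = minimize(nll, x0=[1.0, 0.1], method="Nelder-Mead")
      b_hat, sigma_hat = fit.x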

  16. Fixture For Drilling And Tapping A Curved Workpiece

    NASA Technical Reports Server (NTRS)

    Espinosa, P. S.; Lockyer, R. T.

    1992-01-01

    Simple fixture guides drilling and tapping of holes in prescribed locations and orientations on workpiece having curved surface. Tool conceived for use in reworking complexly curved helicopter blades made of composite materials. Fixture is block of rigid foam with epoxy filler, custom-fitted to surface contour, containing bushings and sleeves at drilling and tapping sites. Bushings changed, so taps and drills of various sizes accommodated. In use, fixture secured to surface by hold-down bolts extending through sleeves and into threads in substrate.

  17. Physical fitness reference standards in fibromyalgia: The al-Ándalus project.

    PubMed

    Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B

    2017-11-01

    We aimed (1) to report age-specific physical fitness levels in a representative sample of people with fibromyalgia from Andalusia; and (2) to compare the fitness levels of people with fibromyalgia with non-fibromyalgia controls. This cross-sectional study included 468 (21 men) patients with fibromyalgia and 360 (55 men) controls. The fibromyalgia sample was geographically representative of southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using linear regression for men. Our results show that people with fibromyalgia performed worse than controls in all fitness tests (P < 0.001) and in all age ranges (P < 0.001). This study provides a comprehensive description of age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample from southern Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with those of age-matched healthy controls. This information could be useful for correctly interpreting physical fitness assessments and for helping health care providers identify individuals at risk of losing physical independence. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Rotation curve for the Milky Way galaxy in conformal gravity

    NASA Astrophysics Data System (ADS)

    O'Brien, James G.; Moss, Robert J.

    2015-05-01

    Galactic rotation curves have proven to be the testing ground for dark matter bounds in galaxies, and our own Milky Way is one of many large spiral galaxies that must follow the same models. Over the last decade, the rotation curve of the Milky Way galaxy has been studied and extended by many authors. Since conformal gravity has now successfully fitted the rotation curves of almost 140 galaxies, we present here the fit to our own Milky Way. However, the Milky Way is not just an ordinary galaxy to append to our list; instead, it provides a robust test of a fundamental difference between conformal gravity rotation curves and standard cold dark matter models. It was shown by Mannheim and O'Brien that in conformal gravity, the presence of a quadratic potential causes the rotation curve to eventually fall off after its flat portion. This effect can currently be seen in only a select few galaxies whose rotation curves are studied well beyond a few multiples of the optical galactic scale length. Due to the recent work of Sofue et al. and Kundu et al., the rotation curve of the Milky Way has now been studied to a degree where we can test the predicted fall-off in the conformal gravity rotation curve. We find that, as for the other galaxies already studied in conformal gravity, we obtain excellent agreement with the rotational data, and the prediction includes the eventual fall-off at large distances from the galactic center.

  19. A multiple biomarker risk score for guiding clinical decisions using a decision curve approach.

    PubMed

    Hughes, Maria F; Saarela, Olli; Blankenberg, Stefan; Zeller, Tanja; Havulinna, Aki S; Kuulasmaa, Kari; Yarnell, John; Schnabel, Renate B; Tiret, Laurence; Salomaa, Veikko; Evans, Alun; Kee, Frank

    2012-08-01

    We assessed whether a cardiovascular risk model based on classic risk factors (e.g. cholesterol, blood pressure) could refine disease prediction if it included novel biomarkers (C-reactive protein, N-terminal pro-B-type natriuretic peptide, troponin I), using a decision curve approach which can incorporate clinical consequences. We evaluated whether a model including biomarkers and classic risk factors could improve prediction of 10-year risk of cardiovascular disease (CVD; coronary heart disease and ischaemic stroke) against a classic risk factor model using a decision curve approach in two prospective MORGAM cohorts. This included 7739 men and women with 457 CVD cases from the FINRISK97 cohort, and 2524 men with 259 CVD cases from PRIME Belfast. The biomarker model improved disease prediction in FINRISK across the high-risk group (20-40%) but not in the intermediate-risk group; at the 23% risk threshold, net benefit was 0.0033 (95% CI 0.0013-0.0052). However, in PRIME Belfast the net benefit of decisions guided by the decision curve was improved across intermediate risk thresholds (10-20%). At p(t) = 10% in PRIME, the net benefit was 0.0059 (95% CI 0.0007-0.0112), with a net increase of 6 true positive cases per 1000 people screened and a net decrease of 53 false positive cases per 1000, potentially leading to 5% fewer treatments in patients not destined for an event. The biomarker model improves 10-year CVD prediction at intermediate and high-risk thresholds and, in particular, could be clinically useful in advising middle-aged European males of their CVD risk.
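
    The net-benefit arithmetic behind a decision curve is simple enough to sketch; the following is a minimal illustration with hypothetical outcomes and risk scores (not the MORGAM data), using the standard definition NB = TP/n - FP/n * pt/(1 - pt):

```python
import numpy as np

def net_benefit(y_true, risk_scores, threshold):
    """Net benefit of treating everyone whose predicted risk >= threshold.

    NB = TP/n - FP/n * pt / (1 - pt), where pt is the risk threshold.
    """
    y_true = np.asarray(y_true)
    treat = np.asarray(risk_scores) >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

# Decision curve: net benefit across a range of thresholds (synthetic data)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                                # hypothetical outcomes
risk = np.clip(y * 0.2 + rng.uniform(0, 0.8, 1000), 0, 1)   # hypothetical model output
for pt in (0.10, 0.20, 0.23):
    print(f"pt={pt:.2f}  NB={net_benefit(y, risk, pt):+.4f}")
```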

  20. Spiral Galaxy Central Bulge Tangential Speed of Revolution Curves

    NASA Astrophysics Data System (ADS)

    Taff, Laurence

    2013-03-01

    The objective was, for the first time in a century, to scientifically analyze the ``rotation curves'' (sic) of the central bulges of scores of spiral galaxies. I commenced with a methodological, rational, geometrical, arithmetic, and statistical examination--none of them carried through before--of the radial velocity data. The requirement for such a thorough treatment is the paucity of data typically available for the central bulge: fewer than 10 observations and frequently only five, so the most must be made of them. A consequence of this logical handling is the discovery of a unique model for the central bulge volume mass density, resting on the positive-slope, linear rise of its tangential speed of revolution curve, and hence--for the first time--a reliable mass estimate. The deduction comes from a known physics-based, mathematically valid derivation (not assertion). It rests on the full (not partial) equations of motion plus Poisson's equation. Following that is a prediction for the gravitational potential energy and thence the gravitational force. From this comes a forecast for the tangential speed of revolution curve. It was analyzed in a fashion identical to that of the data, thereby closing the circle and demonstrating internal self-consistency. This is a hallmark of a scientific method-informed approach to an experimental problem. Multiple plots of the relevant quantities and measures of goodness of fit will be shown.
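
    The key inference, that a linearly rising tangential-speed curve implies solid-body rotation and hence a constant-density bulge with enclosed mass M(<R) = v(R)^2 R / G, can be sketched numerically; the data points below are hypothetical stand-ins for the fewer-than-ten observations the abstract mentions:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

# Hypothetical central-bulge data: a handful of points, as in the abstract
r = np.array([0.1, 0.2, 0.3, 0.4, 0.5])       # radius, kpc
v = np.array([21.0, 44.0, 62.0, 85.0, 103.0])  # tangential speed, km/s

# Least-squares slope of v = omega * r (a line through the origin)
omega = np.sum(r * v) / np.sum(r * r)          # km/s per kpc

# Solid-body rotation implies constant density; mass enclosed at the bulge edge
R = r[-1]
M = (omega * R) ** 2 * R / G                   # solar masses
rho = M / (4.0 / 3.0 * np.pi * R ** 3)         # Msun per kpc^3

print(f"omega = {omega:.1f} km/s/kpc, M = {M:.2e} Msun, rho = {rho:.2e} Msun/kpc^3")
```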

  1. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model, for which the effective sample size is much smaller. Also, when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other, level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  2. Tensor-guided fitting of subduction slab depths

    USGS Publications Warehouse

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data-fitting approach to address the problem. Earthquakes and active-source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth's curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  3. End-member modelling of isothermal remanent magnetization (IRM) acquisition curves: a novel approach to diagnose remagnetization

    NASA Astrophysics Data System (ADS)

    Gong, Z.; Dekkers, M. J.; Heslop, D.; Mullender, T. A. T.

    2009-08-01

    Identifying remagnetization is essential for palaeomagnetic studies and their geodynamic implications. The traditional approach is often based on directional analysis of palaeomagnetic data and field tests, which may be inconclusive if the apparent polar wander path (APWP) is poorly constrained or if the remagnetization predates folding. In several cases, rock magnetic work, particularly the measurement of hysteresis loops, allows identification of the so-called `remagnetized' and `non-remagnetized' trends. However, for weakly magnetic samples, this approach can be equivocal. Here, to improve the diagnosis of remagnetization, we investigated 192 isothermal remanent magnetization (IRM) acquisition curves (up to 700 mT) of remagnetized and non-remagnetized limestones from the Organyà Basin, northern Spain. Also, 96 IRM acquisition curves from non-remagnetized marls were studied as a cross-check for the non-remagnetized limestones. A non-parametric end-member modelling approach is used to analyse the IRM acquisition curve data sets. First, remagnetized and non-remagnetized groups were treated separately. Two or three end-members were found to adequately describe the data variability: one end-member represents the high-coercivity contribution, whereas the low-coercivity part can be described by either one end-member or two reasonably similar end-members. In the remagnetized limestones, the low-coercivity end-members tend to saturate at higher field values than in the non-remagnetized limestones. When the entire data set was processed together, a three-end-member model was judged optimal. This model consists of a high-coercivity end-member, a low-coercivity end-member that saturates at ~300-400 mT and a low-coercivity end-member that approximately saturates at 700 mT. Higher contributions of the latter end-member appear to occur dominantly in the remagnetized limestones, whereas the reverse is true for the non-remagnetized limestones, so they plot in clearly
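
    The abstract does not name its algorithm beyond "non-parametric end-member modelling"; one common way to realize such a decomposition is non-negative matrix factorization of the IRM gradient curves, sketched here on synthetic coercivity spectra (all parameters are assumed for illustration, not taken from the study):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
fields = np.linspace(0, 700, 70)  # applied field, mT

def grad(B, mean, sigma):
    """Synthetic log-Gaussian coercivity (IRM gradient) component."""
    return np.exp(-0.5 * ((np.log(B + 1) - np.log(mean)) / sigma) ** 2)

# Two synthetic end-members mixed in random proportions across 192 samples
em1, em2 = grad(fields, 60, 0.5), grad(fields, 350, 0.4)
mix = rng.uniform(0, 1, (192, 1))
data = mix @ em1[None, :] + (1 - mix) @ em2[None, :]
data = np.clip(data + 0.01 * rng.normal(size=(192, 70)), 0, None)

# Non-negative factorization: data ~= abundances @ end_members
model = NMF(n_components=2, init="nndsvda", max_iter=2000)
abundances = model.fit_transform(data)   # per-sample contributions
end_members = model.components_          # recovered coercivity spectra
print(abundances.shape, end_members.shape)
```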

  4. Multivariate curve resolution-alternating least squares and kinetic modeling applied to near-infrared data from curing reactions of epoxy resins: mechanistic approach and estimation of kinetic rate constants.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2006-02-01

    This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
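
    A minimal MCR-ALS loop is easy to sketch: alternate least-squares solutions for spectra and concentration profiles under non-negativity, so that D ≈ C Sᵀ. This toy version (synthetic data, simple clipping in place of proper constrained solvers) only illustrates the idea, not the authors' implementation:

```python
import numpy as np

def mcr_als(D, C0, n_iter=200):
    """Minimal MCR-ALS: alternate least-squares updates with non-negativity
    so that D ~= C @ S.T, starting from initial concentration guesses C0."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0].T    # spectra given concentrations
        S = np.clip(S, 0, None)                       # non-negativity constraint
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T  # concentrations given spectra
        C = np.clip(C, 0, None)
    return C, S

# Hypothetical use: D is a (times x wavelengths) augmented NIR data matrix
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
C_true = np.column_stack([np.exp(-3 * t), 1 - np.exp(-3 * t)])
S_true = rng.uniform(0, 1, (40, 2))
D = C_true @ S_true.T + 0.001 * rng.normal(size=(50, 40))
C_fit, S_fit = mcr_als(D, C_true + 0.05 * rng.normal(size=C_true.shape))
```

    The recovered concentration profiles would then be passed to a nonlinear kinetic fit to extract the rate constants, as the abstract describes.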

  5. Langevin Equation on Fractal Curves

    NASA Astrophysics Data System (ADS)

    Satin, Seema; Gangal, A. D.

    2016-07-01

    We analyze the random motion of a particle on a fractal curve using a Langevin approach. This involves defining a new velocity in terms of the mass of the fractal curve, as defined in recent work. The geometry of the fractal curve plays an important role in this analysis. A Langevin equation with a particular model of noise is proposed and solved using techniques of the Fα-calculus.

  6. Trend analyses with river sediment rating curves

    USGS Publications Warehouse

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â(Q/Q_GM)^b], where Q_GM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
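
    The discharge-normalized fit described above reduces to a log-log regression once Q is scaled by its geometric mean; a sketch with synthetic Q-C samples (the data and coefficients are hypothetical):

```python
import numpy as np

# Hypothetical paired discharge (Q) and concentration (C) samples
rng = np.random.default_rng(3)
Q = rng.lognormal(3, 1, 200)
C = 0.5 * Q ** 1.4 * rng.lognormal(0, 0.3, 200)

# Discharge-normalized power function: C = a_hat * (Q / Q_gm)^b
Q_gm = np.exp(np.mean(np.log(Q)))              # geometric mean of sampled Q
x = np.log(Q / Q_gm)
b, log_a_hat = np.polyfit(x, np.log(C), 1)     # slope b, intercept log(a_hat)
a_hat = np.exp(log_a_hat)

print(f"b = {b:.2f}, a_hat = {a_hat:.2f} (concentration at Q = Q_gm)")
```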

  7. High pressure melting curve of platinum up to 35 GPa

    NASA Astrophysics Data System (ADS)

    Patel, Nishant N.; Sunder, Meenakshi

    2018-04-01

    The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser-heated diamond anvil cell (LHDAC) facility. The laser speckle method has been employed to detect the onset of melting. The high-pressure melting curve of Pt obtained in the present study has been compared with previously reported experimental and theoretical results. The melting curve measured agrees, within experimental error, with the results of Kavner et al. Fitting the experimental data with the Simon equation gives (∂Tm/∂P) ≈ 25 K/GPa at P ≈ 1 MPa.
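
    A sketch of fitting the Simon (Simon-Glatzel) equation, T_m(P) = T_0 (1 + P/a)^(1/c), with scipy; the melting points below are hypothetical placeholders, not the paper's measurements, and T_0 is fixed at the ambient-pressure melting point of Pt:

```python
import numpy as np
from scipy.optimize import curve_fit

def simon(P, a, c, T0=2041.0):
    """Simon-Glatzel melting curve; T0 fixed to the ambient melting
    point of Pt (~2041 K), while a (GPa) and c are fitted."""
    return T0 * (1.0 + P / a) ** (1.0 / c)

# Hypothetical melting-point data up to 35 GPa
P = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])      # GPa
Tm = np.array([2160, 2270, 2370, 2460, 2540, 2620, 2690.0])  # K

(a, c), _ = curve_fit(simon, P, Tm, p0=(20.0, 4.0))
slope0 = 2041.0 / (a * c)  # dTm/dP in the low-pressure limit, K/GPa
print(f"a = {a:.1f} GPa, c = {c:.2f}, initial slope ~ {slope0:.1f} K/GPa")
```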

  8. Enhancements of Bayesian Blocks; Application to Large Light Curve Databases

    NASA Technical Reports Server (NTRS)

    Scargle, Jeff

    2015-01-01

    Bayesian Blocks are optimal piecewise-constant representations (step-function fits) of light curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations; Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations (The Cepheid Galactic Internet; Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv:0809.0339; Walkowicz et al., in progress).
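
    astropy's bayesian_blocks is an open implementation of the Scargle et al. (2013) algorithm cited above; a minimal sketch of segmenting synthetic event data (the times and rates are made up):

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Hypothetical photon arrival times: quiet background plus a flare
rng = np.random.default_rng(4)
t = np.sort(np.concatenate([rng.uniform(0, 100, 300),     # background events
                            rng.uniform(40, 45, 200)]))   # transient burst

# Optimal piecewise-constant segmentation for event data
edges = bayesian_blocks(t, fitness="events", p0=0.01)
print("change points:", edges)
```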

  9. New approach in the evaluation of a fitness program at a worksite.

    PubMed

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function, where the optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented in a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.

  10. Trait-fitness relationships determine how trade-off shapes affect species coexistence.

    PubMed

    Ehrlich, Elias; Becks, Lutz; Gaedke, Ursula

    2017-12-01

    Trade-offs between functional traits are ubiquitous in nature and can promote species coexistence depending on their shape. Classic theory predicts that convex trade-offs facilitate coexistence of specialized species with extreme trait values (extreme species), while concave trade-offs promote species with intermediate trait values (intermediate species). We show here that this prediction becomes insufficient when the traits translate non-linearly into fitness, which frequently occurs in nature; e.g., an increasing length of spines reduces grazing losses only up to a certain threshold, resulting in a saturating or sigmoid trait-fitness function. We present a novel, general approach to evaluate the effect of different trade-off shapes on species coexistence. We compare the trade-off curve to the invasion boundary of an intermediate species invading the two extreme species. At this boundary, the invasion fitness is zero; thus, it separates trait combinations where invasion is or is not possible. The invasion boundary is calculated based on measurable trait-fitness relationships. If at least one of these relationships is not linear, the invasion boundary becomes non-linear, implying that convex and concave trade-offs do not necessarily lead to different coexistence patterns. Therefore, we suggest a new ecological classification of trade-offs into extreme-favoring and intermediate-favoring, which differs from a purely mathematical description of their shape. We apply our approach to a well-established model of an empirical predator-prey system with competing prey types facing a trade-off between edibility and half-saturation constant for nutrient uptake. We show that the survival of the intermediate prey depends on the convexity of the trade-off. Overall, our approach provides a general tool to make a priori predictions on the outcome of competition among species facing a common trade-off, in dependence of the shape of the trade-off and the shape of the trait-fitness relationships.

  11. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    PubMed

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

    Radial artery pulse diagnosis has been playing an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at patients' wrists and make diagnoses based on subjective personal experience. With the development of pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal consisting of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with fitting using a Gaussian mixture function, the fitting errors of the proposed method are smaller, indicating that our method represents the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as auto-regression models and Gaussian mixture models. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
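
    Fitting a truncated Fourier (sine/cosine) series by least squares is a linear problem; a sketch on a synthetic pulse-like waveform (the harmonic count and waveform are assumptions, not the paper's settings):

```python
import numpy as np

def fit_fourier_series(y, n_harmonics=8):
    """Least-squares fit of a truncated Fourier series to one period of an
    averaged waveform; returns the coefficient (feature) vector and the fit."""
    n = len(y)
    t = np.arange(n) / n
    cols = [np.ones(n)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs

# Hypothetical averaged radial-pulse waveform (two smooth peaks)
t = np.linspace(0, 1, 256, endpoint=False)
pulse = np.exp(-((t - 0.2) / 0.05) ** 2) + 0.4 * np.exp(-((t - 0.5) / 0.08) ** 2)
coeffs, fitted = fit_fourier_series(pulse)
print("features:", len(coeffs), "RMSE:", np.sqrt(np.mean((pulse - fitted) ** 2)))
```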

  12. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
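
    MULTIVAR itself is Fortran, but the core idea, minimizing the sum of squared residuals with a BFGS variable-metric method, can be sketched compactly (the model form and data here are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-variable nonlinear model: z = p0 * exp(p1 * x) + p2 * y
rng = np.random.default_rng(5)
x, y = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
z = 2.0 * np.exp(1.5 * x) + 3.0 * y + 0.05 * rng.normal(size=100)

def sum_sq(p):
    """Sum of squares of residuals, the objective MULTIVAR minimizes."""
    resid = z - (p[0] * np.exp(p[1] * x) + p[2] * y)
    return np.sum(resid ** 2)

# BFGS variable-metric minimization of the residual sum of squares
result = minimize(sum_sq, x0=np.ones(3), method="BFGS")
print(result.x)  # fitted parameters
```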

  13. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
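
    The patented pipeline is not reproduced here, but open-source EMD implementations follow the same four steps; a sketch using the PyEMD package (pip name EMD-signal, an assumed third-party dependency) for decomposition, IMF-subset filtering, and a final curve fit on a synthetic signal:

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

# Hypothetical nonstationary signal: chirp plus slow trend plus noise
t = np.linspace(0, 1, 1000)
s = np.sin(2 * np.pi * (5 + 20 * t) * t) + 0.5 * t ** 2 + 0.1 * np.random.randn(1000)

imfs = EMD()(s)                          # step 1: extract Intrinsic Mode Functions
filtered = imfs[1:-1].sum(axis=0)        # step 3: keep a subset of IMFs (assumes >2 IMFs)
coeffs = np.polyfit(t, filtered, deg=5)  # step 4: fit a curve to the filtered signal
print(f"{len(imfs)} IMFs, polynomial fit coefficients: {coeffs}")
```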

  14. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  15. Quality Quandaries: Predicting a Population of Curves

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-12-19

    We present a random effects spline regression model that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.

  16. Quality Quandaries: Predicting a Population of Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    We present a random effects spline regression model that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.

  17. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10(exp -4) of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10(exp -4), with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  18. ASTEROID LIGHT CURVES FROM THE PALOMAR TRANSIENT FACTORY SURVEY: ROTATION PERIODS AND PHASE FUNCTIONS FROM SPARSE PHOTOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waszczak, Adam; Chang, Chan-Kao; Cheng, Yu-Chi

    We fit 54,296 sparsely sampled asteroid light curves in the Palomar Transient Factory survey to a combined rotation plus phase-function model. Each light curve consists of 20 or more observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find that the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude, and other light-curve attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ∼53,000 fitted periods. By this method we find that 9033 of our light curves (of ∼8300 unique asteroids) have “reliable” periods. Subsequent consideration of asteroids with multiple light-curve fits indicates a 4% contamination in these “reliable” periods. For 3902 light curves with sufficient phase-angle coverage and either a reliable fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than ∼2 g cm^-3, while C types have a lower limit of between 1 and 2 g cm^-3. These results are in agreement with previous density estimates. For 5-20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types' differing ability to deform under rotational stress. Lastly, we compare our absolute magnitudes (and apparent-magnitude residuals) to those of the Minor Planet Center's nominal (G = 0.15, rotation-neglecting) model; our phase-function plus Fourier-series fitting reduces asteroid photometric

  19. Beyond the SCS curve number: A new stochastic spatial runoff approach

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill and spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
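
    For reference, the classic SCS-CN event-runoff relation that the paper generalizes can be written in a few lines (depths in inches, with the conventional Ia = 0.2S initial abstraction):

```python
import numpy as np

def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """Classic SCS-CN event runoff (depths in inches).

    S = 1000/CN - 10; Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0.
    """
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S  # initial abstraction
    P = np.asarray(P, dtype=float)
    excess = np.clip(P - Ia, 0.0, None)
    return np.where(excess > 0, excess ** 2 / (excess + S), 0.0)

print(scs_cn_runoff([1.0, 2.0, 4.0], CN=75))  # runoff for three storm depths
```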

  20. The S-curve for forecasting waste generation in construction projects.

    PubMed

    Lu, Weisheng; Peng, Yi; Chen, Xi; Skitmore, Martin; Zhang, Xiaoling

    2016-10-01

    Forecasting construction waste generation is the yardstick of any effort by policy-makers, researchers, practitioners and the like to manage construction and demolition (C&D) waste. This paper develops and tests an S-curve model to indicate cumulative waste generation as a project progresses. Using 37,148 disposal records generated from 138 building projects in Hong Kong in four consecutive years from January 2011 to June 2015, a wide range of potential S-curve models are examined, and as a result, the formula that best fits the historical data set is found. The S-curve model is then further linked to project characteristics using artificial neural networks (ANNs) so that it can be used to forecast waste generation in future construction projects. It was found that, among the S-curve models, the cumulative logistic distribution is the best formula to fit the historical data. Meanwhile, contract sum, location, public-private nature, and duration can be used to forecast construction waste generation. The study provides contractors with not only an S-curve model to forecast overall waste generation before a project commences, but also a detailed baseline to benchmark and manage waste during the course of construction. The major contribution of this paper is to the body of knowledge in the field of construction waste generation forecasting. By examining it with an S-curve model, the study elevates construction waste management to a level equivalent to project cost management, where the model has already been readily accepted as a standard tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
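
    A cumulative-logistic S-curve of the kind the paper selects can be fitted with a standard nonlinear least-squares call; the disposal records below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_s_curve(t, W_total, k, t0):
    """Cumulative waste at normalized project progress t (0..1):
    a logistic CDF scaled to total waste W_total."""
    return W_total / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical cumulative disposal records over normalized project duration
progress = np.linspace(0.05, 1.0, 20)
waste = np.array([2, 5, 9, 18, 30, 55, 90, 140, 210, 290, 370, 440,
                  495, 530, 555, 570, 578, 583, 586, 588], dtype=float)

(W_total, k, t0), _ = curve_fit(logistic_s_curve, progress, waste,
                                p0=(waste[-1], 10.0, 0.5))
print(f"total ~ {W_total:.0f} t, steepness k = {k:.1f}, midpoint t0 = {t0:.2f}")
```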

  1. Learning curves in highly skilled chess players: a test of the generality of the power law of practice.

    PubMed

    Howard, Robert W

    2014-09-01

    The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared; a power function best fit the group curve for the more talented players, while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data, but a quadratic function best fit most individual curves. Individual variability is great, and neither the power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Flight-test evaluation of STOL control and flight director concepts in a powered-lift aircraft flying curved decelerating approaches

    NASA Technical Reports Server (NTRS)

    Hindson, W. S.; Hardy, G. H.; Innis, R. C.

    1981-01-01

    Flight tests were carried out to assess the feasibility of piloted steep, curved, and decelerating approach profiles in powered-lift STOL aircraft. Several STOL control concepts representative of a variety of aircraft were evaluated in conjunction with suitably designed flight directors. The tests were carried out in a real navigation environment, employed special electronic cockpit displays, and included documentation of the performance achieved and the control utilization involved in flying 180 deg turning, descending, and decelerating approach profiles to landing. The results suggest that such moderately complex piloted instrument approaches may indeed be feasible from a pilot acceptance point of view, given an acceptable navigation environment. Systems with the capability of those used in this experiment can provide the potential of achieving instrument operations on curved, descending, and decelerating landing approaches to weather minima corresponding to CTOL Category 2 criteria, while also providing a means of realizing more efficient operations during visual flight conditions.

  3. The Quality Fit.

    ERIC Educational Resources Information Center

    Vertiz, Virginia C.; Downey, Carolyn J.

    This paper proposes a two-pronged approach for examining an educational program's "quality of fit." The American Association of School Administrators' (AASA's) Curriculum Management Audit for quality indicators is reviewed, using the Downey Quality Fit Framework and Deming's 4 areas of profound knowledge and 14 points. The purpose is to…

  4. Fitting Prony Series To Data On Viscoelastic Materials

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1995-01-01

    Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method based on optimization techniques yields closer correlation with data than traditional method does. Involves no assumptions regarding the gamma(sub i) terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
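
    With the relaxation times fixed on a grid, a Prony-series fit becomes linear least squares; this simplified sketch (synthetic relaxation data, no constraints) illustrates the series form, not the partially-constrained optimizer described above:

```python
import numpy as np

def fit_prony(t, g, taus):
    """Least-squares Prony series fit with fixed relaxation times:
    g(t) ~= g_inf + sum_i g_i * exp(-t / tau_i).

    Fixing the tau_i on a log-spaced grid makes the problem linear, a common
    simplification of the full nonlinear fit."""
    A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coeffs  # [g_inf, g_1, ..., g_n]

# Hypothetical stress-relaxation data
t = np.linspace(0, 100, 200)
g = 0.3 + 0.4 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 30.0)
taus = np.logspace(-1, 2, 6)  # fixed grid of relaxation times
print(fit_prony(t, g, taus))
```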

  5. Kinetic approach for the enzymatic determination of levodopa and carbidopa assisted by multivariate curve resolution-alternating least squares.

    PubMed

    Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S

    2010-07-12

    A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction process was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and were analyzed by multivariate curve resolution-alternating least squares. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, a set of six validation mixtures was used to evaluate the prediction ability of the model. The lack of fit obtained was 4.3%, the explained variance 99.8%, and the overall prediction error 5.5%. Tablets of commercial samples were analyzed and the results were validated against the pharmacopeia method (high-performance liquid chromatography). No significant differences were found (alpha=0.05) between the reference values and those obtained with the proposed method. Notably, a single chemometric model made it possible to determine both analytes simultaneously. Copyright 2010 Elsevier B.V. All rights reserved.

  6. Work-to-Family Conflict, Positive Spillover, and Boundary Management: A Person-Environment Fit Approach

    ERIC Educational Resources Information Center

    Chen, Zheng; Powell, Gary N.; Greenhaus, Jeffrey H.

    2009-01-01

    This study adopted a person-environment fit approach to examine whether greater congruence between employees' preferences for segmenting their work domain from their family domain (i.e., keeping work matters at work) and what their employers' work environment allowed would be associated with lower work-to-family conflict and higher work-to-family…

  7. Estimation of fast and slow wave properties in cancellous bone using Prony's method and curve fitting.

    PubMed

    Wear, Keith A

    2013-04-01

    The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.

  8. Light extraction block with curved surface

    DOEpatents

    Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.

    2016-03-22

    Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k.sub.1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.

  9. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  10. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  11. Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications

    NASA Astrophysics Data System (ADS)

    Wang, K.; Lettenmaier, D. P.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of stations tested show significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hour). We fit the stations to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
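
    A maximum-likelihood (rather than the paper's Bayesian) sketch of a GEV fit with a time-varying location parameter, on synthetic annual maxima; the trend magnitude and all other numbers are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# Hypothetical annual-maximum 1-hour precipitation series (mm) over 60 years
rng = np.random.default_rng(6)
years = np.arange(60)
obs = genextreme.rvs(c=-0.1, loc=20 + 0.05 * years, scale=5, random_state=rng)

def neg_log_lik(p):
    """Negative log-likelihood of a GEV with location mu(t) = mu0 + mu1 * t."""
    mu0, mu1, sigma, c = p
    if sigma <= 0:
        return np.inf
    return -np.sum(genextreme.logpdf(obs, c=c, loc=mu0 + mu1 * years, scale=sigma))

fit = minimize(neg_log_lik, x0=(20.0, 0.0, 5.0, -0.1), method="Nelder-Mead")
mu0, mu1, sigma, c = fit.x
print(f"location trend mu1 = {mu1:.3f} mm/yr")
```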

  12. State Authenticity as Fit to Environment: The Implications of Social Identity for Fit, Authenticity, and Self-Segregation.

    PubMed

    Schmader, Toni; Sedikides, Constantine

    2017-10-01

    People seek out situations that "fit," but the concept of fit is not well understood. We introduce State Authenticity as Fit to the Environment (SAFE), a conceptual framework for understanding how social identities motivate the situations that people approach or avoid. Drawing from but expanding the authenticity literature, we first outline three types of person-environment fit: self-concept fit, goal fit, and social fit. Each type of fit, we argue, facilitates cognitive fluency, motivational fluency, and social fluency that promote state authenticity and drive approach or avoidance behaviors. Using this model, we assert that contexts subtly signal social identities in ways that implicate each type of fit, eliciting state authenticity for advantaged groups but state inauthenticity for disadvantaged groups. Given that people strive to be authentic, these processes cascade down to self-segregation among social groups, reinforcing social inequalities. We conclude by mapping out directions for research on relevant mechanisms and boundary conditions.

  13. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  14. Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity

    NASA Astrophysics Data System (ADS)

    Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md

    2017-08-01

    This paper describes an alternative way of estimating the design speed, i.e., the maximum speed at which a vehicle can be driven safely on a road, using curvature information from Bezier curve fitting on a map. We tested the approach on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves that satisfy curvature continuity between joined curves in the process of mapping the road. From the derivatives of the quintic Bezier curve, the curvature is calculated and the design speed is derived. In this paper, a higher-order Bezier curve is used: a higher-degree curve gives users more freedom to control the shape of the curve than a lower-degree one.
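
    A sketch of the pipeline: evaluate a quintic Bezier from control points, compute curvature from its derivatives, and convert the tightest radius to a design speed via the standard v² = 127R(e+f) highway formula; the control points, superelevation e, and friction factor f are all assumed values, not the paper's:

```python
import numpy as np
from math import comb

def bezier(ctrl, t):
    """Evaluate a Bezier curve (any degree) at parameters t via the Bernstein basis."""
    n = len(ctrl) - 1
    basis = np.array([comb(n, i) * (1 - t) ** (n - i) * t ** i for i in range(n + 1)])
    return basis.T @ ctrl  # shape (len(t), 2)

# Hypothetical quintic control points for one road segment (metres)
ctrl = np.array([[0, 0], [40, 5], [80, 30], [120, 70], [160, 90], [200, 95.0]])
t = np.linspace(0, 1, 500)
xy = bezier(ctrl, t)

# Curvature kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), derivatives w.r.t. t
dx, dy = np.gradient(xy[:, 0], t), np.gradient(xy[:, 1], t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Design speed from the tightest radius: v^2 = 127 * R * (e + f), v in km/h
R_min = 1.0 / kappa.max()
e, f = 0.06, 0.15  # assumed superelevation and side-friction factor
print(f"min radius ~ {R_min:.0f} m, max safe speed ~ {np.sqrt(127 * R_min * (e + f)):.0f} km/h")
```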

  15. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    NASA Astrophysics Data System (ADS)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures was analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.

  16. Fluorometric titration approach for calibration of quantity of binding site of purified monoclonal antibody recognizing epitope/hapten nonfluorescent at 340 nm.

    PubMed

    Yang, Xiaolan; Hu, Xiaolei; Xu, Bangtian; Wang, Xin; Qin, Jialin; He, Chenxiong; Xie, Yanling; Li, Yuanli; Liu, Lin; Liao, Fei

    2014-06-17

    A fluorometric titration approach was proposed for the calibration of the quantity of a monoclonal antibody (mcAb) via the quenching of fluorescence of tryptophan residues. It applies to purified mcAbs recognizing tryptophan-deficient epitopes, haptens nonfluorescent at 340 nm under excitation at 280 nm, or fluorescent haptens bearing excitation valleys near 280 nm and excitation peaks near 340 nm that serve as Förster-resonance-energy-transfer (FRET) acceptors of tryptophan. Titration probes were the epitopes/haptens themselves or conjugates of nonfluorescent haptens or tryptophan-deficient epitopes with FRET acceptors of tryptophan. Under excitation at 280 nm, titration curves were recorded as fluorescence specific to the FRET acceptors or to the mcAbs at 340 nm. To quantify the binding site of a mcAb, a universal model considering both static and dynamic quenching by either type of probe was proposed for fitting to the titration curve. Fitting was straightforward for fluorescence specific to the FRET acceptors but encountered nonconvergence for fluorescence of mcAbs at 340 nm. As a solution, (a) the maximum of the absolute values of the first-order derivatives of a titration curve, as fluorescence at 340 nm, was estimated from the best-fit model for a probe level of zero, and (b) the molar quantity of the binding site of the mcAb was estimated via consecutive fitting to the same titration curve, utilizing this maximum as an approximation of the slope for the linear response of fluorescence at 340 nm to quantities of the mcAb. This fluorometric titration approach was proved effective with one mcAb for six-histidine and another for penicillin G.

  17. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    PubMed

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors and then denoising them, we define a suitable TV-type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  18. Comparing dark matter models, modified Newtonian dynamics and modified gravity in accounting for galaxy rotation curves

    NASA Astrophysics Data System (ADS)

    Li, Xin; Tang, Li; Lin, Hai-Nan

    2017-05-01

    We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume NFW profile and core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom’s MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat’s MSTG theory. We fit these models to the observed rotation curves of 9 high-surface brightness and 9 low-surface brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. Core-modified model fits about half the LSB galaxies well, but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies but no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
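
    Model comparison via the Bayesian Information Criterion reduces, for Gaussian errors, to n ln(RSS/n) + k ln(n); a toy sketch comparing two hypothetical fits (the residuals and parameter counts are invented, not taken from the paper):

```python
import numpy as np

def bic(resid, k):
    """Gaussian BIC from the residuals of a rotation-curve fit with k parameters."""
    n = len(resid)
    return n * np.log(np.sum(resid ** 2) / n) + k * np.log(n)

# Hypothetical residuals from two competing fits to the same rotation curve
rng = np.random.default_rng(7)
resid_nfw = rng.normal(0, 6.0, 40)    # NFW halo fit, 3 free parameters
resid_mond = rng.normal(0, 5.0, 40)   # MOND fit, 1 free parameter

print("Delta BIC (NFW - MOND):", bic(resid_nfw, 3) - bic(resid_mond, 1))
```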

  19. Mild angle early onset idiopathic scoliosis children avoid progression under FITS method (Functional Individual Therapy of Scoliosis).

    PubMed

    Białek, Marianna

    2015-05-01

    Physiotherapy for stabilization of the idiopathic scoliosis angle in growing children remains controversial. Notably, little data on the effectiveness of physiotherapy in children with Early Onset Idiopathic Scoliosis (EOIS) has been published. The aim of this study was to check the results of FITS physiotherapy in a group of children with EOIS. The charts of the patients, archived in a prospectively collected database, were retrospectively reviewed. The inclusion criteria were: diagnosis of EOIS based on spine radiography, age below 10 years, both girls and boys, Cobb angle between 11° and 30°, Risser zero, FITS therapy, no other treatment (bracing), and a follow-up at least 2 years from the initiation of the treatment. The criteria were as follows: for curve progression, a Cobb angle increase of 6° or more; for curve stabilization, a Cobb angle within 5° of the initial radiograph; for curve correction, a Cobb angle decrease of 6° or more at the final follow-up radiograph. There were 41 children with EOIS, 36 girls and 5 boys, mean age 7.7±1.3 years (range 4 to 9 years), who started FITS therapy. The curve pattern was single thoracic (5 children), single thoracolumbar (22 children) or double thoracic/thoracolumbar (14 children), totalling 55 structural curvatures. The minimum follow-up was 2 years after initiation of the FITS treatment, the maximum was 16 years (mean 4.8 years). At follow-up the mean age was 12.5±3.4 years. Out of 41 children, 10 had passed the pubertal growth spurt at the final follow-up and 31 were still immature and continued FITS therapy. Out of 41 children, 27 improved, 13 were stable, and one progressed. Out of 55 structural curves, 32 improved, 22 were stable and one progressed. For the 55 structural curves, the Cobb angle significantly decreased from 18.0°±5.4° at first assessment to 12.5°±6.3° at last evaluation, p<0.0001, paired t-test. The angle of trunk rotation decreased significantly from 4.7°±2.9° to 3.2°±2.5° at last evaluation, p<0.0001, paired t-test. FITS

  20. Learning Curves: Making Quality Online Health Information Available at a Fitness Center.

    PubMed

    Dobbins, Montie T; Tarver, Talicia; Adams, Mararia; Jones, Dixie A

    2012-01-01

    Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center - Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges.

  1. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
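
    To illustrate the kind of comparison reported here, a small sketch (with synthetic, hypothetical counts, not the CES-D data) that fits linear, quadratic and exponential curves and compares their coefficients of determination:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical boundary-curve frequencies vs. total score (illustrative only):
    # roughly exponential decay with mild multiplicative noise
    total_score = np.arange(1, 31)
    freq = 5000 * np.exp(-0.18 * total_score) * rng.lognormal(0.0, 0.05, 30)

    def r_squared(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

    lin = np.polyval(np.polyfit(total_score, freq, 1), total_score)
    quad = np.polyval(np.polyfit(total_score, freq, 2), total_score)
    # Exponential fit = straight line in log(freq), as in the log-normal-scale analysis
    expo = np.exp(np.polyval(np.polyfit(total_score, np.log(freq), 1), total_score))

    for name, yhat in (("linear", lin), ("quadratic", quad), ("exponential", expo)):
        print(f"{name:12s} R^2 = {r_squared(freq, yhat):.4f}")
    ```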

  2. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical

  3. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    DOE PAGES

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    2016-09-17

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate.
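
    The break-point idea can be sketched as follows. This is not the authors' code: the data, break location and regime slopes are made up to mimic the reported 21%/51% rates, using the standard definition that a log-log slope b implies a learning rate of 1 − 2^b:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def learning_rate(slope):
        """Fractional price drop per doubling of cumulative production."""
        return 1.0 - 2.0 ** slope

    def segmented_fit(log_x, log_p, min_pts=4):
        """Exhaustive search for the break point minimizing the total SSE
        of two separate log-log straight-line fits."""
        best = None
        for i in range(min_pts, len(log_x) - min_pts):
            sse, slopes = 0.0, []
            for seg in (slice(0, i), slice(i, None)):
                b, a = np.polyfit(log_x[seg], log_p[seg], 1)
                sse += np.sum((log_p[seg] - (a + b * log_x[seg])) ** 2)
                slopes.append(b)
            if best is None or sse < best[0]:
                best = (sse, i, slopes)
        return best

    # Hypothetical CFL-like data: two learning regimes joined at cum = 30
    cum = np.logspace(0, 3, 20)
    price = np.where(cum < 30,
                     10.0 * cum ** -0.34,
                     10.0 * 30 ** -0.34 * (cum / 30) ** -1.0)
    price = price * rng.lognormal(0.0, 0.02, cum.size)

    sse, brk, slopes = segmented_fit(np.log(cum), np.log(price))
    print("break after point", brk,
          "| learning rates:", [f"{learning_rate(b):.0%}" for b in slopes])
    ```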

  4. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate.

  5. The ontogeny of tolerance curves: habitat quality vs. acclimation in a stressful environment.

    PubMed

    Nougué, Odrade; Svendsen, Nils; Jabbour-Zahab, Roula; Lenormand, Thomas; Chevin, Luis-Miguel

    2016-11-01

    Stressful environments affect life-history components of fitness through (i) instantaneous detrimental effects, (ii) historical (carry-over) effects and (iii) history-by-environment interactions, including acclimation effects. The relative contributions of these different responses to environmental stress are likely to change along life, but such an ontogenetic perspective is often overlooked in studies of tolerance curves, precluding a better understanding of the causes of costs of acclimation, and more generally of fitness in temporally fine-grained environments. We performed an experiment in the brine shrimp Artemia to disentangle these different contributions to environmental tolerance, and to investigate how they unfold along life. We placed individuals from three clones of A. parthenogenetica over a range of salinities for a week, before transferring them to a (possibly) different salinity for the rest of their lives. We monitored individual survival at repeated intervals throughout life, instead of measuring survival or performance at a given point in time, as commonly done in acclimation experiments. We then designed a modified survival analysis model to estimate phase-specific hazard rates, accounting for the fact that individuals may share the same treatment for only part of their lives. Our approach allowed us to distinguish effects of salinity on (i) instantaneous mortality in each phase (habitat quality effects), (ii) mortality later in life (history effects) and (iii) their interaction. We showed clear effects of early salinity on late survival and interactions between effects of past and current environments on survival. Importantly, analysis of the ontogenetic dynamics of the tolerance curve reveals that acclimation affects different parts of the curve at different ages. Adopting a dynamical view of the ontogeny of tolerance curve should prove useful for understanding niche limits in temporally changing environments, where the full sequence of
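
    A crude sketch of the phase-specific hazard idea (not the authors' modified survival model, which handles the design far more carefully; the cohort below is hypothetical):

    ```python
    import numpy as np

    def phase_hazards(event_day, censored, transfer_day=7):
        """Crude piecewise-constant hazards: deaths per animal-day in each phase.

        Phase 1 covers days [0, transfer_day) in the early salinity;
        phase 2 covers the remaining lifespan in the late salinity.
        """
        t = np.asarray(event_day, float)
        cens = np.asarray(censored, bool)
        exposure1 = np.sum(np.minimum(t, transfer_day))
        deaths1 = np.sum(~cens & (t < transfer_day))
        exposure2 = np.sum(np.maximum(t - transfer_day, 0.0))
        deaths2 = np.sum(~cens & (t >= transfer_day))
        return deaths1 / exposure1, deaths2 / exposure2

    # Hypothetical cohort: lifespans in days, last two animals right-censored
    days = [3, 5, 12, 20, 25, 31, 40, 40]
    cens = [False] * 6 + [True, True]
    h1, h2 = phase_hazards(days, cens)
    print(f"phase-1 hazard {h1:.4f}/day, phase-2 hazard {h2:.4f}/day")
    ```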

  6. Estimating thermal performance curves from repeated field observations

    USGS Publications Warehouse

    Childress, Evan; Letcher, Benjamin H.

    2017-01-01

    Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature, reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
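
    The paper fits a hierarchical Bayesian model with MCMC; as a much simpler illustration of fitting a thermal performance curve, here is a least-squares sketch with a symmetric Gaussian curve and hypothetical field data (real TPCs are typically left-skewed, so this form is only a stand-in):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tpc(temp, p_max, t_opt, width):
        """Symmetric Gaussian thermal performance curve (a common simple form)."""
        return p_max * np.exp(-((temp - t_opt) / width) ** 2)

    # Hypothetical field growth observations (temperature degC, growth mm/day)
    temp = np.array([4, 8, 10, 12, 14, 16, 18, 20, 22], float)
    growth = np.array([0.05, 0.22, 0.35, 0.46, 0.48, 0.40, 0.25, 0.10, 0.03])

    popt, pcov = curve_fit(tpc, temp, growth, p0=[0.5, 14.0, 5.0])
    perr = np.sqrt(np.diag(pcov))
    print("Pmax, Topt, width:", np.round(popt, 3), "+/-", np.round(perr, 3))
    ```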

  7. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  8. Thymic Fatness and Approaches to Enhance Thymopoietic Fitness in Aging

    PubMed Central

    Dixit, Vishwa Deep

    2010-01-01

    Summary With advancing age, the thymus undergoes striking fibrotic and fatty changes that culminate in its transformation into adipose tissue. As the thymus involutes, the reduction in thymocytes and thymic epithelial cells precedes the emergence of mature lipid-laden adipocytes. Dogma dictates that adipocytes are ‘passive’ cells that occupy non-epithelial thymic space or ‘infiltrate’ the non-cellular thymic niches. The provenance and purpose of ectopic thymic adipocytes during aging in an organ that is required for establishment and maintenance of the T cell repertoire remains an unsolved puzzle. Nonetheless, tantalizing clues about an elaborate reciprocal relationship between thymic fatness and thymopoietic fitness are emerging. Blocking or bypassing the route towards thymic adiposity may complement the approaches to rejuvenate thymopoiesis and immunity in the elderly. PMID:20650623

  9. Thymic fatness and approaches to enhance thymopoietic fitness in aging.

    PubMed

    Dixit, Vishwa Deep

    2010-08-01

    With advancing age, the thymus undergoes striking fibrotic and fatty changes that culminate in its transformation into adipose tissue. As the thymus involutes, the reduction in thymocytes and thymic epithelial cells precedes the emergence of mature lipid-laden adipocytes. Dogma dictates that adipocytes are 'passive' cells that occupy non-epithelial thymic space or 'infiltrate' the non-cellular thymic niches. The provenance and purpose of ectopic thymic adipocytes during aging in an organ that is required for establishment and maintenance of the T cell repertoire remains an unsolved puzzle. Nonetheless, tantalizing clues about an elaborate reciprocal relationship between thymic fatness and thymopoietic fitness are emerging. Blocking or bypassing the route toward thymic adiposity may complement the approaches to rejuvenate thymopoiesis and immunity in the elderly. Copyright 2010 Elsevier Ltd. All rights reserved.

  10. Comparison between two scalar field models using rotation curves of spiral galaxies

    NASA Astrophysics Data System (ADS)

    Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh

    2018-04-01

    Scalar fields have been used as candidates for dark matter in the universe, from axions with masses ∼10^-5 eV to ultra-light scalar fields with much smaller masses. Axions behave as cold dark matter, while ultra-light scalar field galaxy haloes are Bose-Einstein condensate drops. The ultra-light scalar field is also called the scalar field dark matter model. In this work we study rotation curves of low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation and a scalar field solution of the Klein-Gordon equation. We also used the zero disk approximation galaxy model, where photometric data are not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (reduced χ² values much greater than 1, on average) were for the Thomas-Fermi models, i.e., the scalar field dark matter model fits the rotation curves of the analysed galaxies better than the Thomas-Fermi approximation model. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest, the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We found that the value of the central mass within 300 pc is in agreement with previously reported results, namely that this mass is ≈10^7 M⊙, independent of the dark matter model, and that, on the contrary, the value of the characteristic central surface density does depend on the dark matter model.

  11. A Healthy Approach to Fitness Center Security.

    ERIC Educational Resources Information Center

    Sturgeon, Julie

    2000-01-01

    Examines techniques for keeping college fitness centers secure while maintaining an inviting atmosphere. Building access control, preventing locker room theft, and suppressing causes for physical violence are discussed. (GR)

  12. Fit for purpose? Introducing a rational priority setting approach into a community care setting.

    PubMed

    Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale

    2016-06-20

    Purpose - Program budgeting and marginal analysis (PBMA) is a priority setting approach that assists decision makers with allocating resources. Previous PBMA work establishes its efficacy and indicates that contextual factors complicate priority setting, which can hamper PBMA effectiveness. The purpose of this paper is to gain qualitative insight into PBMA effectiveness. Design/methodology/approach - A Canadian case study of PBMA implementation. Data consist of decision-maker interviews pre (n=20), post year-1 (n=12) and post year-2 (n=9) of PBMA to examine perceptions of baseline priority setting practice vis-à-vis desired practice, and perceptions of PBMA usability and acceptability. Findings - Fit emerged as a key theme in determining PBMA effectiveness. Fit herein refers to being of suitable quality and form to meet the intended purposes and needs of the end-users, and includes desirability, acceptability, and usability dimensions. Results confirm decision-maker desire for rational approaches like PBMA. However, most participants indicated that the timing of the exercise and the form in which PBMA was applied were not well-suited for this case study. Participant acceptance of and buy-in to PBMA changed during the study: a leadership change, limited organizational commitment, and concerns with organizational capacity were key barriers to PBMA adoption and thereby effectiveness. Practical implications - These findings suggest that a potential way-forward includes adding a contextual readiness/capacity assessment stage to PBMA, recognizing organizational complexity, and considering incremental adoption of PBMA's approach. Originality/value - These insights help us to better understand and work with priority setting conditions to advance evidence-informed decision making.

  13. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy for handling the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
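
    A sketch of the chi-square value adjustment idea, assuming the common identity χ² = (n − 1)·F_ML so the statistic can be rescaled to a hypothetical sample size (the exact adjustment function studied in the article may differ):

    ```python
    def adjusted_chi_square(chi2, n, n_target):
        """Rescale a ML chi-square from sample size n to a hypothetical size n_target.

        Assumes chi2 = (n - 1) * F_ML, so the minimized fit function F_ML is
        held fixed and only the sample-size multiplier changes.
        """
        return chi2 * (n_target - 1) / (n - 1)

    chi2_full = 840.0  # hypothetical test-of-fit chi-square at n = 21,000
    for n_target in (10_000, 5_000, 1_000, 500):
        print(n_target, round(adjusted_chi_square(chi2_full, 21_000, n_target), 1))
    ```

    The article's finding is that this kind of rescaling tracks an actual random subsample well only down to moderate sizes; below that, the adjusted values understate misfit.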

  14. Learning Curves: Making Quality Online Health Information Available at a Fitness Center

    PubMed Central

    Dobbins, Montie T.; Tarver, Talicia; Adams, Mararia; Jones, Dixie A.

    2012-01-01

    Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center – Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges. PMID:22545020

  15. Symmetry Properties of Potentiometric Titration Curves.

    ERIC Educational Resources Information Center

    Macca, Carlo; Bombi, G. Giorgio

    1983-01-01

    Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)

  16. Stenting for curved lesions using a novel curved balloon: Preliminary experimental study.

    PubMed

    Tomita, Hideshi; Higaki, Takashi; Kobayashi, Toshiki; Fujii, Takanari; Fujimoto, Kazuto

    2015-08-01

    Stenting may be a compelling approach to dilating curved lesions in congenital heart diseases. However, balloon-expandable stents, which are commonly used for congenital heart diseases, are usually deployed in a straight orientation. In this study, we evaluated the effect of stenting with a novel curved balloon considered to provide better conformability to the curved-angled lesion. In vitro experiments: A Palmaz Genesis(®) stent (Johnson & Johnson, Cordis Co, Bridgewater, NJ, USA) mounted on the Goku(®) curve (Tokai Medical Co. Nagoya, Japan) was dilated in vitro to directly observe the behavior of the stent and balloon assembly during expansion. Animal experiment: A short Express(®) Vascular SD (Boston Scientific Co, Marlborough, MA, USA) stent and a long Express(®) Vascular LD stent (Boston Scientific) mounted on the curved balloon were deployed in the curved vessel of a pig to observe the effect of stenting in vivo. In vitro experiments: Although the stent was dilated in a curved fashion, the stent and balloon assembly also rotated conjointly during expansion of its curved portion. In the primary stenting of the short stent, the stent was dilated with rotation of the curved portion. The excised stent conformed to the curved vessel. As the long stent could not be negotiated across the mid-portion while the balloon curved during expansion, the mid-portion of the stent failed to expand fully. Furthermore, the balloon, which became entangled with the stent strut, could not be retrieved even after complete deflation. This novel curved balloon catheter might be used for implantation of the short stent in a curved lesion; however, it should not be used for primary stenting of the long stent. Post-dilation to conform the stent to the angled vessel would be safer than primary stenting irrespective of stent length. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  17. SU-F-I-63: Relaxation Times of Lipid Resonances in NAFLD Animal Model Using Enhanced Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, K-H; Yoo, C-H; Lim, S-I

    Purpose: The objective of this study is to evaluate the relaxation time of the methylene resonance in comparison with other lipid resonances. Methods: The examinations were performed on a 3.0T MRI scanner using a four-channel animal coil. Eight more Sprague-Dawley rats in the same baseline weight range were housed with ad libitum access to water and a high-fat (HF) diet (60% fat, 20% protein, and 20% carbohydrate). In order to avoid large blood vessels, a voxel (0.8×0.8×0.8 cm³) was placed in a homogeneous area of the liver parenchyma during free breathing. Lipid relaxations in NC and HF diet rats were estimated at a fixed repetition time (TR) of 6000 msec and multiple echo times (TEs) of 40–220 msec. All spectra for data measurement were processed using the Advanced Method for Accurate, Robust, and Efficient Spectral (AMARES) fitting algorithm of the Java-based Magnetic Resonance User Interface (jMRUI) package. Results: The mean T2 relaxation time of the methylene resonance in the normal-chow (NC) diet group was 37.1 msec (M0, 2.9±0.5), with a standard deviation of 4.3 msec. The mean T2 relaxation time of the methylene resonance in the HF diet group was 31.4 msec (M0, 3.7±0.3), with a standard deviation of 1.8 msec. The T2 relaxation times of methylene protons were higher in NC diet rats than in HF rats (p<0.05), and the extrapolated M0 values were higher in HF rats than in NC rats (p<0.005). The excellent linear fits with R²>0.9971 and R²>0.9987 indicate that the T2 relaxation decay curves follow a mono-exponential function. Conclusion: In vivo, a sufficient spectral resolution and a sufficiently high signal-to-noise ratio (SNR) can be achieved, so that the data measured over short TE values can be extrapolated back to TE = 0 to produce better estimates of the relative weights of the spectral components. In the short term, treating the effective decay rate as exponential is an adequate approximation.
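
    The mono-exponential T2 estimation described here can be sketched as a generic least-squares fit (hypothetical peak areas, not the AMARES/jMRUI pipeline):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)

    def mono_exp(te, m0, t2):
        """Mono-exponential transverse decay: S(TE) = M0 * exp(-TE / T2)."""
        return m0 * np.exp(-te / t2)

    # Hypothetical methylene peak areas over the multi-TE acquisition (TE in msec)
    te = np.array([40, 60, 80, 100, 120, 140, 160, 180, 200, 220], float)
    signal = 3.0 * np.exp(-te / 35.0) * (1 + 0.02 * rng.standard_normal(te.size))

    (m0, t2), _ = curve_fit(mono_exp, te, signal, p0=[signal[0], 50.0])
    print(f"M0 (TE = 0 extrapolation) = {m0:.2f}, T2 = {t2:.1f} msec")
    ```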

  18. Evaluation of bilirubin concentration in hemolysed samples, is it really impossible? The altitude-curve cartography approach to interfered assays.

    PubMed

    Brunori, Paola; Masi, Piergiorgio; Faggiani, Luigi; Villani, Luciano; Tronchin, Michele; Galli, Claudio; Laube, Clarissa; Leoni, Antonella; Demi, Maila; La Gioia, Antonio

    2011-04-11

    Neonatal jaundice might lead to severe clinical consequences. Measurement of bilirubin in samples is subject to interference from hemolysis. Above a method-dependent cut-off value of measured hemolysis, the bilirubin value is not accepted and a new sample is required for evaluation, although this is not always possible, especially with newborns and cachectic oncological patients. When the use of different methods, less prone to interference, is not feasible, an alternative method for recovering the analytical significance of rejected data might help clinicians to take appropriate decisions. We studied the effects of hemolysis on total bilirubin measurement, comparing the hemolysis-interfered bilirubin measurement with the non-interfered value. Interference curves were extrapolated over a wide range of bilirubin (0-30 mg/dL) and hemolysis (H index 0-1100). Interference "altitude" curves were calculated and plotted. A bimodal acceptance table was calculated. The non-interfered bilirubin of given samples was calculated by linear interpolation between the nearest lower and upper interference curves. Rejection of interference-sensitive data from hemolysed samples should, for every method, be based not on the interferent concentration alone but on a more complex algorithm reflecting the concentration-dependent bimodal interaction between the interfered analyte and the measured interferent. The altitude-curve cartography approach to interfered assays may help laboratories to build up their own method-dependent algorithm and to improve the trueness of their data by choosing a cut-off value different from the one (-10% interference) proposed by manufacturers. When re-sampling or an alternative method is not available, the altitude-curve cartography approach might also represent an alternative way to recover the analytical significance of rejected data. Copyright © 2011 Elsevier B.V. All rights reserved.
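
    A toy sketch of the interpolation step (the interference "altitude" curves below are invented; the published method derives them experimentally for each analytical method):

    ```python
    import numpy as np

    # Hypothetical interference curves: measured (interfered) bilirubin as a
    # function of H index, one curve per true bilirubin level (mg/dL)
    h_grid = np.array([0, 200, 400, 600, 800, 1100], float)
    curves = {
        10.0: np.array([10.0, 9.6, 9.1, 8.5, 7.8, 6.9]),
        15.0: np.array([15.0, 14.3, 13.4, 12.4, 11.2, 9.8]),
        20.0: np.array([20.0, 18.9, 17.6, 16.1, 14.4, 12.5]),
    }

    def recover_bilirubin(measured, h_index):
        """Linear interpolation between the nearest lower and upper curves."""
        levels = sorted(curves)
        # Interfered reading each calibration curve predicts at this H index
        readings = [np.interp(h_index, h_grid, curves[lv]) for lv in levels]
        # Invert: interpolate the true level as a function of the reading
        return np.interp(measured, readings, levels)

    print(recover_bilirubin(measured=13.0, h_index=500))  # ~15.1 mg/dL here
    ```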

  19. An assessment of mode-coupling and falling-friction mechanisms in railway curve squeal through a simplified approach

    NASA Astrophysics Data System (ADS)

    Ding, Bo; Squicciarini, Giacomo; Thompson, David; Corradi, Roberto

    2018-06-01

    Curve squeal is one of the most annoying types of noise caused by the railway system. It usually occurs when a train or tram is running around tight curves. Although this phenomenon has been studied for many years, the generation mechanism is still the subject of controversy and not fully understood. A negative slope in the friction curve under full sliding has been considered to be the main cause of curve squeal for a long time but more recently mode coupling has been demonstrated to be another possible explanation. Mode coupling relies on the inclusion of both the lateral and vertical dynamics at the contact and an exchange of energy occurs between the normal and the axial directions. The purpose of this paper is to assess the role of the mode-coupling and falling-friction mechanisms in curve squeal through the use of a simple approach based on practical parameter values representative of an actual situation. A tramway wheel is adopted to study the effect of the adhesion coefficient, the lateral contact position, the contact angle and the damping ratio. Cases corresponding to both inner and outer wheels in the curve are considered and it is shown that there are situations in which both wheels can squeal due to mode coupling. Additionally, a negative slope is introduced in the friction curve while keeping active the vertical dynamics in order to analyse both mechanisms together. It is shown that, in the presence of mode coupling, the squealing frequency can differ from the natural frequency of either of the coupled wheel modes. Moreover, a phase difference between wheel vibration in the vertical and lateral directions is observed as a characteristic of mode coupling. For both these features a qualitative comparison is shown with field measurements which show the same behaviour.

  20. Learning curves in health professions education.

    PubMed

    Pusic, Martin V; Boutis, Kathy; Hatala, Rose; Cook, David A

    2015-08-01

    Learning curves, which graphically show the relationship between learning effort and achievement, are common in published education research but are not often used in day-to-day educational activities. The purpose of this article is to describe the generation and analysis of learning curves and their applicability to health professions education. The authors argue that the time is right for a closer look at using learning curves, given their desirable properties, to inform both self-directed instruction by individuals and education management by instructors. A typical learning curve is made up of a measure of learning (y-axis), a measure of effort (x-axis), and a mathematical linking function. At the individual level, learning curves make manifest a single person's progress towards competence, including his/her rate of learning, the inflection point where learning becomes more effortful, and the remaining distance to mastery attainment. At the group level, overlaid learning curves show the full variation of a group of learners' paths through a given learning domain. Specifically, they make overt the difference between time-based and competency-based approaches to instruction. Additionally, instructors can use learning curve information to more accurately target educational resources to those who most require them. The learning curve approach requires a fine-grained collection of data that will not be possible in all educational settings; however, the increased use of an assessment paradigm that explicitly includes effort and its link to individual achievement could result in increased learner engagement and more effective instructional design.
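
    As an illustration of the "measure of learning vs. measure of effort plus linking function" recipe, a sketch fitting one common linking function, the negative exponential, to hypothetical learner data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def neg_exp(effort, y_max, rate, y0):
        """Negative-exponential learning curve rising toward asymptote y_max."""
        return y_max - (y_max - y0) * np.exp(-rate * effort)

    # Hypothetical learner data: cases attempted vs. assessment score (%)
    cases = np.arange(1, 21)
    score = np.array([32, 41, 47, 55, 58, 63, 66, 70, 71, 74,
                      75, 77, 78, 80, 80, 82, 82, 83, 84, 84], float)

    (y_max, rate, y0), _ = curve_fit(neg_exp, cases, score, p0=[90, 0.1, 30])
    print(f"asymptote {y_max:.1f}%, rate {rate:.2f}/case, start {y0:.1f}%")
    print(f"distance to asymptote after 20 cases: "
          f"{y_max - neg_exp(20, y_max, rate, y0):.1f}%")
    ```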

  1. Dynamic rating curve assessment for hydrometric stations and computation of the associated uncertainties: Quality and station management indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan

    2014-09-01

    A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km²) is used as an example throughout the paper. Other stations are used to illustrate certain points.
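
    A minimal sketch of fitting one rating curve to a set of gaugings. The dynamic method refits such a curve for each gauging and adds a variogram-based uncertainty model; the power-law form, data and starting values below are a common convention, not taken from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(stage, a, h0, b):
        """Standard power-law rating curve: Q = a * (h - h0)^b."""
        return a * np.clip(stage - h0, 1e-9, None) ** b

    # Hypothetical gaugings: stage (m) and discharge (m3/s)
    stage = np.array([0.42, 0.55, 0.70, 0.85, 1.02, 1.25, 1.50, 1.80])
    q = np.array([1.1, 2.3, 4.2, 6.8, 10.5, 17.0, 25.8, 38.9])

    (a, h0, b), pcov = curve_fit(rating_curve, stage, q, p0=[10.0, 0.2, 1.7],
                                 bounds=([0.1, 0.0, 0.5], [100.0, 0.4, 3.0]))
    print(f"Q = {a:.2f} (h - {h0:.2f})^{b:.2f}")
    print("parameter sigmas:", np.sqrt(np.diag(pcov)))
    ```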

  2. Static and dynamic pressure prediction for prosthetic socket fitting assessment utilising an inverse problem approach.

    PubMed

    Sewell, Philip; Noroozi, Siamak; Vinney, John; Amali, Ramin; Andrews, Stephen

    2012-01-01

    It has been recognised in a review of the developments of lower-limb prosthetic socket fitting processes that the future demands new tools to aid in socket fitting. This paper presents the results of research to design and clinically test an artificial intelligence approach, specifically inverse problem analysis, for the determination of the pressures at the limb/prosthetic socket interface during stance and ambulation. Inverse problem analysis is based on accurately calculating the external loads or boundary conditions that can generate a known amount of strain, stresses or displacements at pre-determined locations on a structure. In this study a backpropagation artificial neural network (ANN) is designed and validated to predict the interfacial pressures at the residual limb/socket interface from strain data collected from the socket surface. The subject of this investigation was a 45-year-old male unilateral trans-tibial (below-knee) traumatic amputee who had been using a prosthesis for 22 years. When comparing the ANN-predicted interfacial pressure on 16 patches within the socket with actual pressures applied to the socket, a difference of 8.7% was shown, validating the methodology. Investigation of varying axial load through the subject's prosthesis, alignment of the subject's prosthesis, and pressure at the limb/socket interface during walking demonstrates that the validated ANN is able to give an accurate full-field study of the static and dynamic interfacial pressure distribution. To conclude, a methodology has been developed that enables a prosthetist to quantitatively analyse the distribution of pressures within the prosthetic socket in a clinical environment. This will aid in facilitating the "right first time" approach to socket fitting which will benefit both the patient, in terms of comfort, and the prosthetist, by reducing the time and associated costs of providing a high level of socket fit. Copyright © 2011 Elsevier B.V. All rights reserved.
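
    A sketch of the inverse-problem regression (not the authors' network or data; the forward model generating the synthetic strains is invented purely to give the regressor something to learn):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Hypothetical data: 16 interface pressure patches (kPa) produce 24 surface
    # strain readings through an unknown, mildly nonlinear forward model
    n_samples, n_patch, n_strain = 400, 16, 24
    A = rng.normal(size=(n_patch, n_strain))
    pressures = rng.uniform(0, 100, size=(n_samples, n_patch))
    strains = np.tanh(pressures @ A / 400.0) + rng.normal(0, 0.01, (n_samples, n_strain))

    # Backpropagation ANN solving the inverse problem: strains -> pressures
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
    net.fit(strains[:350], pressures[:350])
    print("held-out R^2:", round(net.score(strains[350:], pressures[350:]), 3))
    ```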

  3. Principal Curves on Riemannian Manifolds.

    PubMed

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  4. Recalcitrant vulnerability curves: methods of analysis and the concept of fibre bridges for enhanced cavitation resistance.

    PubMed

    Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T

    2014-01-01

    Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, deviate from a Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward the concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted for by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves. © 2013 John Wiley & Sons Ltd.
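
    The dual-Weibull description can be sketched as a weighted sum of two Weibull vulnerability curves; all parameter values below are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)

    def weibull_vc(tension, b, c):
        """Single-Weibull vulnerability curve: percent loss of conductivity."""
        return 100.0 * (1.0 - np.exp(-(tension / b) ** c))

    def dual_weibull_vc(tension, w, b1, c1, b2, c2):
        """Weighted sum of two Weibulls, one per postulated vessel population."""
        return w * weibull_vc(tension, b1, c1) + (1 - w) * weibull_vc(tension, b2, c2)

    # Hypothetical PLC measurements vs. xylem tension (MPa)
    tension = np.linspace(0.5, 6.0, 12)
    plc = dual_weibull_vc(tension, 0.55, 1.8, 3.0, 4.2, 6.0)
    plc = plc + rng.normal(0, 2.0, plc.size)

    popt, _ = curve_fit(dual_weibull_vc, tension, plc,
                        p0=[0.5, 2.0, 2.5, 4.0, 5.0],
                        bounds=(0, [1, 10, 10, 10, 10]))
    print("weight and (b, c) pairs:", np.round(popt, 2))
    ```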

  5. The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates

    ERIC Educational Resources Information Center

    Sivo, Stephen; Fan, Xitao; Witta, Lea

    2005-01-01

    The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…

  6. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  7. Bayesian Analysis of Longitudinal Data Using Growth Curve Models

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.

    2007-01-01

    Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…

  8. MICA: Multiple interval-based curve alignment

    NASA Astrophysics Data System (ADS)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA has already been successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides command-line and graphical user interfaces, as well as an R interface that enables the direct embedding of multiple curve alignment computation into larger analysis pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA

  9. [Age index and an interpretation of survivorship curves (author's transl)].

    PubMed

    Lohmann, W

    1977-01-01

    Clinical investigations showed that the age dependences of physiological functions do not show, as generally assumed, a linear increase with age, but an exponential one. Considering this result, one can easily interpret the survivorship curve of a population (Gompertz plot). The only requirement is that the probability of death (death rate) is proportional to a function of ageing given by μ(t) = μ0 exp(αt). When survivorship curves resulting from annual death statistics are fitted with suitable parameters, the resulting α values are in agreement with clinical data.
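
    In the notation of the abstract, the step from the exponential ageing function to the Gompertz survivorship curve is the standard one, sketched here for completeness:

    ```latex
    \mu(t) = \mu_0 e^{\alpha t}
    \;\Longrightarrow\;
    S(t) = \exp\!\Big(-\int_0^t \mu(\tau)\,\mathrm{d}\tau\Big)
         = \exp\!\Big[-\frac{\mu_0}{\alpha}\big(e^{\alpha t} - 1\big)\Big].
    ```

    Since ln μ(t) = ln μ0 + αt, the log death rate is exactly linear in age with slope α, which is how the α values are read off fitted survivorship curves.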

  10. Development and Assessment of a New Empirical Model for Predicting Full Creep Curves

    PubMed Central

    Gray, Veronica; Whittaker, Mark

    2015-01-01

    This paper details the development and assessment of a new empirical creep model that belongs to the limited ranks of models reproducing full creep curves. The important features of the model are that it is fully standardised and universally applicable. By standardising, the user no longer chooses functions but rather fits one set of constants only. Testing it on 7 contrasting materials and reproducing 181 creep curves, we demonstrate its universality. The new model and Theta Projection curves are compared to one another using an assessment tool developed within this paper. PMID:28793458
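
    The Theta Projection form mentioned here (against which the new model is benchmarked) expresses a full creep curve as a decaying primary term plus an accelerating tertiary term; a sketch of fitting it, with hypothetical constants and synthetic data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)

    def theta_projection(t, th1, th2, th3, th4):
        """Theta Projection creep curve: primary (decaying) + tertiary (growing)."""
        return th1 * (1 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1)

    # Hypothetical creep strain data over 1000 h
    t = np.linspace(0, 1000, 40)
    strain = theta_projection(t, 0.010, 0.02, 0.002, 0.003)
    strain = strain + rng.normal(0, 1e-4, t.size)

    popt, _ = curve_fit(theta_projection, t, strain,
                        p0=[0.01, 0.01, 0.001, 0.002],
                        bounds=(0, [0.1, 0.1, 0.1, 0.01]))
    print("theta_1..theta_4:", popt)
    ```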

  11. MODELING OF METAL BINDING ON HUMIC SUBSTANCES USING THE NIST DATABASE: AN A PRIORI FUNCTIONAL GROUP APPROACH

    EPA Science Inventory

    Various modeling approaches have been developed for metal binding on humic substances. However, most of these models are still curve-fitting exercises: the resulting set of parameters such as affinity constants (or the distribution of them) is found to depend on pH, ionic stren...

  12. Can Tooth Preparation Design Affect the Fit of CAD/CAM Restorations?

    PubMed

    Roperto, Renato Cassio; Oliveira, Marina Piolli; Porto, Thiago Soares; Ferreira, Lais Alaberti; Melo, Lucas Simino; Akkus, Anna

    2017-03-01

    The purpose of this study was to evaluate whether the marginal fit of computer-aided design and computer-aided manufacturing (CAD/CAM) restorations can be affected by different tooth preparation designs. Twenty-six typodont (plastic) teeth were divided into two groups (n = 13) according to the occlusal curvature of the tooth preparation: group 1 (control; flat occlusal design) and group 2 (curved occlusal design). Scanning of the preparations was performed, and crowns were milled using ceramic blocks. Blocks were cemented using epoxy glue on the pulpal floor only, and finger pressure was applied for 1 minute. On completion of the cementation step, marginal misfit between the restoration and abutment was measured by microphotography and the silicone replica technique using light-body silicone material on the mesial, distal, buccal, and lingual surfaces. Two-way ANOVA did not reveal a statistical difference between flat (83.61 ± 50.72) and curved (79.04 ± 30.97) preparation designs. Buccal, mesial, lingual, and distal sites on the curved design preparation showed smaller gaps when compared with the flat design. No difference was found on flat preparations among mesial, buccal, and distal sites (P < .05). The lingual aspect had no difference from the distal side but showed a statistically significant difference from the mesial and buccal aspects (P < .05). The difference in occlusal design did not significantly impact the marginal fit. Marginal fit was significantly affected by the location of the margin; lingual and distal locations exhibited greater margin gap values compared with buccal and mesial sites regardless of the preparation design.

  13. The structure of binding curves and practical identifiability of equilibrium ligand-binding parameters

    PubMed Central

    Middendorf, Thomas R.

    2017-01-01

    A critical but often overlooked question in the study of ligands binding to proteins is whether the parameters obtained from analyzing binding data are practically identifiable (PI), i.e., whether the estimates obtained from fitting models to noisy data are accurate and unique. Here we report a general approach to assess and understand binding parameter identifiability, which provides a toolkit to assist experimentalists in the design of binding studies and in the analysis of binding data. The partial fraction (PF) expansion technique is used to decompose binding curves for proteins with n ligand-binding sites exactly and uniquely into n components, each of which has the form of a one-site binding curve. The association constants of the PF component curves, being the roots of an n-th order polynomial, may be real or complex. We demonstrate a fundamental connection between binding parameter identifiability and the nature of these one-site association constants: all binding parameters are identifiable if the constants are all real and distinct; otherwise, at least some of the parameters are not identifiable. The theory is used to construct identifiability maps from which the practical identifiability of binding parameters for any two-, three-, or four-site binding curve can be assessed. Instructions for extending the method to generate identifiability maps for proteins with more than four binding sites are also given. Further analysis of the identifiability maps leads to the simple rule that the maximum number of structurally identifiable binding parameters (shown in the previous paper to be equal to n) will also be PI only if the binding curve line shape contains n resolved components. PMID:27993951
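
    The real-and-distinct-roots criterion can be checked directly; a sketch for two-site curves (the Adair-constant examples are hypothetical, and the exact polynomial convention used in the paper may differ):

    ```python
    import numpy as np

    def binding_poly_roots(adair_K):
        """Roots of the binding polynomial P(x) = 1 + K1 x + ... + Kn x^n."""
        coeffs = list(adair_K[::-1]) + [1.0]  # np.roots wants highest power first
        return np.roots(coeffs)

    def parameters_identifiable(adair_K, tol=1e-9):
        """Heuristic check: identifiable iff all roots are real and distinct."""
        r = binding_poly_roots(adair_K)
        all_real = np.all(np.abs(r.imag) <= tol)
        distinct = len(np.unique(np.round(r.real, 9))) == len(r)
        return bool(all_real and distinct), r

    # Hypothetical two-site examples (Adair constants K1, K2)
    for K in ([3.0, 1.0],   # real, distinct roots -> identifiable
              [2.0, 4.0]):  # complex roots -> not all parameters identifiable
        ok, roots = parameters_identifiable(K)
        print(K, "identifiable:", ok, "roots:", np.round(roots, 3))
    ```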

  14. The Workload Curve: Subjective Mental Workload.

    PubMed

    Estes, Steven

    2015-11-01

    In this paper I begin looking for evidence of a subjective workload curve. Results from subjective mental workload assessments are often interpreted linearly. However, I hypothesized that ratings of subjective mental workload increase nonlinearly with unitary increases in working memory load. Two studies were conducted. In the first, the participant provided ratings of the mental difficulty of a series of digit span recall tasks. In the second study, participants provided ratings of mental difficulty associated with recall of visual patterns. The results of the second study were then examined using a mathematical model of working memory. An S curve, predicted a priori, was found in the results of both the digit span and visual pattern studies. A mathematical model showed a tight fit between workload ratings and levels of working memory activation. This effort provides good initial evidence for the existence of a workload curve. The results support further study in applied settings and other facets of workload (e.g., temporal workload). Measures of subjective workload are used across a wide variety of domains and applications. These results bear on their interpretation, particularly as they relate to workload thresholds. © 2015, Human Factors and Ergonomics Society.

  15. REFLECTED LIGHT CURVES, SPHERICAL AND BOND ALBEDOS OF JUPITER- AND SATURN-LIKE EXOPLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyudina, Ulyana; Kopparla, Pushkar; Ingersoll, Andrew P.

    Reflected light curves observed for exoplanets indicate that a few of them host bright clouds. We estimate how the light curve and total stellar heating of a planet depend on forward and backward scattering in the clouds, based on Pioneer and Cassini spacecraft images of Jupiter and Saturn. We fit analytical functions to the local reflected brightnesses of Jupiter and Saturn depending on the planet’s phase. These observations cover broadbands at 0.59–0.72 and 0.39–0.5 μm, and narrowbands at 0.938 (atmospheric window), 0.889 (CH4 absorption band), and 0.24–0.28 μm. We simulate the images of the planets with a ray-tracing model, and disk-integrate them to produce the full-orbit light curves. For Jupiter, we also fit the modeled light curves to the observed full-disk brightness. We derive spherical albedos for Jupiter and Saturn, and for planets with Lambertian and Rayleigh-scattering atmospheres. Jupiter-like atmospheres can produce light curves that are a factor of two fainter at half-phase than the Lambertian planet, given the same geometric albedo at transit. The spherical albedo is typically lower than for a Lambertian planet by up to a factor of ∼1.5. The Lambertian assumption will underestimate the absorption of the stellar light and the equilibrium temperature of the planetary atmosphere. We also compare our light curves with the light curves of solid bodies: the moons Enceladus and Callisto. Their strong backscattering peak within a few degrees of opposition (secondary eclipse) can lead to an even stronger underestimate of the stellar heating.

  16. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
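
    A sketch of fitting the three-parameter single-hit response. The parameterization below (with "slope" as the initial dOD/dDose) is one plausible choice, not necessarily the authors'; the dose-OD pairs are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_hit(dose, background, saturation, slope):
        """Single-target single-hit response; `slope` is the initial dOD/dDose."""
        return background + saturation * (1.0 - np.exp(-slope * dose / saturation))

    # Hypothetical net optical density vs. dose (cGy) for one calibration film
    dose = np.array([0, 16, 32, 48, 64, 96, 128], float)
    od = np.array([0.18, 0.42, 0.63, 0.81, 0.97, 1.22, 1.41])

    popt, _ = curve_fit(single_hit, dose, od, p0=[0.2, 2.0, 0.015])
    print("background, saturation, slope:", np.round(popt, 4))
    ```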

  17. Dust Attenuation Curves in the Local Universe: Demographics and New Laws for Star-forming Galaxies and High-redshift Analogs

    NASA Astrophysics Data System (ADS)

    Salim, Samir; Boquien, Médéric; Lee, Janice C.

    2018-05-01

    We study the dust attenuation curves of 230,000 individual galaxies in the local universe, ranging from quiescent to intensely star-forming systems, using GALEX, SDSS, and WISE photometry calibrated on the Herschel ATLAS. We use a new method of constraining SED fits with infrared luminosity (SED+LIR fitting), and parameterized attenuation curves determined with the CIGALE SED-fitting code. Attenuation curve slopes and UV bump strengths are reasonably well constrained independently from one another. We find that A_λ/A_V attenuation curves exhibit a very wide range of slopes that are on average as steep as the curve slope of the Small Magellanic Cloud (SMC). The slope is a strong function of optical opacity. Opaque galaxies have shallower curves—in agreement with recent radiative transfer models. The dependence of slopes on the opacity produces an apparent dependence on stellar mass: more massive galaxies have shallower slopes. Attenuation curves exhibit a wide range of UV bump amplitudes, from none to Milky Way (MW)-like, with an average strength one-third that of the MW bump. Notably, local analogs of high-redshift galaxies have an average curve that is somewhat steeper than the SMC curve, with a modest UV bump that can be, to first order, ignored, as its effect on the near-UV magnitude is 0.1 mag. Neither the slopes nor the strengths of the UV bump depend on gas-phase metallicity. Functional forms for attenuation laws are presented for normal star-forming galaxies, high-z analogs, and quiescent galaxies. We release the catalog of associated star formation rates and stellar masses (GALEX–SDSS–WISE Legacy Catalog 2).

  18. Empirical models for fitting of oral concentration time curves with and without an intravenous reference.

    PubMed

    Weiss, Michael

    2017-06-01

    Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
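
    A sketch of the sum-of-two-IGs fit, using the mean-residence-time/CV² parameterization of the inverse Gaussian density (the concentration data below are synthetic and the AUC-scaled form is one plausible way to set up the direct fit to oral data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)

    def ig(t, mrt, cv2):
        """Inverse Gaussian density with mean residence time mrt and CV^2 cv2."""
        t = np.clip(t, 1e-9, None)
        return (np.sqrt(mrt / (2 * np.pi * cv2 * t ** 3))
                * np.exp(-((t - mrt) ** 2) / (2 * cv2 * mrt * t)))

    def two_ig(t, f1, mrt1, cv21, mrt2, cv22, auc):
        """Weighted sum of two IGs, scaled by AUC."""
        return auc * (f1 * ig(t, mrt1, cv21) + (1 - f1) * ig(t, mrt2, cv22))

    # Synthetic oral concentration-time data (h, mg/L)
    t = np.linspace(0.25, 24, 30)
    c = two_ig(t, 0.6, 1.5, 0.4, 7.0, 0.6, 25.0)
    c = c * (1 + 0.03 * rng.standard_normal(t.size))

    popt, _ = curve_fit(two_ig, t, c, p0=[0.5, 2.0, 0.5, 8.0, 0.5, 20.0],
                        bounds=([0, 0.01, 0.01, 0.01, 0.01, 0],
                                [1, 50, 10, 100, 10, 1000]))
    print("f1, MRT1, CV2_1, MRT2, CV2_2, AUC:", np.round(popt, 3))
    ```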

  19. Development of a program to fit data to a new logistic model for microbial growth.

    PubMed

    Fujikawa, Hiroshi; Kano, Yoshihiro

    2009-06-01

    Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various temperature patterns. In this study, we developed a program to fit data to the model with the spreadsheet program Microsoft Excel. Users can instantly get curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program can also estimate growth parameters, including the growth rate constant and the lag period. This program should be a useful tool for analyzing growth data and, further, for predicting microbial growth.
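    Although the Excel implementation is not shown in the record, the underlying growth law can be sketched in Python. The form below, a logistic equation damped by a lag factor while the population N is close to its initial value Nmin, follows the general shape of Fujikawa's extended logistic model, though the published form and parameter values may differ; all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def growth(t, N, r, Nmax, Nmin, m):
        """Logistic growth with an extra lag term: near N = Nmin the factor
        (1 - Nmin/N)^m suppresses growth, producing the lag phase."""
        return r * N * (1.0 - N / Nmax) * (1.0 - Nmin / N)**m

    r, Nmax, Nmin, m = 0.9, 1e9, 1e3, 1.0   # per hour, CFU/g; illustrative
    sol = solve_ivp(growth, (0, 36), [1.05e3], args=(r, Nmax, Nmin, m),
                    dense_output=True, max_step=0.1)
    t = np.linspace(0, 36, 200)
    log_counts = np.log10(sol.sol(t)[0])    # the familiar sigmoid curve
    ```

    Fitting r and the lag parameter m to observed log counts can then be layered on top with any least-squares routine, which is essentially what the spreadsheet program automates.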

  20. Intensity Conserving Spectral Fitting

    NASA Technical Reports Server (NTRS)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
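    The correction can be sketched as a fixed-point iteration: hold the bin centers fixed, treat the knot values of a cubic spline as free, and nudge them until the spline's average over each bin reproduces the measured bin intensity. This is a minimal reading of the intensity-conserving idea, not the published ICSI routine; the bin width and iteration count below are arbitrary choices.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def intensity_conserving_spline(x, I, width, n_iter=25):
        """Iteratively adjust knot values y so that the bin-averaged cubic
        spline matches the measured bin intensities I."""
        y = np.array(I, float)
        for _ in range(n_iter):
            cs = CubicSpline(x, y)
            avg = np.array([cs.integrate(xi - width / 2, xi + width / 2) / width
                            for xi in x])
            y += I - avg              # push each bin mean toward the data
        return CubicSpline(x, y)

    x = np.linspace(-1.0, 1.0, 15)        # bin centers (wavelength units)
    I = np.exp(-x**2 / (2 * 0.25**2))     # measured bin-averaged line profile
    profile = intensity_conserving_spline(x, I, width=x[1] - x[0])
    ```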

  1. Fitting relationship between the beam quality β factor of high-energy laser and the wavefront aberration of laser beam

    NASA Astrophysics Data System (ADS)

    Ji, Zhong-Ye; Zhang, Xiao-Fang

    2018-01-01

    The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the laser beam is important in beam-quality control theory for high-energy laser weapon systems. To obtain this relation, numerical simulation is used. First, Zernike representations of typical distorted atmospheric wavefronts caused by Kolmogorov turbulence are generated. The corresponding beam quality β factors of the distorted wavefronts are then calculated numerically through the fast Fourier transform, establishing the statistical distribution between the β factors and the wavefront aberrations. Finally, curve fitting is used to establish the mathematical relationship between these two parameters; the result shows a quadratic relation between the beam quality β factor and the wavefront aberration. Three fitting curves, in which the wavefront aberrations consist of Zernike polynomials of orders 20, 36, and 60 respectively, are established to express the relationship between the beam quality β factor and atmospheric wavefront aberrations of different spatial frequencies.
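    The final step amounts to ordinary quadratic regression of the computed β values against the RMS wavefront aberration; a sketch with invented numbers:

    ```python
    import numpy as np

    # beta factor vs. RMS wavefront aberration (values are illustrative
    # placeholders, not the paper's simulated results)
    rms  = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40])   # waves
    beta = np.array([1.02, 1.08, 1.18, 1.33, 1.75, 2.35])
    c2, c1, c0 = np.polyfit(rms, beta, deg=2)
    print(f"beta = {c2:.2f}*rms^2 + {c1:.2f}*rms + {c0:.2f}")
    ```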

  2. A new curriculum for fitness education.

    PubMed Central

    Boone, J L

    1983-01-01

    Regular exercise is important in a preventive approach to health care because it exerts a beneficial effect on many risk factors in the development of coronary heart disease. However, many Americans lack the skills required to devise and carry out a safe and effective exercise program appropriate for a lifetime of fitness. This inability is partly due to the lack of fitness education during their school years. School programs in physical education tend to neglect training in the health-related aspects of fitness. Therefore, a new curriculum for fitness education is proposed that would provide seventh, eighth, and ninth grade students with (a) a basic knowledge of their physiological response to exercise, (b) the means to develop their own safe and effective physical fitness program, and (c) the motivation to incorporate regular exercise into their lifestyle. This special 4-week segment of primarily academic study is designed to be inserted into the physical education curriculum. Daily lessons cover health-related fitness, cardiovascular fitness, body fitness, and care of the back. A final written examination covering major areas of information is given to emphasize this academic approach to exercise. Competition in athletic ability is deemphasized, and motivational awards are given based on health-related achievements. The public's present lack of knowledge about physical fitness, coupled with the numerous anatomical and physiological benefits derived from regular, vigorous exercise, mandates an intensified curriculum of fitness education for school children. PMID:6414039

  3. Exploring Person Fit with an Approach Based on Multilevel Logistic Regression

    ERIC Educational Resources Information Center

    Walker, A. Adrienne; Engelhard, George, Jr.

    2015-01-01

    The idea that test scores may not be valid representations of what students know, can do, and should learn next is well known. Person fit provides an important aspect of validity evidence. Person fit analyses at the individual student level are not typically conducted and person fit information is not communicated to educational stakeholders. In…

  4. Gamma-Ray Pulsar Light Curves as Probes of Magnetospheric Structure

    NASA Technical Reports Server (NTRS)

    Harding, A. K.

    2016-01-01

    The large number of gamma-ray pulsars discovered by the Fermi Gamma-Ray Space Telescope since its launch in 2008 dwarfs the handful that were previously known. The variety of observed light curves makes possible a tomography of both the ensemble-averaged field structure and the high-energy emission regions of a pulsar magnetosphere. Fitting the gamma-ray pulsar light curves with model magnetospheres and emission models has revealed that most of the high-energy emission, and the particle acceleration, takes place near or beyond the light cylinder, near the current sheet. As pulsar magnetosphere models become more sophisticated, it is possible to probe magnetic field structure and emission that are self-consistently determined. Light curve modeling will continue to be a powerful tool for constraining the pulsar magnetosphere physics.

  5. Line fitting based feature extraction for object recognition

    NASA Astrophysics Data System (ADS)

    Li, Bing

    2014-06-01

    Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generate hierarchical features. This new approach applies line fitting to adaptively divide regions based upon the amount of information and creates line fitting features for each subsequent region. It overcomes the feature-wasting drawback of the wavelet-based approach and demonstrates high performance in real applications. For gray scale images, we propose a diffusion equation approach to map information-rich pixels (pixels near edges and ridge pixels) into high values, and pixels in homogeneous regions into small values near zero, forming energy map images. After the energy map images are generated, we propose a line fitting approach to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, so that we avoid the feature-waste problems of the wavelet approach in homogeneous regions. Finally, experiments on handwritten word recognition show that the new method provides higher performance than the regular handwritten word recognition approach.

  6. Computation of the phase response curve: a direct numerical approach.

    PubMed

    Govaerts, W; Sautois, B

    2006-04-01

    Neurons are often modeled by dynamical systems--parameterized systems of differential equations. A typical behavioral pattern of neurons is periodic spiking; this corresponds to the presence of stable limit cycles in the dynamical systems model. The phase resetting and phase response curves (PRCs) describe the reaction of the spiking neuron to an input pulse at each point of the cycle. We develop a new method for computing these curves as a by-product of the solution of the boundary value problem for the stable limit cycle. The method is mathematically equivalent to the adjoint method, but our implementation is computationally much faster and more robust than any existing method. In fact, it can compute PRCs even where the limit cycle can hardly be found by time integration, for example, because it is close to another stable limit cycle. In addition, we obtain the discretized phase response curve in a form that is ideally suited for most applications. We present several examples and provide the implementation in a freely available Matlab code.
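    The paper's boundary-value method is not reproduced here, but the quantity it computes can be illustrated with the direct (perturbation) approach it improves upon: settle onto a limit cycle, kick the state at each phase, and measure the asymptotic shift in spike timing. The Van der Pol oscillator below stands in for a spiking neuron; the pulse size, tolerances, and phase grid are arbitrary choices.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, y):                 # Van der Pol oscillator, a stable limit cycle
        x, v = y
        return [v, (1.0 - x**2) * v - x]

    def upcross(t, y):           # event: x crosses zero going upward ("spike")
        return y[0]
    upcross.direction = 1

    # Relax onto the cycle, measure the period T, and grab a reference
    # state at phase 0 (an upward zero crossing).
    s = solve_ivp(f, (0, 200), [2.0, 0.0], events=upcross,
                  rtol=1e-10, atol=1e-10)
    T = s.t_events[0][-1] - s.t_events[0][-2]
    ref = s.y_events[0][-1]

    def prc(phi, eps=0.05, n_per=8):
        """Phase shift (in cycle fractions) caused by a small pulse applied
        to v at phase phi; positive values are phase advances."""
        a = solve_ivp(f, (0, phi * T), ref, rtol=1e-10, atol=1e-10)
        y = a.y[:, -1].copy()
        y[1] += eps                                  # the input pulse
        b = solve_ivp(f, (0, n_per * T), y, events=upcross,
                      rtol=1e-10, atol=1e-10)
        k = len(b.t_events[0])
        t_expected = (k - phi) * T                   # unperturbed k-th crossing
        return (t_expected - b.t_events[0][-1]) / T

    phases = np.linspace(0.02, 0.98, 25)
    curve = np.array([prc(p) for p in phases])
    ```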

  7. A Global Optimization Method to Calculate Water Retention Curves

    NASA Astrophysics Data System (ADS)

    Maggi, S.; Caputo, M. C.; Turturro, A. C.

    2013-12-01

    Water retention curves (WRC) have a key role in the hydraulic characterization of soils and rocks. The behaviour of the medium is defined by relating the unsaturated water content to the matric potential. The experimental determination of WRCs requires an accurate and detailed measurement of the dependence of matric potential on water content, a time-consuming and error-prone process, in particular for rocky media. A complete experimental WRC needs at least a few tens of data points, distributed more or less uniformly from full saturation to oven dryness. Since each measurement requires waiting for steady-state conditions (from a few tens of minutes for soils up to several hours or days for rocks or clays), the whole process can take months. The experimental data are fitted to the most appropriate parametric model, such as the widely used models of Van Genuchten, Brooks and Corey, and Rossi-Nimmo, to obtain the analytic WRC. We present here a new method for determining the parameters that best fit the models to the available experimental data. The method is based on differential evolution, an evolutionary computation algorithm particularly useful for multidimensional real-valued global optimization problems. With this method it is possible to strongly reduce the number of measurements needed to optimize the model parameters that accurately describe the WRC of the samples, decreasing the time needed to adequately characterize the medium. In the present work, we have applied our method to calculate the WRCs of sedimentary carbonatic rocks of marine origin, belonging to the 'Calcarenite di Gravina' Formation (Middle Pliocene - Early Pleistocene) and coming from two different quarry districts in Southern Italy. [Figure: WRC curves calculated using the Van Genuchten model by simulated annealing (dashed curve) and differential evolution (solid curve); the curves are calculated using 10 experimental data points randomly extracted from …]
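    A sketch of the optimization with SciPy's differential evolution follows; the data points, bounds, and objective are illustrative simplifications, not the paper's own implementation.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def van_genuchten(h, theta_r, theta_s, alpha, n):
        """Water content vs. matric potential h (van Genuchten model)."""
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h)**n)**m

    def sse(params, h, theta):
        return np.sum((van_genuchten(h, *params) - theta) ** 2)

    h = np.array([10, 30, 100, 300, 1000, 5000, 15000], float)  # suction, cm
    theta = np.array([0.42, 0.40, 0.33, 0.25, 0.17, 0.10, 0.07])

    bounds = [(0.0, 0.2), (0.3, 0.6), (1e-4, 1.0), (1.1, 8.0)]  # r, s, alpha, n
    result = differential_evolution(sse, bounds, args=(h, theta), seed=1)
    theta_r, theta_s, alpha, n = result.x
    ```

    Because differential evolution searches the full bounded parameter space, it avoids the poor local minima that make gradient-based fits sensitive to starting guesses when only a few data points are available.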

  8. A Bayesian beta distribution model for estimating rainfall IDF curves in a changing climate

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Kwon, Hyun-Han; Kim, Jin-Young

    2016-09-01

    The estimation of intensity-duration-frequency (IDF) curves for rainfall data comprises a classical task in hydrology studies to support a variety of water resources projects, including urban drainage and the design of flood control structures. In a changing climate, however, traditional approaches based on historical records of rainfall and on the stationarity assumption can be inadequate and lead to poor estimates of rainfall intensity quantiles. Climate change scenarios built on General Circulation Models offer a way to assess and estimate future changes in spatial and temporal rainfall patterns at the daily scale at best, which is not as fine a temporal resolution as required (e.g. hours) to directly estimate IDF curves. In this paper we propose a novel methodology based on a four-parameter beta distribution to estimate IDF curves conditioned on the observed (or simulated) daily rainfall, which becomes the time-varying upper bound of the updated nonstationary beta distribution. The inference is conducted in a Bayesian framework that provides a better way to take into account the uncertainty in the model parameters when building the IDF curves. The proposed model is tested using rainfall data from four stations located in South Korea and projected climate-change Representative Concentration Pathway (RCP) scenarios 6 and 8.5 from the Met Office Hadley Centre HadGEM3-RA model. The results show that the developed model fits the historical data as well as the traditional Generalized Extreme Value (GEV) distribution but is able to produce future IDF curves that significantly differ from the historically based ones. For the stations and RCP scenarios analysed in this work, the proposed model predicts an increase in the intensity of extreme rainfalls of short duration with long return periods.
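    The full Bayesian, nonstationary treatment (with daily rainfall as a time-varying upper bound) is beyond a short sketch, but the building block, a four-parameter beta distribution fitted to sub-daily intensities, is easy to show; the data here are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    intensity = rng.beta(2.0, 5.0, size=500) * 80.0        # synthetic, mm/h

    # four-parameter beta: shapes a, b plus location and scale; the lower
    # bound is pinned at zero, so loc + scale is the fitted upper bound
    a, b, loc, scale = stats.beta.fit(intensity, floc=0)
    upper = loc + scale
    q99 = stats.beta.ppf(0.99, a, b, loc=loc, scale=scale)  # a design quantile
    print(upper, q99)
    ```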

  9. Gamma-Ray Pulsar Light Curves in Vacuum and Force-Free Geometry

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.; DeCesar, Megan E.; Miller, M. Coleman; Kalapotharakos, Constantinos; Contopoulos, Ioannis

    2011-01-01

    Recent studies have shown that gamma-ray pulsar light curves are very sensitive to the geometry of the pulsar magnetic field. Pulsar magnetic field geometries, such as the retarded vacuum dipole and force-free magnetospheres, have distorted polar caps that are offset from the magnetic axis in the direction opposite to rotation. Since this effect is due to the sweepback of field lines near the light cylinder, offset polar caps are a generic property of pulsar magnetospheres and their effects should be included in gamma-ray pulsar light curve modeling. In slot gap models (having two-pole caustic geometry), the offset polar caps cause a strong azimuthal asymmetry of the particle acceleration around the magnetic axis. We have studied the effect of the offset polar caps in both retarded vacuum dipole and force-free geometry on the model high-energy pulse profiles. We find that, compared to the profiles derived from symmetric caps, the flux in the pulse peaks, which are caustics formed along the trailing magnetic field lines, increases significantly relative to the off-peak emission, formed along leading field lines. The enhanced contrast produces improved slot gap model fits to Fermi pulsar light curves like Vela, with vacuum dipole fits being more favorable.

  10. A methodological approach to short-term tracking of youth physical fitness: the Oporto Growth, Health and Performance Study.

    PubMed

    Souza, Michele; Eisenmann, Joey; Chaves, Raquel; Santos, Daniel; Pereira, Sara; Forjaz, Cláudia; Maia, José

    2016-10-01

    In this paper, three different statistical approaches were used to investigate short-term tracking of cardiorespiratory and performance-related physical fitness among adolescents. Data were obtained from the Oporto Growth, Health and Performance Study and comprised 1203 adolescents (549 girls) divided into two age cohorts (10-12 and 12-14 years) followed for three consecutive years, with annual assessment. Cardiorespiratory fitness was assessed with the 1-mile run/walk test; the 50-yard dash, standing long jump, handgrip, and shuttle run tests were used to rate performance-related physical fitness. Tracking was expressed in three different ways: auto-correlations, multilevel modelling with crude and adjusted models (adjusting for biological maturation, body mass index, and physical activity), and Cohen's kappa (κ), computed in IBM SPSS 20.0, HLM 7.01 and Longitudinal Data Analysis software, respectively. Tracking of physical fitness components was (1) moderate-to-high when described by auto-correlations; (2) low-to-moderate when crude and adjusted models were used; and (3) low according to Cohen's kappa (κ). These results demonstrate that when describing tracking, different methods should be considered, since they provide distinct and more comprehensive views about physical fitness stability patterns.
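    Two of the three tracking measures reduce to a few lines of code; the scores below are invented year-to-year fitness values, and tercile agreement stands in for whatever rank categories the kappa computation actually used.

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    year1 = np.array([12.1, 10.4, 14.0, 9.8, 11.5, 13.2, 10.9, 12.7])  # e.g. mile times
    year2 = np.array([11.8, 10.9, 13.5, 10.2, 11.1, 12.8, 11.4, 12.3])

    r = np.corrcoef(year1, year2)[0, 1]        # auto-correlation (stability)

    def terciles(v):                           # 0 = low, 1 = mid, 2 = high
        return np.digitize(v, np.quantile(v, [1/3, 2/3]))

    kappa = cohen_kappa_score(terciles(year1), terciles(year2))
    print(f"r = {r:.2f}, kappa = {kappa:.2f}")
    ```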

  11. Computer user's manual for a generalized curve fit and plotting program

    NASA Technical Reports Server (NTRS)

    Schlagheck, R. A.; Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.

    1973-01-01

    A FORTRAN coded program has been developed for generating plotted output graphs on 8-1/2 by 11-inch paper. The program is designed to be used by engineers, scientists, and non-programming personnel on any IBM 1130 system that includes a 1627 plotter. The program has been written to provide a fast and efficient method of displaying plotted data without the user having to write any additional software. Various output options are available to the program user for displaying data in four different types of formatted plots. These options include discrete, linear, continuous, and histogram graphical outputs. The manual contains information about the use and operation of this program. A mathematical description of the least squares goodness-of-fit test is presented. A program listing is also included.

  12. Multi-q pattern classification of polarization curves

    NASA Astrophysics Data System (ADS)

    Fabbri, Ricardo; Bastos, Ivan N.; Neto, Francisco D. Moura; Lopes, Francisco J. P.; Gonçalves, Wesley N.; Bruno, Odemir M.

    2014-02-01

    Several experimental measurements are expressed in the form of one-dimensional profiles, for which there is a scarcity of methodologies able to classify the pertinence of a given result to a specific group. The polarization curves that evaluate the corrosion kinetics of electrodes in corrosive media are applications where the behavior is chiefly analyzed from profiles. Polarization curves are indeed a classic method to determine the global kinetics of metallic electrodes, but the strong nonlinearity from different metals and alloys can overlap, and discrimination becomes a challenging problem. Moreover, even finding a typical curve from replicated tests requires subjective judgment. In this paper, we used the so-called multi-q approach, based on Tsallis statistics, in a classification engine to separate the multiple polarization curve profiles of two stainless steels. We collected 48 experimental polarization curves in an aqueous chloride medium for two stainless steel types with different resistance against localized corrosion. Multi-q pattern analysis was then carried out over a wide potential range, from cathodic up to anodic regions. An excellent classification rate was obtained, with success rates of 90%, 80%, and 83% for the low (cathodic), high (anodic), and combined potential ranges, respectively, using only 2% of the original profile data. These results show the potential of the proposed approach for efficient, robust, systematic and automatic classification of highly nonlinear profile curves.
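    The multi-q idea can be caricatured in a few lines: characterize each normalized curve by its Tsallis q-entropy over a grid of q values and feed that vector to any standard classifier. This is only a loose sketch of the flavor of the approach; the paper's actual multi-q descriptors and classification engine are more involved.

    ```python
    import numpy as np

    def tsallis_entropy(p, q):
        """Tsallis q-entropy of a discrete distribution p (q != 1)."""
        return (1.0 - np.sum(p**q)) / (q - 1.0)

    def multi_q_features(curve, qs=np.linspace(0.2, 3.0, 15)):
        """Describe a 1-D profile by Tsallis entropies over a range of q."""
        p = np.abs(np.asarray(curve, float)) + 1e-12
        p = p / p.sum()                 # treat the profile as a density
        return np.array([tsallis_entropy(p, q) for q in qs])

    # nearest-centroid classification of a new polarization curve:
    # label = argmin over classes of ||multi_q_features(curve) - centroid||
    ```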

  13. The light curve of SN 1987A revisited: constraining production masses of radioactive nuclides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seitenzahl, Ivo R.; Timmes, F. X.; Magkotsios, Georgios, E-mail: ivo.seitenzahl@anu.edu.au

    2014-09-01

    We revisit the evidence for the contribution of the long-lived radioactive nuclides ⁴⁴Ti, ⁵⁵Fe, ⁵⁶Co, ⁵⁷Co, and ⁶⁰Co to the UVOIR light curve of SN 1987A. We show that the V-band luminosity constitutes a roughly constant fraction of the bolometric luminosity between 900 and 1900 days, and we obtain an approximate bolometric light curve out to 4334 days by scaling the late-time V-band data by a constant factor where no bolometric light curve data are available. Considering the five most relevant decay chains starting at ⁴⁴Ti, ⁵⁵Co, ⁵⁶Ni, ⁵⁷Ni, and ⁶⁰Co, we perform a least-squares fit to the constructed composite bolometric light curve. For the nickel isotopes, we obtain best-fit values of M(⁵⁶Ni) = (7.1 ± 0.3) × 10⁻² M☉ and M(⁵⁷Ni) = (4.1 ± 1.8) × 10⁻³ M☉. Our best-fit ⁴⁴Ti mass is M(⁴⁴Ti) = (0.55 ± 0.17) × 10⁻⁴ M☉, which is in disagreement with the much higher (3.1 ± 0.8) × 10⁻⁴ M☉ recently derived from INTEGRAL observations. The associated uncertainties far exceed the best-fit values for ⁵⁵Co and ⁶⁰Co and, as a result, we only give upper limits on the production masses of M(⁵⁵Co) < 7.2 × 10⁻³ M☉ and M(⁶⁰Co) < 1.7 × 10⁻⁴ M☉. Furthermore, we find that the leptonic channels in the decay of ⁵⁷Co (internal conversion and Auger electrons) are a significant contribution and constitute up to 15.5% of the total luminosity. Consideration of the kinetic energy of these electrons is essential in lowering our best-fit nickel isotope production ratio to [⁵⁷Ni/⁵⁶Ni] = 2.5 ± 1.1, which is still somewhat high but is in agreement with gamma-ray observations and model predictions.

  14. Evaluation of Innovative Approaches to Curve Delineation for Two-Lane Rural Roads

    DOT National Transportation Integrated Search

    2018-06-01

    Run-off-road crashes are a major problem for rural roads. These roads tend to be unlit, and drivers may have difficulty seeing or correctly predicting the curvature of horizontal curves. This leads to vehicles entering horizontal curves at speeds tha...

  15. Optical Rotation Curves and Linewidths for Tully-Fisher Applications

    NASA Astrophysics Data System (ADS)

    Courteau, Stephane

    1997-12-01

    We present optical long-slit rotation curves for 304 northern Sb-Sc UGC galaxies from a sample designed for Tully-Fisher (TF) applications. Matching r-band photometry exists for each galaxy. We describe the procedures of rotation curve (RC) extraction and construction of optical profiles analogous to 21 cm integrated linewidths. More than 20% of the galaxies were observed twice or more, allowing for a proper determination of systematic errors. Various measures of maximum rotational velocity to be used as input to the TF relation are tested on the basis of their repeatability, minimization of TF scatter, and match with 21 cm linewidths. The best measure of TF velocity, V_2.2, is taken at the location of peak rotational velocity of a pure exponential disk. An alternative to V_2.2 that makes no assumption about the luminosity profile or shape of the rotation curve is V_hist, the 20% width of the velocity histogram, though its match with 21 cm linewidths is not as good. We show that optical TF calibrations yield internal scatter comparable to, if not smaller than, the best calibrations based on single-dish 21 cm radio linewidths. Even though resolved H I RCs are more extended than their optical counterparts, a tight match between optical and radio linewidths exists since the bulk of the H I surface density is enclosed within the optical radius. We model the 304 RCs presented here plus a sample of 958 curves from Mathewson et al. (1992, ApJS, 81, 413) with various fitting functions. An arctan function provides an adequate simple fit (not accounting for non-circular motions and spiral arms). More elaborate empirical models may yield a better match at the expense of strong covariances. We caution against physical or "universal" parametrizations for TF applications.
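    A simplified form of the arctan fit (dropping the spatial-center offset a full implementation would include) with invented long-slit data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def arctan_rc(r, v0, vc, rt):
        """Empirical arctan rotation-curve model: v0 is the systemic
        velocity, vc sets the asymptotic amplitude, rt the turnover radius."""
        return v0 + (2.0 / np.pi) * vc * np.arctan(r / rt)

    # illustrative data: radius (arcsec) and line-of-sight velocity (km/s)
    r = np.array([2, 5, 10, 15, 20, 30, 40], float)
    v = np.array([40, 90, 140, 160, 170, 178, 181], float)
    (v0, vc, rt), _ = curve_fit(arctan_rc, r, v, p0=[0, 180, 5])
    ```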

  16. Investigation of erosion behavior in different pipe-fitting using Eulerian-Lagrangian approach

    NASA Astrophysics Data System (ADS)

    Kulkarni, Harshwardhan; Khadamkar, Hrushikesh; Mathpati, Channamallikarjun

    2017-11-01

    Erosion is a wear mechanism in piping systems in which wall thinning occurs because of turbulent flow combined with the impact of solid particles on the pipe wall; the resulting pipe ruptures cause costly plant repairs and personal injuries. In this study a two-way coupled Eulerian-Lagrangian approach is used to solve the liquid-solid (water-ferrous suspension) flow in different pipe fittings, namely an elbow, a T-junction, a reducer, an orifice, and a 50%-open gate valve. Simulations were carried out using an incompressible transient solver in OpenFOAM for different Reynolds numbers (10k, 25k, 50k) with the Wen-Yu drag model to find the regions of the fittings most prone to erosion. The transient solver is a hybrid combining a Lagrangian particle library with pimpleFoam. The simulation results show that the exit region of the elbow, especially the straight section downstream and the extrados of the bend, is most affected by erosion; the centrifugal force on the solid particles at the bend governs this behavior. In the T-junction, erosion occurs below the projection of the branch pipe onto the wall. For the reducer, the orifice, and the gate valve, the reduced-area section and the region downstream of it are most affected because of the increased velocities.

  17. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    PubMed

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
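    The flavor of residual-based item fit can be sketched by binning examinees on ability and comparing observed proportions correct with a 2PL item characteristic curve. The standardization below is the naive binomial one, not the ratio-estimator statistic whose asymptotic normality the paper proves.

    ```python
    import numpy as np

    def icc_2pl(theta, a, b):
        """Two-parameter logistic item characteristic curve."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_fit_residuals(theta, responses, a, b, n_bins=10):
        """Standardized residuals (observed minus model proportion correct)
        in ability bins; large |z| flags item misfit."""
        edges = np.quantile(theta, np.linspace(0, 1, n_bins + 1))
        z = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            grp = (theta >= lo) & (theta <= hi)
            p_obs = responses[grp].mean()               # 0/1 item scores
            p_mod = icc_2pl(theta[grp], a, b).mean()
            z.append((p_obs - p_mod) /
                     np.sqrt(p_mod * (1 - p_mod) / grp.sum()))
        return np.array(z)
    ```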

  18. The Chaotic Light Curves of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Kazanas, Demosthenes

    2007-01-01

    We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process in converting these soft photons to the observed high energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding Power Spectral Densities (PSD), autocorrelation functions, time skewness of the light curves and time lags between the light curves of the sources at different photon energies and compare our results to observation. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifest by the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales ≳ a few seconds, while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model along with spectral and phase lag information can be used to probe the structure of this class of high energy sources.

  19. Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2

    NASA Astrophysics Data System (ADS)

    Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.

    2017-12-01

    In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.

  20. Detecting time-specific differences between temporal nonlinear curves: Analyzing data from the visual world paradigm

    PubMed Central

    Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant

    2015-01-01

    In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088

  1. Adolescent Girls' Physical Activity, Fitness and Psychological Well-Being during a Health Club Physical Education Approach

    ERIC Educational Resources Information Center

    McNamee, Jeff; Timken, Gay L.; Coste, Sarah C.; Tompkins, Tanya L.; Peterson, Janet

    2017-01-01

    This pilot project aimed to demonstrate the efficacy and feasibility of an innovative physical education programme, referred to as a health club (HC) approach, in a high school setting. We measured adolescent girls' moderate to vigorous physical activity (MVPA), components of health-related physical fitness, and perceptions about themselves and…

  2. A Data-driven Study of RR Lyrae Near-IR Light Curves: Principal Component Analysis, Robust Fits, and Metallicity Estimates

    NASA Astrophysics Data System (ADS)

    Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna

    2018-04-01

    RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light-curve shapes and metallicities, using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, hence the average magnitudes of RR Lyrae variables in the K_S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K_S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2-0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.
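    The PC-based fitting step has a compact analog in scikit-learn: project each phase-folded light curve onto a few principal components learned from a training set, and use the reconstruction as the smooth fit from which average magnitudes are read off. The sinusoid-plus-noise curves below are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    phase = np.linspace(0, 1, 50, endpoint=False)
    curves = np.array([np.sin(2 * np.pi * phase) + 0.3 * rng.standard_normal(50)
                       for _ in range(100)])        # rows = stars, cols = phase bins

    pca = PCA(n_components=4).fit(curves)
    amps = pca.transform(curves)                    # PC amplitudes per star
    smooth = pca.inverse_transform(amps)            # robust light-curve fits
    mean_mags = smooth.mean(axis=1)                 # average magnitudes
    ```

    The PC amplitudes themselves are then natural inputs for predicting the light-curve shape in another band, or for a period-plus-shape metallicity regression.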

  3. Marginal and internal fit of curved anterior CAD/CAM-milled zirconia fixed dental prostheses: an in-vitro study.

    PubMed

    Büchi, Dominik L; Ebler, Sabine; Hämmerle, Christoph H F; Sailer, Irena

    2014-01-01

    To test whether or not different types of CAD/CAM systems, processing zirconia in the densely sintered and in the pre-sintered stage, lead to differences in the accuracy of 4-unit anterior fixed dental prosthesis (FDP) frameworks, and to evaluate the efficiency. 40 curved anterior 4-unit FDP frameworks were manufactured with four different CAD/CAM systems: DCS Precident (DCS) (control group), Cercon (DeguDent) (test group 1), Cerec InLab (Sirona) (test group 2), Kavo Everest (Kavo) (test group 3). The DCS System was chosen as the control group because the zirconia frameworks are processed in the densely sintered stage and there is no shrinkage of the zirconia during the manufacturing process. The initial fit of the frameworks was checked and adjusted to a subjectively similar level of accuracy by one dental technician, and the time taken for this was recorded. After cementation, the frameworks were embedded into resin and the abutment teeth were cut in mesiodistal and orobuccal directions in four specimens. The thickness of the cement gap was measured at 50× (internal adaptation) and 200× (marginal adaptation) magnification. The measurement of the accuracy was performed at four sites. Site 1: marginal adaptation, the marginal opening at the point of closest perpendicular approximation between the die and framework margin. Site 2: Internal adaptation at the chamfer. Site 3: Internal adaptation at the axial wall. Site 4: Internal adaptation in the occlusal area. The data were analyzed descriptively using the ANOVA and Bonferroni/Dunn tests. The mean marginal adaptation (site 1) of the control group was 107 ± 26 μm; test group 1, 140 ± 26 μm; test group 2, 104 ± 40 μm; and test group 3, 95 ± 31 μm. Test group 1 showed a tendency to exhibit larger marginal gaps than the other groups; however, this difference was only significant when test groups 1 and 3 were compared (P = .0022; Bonferroni/Dunn test). Significantly more time was needed for the adjustment of the

  4. Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: The Henry Ford ExercIse Testing (FIT) project.

    PubMed

    Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif

    2017-01-01

    Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data from 32,555 patients who were free of any known coronary artery disease or heart failure, who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009, and who had complete 5-year follow-up. At the completion of the fifth year, 5,099 of those patients had developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an ensemble-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of class imbalance on the constructed model was handled by the Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model classifier was improved by the ensemble machine learning approach using the Vote method with three decision-tree-based learners (Naïve Bayes Tree, Random Forest, and Logistic Model Tree), achieving high prediction accuracy (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
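    A compact sketch of the SMOTE-plus-voting pipeline using scikit-learn and imbalanced-learn follows. Random forest, logistic regression, and a plain decision tree stand in for the paper's Naïve Bayes Tree and Logistic Model Tree, which have no direct scikit-learn equivalents, and the feature matrix is a random placeholder.

    ```python
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # placeholder data: 13 selected attributes, binary 5-year diabetes label
    X = np.random.rand(1000, 13)
    y = np.random.randint(0, 2, 1000)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # rebalance the minority class on the training split only
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

    vote = VotingClassifier(
        estimators=[("rf", RandomForestClassifier()),
                    ("lr", LogisticRegression(max_iter=1000)),
                    ("dt", DecisionTreeClassifier())],
        voting="soft")
    vote.fit(X_bal, y_bal)
    print("AUC:", roc_auc_score(y_te, vote.predict_proba(X_te)[:, 1]))
    ```

    Applying SMOTE only to the training split, as above, keeps the test-set AUC an honest estimate of performance on the original imbalanced population.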

  5. Short-term corneal changes with gas-permeable contact lens wear in keratoconus subjects: a comparison of two fitting approaches.

    PubMed

    Romero-Jiménez, Miguel; Santodomingo-Rubido, Jacinto; Flores-Rodríguez, Patricia; González-Méijome, Jose-Manuel

    2015-01-01

    To evaluate changes in anterior corneal topography and higher-order aberrations (HOA) after 14 days of rigid gas-permeable (RGP) contact lens (CL) wear in keratoconus subjects, comparing two different fitting approaches. Thirty-one keratoconus subjects (50 eyes) without previous history of CL wear were recruited for the study. Subjects were randomly fitted with either an apical-touch or a three-point-touch fitting approach. The back optic zone radius (BOZR) of the lenses was 0.4 mm and 0.1 mm flatter than the first definite apical clearance lens, respectively. Differences between baseline and post-CL wear for the steepest, flattest and average corneal power (ACP) readings, central corneal astigmatism (CCA), maximum tangential curvature (KTag), anterior corneal surface asphericity, anterior corneal surface HOA and thinnest corneal thickness measured with Pentacam were compared. A statistically significant flattening was found over time in the flattest and steepest simulated keratometry and ACP in the apical-touch group (all p<0.01). A statistically significant reduction in KTag was found in both groups after contact lens wear (all p<0.05). Significant reductions were found over time in CCA (p=0.001) and anterior corneal asphericity in both groups (p<0.001). Thickness at the thinnest corneal point increased significantly after CL wear (p<0.0001). Coma-like and total HOA root mean square (RMS) error were significantly reduced following CL wear in both fitting approaches (all p<0.05). Short-term rigid gas-permeable CL wear flattens the anterior cornea, increases the thinnest corneal thickness and reduces anterior surface HOA in keratoconus subjects. Apical-touch fitting was associated with greater corneal flattening than three-point-touch lens wear. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.

  6. Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.

    2006-01-01

    The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established. Using least-squares curve fitting, the parameters of the curve in 2-D space are found; from these, the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The results of the described reconstruction methodology are evaluated through simulation studies. The methodology is applicable to LBW decisions in cricket, missile trajectory estimation, robotic vision, path planning, etc.
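    The 2-D least-squares step can be sketched as a homogeneous fit of the general conic: the smallest right singular vector of the design matrix gives the coefficients up to scale (the data below are synthetic).

    ```python
    import numpy as np

    def fit_conic(x, y):
        """Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0;
        returns [a, b, c, d, e, f] up to scale."""
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, vt = np.linalg.svd(D)
        return vt[-1]

    # illustrative use: noisy points on an ellipse
    t = np.linspace(0, 2 * np.pi, 100)
    x = 3 * np.cos(t) + 0.01 * np.random.randn(100)
    y = 2 * np.sin(t) + 0.01 * np.random.randn(100)
    print(fit_conic(x, y))
    ```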

  7. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  8. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554

  9. Modeling Latent Growth Curves With Incomplete Data Using Different Types of Structural Equation Modeling and Multilevel Software

    ERIC Educational Resources Information Center

    Ferrer, Emilio; Hamagami, Fumiaki; McArdle, John J.

    2004-01-01

    This article offers different examples of how to fit latent growth curve (LGC) models to longitudinal data using a variety of different software programs (i.e., LISREL, Mx, Mplus, AMOS, SAS). The article shows how the same model can be fitted using both structural equation modeling and multilevel software, with nearly identical results, even in…

  10. Linking occurrence and fitness to persistence: Habitat-based approach for endangered Greater Sage-Grouse

    USGS Publications Warehouse

    Aldridge, Cameron L.; Boyce, Mark S.

    2007-01-01

    Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Secondly, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel

  11. Flight demonstrations of curved, descending approaches and automatic landings using time referenced scanning beam guidance

    NASA Technical Reports Server (NTRS)

    White, W. F. (Compiler)

    1978-01-01

    The Terminal Configured Vehicle (TCV) program operates a Boeing 737 modified to include a second cockpit and a large amount of experimental navigation, guidance and control equipment for research on advanced avionics systems. Demonstration flights to include curved approaches and automatic landings were tracked by a phototheodolite system. For 50 approaches during the demonstration flights, the following results were obtained: the navigation system, using TRSB guidance, delivered the aircraft onto the 3 nautical mile final approach leg with an average overshoot of 25 feet past centerline, subject to a 2-sigma dispersion of 90 feet. Lateral tracking data showed a mean error of 4.6 feet left of centerline at the category 1 decision height (200 feet) and 2.7 feet left of centerline at the category 2 decision height (100 feet). These values were subject to a sigma dispersion of about 10 feet. Finally, the glidepath tracking errors were 2.5 feet and 3.0 feet high at the category 1 and 2 decision heights, respectively, with a 2-sigma value of 6 feet.

  12. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
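    The numerical core of WMD can be sketched as a grid search: slide the dated sequence along the calendar axis (and over plausible accumulation rates) and score each placement by chi-square against the calibration curve. The inputs and the linear depth-age assumption here are illustrative simplifications of the published procedure.

    ```python
    import numpy as np

    def wiggle_match(depths, ages14c, errs, cal_t, cal_c14, acc_rates):
        """Best (chi2, (t0, rate)) over calendar start ages t0 and linear
        accumulation rates (yr per unit depth); cal_t must be increasing."""
        best = (np.inf, None)
        for rate in acc_rates:
            for t0 in cal_t:
                cal_ages = t0 + rate * depths      # calendar age of each level
                curve = np.interp(cal_ages, cal_t, cal_c14)
                chi2 = np.sum(((ages14c - curve) / errs) ** 2)
                if chi2 < best[0]:
                    best = (chi2, (t0, rate))
        return best
    ```

    Mapping chi-square over the whole (t0, rate) grid, rather than keeping only the minimum, is what allows the quantitative comparison of alternative wiggle-match solutions and the construction of calendar-year confidence intervals.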

  13. Self-Interacting Dark Matter Can Explain Diverse Galactic Rotation Curves

    NASA Astrophysics Data System (ADS)

    Kamada, Ayuki; Kaplinghat, Manoj; Pace, Andrew B.; Yu, Hai-Bo

    2017-09-01

    The rotation curves of spiral galaxies exhibit a diversity that has been difficult to understand in the cold dark matter (CDM) paradigm. We show that the self-interacting dark matter (SIDM) model provides excellent fits to the rotation curves of a sample of galaxies with asymptotic velocities in the 25 - 300 km /s range that exemplify the full range of diversity. We assume only the halo concentration-mass relation predicted by the CDM model and a fixed value of the self-interaction cross section. In dark-matter-dominated galaxies, thermalization due to self-interactions creates large cores and reduces dark matter densities. In contrast, thermalization leads to denser and smaller cores in more luminous galaxies and naturally explains the flatness of rotation curves of the highly luminous galaxies at small radii. Our results demonstrate that the impact of the baryons on the SIDM halo profile and the scatter from the assembly history of halos as encoded in the concentration-mass relation can explain the diverse rotation curves of spiral galaxies.

  14. Self-Interacting Dark Matter Can Explain Diverse Galactic Rotation Curves.

    PubMed

    Kamada, Ayuki; Kaplinghat, Manoj; Pace, Andrew B; Yu, Hai-Bo

    2017-09-15

    The rotation curves of spiral galaxies exhibit a diversity that has been difficult to understand in the cold dark matter (CDM) paradigm. We show that the self-interacting dark matter (SIDM) model provides excellent fits to the rotation curves of a sample of galaxies with asymptotic velocities in the 25-300  km/s range that exemplify the full range of diversity. We assume only the halo concentration-mass relation predicted by the CDM model and a fixed value of the self-interaction cross section. In dark-matter-dominated galaxies, thermalization due to self-interactions creates large cores and reduces dark matter densities. In contrast, thermalization leads to denser and smaller cores in more luminous galaxies and naturally explains the flatness of rotation curves of the highly luminous galaxies at small radii. Our results demonstrate that the impact of the baryons on the SIDM halo profile and the scatter from the assembly history of halos as encoded in the concentration-mass relation can explain the diverse rotation curves of spiral galaxies.

  15. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively, while the Wood, Dhanoa and Sikka mixed models provided the best fit in third-parity buffaloes. Evaluation of the first, second and third lactation features showed that all models, except the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at its minimum; conversely, the minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting the test-day FPR records of Iranian buffaloes.
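    As an example of one of the simpler candidates, Wood's incomplete-gamma curve can be fitted to test-day FPR records with SciPy; this is a plain nonlinear least-squares stand-in for the paper's mixed-model machinery, and the records below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Wood lactation curve y(t) = a * t^b * exp(-c t); with b, c < 0
        it dips to a minimum at t = b / c, matching the shape of FPR curves."""
        return a * t**b * np.exp(-c * t)

    dim = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255], float)  # days in milk
    fpr = np.array([1.35, 1.18, 1.10, 1.07, 1.06, 1.07, 1.09, 1.12, 1.16])
    (a, b, c), _ = curve_fit(wood, dim, fpr, p0=[1.5, -0.1, -0.001])
    t_min = b / c                      # predicted test day of minimum FPR
    ```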

  16. Inference and analysis of xenon outflow curves under multi-pulse injection in two-dimensional chromatography.

    PubMed

    Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan

    2013-10-11

    Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability for xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed around xenon outflow curves, which are influenced by many factors; the injection condition is one of the key factors. In this paper, the xenon outflow curve for single-pulse injection in two-dimensional gas chromatography has been measured and fitted with an exponentially modified Gaussian distribution. An inference formula for the xenon outflow curve under six-pulse injection is derived, and the inferred curve is compared with the fitted formula of the measured six-pulse outflow curve. The curves of both the one-pulse and six-pulse injections obey the exponentially modified Gaussian distribution when the activated carbon column's temperature is 26°C and the carrier gas flow rate is 35.6 mL min⁻¹. The retention time of the xenon peak for one-pulse injection is 215 min, and the peak width is 138 min. For the six-pulse injection, however, the retention time is delayed to 255 min, and the peak width broadens to 222 min. According to the inferred formula for the six-pulse injection, the inferred retention time is 243 min, a relative deviation of 4.7%, and the inferred peak width is 225 min, a relative deviation of 1.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
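    The exponentially modified Gaussian used for the outflow curves can be written down and fitted directly; the parameter values and synthetic trace below are placeholders tuned only to resemble the reported retention time and width.

    ```python
    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import curve_fit

    def emg(t, area, mu, sigma, tau):
        """Exponentially modified Gaussian: a Gaussian (mu, sigma) convolved
        with a one-sided exponential decay of time constant tau."""
        z = (sigma**2 / tau - (t - mu)) / (np.sqrt(2.0) * sigma)
        return (area / (2.0 * tau)) * np.exp(sigma**2 / (2.0 * tau**2)
                                             - (t - mu) / tau) * erfc(z)

    t = np.linspace(0, 600, 600)                   # minutes
    obs = emg(t, 1.0, 170.0, 30.0, 50.0)           # synthetic outflow curve
    obs += 1e-4 * np.random.default_rng(2).standard_normal(t.size)
    params, _ = curve_fit(emg, t, obs, p0=[1.0, 150.0, 25.0, 60.0])
    ```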

  17. dftools: Distribution function fitting

    NASA Astrophysics Data System (ADS)

    Obreschkow, Danail

    2018-05-01

    dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.

  18. Calculating Galactic Distances Through Supernova Light Curve Analysis (Abstract)

    NASA Astrophysics Data System (ADS)

    Glanzer, J.

    2018-06-01

    (Abstract only) The purpose of this project is to experimentally determine the distance to the galaxy M101 by using data that were taken on the type Ia supernova SN 2011fe at the Paul P. Feder Observatory. Type Ia supernovae are useful for determining distances in astronomy because they all have roughly the same luminosity at the peak of their outburst. Comparing the apparent magnitude to the absolute magnitude allows a measurement of the distance. The absolute magnitude is estimated in two ways: using an empirical relationship from the literature between the rate of decline and the absolute magnitude, and using sncosmo, a Python package for supernova light curve analysis that fits model light curves to the photometric data.
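    The arithmetic behind the method is the distance modulus. With approximate literature values (peak V ≈ 9.9 for SN 2011fe and M_V ≈ -19.3 for a typical Type Ia, both assumptions here rather than the project's measured numbers) it lands near accepted M101 distances:

    ```python
    # distance modulus: mu = m - M; distance in parsecs d = 10**(mu/5 + 1)
    m_peak = 9.9        # apparent peak magnitude (approximate literature value)
    M_peak = -19.3      # absolute peak magnitude of a typical SN Ia (assumed)
    mu = m_peak - M_peak
    d_pc = 10 ** (mu / 5 + 1)
    print(f"d = {d_pc / 1e6:.1f} Mpc")   # ~6.9 Mpc
    ```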

  19. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  20. INFOS: spectrum fitting software for NMR analysis.

    PubMed

    Smith, Albert A

    2017-02-01

    Software for fitting of NMR spectra in MATLAB is presented. Spectra are fitted in the frequency domain, using Fourier transformed lineshapes, which are derived using the experimental acquisition and processing parameters. This yields more accurate fits compared to common fitting methods that use Lorentzian or Gaussian functions. Furthermore, a very time-efficient algorithm for calculating and fitting spectra has been developed. The software also performs initial peak picking, followed by subsequent fitting and refinement of the peak list, by iteratively adding and removing peaks to improve the overall fit. Estimation of error on fitting parameters is performed using a Monte-Carlo approach. Many fitting options allow the software to be flexible enough for a wide array of applications, while still being straightforward to set up with minimal user input.
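
    As a toy version of frequency-domain lineshape fitting with Monte-Carlo error estimation, the sketch below fits an idealized Lorentzian with SciPy; INFOS itself derives its lineshapes from the experimental acquisition and processing parameters, which this sketch does not attempt.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(f, a, f0, w):
        """Absorption-mode Lorentzian: amplitude a, centre f0, full width w."""
        return a * (w / 2) ** 2 / ((f - f0) ** 2 + (w / 2) ** 2)

    rng = np.random.default_rng(2)
    f = np.linspace(-50.0, 50.0, 1001)                 # Hz
    spec = lorentzian(f, 1.0, 3.0, 8.0) + rng.normal(0, 0.02, f.size)

    popt, _ = curve_fit(lorentzian, f, spec, p0=[0.8, 0.0, 5.0])

    # Monte-Carlo error estimate: refit synthetic spectra built from the best-fit
    # curve plus noise at the estimated residual level, then take parameter SDs.
    resid_sd = np.std(spec - lorentzian(f, *popt))
    draws = [curve_fit(lorentzian, f,
                       lorentzian(f, *popt) + rng.normal(0, resid_sd, f.size),
                       p0=popt)[0]
             for _ in range(200)]
    print("parameters:", np.round(popt, 3),
          " MC SDs:", np.round(np.std(draws, axis=0), 4))
    ```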

  1. Charon's light curves, as observed by New Horizons' Ralph color camera (MVIC) on approach to the Pluto system

    NASA Astrophysics Data System (ADS)

    Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.; Young, L. A.; Stern, S. A.

    2017-05-01

    Light curves produced from color observations taken during New Horizons' approach to the Pluto system by its Multi-spectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty-seven observations were analyzed; they were obtained between 9 April and 3 July 2015, at a phase angle of 14.5° to 15.1°, a sub-observer latitude of 51.2°N to 51.5°N, and a sub-solar latitude of 41.2°N. MVIC has four color channels; all are discussed for completeness, but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy, respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which are at shorter wavelengths than those of the MVIC Blue and Red channels, respectively.

  2. Charon's Light Curves, as Observed by New Horizons' Ralph Color Camera (MVIC) on Approach to the Pluto System.

    NASA Technical Reports Server (NTRS)

    Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.; hide

    2016-01-01

    Light curves produced from color observations taken during New Horizons' approach to the Pluto system by its Multi-spectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty-seven observations were analyzed; they were obtained between 9 April and 3 July 2015, at a phase angle of 14.5 degrees to 15.1 degrees, a sub-observer latitude of 51.2 degrees North to 51.5 degrees North, and a sub-solar latitude of 41.2 degrees North. MVIC has four color channels; all are discussed for completeness, but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy, respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which are at shorter wavelengths than those of the MVIC Blue and Red channels, respectively.

  3. "Asymptotic Parabola" Fits for Smoothing Generally Asymmetric Light Curves

    NASA Astrophysics Data System (ADS)

    Andrych, K. D.; Andronov, I. L.; Chinarova, L. L.; Marsakova, V. I.

    A computer program is introduced which allows one to determine a statistically optimal approximation using the "Asymptotic Parabola" fit, in other words, a spline consisting of polynomials of order 1, 2, 1: two lines ("asymptotes") connected by a parabola. The function and its first derivative are continuous. There are five parameters: the two points where a line switches to the parabola and vice versa, the slopes of the two lines, and the curvature of the parabola. Extreme cases are the parabola without lines (i.e. a parabola spanning the whole interval), lines without a parabola (zero width of the parabola), or "line+parabola" without a second line. Such an approximation is especially effective for pulsating variables, for which the slopes of the ascending and descending branches are generally different, so the maxima and minima have asymmetric shapes. The method was initially introduced by Marsakova and Andronov (1996OAP.....9...127M) and realized as a computer program written in QBasic under DOS. It was used for dozens of variable stars, particularly for the catalogs of the individual characteristics of pulsations of the Mira (1998OAP....11...79M) and semi-regular (2000OAP....13..116C) pulsating variables. For eclipsing variables with nearly symmetric minima, we use a "symmetric" version of the "Asymptotic Parabola". Here we introduce a Windows-based program, which does not have the DOS limitations on memory (number of observations) and screen resolution. The program has a user-friendly interface and is illustrated by an application to a test signal and to the pulsating variable AC Her.
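
    A possible implementation of the "Asymptotic Parabola" idea is sketched below: a parabola continued by its tangent lines, fitted by non-linear least squares. This is a reconstruction from the description above, not the authors' QBasic or Windows code.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def asymptotic_parabola(t, t1, t2, c0, c1, c2):
        """Order 1-2-1 spline: parabola p(t) = c0 + c1*t + c2*t**2 on [t1, t2],
        continued outside by its tangent lines at t1 and t2, so the function
        and its first derivative are continuous. Five free parameters."""
        p = c0 + c1 * t + c2 * t**2
        left = (c0 + c1 * t1 + c2 * t1**2) + (c1 + 2 * c2 * t1) * (t - t1)
        right = (c0 + c1 * t2 + c2 * t2**2) + (c1 + 2 * c2 * t2) * (t - t2)
        return np.where(t < t1, left, np.where(t > t2, right, p))

    # Synthetic asymmetric minimum, e.g. of a pulsating-star light curve (made up)
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 120)
    y = asymptotic_parabola(t, 0.35, 0.70, 2.0, -4.0, 4.0) + rng.normal(0, 0.02, t.size)

    fit = least_squares(lambda p: asymptotic_parabola(t, *p) - y,
                        x0=[0.3, 0.75, 1.8, -3.5, 3.5])
    t1, t2, c0, c1, c2 = fit.x
    print("extremum at t =", round(-c1 / (2 * c2), 3))   # vertex of the parabola
    ```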

  4. Object-Image Correspondence for Algebraic Curves under Projections

    NASA Astrophysics Data System (ADS)

    Burdis, Joseph M.; Kogan, Irina A.; Hong, Hoon

    2013-03-01

    We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison with algorithms based on the straightforward approach, lies in a significant reduction in the number of real parameters that need to be eliminated in order to establish the existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and affine groups on the plane.

  5. Temporal binning of time-correlated single photon counting data improves exponential decay fits and imaging speed

    PubMed Central

    Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.

    2016-01-01

    Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
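
    The temporal-binning step is plain histogram rebinning; a minimal sketch (with an assumed 10 ns window and invented lifetime parameters) is shown below. Note that 256 bins rebinned by a factor of 6 give the 42 bins quoted above.

    ```python
    import numpy as np

    def rebin_decay(counts, factor):
        """Sum adjacent time bins of a TCSPC decay histogram by `factor`,
        trimming any remainder, trading temporal resolution for counts/bin."""
        n = (counts.size // factor) * factor
        return counts[:n].reshape(-1, factor).sum(axis=1)

    # A 256-bin decay rebinned by 6 gives 42 bins, as in the study's best case.
    rng = np.random.default_rng(4)
    t = np.arange(256) * (10.0 / 256)        # ns; a 10 ns window (assumed)
    # Two-component decay, ~700 photons total (lifetimes invented for illustration)
    ideal = 700 * (10.0 / 256) * (0.3 * np.exp(-t / 0.4) / 0.4
                                  + 0.7 * np.exp(-t / 2.5) / 2.5)
    decay = rng.poisson(ideal)
    binned = rebin_decay(decay, 6)
    print(decay.size, "->", binned.size, "bins;", int(decay.sum()), "photons")
    ```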

  6. Improving the Depth-Time Fit of Holocene Climate Proxy Measures by Increasing Coherence with a Reference Time-Series

    NASA Astrophysics Data System (ADS)

    Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.

    2007-12-01

    An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster presents a new method of tuning a time series when only a modest number of 14C dates are available. The method uses multitaper spectral estimation, specifically a multitaper spectral-coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. The technique presented here attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving, and serving to confirm, the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using data from Chauvin Lake in the Canadian prairies.

  7. Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions.

    PubMed

    Joshi, Gaurav V; Duan, Yuanyuan; Della Bona, Alvaro; Hill, Thomas J; St John, Kenneth; Griggs, Jason A

    2013-11-01

    To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^(1/2), reaching a plateau at different critical flaw sizes based on loading method. Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  8. Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions

    PubMed Central

    Joshi, Gaurav V.; Duan, Yuanyuan; Bona, Alvaro Della; Hill, Thomas J.; John, Kenneth St.; Griggs, Jason A.

    2013-01-01

    Objectives To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Materials and Methods Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. Results There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^(1/2), reaching a plateau at different critical flaw sizes based on loading method. Significance Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. PMID:24034441

  9. Empirical velocity profiles for galactic rotation curves

    NASA Astrophysics Data System (ADS)

    López Fune, E.

    2018-04-01

    A unified parametrization of the circular velocity, which accurately fits 850 galaxy rotation curves without requiring prior knowledge of the luminous matter components or a fixed dark matter halo model, is proposed. A notable feature is that the associated gravitational potential increases with the distance from the galaxy centre, giving rise to a length-scale that indicates a finite galaxy size, beyond which the Keplerian fall-off of the parametrized circular velocity is recovered according to Newtonian gravity, making it possible to estimate the total mass enclosed by the galaxy.

  10. The spectra and light curves of two gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Knight, F. K.; Matteson, J. L.; Peterson, L. E.

    1981-01-01

    Observations made by the Hard X-ray and Low Energy Gamma-Ray Experiment on board HEAO-1 of the spectra and light curves of two gamma-ray bursts for which localized arrival directions will become available are presented. The burst of October 20, 1977 is found to exhibit a fluence of 0.000031 ± 0.000005 erg/sq cm over the energy range 0.135-2.05 MeV and a duration of 38.7 sec, while that of November 10, 1977 is found to have a fluence of 0.000021 ± 0.000008 erg/sq cm between 0.125 and 3 MeV over 2.8 sec. The light curves of both bursts exhibit time fluctuations down to the limiting time resolution of the detectors. The spectrum of the October burst can be fit by a power law of index -1.93 ± 0.16, which is harder than any other gamma-burst spectrum yet reported. The spectrum of the second burst is softer (index -2.4 ± 0.7), and is consistent with the upper index in the double power law fit to the burst of April 27, 1972.

  11. On the use of the covariance matrix to fit correlated data

    NASA Astrophysics Data System (ADS)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
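
    The bias is easy to reproduce numerically. The sketch below fits a constant to two measurements with a common 10% normalization uncertainty, with numbers in the spirit of the paper's example: the covariance-matrix fit lands below both data points.

    ```python
    import numpy as np

    # Two measurements of the same quantity with independent (2%) errors plus a
    # common 10% normalization uncertainty (illustrative numbers).
    y = np.array([8.0, 8.5])
    sigma = np.array([0.16, 0.17])                    # independent errors
    f = 0.10                                          # fractional normalization error

    V = np.diag(sigma**2) + f**2 * np.outer(y, y)     # linearized covariance matrix
    Vinv = np.linalg.inv(V)
    ones = np.ones_like(y)
    k = (ones @ Vinv @ y) / (ones @ Vinv @ ones)      # chi-square-optimal constant
    print(f"covariance fit: {k:.2f}  vs naive mean: {y.mean():.2f}")  # ~7.87 vs 8.25
    ```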

  12. Exploring the optimum step size for defocus curves.

    PubMed

    Wolffsohn, James S; Jinabhai, Amit N; Kingsnorth, Alec; Sheppard, Amy L; Naroo, Shehzad A; Shah, Sunil; Buckhurst, Phillip; Hall, Lee A; Young, Graeme

    2013-06-01

    To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Midland Eye, Solihull, United Kingdom. Evaluation of a technique. Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  13. Student-Institution Fit.

    ERIC Educational Resources Information Center

    Williams, Terry E.

    The concept of student-institution fit in higher education is clarified, and an approach that can be applied to different types of campuses is described. Also considered is the theoretical framework, including the concept of "person-environment interaction." Three sets of factors are important: student characteristics, institutional…

  14. A Clock Fingerprints-Based Approach for Wireless Transmitter Identification

    NASA Astrophysics Data System (ADS)

    Zhao, Caidan; Xie, Liang; Huang, Lianfen; Yao, Yan

    Cognitive radio (CR) has been proposed as a promising solution to low spectrum utilization. However, security problems such as the primary user emulation (PUE) attack severely limit its applications. In this paper, we propose a clock fingerprints-based authentication approach to prevent PUE attacks in CR networks with the help of curve fitting and a classifier. An experimental setup was constructed using WLAN cards and software radio devices, and the corresponding results show that satisfactory identification of wireless transmitters can be achieved.

  15. Anterior minimally invasive approach for total hip replacement: five-year survivorship and learning curve.

    PubMed

    Müller, Daniel A; Zingg, Patrick O; Dora, Claudio

    2014-01-01

    Opponents associate minimally invasive total hip replacement (THR) with additional risks, potentially resulting in increased implant failure rates. The purpose was to document complications, quality of implant positioning and five-year survivorship of THR using the AMIS approach, and to test the hypothesis that any elevated complication and revision rates would be limited to an early series and avoided by junior surgeons trained by a senior surgeon. A consecutive series of 150 primary THR implanted during the introduction of the AMIS technique in the department was retrospectively analysed for complications, implant positioning and implant survival after a minimum of five years. Survivorship curves of implants were compared between different surgeons and time periods. With implant revision for any reason as the endpoint, the five-year survival rate was 94.6% overall: 78.9% for the first 20 and 96.8% for the following 130 AMIS procedures (p = 0.001). The hazard ratio for implant failure was 0.979, indicating a risk reduction of 2% with every further case. The five-year implant survivorship of those procedures performed by two junior surgeons was 97.7%. We conclude that adoption of AMIS temporarily exposed patients to a higher risk of implant revision, which normalised after the first 20 cases, and that experience from a single surgeon's learning curve could effectively be taught to junior surgeons.

  16. On the derivation of flow rating curves in data-scarce environments

    NASA Astrophysics Data System (ADS)

    Manfreda, Salvatore

    2018-07-01

    River monitoring is a critical issue for hydrological modelling that relies strongly on the use of flow rating curves (FRCs). In most cases, these functions are derived by least-squares fitting which usually leads to good performance indices, even when based on a limited range of data that especially lack high flow observations. In this context, cross-section geometry is a controlling factor which is not fully exploited in classical approaches. In fact, river discharge is obtained as the product of two factors: 1) the area of the wetted cross-section and 2) the cross-sectionally averaged velocity. Both factors can be expressed as a function of the river stage, defining a viable alternative in the derivation of FRCs. This makes it possible to exploit information about cross-section geometry, limiting, at least partially, the uncertainty in the extrapolation of discharge at higher flow values. Numerical analyses and field data confirm the reliability of the proposed procedure for the derivation of FRCs.
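
    A minimal version of the classical least-squares approach that the paper seeks to improve upon is sketched below: fitting the power-law rating curve Q = a(h - h0)^b to synthetic stage-discharge gaugings.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(h, a, h0, b):
        """Classical power-law flow rating curve Q = a * (h - h0)**b."""
        return a * np.clip(h - h0, 1e-9, None) ** b

    # Synthetic stage-discharge gaugings, low and medium flows only (illustrative)
    rng = np.random.default_rng(5)
    h = np.linspace(0.4, 1.8, 25)                     # stage (m)
    q = rating_curve(h, 12.0, 0.2, 1.7) * rng.lognormal(0.0, 0.05, h.size)

    popt, _ = curve_fit(rating_curve, h, q, p0=[10.0, 0.1, 1.5])
    print("a, h0, b =", np.round(popt, 2))
    # Extrapolating such a fit to high stages is precisely where the paper argues
    # that cross-section geometry (wetted area x mean velocity) should be used.
    ```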

  17. Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach

    NASA Astrophysics Data System (ADS)

    Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic

    2015-04-01

    Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are usually used for calibration of hydrological models, managing water quality and classifying catchments, among other purposes. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions are methods relying on the evaluation of quantiles separately from catchments' attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and it is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemes, 1986) to a set of 600 French catchments. Half of the catchments are considered as gauged and used to calibrate the regression and compute its residuals. The QS approach consists of a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters and finally exploits the spatial correlation of the residuals. The innovation is the use of parameter continuity across the quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24.

  18. An experimental study of pilots' control characteristics for flight of an STOL aircraft in backside of drag curve at approach and landing.

    PubMed

    Ema, T

    1992-01-01

    In general, most vehicles can be modelled by a multi-variable system with interacting variables. There is a clear interactive response between an aircraft's velocity and altitude obtained by stick control and/or throttle control. In particular, if the flight condition falls on the backside of the drag curve during approach and landing of an STOL aircraft, the ratio of drag variation to velocity change is negative (delta D/delta u < 0) and the system of motion is non-minimum phase. The interaction between velocity and altitude response therefore becomes so complicated that it affects the pilot's control actions, and it may be difficult to control the STOL aircraft at approach and landing. In this paper, experimental results on pilots' ability to control an STOL aircraft are presented for a multi-variable manual control system using a fixed ground-based simulator, and the pilots' control ability is discussed for flight on the backside of the drag curve at approach and landing.

  19. Uncertainties in the interstellar extinction curve and the Cepheid distance to M101

    NASA Astrophysics Data System (ADS)

    Nataf, David M.

    2015-05-01

    I revisit the Cepheid-distance determination to the nearby spiral galaxy M101 (Pinwheel Galaxy) of Shappee & Stanek, in light of several recent investigations questioning the shape of the interstellar extinction curve at λ ≈ 8000 Å (i.e. the I band). I find that the relatively steep extinction ratio AI/E(V - I) = 1.1450 from Fitzpatrick & Massa is slightly favoured relative to AI/E(V - I) = 1.2899 from Fitzpatrick and significantly favoured relative to the historically canonical value of AI/E(V - I) = 1.4695 from Cardelli et al. The steeper extinction curves, with lower values of AI/E(V - I), yield fits with reduced scatter, metallicity dependences of the dereddened Cepheid luminosities that are closer to values inferred in the Local Group, and less sensitivity to the choice of reddening cut imposed in the sample selection. The increase in distance modulus to M101 when using the preferred extinction curve is Δμ ˜ 0.06 mag, resulting in an estimated distance modulus to M101 relative to the LMC of ΔμLMC ≈ 10.72 ± 0.03 (stat). The best-fitting metallicity dependence is dMI/d{[O/H]} ≈ (-0.38 ± 0.14 (stat)) mag dex-1.

  20. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.

  1. Mouse Curve Biometrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Douglas A.

    2007-10-08

    A biometric system suitable for validating user identity using only mouse movements and no specialized equipment is presented. Mouse curves (mouse movements with little or no pause between them) are individually classified and used to develop classification histograms, which are representative of an individual's typical mouse use. These classification histograms can then be compared to validate identity. This classification approach is suitable for providing continuous identity validation during an entire user session.

  2. Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    1997-01-01

    A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on iteratively minimizing a function that majorizes the WLS loss function. (Author/SLD)
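
    One concrete instance of this idea, under the assumptions that the weights lie in [0, 1] and the model is a rank-r matrix fitted by SVD, is sketched below; it is a generic majorization scheme in the spirit of the abstract, not necessarily the paper's exact algorithm.

    ```python
    import numpy as np

    def weighted_lowrank(X, W, r, iters=200):
        """WLS rank-r fit of X, loss sum(W * (X - M)**2), by majorization:
        each step solves an *unweighted* problem on imputed data
        Z = W*X + (1 - W)*M via truncated SVD. Requires weights in [0, 1]
        (rescale by the maximum weight otherwise)."""
        M = np.zeros_like(X)
        for _ in range(iters):
            Z = W * X + (1.0 - W) * M              # majorization (imputation) step
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            M = (U[:, :r] * s[:r]) @ Vt[:r]        # OLS solution for a rank-r model
        return M

    rng = np.random.default_rng(6)
    X = rng.normal(size=(20, 8))
    W = rng.uniform(0.2, 1.0, size=X.shape)        # element-wise weights in [0, 1]
    M = weighted_lowrank(X, W, r=2)
    print("weighted loss:", round(float(np.sum(W * (X - M)**2)), 3))
    ```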

  3. AN ANALYSIS OF THE SHAPES OF INTERSTELLAR EXTINCTION CURVES. VI. THE NEAR-IR EXTINCTION LAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitzpatrick, E. L.; Massa, D.

    We combine new observations from the Hubble Space Telescope's Advanced Camera for Surveys with existing data to investigate the wavelength dependence of near-IR (NIR) extinction. Previous studies suggest a power-law form for NIR extinction, with a 'universal' value of the exponent, although some recent observations indicate that significant sight line-to-sight line variability may exist. We show that a power-law model for the NIR extinction provides an excellent fit to most extinction curves, but that the value of the power, β, varies significantly from sight line to sight line. Therefore, it seems that a 'universal NIR extinction law' is not possible. Instead, we find that as β decreases, R(V) ≡ A(V)/E(B - V) tends to increase, suggesting that NIR extinction curves which have been considered 'peculiar' may, in fact, be typical for different R(V) values. We show that the power-law parameters can depend on the wavelength interval used to derive them, with β increasing as longer wavelengths are included. This result implies that extrapolating power-law fits to determine R(V) is unreliable. To avoid this problem, we adopt a different functional form for NIR extinction. This new form mimics a power law whose exponent increases with wavelength, has only two free parameters, can fit all of our curves over a longer wavelength baseline and to higher precision, and produces R(V) values which are consistent with independent estimates and commonly used methods for estimating R(V). Furthermore, unlike the power-law model, it gives R(V) values that are independent of the wavelength interval used to derive them. It also suggests that the relation R(V) = -1.36 E(K-V)/E(B-V) - 0.79 can estimate R(V) to ±0.12. Finally, we use model extinction curves to show that our extinction curves are in accord with theoretical expectations, and demonstrate how large samples of observational quantities can provide useful constraints on the grain properties.
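
    The two relations quoted above translate directly into code; the sketch below implements the power-law form and the R(V) estimator, with made-up colour excesses for illustration.

    ```python
    import numpy as np

    def nir_powerlaw(wl_um, a, beta):
        """Power-law NIR extinction A(lambda) = a * lambda**(-beta); the paper's
        point is that beta varies from sight line to sight line."""
        return a * np.asarray(wl_um, dtype=float) ** (-beta)

    def rv_from_colors(e_kv, e_bv):
        """R(V) from the relation quoted in the abstract,
        R(V) = -1.36 * E(K-V)/E(B-V) - 0.79 (accuracy about +/-0.12)."""
        return -1.36 * (e_kv / e_bv) - 0.79

    # Illustrative, made-up colour excesses for a diffuse sight line:
    print(round(rv_from_colors(e_kv=-2.75, e_bv=1.0), 2))   # ~2.95, a typical value
    ```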

  4. Combined approaches to flexible fitting and assessment in virus capsids undergoing conformational change☆

    PubMed Central

    Pandurangan, Arun Prasad; Shakeel, Shabih; Butcher, Sarah Jane; Topf, Maya

    2014-01-01

    Fitting of atomic components into electron cryo-microscopy (cryoEM) density maps is routinely used to understand the structure and function of macromolecular machines. Many fitting methods have been developed, but a standard protocol for successful fitting and assessment of fitted models has yet to be agreed upon among the experts in the field. Here, we created and tested a protocol that highlights important issues related to homology modelling, density map segmentation, rigid and flexible fitting, as well as the assessment of fits. As part of it, we use two different flexible fitting methods (Flex-EM and iMODfit) and demonstrate how combining the analysis of multiple fits and model assessment could result in an improved model. The protocol is applied to the case of the mature and empty capsids of Coxsackievirus A7 (CAV7) by flexibly fitting homology models into the corresponding cryoEM density maps at 8.2 and 6.1 Å resolution. As a result, and due to the improved homology models (derived from recently solved crystal structures of a close homolog – EV71 capsid – in mature and empty forms), the final models present an improvement over previously published models. In close agreement with the capsid expansion observed in the EV71 structures, the new CAV7 models reveal that the expansion is accompanied by ∼5° counterclockwise rotation of the asymmetric unit, predominantly contributed by the capsid protein VP1. The protocol could be applied not only to viral capsids but also to many other complexes characterised by a combination of atomic structure modelling and cryoEM density fitting. PMID:24333899

  5. EMPIRICALLY ESTIMATED FAR-UV EXTINCTION CURVES FOR CLASSICAL T TAURI STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, Matthew; France, Kevin; Schindhelm, Eric

    Measurements of extinction curves toward young stars are essential for calculating the intrinsic stellar spectrophotometric radiation. This flux determines the chemical properties and evolution of the circumstellar region, including the environment in which planets form. We develop a new technique using H2 emission lines pumped by stellar Lyα photons to characterize the extinction curve by comparing the measured far-ultraviolet H2 line fluxes with model H2 line fluxes. The difference between model and observed fluxes can be attributed to the dust attenuation along the line of sight through both the interstellar and circumstellar material. The extinction curves are fit by a Cardelli et al. (1989) model, and the A_V(H2) values for the 10 targets studied with good extinction fits range from 0.5 to 1.5 mag, with R_V values ranging from 2.0 to 4.7. A_V and R_V are found to be highly degenerate, suggesting that one or the other needs to be calculated independently. Column densities and temperatures for the fluorescent H2 populations are also determined, with averages of log10(N(H2)) = 19.0 and T = 1500 K. This paper explores the strengths and limitations of the newly developed extinction curve technique in order to assess the reliability of the results and improve the method in the future.

  6. Calculated mammographic spectra confirmed with attenuation curves for molybdenum, rhodium, and tungsten targets.

    PubMed

    Blough, M M; Waggener, R G; Payne, W H; Terry, J A

    1998-09-01

    A model for calculating mammographic spectra independent of measured data and fitting parameters is presented. This model is based on first principles. Spectra were calculated using various target and filter combinations such as molybdenum/molybdenum, molybdenum/rhodium, rhodium/rhodium, and tungsten/aluminum. Once the spectra were calculated, attenuation curves were calculated and compared to measured attenuation curves. The attenuation curves were calculated and measured using aluminum alloy 1100 or high purity aluminum filtration. Percent differences were computed between the measured and calculated attenuation curves resulting in an average of 5.21% difference for tungsten/aluminum, 2.26% for molybdenum/molybdenum, 3.35% for rhodium/rhodium, and 3.18% for molybdenum/rhodium. Calculated spectra were also compared to measured spectra from the Food and Drug Administration [Fewell and Shuping, Handbook of Mammographic X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1979)] and a comparison will also be presented.

  7. Is High-Intensity Functional Training (HIFT)/CrossFit Safe for Military Fitness Training?

    PubMed

    Poston, Walker S C; Haddock, Christopher K; Heinrich, Katie M; Jahnke, Sara A; Jitnarin, Nattinee; Batchelor, David B

    2016-07-01

    High-intensity functional training (HIFT) is a promising fitness paradigm that gained popularity among military populations. Rather than biasing workouts toward maximizing fitness domains such as aerobic endurance, HIFT workouts are designed to promote general physical preparedness. HIFT programs have proliferated as a result of concerns about the relevance of traditional physical training (PT), which historically focused on aerobic conditioning via running. Other concerns about traditional PT include: (1) the relevance of service fitness tests given current combat demands, (2) the perception that military PT is geared toward passing service fitness tests, and (3) that training for combat requires more than just aerobic endurance. Despite its popularity in the military, concerns have been raised about HIFT's injury potential, leading to some approaches being labeled as "extreme conditioning programs" by several military and civilian experts. Given HIFT programs' popularity in the military and concerns about injury, a review of data on HIFT injury potential is needed to inform military policy. The purpose of this review is to: (1) provide an overview of scientific methods used to appropriately compare injury rates among fitness activities and (2) evaluate scientific data regarding HIFT injury risk compared to traditional military PT and other accepted fitness activities. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  8. A principled approach to setting optimal diagnostic thresholds: where ROC and indifference curves meet.

    PubMed

    Irwin, R John; Irwin, Timothy C

    2011-06-01

    Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
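
    A small numerical illustration of the tangency condition: assuming the standard expected-utility result that the optimal ROC slope equals the prior odds against disease times the relative utility of specificity, and that scores are Gaussian so the likelihood ratio is monotone in the score, the optimal threshold can be solved for directly. All numbers below are invented.

    ```python
    from scipy.stats import norm
    from scipy.optimize import brentq

    # Optimal ROC slope m = (1 - p)/p * (U_TN - U_FP)/(U_TP - U_FN):
    # prior odds against disease times the relative importance of specificity.
    p = 0.2                                  # assumed disease prevalence
    utility_ratio = 0.5                      # missing disease twice as costly (assumed)
    m = (1 - p) / p * utility_ratio          # = 2.0

    # Healthy scores ~ N(0, 1), diseased ~ N(1.5, 1): the ROC slope at a
    # threshold equals the likelihood ratio there.
    mu0, mu1 = 0.0, 1.5
    lr = lambda x: norm.pdf(x, mu1, 1) / norm.pdf(x, mu0, 1)
    x_star = brentq(lambda x: lr(x) - m, -10, 10)     # threshold where slope = m
    sens = 1 - norm.cdf(x_star, mu1, 1)
    spec = norm.cdf(x_star, mu0, 1)
    print(f"threshold {x_star:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```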

  9. Magnetism in curved geometries

    DOE PAGES

    Streubel, Robert; Fischer, Peter; Kronast, Florian; ...

    2016-08-17

    Extending planar two-dimensional structures into the three-dimensional space has become a general trend in multiple disciplines, including electronics, photonics, plasmonics and magnetics. This approach provides means to modify conventional or to launch novel functionalities by tailoring the geometry of an object, e.g. its local curvature. In a generic electronic system, curvature results in the appearance of scalar and vector geometric potentials inducing anisotropic and chiral effects. In the specific case of magnetism, even in the simplest case of a curved anisotropic Heisenberg magnet, the curvilinear geometry manifests two exchange-driven interactions, namely effective anisotropy and antisymmetric exchange, i.e. Dzyaloshinskii–Moriya-like interaction. As a consequence, a family of novel curvature-driven effects emerges, which includes magnetochiral effects and topologically induced magnetization patterning, resulting in theoretically predicted unlimited domain wall velocities, chirality symmetry breaking and Cherenkov-like effects for magnons. The broad range of altered physical properties makes these curved architectures appealing in view of fundamental research on e.g. skyrmionic systems, magnonic crystals or exotic spin configurations. In addition to these rich physics, the application potential of three-dimensionally shaped objects is currently being explored as magnetic field sensorics for magnetofluidic applications, spin-wave filters, advanced magneto-encephalography devices for diagnosis of epilepsy or for energy-efficient racetrack memory devices. Finally, these recent developments ranging from theoretical predictions over fabrication of three-dimensionally curved magnetic thin films, hollow cylinders or wires, to their characterization using integral means as well as the development of advanced tomography approaches are in the focus of this review.

  10. Magnetism in curved geometries

    NASA Astrophysics Data System (ADS)

    Streubel, Robert; Fischer, Peter; Kronast, Florian; Kravchuk, Volodymyr P.; Sheka, Denis D.; Gaididei, Yuri; Schmidt, Oliver G.; Makarov, Denys

    2016-09-01

    Extending planar two-dimensional structures into the three-dimensional space has become a general trend in multiple disciplines, including electronics, photonics, plasmonics and magnetics. This approach provides means to modify conventional or to launch novel functionalities by tailoring the geometry of an object, e.g. its local curvature. In a generic electronic system, curvature results in the appearance of scalar and vector geometric potentials inducing anisotropic and chiral effects. In the specific case of magnetism, even in the simplest case of a curved anisotropic Heisenberg magnet, the curvilinear geometry manifests two exchange-driven interactions, namely effective anisotropy and antisymmetric exchange, i.e. Dzyaloshinskii-Moriya-like interaction. As a consequence, a family of novel curvature-driven effects emerges, which includes magnetochiral effects and topologically induced magnetization patterning, resulting in theoretically predicted unlimited domain wall velocities, chirality symmetry breaking and Cherenkov-like effects for magnons. The broad range of altered physical properties makes these curved architectures appealing in view of fundamental research on e.g. skyrmionic systems, magnonic crystals or exotic spin configurations. In addition to these rich physics, the application potential of three-dimensionally shaped objects is currently being explored as magnetic field sensorics for magnetofluidic applications, spin-wave filters, advanced magneto-encephalography devices for diagnosis of epilepsy or for energy-efficient racetrack memory devices. These recent developments ranging from theoretical predictions over fabrication of three-dimensionally curved magnetic thin films, hollow cylinders or wires, to their characterization using integral means as well as the development of advanced tomography approaches are in the focus of this review.

  11. Magnetism in curved geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streubel, Robert; Fischer, Peter; Kronast, Florian

    Extending planar two-dimensional structures into the three-dimensional space has become a general trend in multiple disciplines, including electronics, photonics, plasmonics and magnetics. This approach provides means to modify conventional or to launch novel functionalities by tailoring the geometry of an object, e.g. its local curvature. In a generic electronic system, curvature results in the appearance of scalar and vector geometric potentials inducing anisotropic and chiral effects. In the specific case of magnetism, even in the simplest case of a curved anisotropic Heisenberg magnet, the curvilinear geometry manifests two exchange-driven interactions, namely effective anisotropy and antisymmetric exchange, i.e. Dzyaloshinskii–Moriya-like interaction. As a consequence, a family of novel curvature-driven effects emerges, which includes magnetochiral effects and topologically induced magnetization patterning, resulting in theoretically predicted unlimited domain wall velocities, chirality symmetry breaking and Cherenkov-like effects for magnons. The broad range of altered physical properties makes these curved architectures appealing in view of fundamental research on e.g. skyrmionic systems, magnonic crystals or exotic spin configurations. In addition to these rich physics, the application potential of three-dimensionally shaped objects is currently being explored as magnetic field sensorics for magnetofluidic applications, spin-wave filters, advanced magneto-encephalography devices for diagnosis of epilepsy or for energy-efficient racetrack memory devices. Finally, these recent developments ranging from theoretical predictions over fabrication of three-dimensionally curved magnetic thin films, hollow cylinders or wires, to their characterization using integral means as well as the development of advanced tomography approaches are in the focus of this review.

  12. Real-time multi-channel stimulus artifact suppression by local curve fitting.

    PubMed

    Wagenaar, Daniel A; Potter, Steve M

    2002-10-30

    We describe an algorithm for suppression of stimulation artifacts in extracellular micro-electrode array (MEA) recordings. A model of the artifact based on locally fitted cubic polynomials is subtracted from the recording, yielding a flat baseline amenable to spike detection by voltage thresholding. The algorithm, SALPA, reduces the period after stimulation during which action potentials cannot be detected by an order of magnitude, to less than 2 ms. Our implementation is fast enough to process 60-channel data sampled at 25 kHz in real-time on an inexpensive desktop PC. It performs well on a wide range of artifact shapes without re-tuning any parameters, because it accounts for amplifier saturation explicitly and uses a statistic to verify successful artifact suppression immediately after the amplifiers become operational. We demonstrate the algorithm's effectiveness on recordings from dense monolayer cultures of cortical neurons obtained from rat embryos. SALPA opens up a previously inaccessible window for studying transient neural oscillations and precisely timed dynamics in short-latency responses to electric stimulation. Copyright 2002 Elsevier Science B.V.
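
    A crude sketch of the core idea, local cubic fitting and subtraction, is given below; the published SALPA additionally handles amplifier saturation and statistically validates each fit, which this toy version omits.

    ```python
    import numpy as np

    def subtract_local_cubic(x, half_width=75):
        """Fit a cubic to a window around each sample and subtract the fitted
        value, flattening slow artifact transients while leaving fast spikes.
        (Toy version only: no saturation handling, no fit validation.)"""
        out = np.empty(x.size)
        for i in range(x.size):
            lo, hi = max(0, i - half_width), min(x.size, i + half_width + 1)
            ts = np.arange(lo, hi) - i               # local time axis centred on i
            coef = np.polynomial.polynomial.polyfit(ts, x[lo:hi], 3)
            out[i] = x[i] - coef[0]                  # coef[0] = fitted value at t = 0
        return out

    # Synthetic trace: decaying stimulus artifact plus a brief "spike"
    rng = np.random.default_rng(7)
    t = np.arange(2000)
    trace = 500.0 * np.exp(-t / 300.0) + rng.normal(0, 5.0, t.size)
    trace[800:805] -= 60.0                           # fake action potential
    clean = subtract_local_cubic(trace)
    print("baseline SD after subtraction:", round(float(clean[:700].std()), 1))
    ```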

  13. COMPARING BEHAVIORAL DOSE-EFFECT CURVES FOR HUMANS AND LABORATORY ANIMALS ACUTELY EXPOSED TO TOLUENE.

    EPA Science Inventory

    The utility of laboratory animal data in toxicology depends upon the ability to generalize the results quantitatively to humans. To compare the acute behavioral effects of inhaled toluene in humans to those in animals, dose-effect curves were fitted by meta-analysis of published...

  14. A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pejcha, Ondřej; Prieto, Jose L., E-mail: pejcha@astro.princeton.edu

    2015-02-01

    We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  15. A Probabilistic Approach to Fitting Period–luminosity Relations and Validating Gaia Parallaxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesar, Branimir; Fouesneau, Morgan; Bailer-Jones, Coryn A. L.

    Pulsating stars, such as Cepheids, Miras, and RR Lyrae stars, are important distance indicators and calibrators of the “cosmic distance ladder,” and yet their period–luminosity–metallicity (PLZ) relations are still constrained using simple statistical methods that cannot take full advantage of available data. To enable optimal usage of data provided by the Gaia mission, we present a probabilistic approach that simultaneously constrains parameters of PLZ relations and uncertainties in Gaia parallax measurements. We demonstrate this approach by constraining PLZ relations of type ab RR Lyrae stars in near-infrared W1 and W2 bands, using Tycho-Gaia Astrometric Solution (TGAS) parallax measurements for a sample of ≈100 type ab RR Lyrae stars located within 2.5 kpc of the Sun. The fitted PLZ relations are consistent with previous studies, and in combination with other data, deliver distances precise to 6% (once various sources of uncertainty are taken into account). To a precision of 0.05 mas (1σ), we do not find a statistically significant offset in TGAS parallaxes for this sample of distant RR Lyrae stars (median parallax of 0.8 mas and distance of 1.4 kpc). With only minor modifications, our probabilistic approach can be used to constrain PLZ relations of other pulsating stars, and we intend to apply it to Cepheid and Mira stars in the near future.
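
    A toy version of the idea, fitting a PLZ relation directly in parallax space, where the Gaussian errors actually live, instead of inverting noisy parallaxes to distances, is sketched below on simulated data; it is a simplification of the paper's probabilistic model, with invented parameter values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Simulate ~100 stars obeying M = alpha + beta*log10(P/0.5d) + gamma*[Fe/H]
    # (all "true" values invented for illustration).
    rng = np.random.default_rng(8)
    n = 100
    logP = np.log10(rng.uniform(0.45, 0.75, n) / 0.5)
    feh = rng.uniform(-2.5, -0.5, n)
    M = -0.45 + (-2.4) * logP + 0.15 * feh
    d = rng.uniform(0.5, 2.5, n) * 1e3                # true distances (pc)
    m = M + 5 * np.log10(d / 10.0)                    # apparent magnitudes
    sig = 0.3                                         # parallax error (mas), assumed
    plx_obs = 1e3 / d + rng.normal(0, sig, n)         # observed parallaxes (mas)

    def nll(theta):
        """Gaussian negative log-likelihood of the observed parallaxes given
        the PLZ parameters (up to an additive constant)."""
        a, b, g = theta
        d_pred = 10 ** ((m - (a + b * logP + g * feh) + 5.0) / 5.0)  # pc
        return 0.5 * np.sum(((plx_obs - 1e3 / d_pred) / sig) ** 2)

    fit = minimize(nll, x0=[0.0, -2.0, 0.1], method="Nelder-Mead")
    print("alpha, beta, gamma =", np.round(fit.x, 2))
    ```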

  16. Benefit and cost curves for typical pollination mutualisms.

    PubMed

    Morris, William F; Vázquez, Diego P; Chacoff, Natacha P

    2010-05-01

    Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.
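
    The qualitative argument above can be made concrete with a toy model in which benefits saturate and costs rise linearly, so the net benefit peaks at an intermediate visitation rate; the functional forms and parameter values below are illustrative, not those fitted in the study.

    ```python
    import numpy as np

    # Saturating benefit (e.g. ovule fertilization) minus a linearly rising cost
    b_max, k, c = 100.0, 3.0, 4.0        # invented parameters
    v = np.arange(0, 31)                 # number of visits
    benefit = b_max * v / (v + k)
    cost = c * v
    net = benefit - cost
    print("net benefit peaks at v =", int(v[np.argmax(net)]))   # intermediate visits
    ```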

  17. Estimating HIV-1 Fitness Characteristics from Cross-Sectional Genotype Data

    PubMed Central

    Gopalakrishnan, Sathej; Montazeri, Hesam; Menz, Stephan; Beerenwinkel, Niko; Huisinga, Wilhelm

    2014-01-01

    Despite the success of highly active antiretroviral therapy (HAART) in the management of human immunodeficiency virus (HIV)-1 infection, virological failure due to drug resistance development remains a major challenge. Resistant mutants display reduced drug susceptibilities, but in the absence of drug, they generally have a lower fitness than the wild type, owing to a mutation-incurred cost. The interaction between these fitness costs and drug resistance dictates the appearance of mutants and influences viral suppression and therapeutic success. Assessing in vivo viral fitness is a challenging task and yet one that has significant clinical relevance. Here, we present a new computational modelling approach for estimating viral fitness that relies on common sparse cross-sectional clinical data by combining statistical approaches to learn drug-specific mutational pathways and resistance factors with viral dynamics models to represent the host-virus interaction and the actions of drugs mechanistically. We estimate in vivo fitness characteristics of mutant genotypes for two antiretroviral drugs, the reverse transcriptase inhibitor zidovudine (ZDV) and the protease inhibitor indinavir (IDV). Well-known features of HIV-1 fitness landscapes are recovered, both in the absence and presence of drugs. We quantify the complex interplay between fitness costs and resistance by computing selective advantages for different mutants. Our approach extends naturally to multiple drugs and we illustrate this by simulating a dual therapy with ZDV and IDV to assess therapy failure. The combined statistical and dynamical modelling approach may help in dissecting the effects of fitness costs and resistance with the ultimate aim of assisting the choice of salvage therapies after treatment failure. PMID:25375675

  18. The Very Essentials of Fitness for Trial Assessment in Canada

    ERIC Educational Resources Information Center

    Newby, Diana; Faltin, Robert

    2008-01-01

    Fitness for trial constitutes the most frequent referral to forensic assessment services. Several approaches to this evaluation exist in Canada, including the Fitness Interview Test and Basic Fitness for Trial Test. The following article presents a review of the issues and a method for basic fitness for trial evaluation.

  19. Robotic Mitral Valve Repair: The Learning Curve.

    PubMed

    Goodman, Avi; Koprivanac, Marijan; Kelava, Marta; Mick, Stephanie L; Gillinov, A Marc; Rajeswaran, Jeevanantham; Brzezinski, Anna; Blackstone, Eugene H; Mihaljevic, Tomislav

    Adoption of robotic mitral valve surgery has been slow, likely in part because of its perceived technical complexity and a poorly understood learning curve. We sought to correlate changes in technical performance and outcome with surgeon experience in the "learning curve" part of our series. From 2006 to 2011, two surgeons undertook robotically assisted mitral valve repair in 458 patients (intent-to-treat); 404 procedures were completed entirely robotically (as-treated). Learning curves were constructed by modeling surgical sequence number semiparametrically with flexible penalized spline smoothing best-fit curves. Operative efficiency, reflecting technical performance, improved for (1) operating room time for case 1 to cases 200 (early experience) and 400 (later experience), from 414 to 364 to 321 minutes (12% and 22% decrease, respectively), (2) cardiopulmonary bypass time, from 148 to 102 to 91 minutes (31% and 39% decrease), and (3) myocardial ischemic time, from 119 to 75 to 68 minutes (37% and 43% decrease). Composite postoperative complications, reflecting safety, decreased from 17% to 6% to 2% (63% and 85% decrease). Intensive care unit stay decreased from 32 to 28 to 24 hours (13% and 25% decrease). Postoperative stay fell from 5.2 to 4.5 to 3.8 days (13% and 27% decrease). There were no in-hospital deaths. Predischarge mitral regurgitation of less than 2+, reflecting effectiveness, was achieved in 395 (97.8%), without correlation to experience; return-to-work times did not change substantially with experience. Technical efficiency of robotic mitral valve repair improves with experience and permits its safe and effective conduct.
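    The smoothing-spline learning curves described above can be approximated in a few lines. The study used semiparametric penalized-spline models; a generic smoothing spline from SciPy stands in here, and the operative times are synthetic.

    ```python
    # A rough analogue, not the paper's model: smooth operative time against
    # case sequence number to visualize a learning curve.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)
    case_no = np.arange(1, 401, dtype=float)
    # Synthetic times drifting down from ~414 toward ~321 minutes.
    or_time = 321 + 93 * np.exp(-case_no / 150) + rng.normal(0, 25, 400)

    spline = UnivariateSpline(case_no, or_time, s=400 * 25**2)
    for n in (1, 200, 400):
        print(f"case {n}: fitted OR time ~ {float(spline(n)):.0f} min")
    ```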

  20. A LIGHT CURVE ANALYSIS OF CLASSICAL NOVAE: FREE-FREE EMISSION VERSUS PHOTOSPHERIC EMISSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hachisu, Izumi; Kato, Mariko, E-mail: hachisu@ea.c.u-tokyo.ac.jp, E-mail: mariko@educ.cc.keio.ac.jp

    2015-01-10

    We analyzed light curves of seven relatively slow novae, PW Vul, V705 Cas, GQ Mus, RR Pic, V5558 Sgr, HR Del, and V723 Cas, based on an optically thick wind theory of nova outbursts. For fast novae, free-free emission dominates the spectrum in optical bands rather than photospheric emission, and nova optical light curves follow the universal decline law. Faster novae blow stronger winds with larger mass-loss rates. Because the brightness of free-free emission depends directly on the wind mass-loss rate, faster novae show brighter optical maxima. In slower novae, however, we must take into account photospheric emission because of their lower wind mass-loss rates. We calculated three model light curves, of free-free emission, photospheric emission, and their sum, for various white dwarf (WD) masses with various chemical compositions of their envelopes, and fitted them reasonably well to observational data in the optical, near-IR (NIR), and UV bands. From light curve fittings of the seven novae, we estimated their absolute magnitudes, distances, and WD masses. In PW Vul and V705 Cas, free-free emission still dominates the spectrum in the optical and NIR bands. In the very slow novae, RR Pic, V5558 Sgr, HR Del, and V723 Cas, photospheric emission dominates the spectrum rather than free-free emission, which causes a deviation from the universal decline law. We have confirmed that the absolute brightnesses of our model light curves are consistent with the distance moduli of four classical novae with known distances (GK Per, V603 Aql, RR Pic, and DQ Her). We also discuss why the very slow novae are ∼1 mag brighter than the proposed maximum magnitude versus rate of decline relation.

  1. Deriving the suction stress of unsaturated soils from water retention curve, based on wetted surface area in pores

    NASA Astrophysics Data System (ADS)

    Greco, Roberto; Gargano, Rudy

    2016-04-01

    The evaluation of suction stress in unsaturated soils has important implications in several practical applications. Suction stress affects soil aggregate stability and soil erosion. Furthermore, the equilibrium of shallow unsaturated soil deposits along steep slopes is often possible only thanks to the contribution of suction to soil effective stress. Experimental evidence, as well as theoretical arguments, shows that suction stress is a nonlinear function of matric suction. The relationship expressing the dependence of suction stress on soil matric suction is usually indicated as the Soil Stress Characteristic Curve (SSCC). In this study, a novel equation for the evaluation of the suction stress of an unsaturated soil is proposed, assuming that the exchange of stress between soil water and solid particles occurs only through the part of the surface of the solid particles that is in direct contact with water. The proposed equation, based only upon geometric considerations related to soil pore-size distribution, allows the SSCC to be easily derived from the soil water retention curve (SWRC) with the assignment of two additional parameters. The first parameter, representing the projection of the external surface area of the soil over a generic plane surface, can be reasonably estimated from the residual water content of the soil. The second parameter, indicated as H0, is the water potential below which adsorption significantly contributes to water retention. For the experimental verification of the proposed approach, this parameter is treated as a fitting parameter. The proposed equation is applied to the interpretation of suction stress experimental data, taken from the literature, spanning a wide range of soil textures. The obtained results show that in all cases the proposed relationship closely reproduces the experimental data, performing better than other currently used expressions. The obtained results also show that the adopted values of the parameter H0

  2. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.

  3. Interpretation of psychophysics response curves using statistical physics.

    PubMed

    Knani, S; Khalfaoui, M; Hachicha, M A; Mathlouthi, M; Ben Lamine, A

    2014-05-15

    Experimental gustatory curves have been fitted for four sugars (sucrose, fructose, glucose and maltitol) using a double layer adsorption model. Three parameters of the model are fitted, namely the number of molecules per site n, the maximum response RM and the concentration at half saturation C1/2. The behaviours of these parameters are discussed in relation to each molecule's characteristics. Starting from the double layer adsorption model, we additionally determined the adsorption energy of each molecule on taste receptor sites. The use of the threshold expression allowed us to gain information about the adsorption occupation rate of a receptor site that fires a minimal response in the gustatory nerve. Finally, by means of this model we could calculate the configurational entropy of the adsorption system, which describes the order and disorder of the adsorbent surface. Copyright © 2013 Elsevier Ltd. All rights reserved.
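    The fitting step can be illustrated with a simpler stand-in: the sketch below uses a Hill-type saturation sharing the abstract's three parameters (n, RM, C1/2) rather than the paper's actual double layer adsorption model, and the concentration-response data are invented.

    ```python
    # Illustrative only: Hill-type response with the same three parameters.
    import numpy as np
    from scipy.optimize import curve_fit

    def response(C, R_M, C_half, n):
        """Response saturating at R_M, half-maximal at C = C_half."""
        x = (C / C_half) ** n
        return R_M * x / (1.0 + x)

    conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])    # mol/L, made up
    resp = np.array([2.0, 8.0, 24.0, 46.0, 62.0, 68.0])  # arbitrary units

    (R_M, C_half, n), _ = curve_fit(response, conc, resp, p0=[70, 0.2, 1.0])
    print(f"R_M={R_M:.1f}, C_half={C_half:.3f} mol/L, n={n:.2f}")
    ```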

  4. VLT/X-shooter GRBs: Individual extinction curves of star-forming regions★

    NASA Astrophysics Data System (ADS)

    Zafar, T.; Watson, D.; Møller, P.; Selsing, J.; Fynbo, J. P. U.; Schady, P.; Wiersema, K.; Levan, A. J.; Heintz, K. E.; Postigo, A. de Ugarte; D'Elia, V.; Jakobsson, P.; Bolmer, J.; Japelj, J.; Covino, S.; Gomboc, A.; Cano, Z.

    2018-05-01

    The extinction profiles in Gamma-Ray Burst (GRB) afterglow spectral energy distributions (SEDs) are usually described by the Small Magellanic Cloud (SMC)-type extinction curve. In different empirical extinction laws, the total-to-selective extinction, RV, is an important quantity because of its relation to dust grain sizes and compositions. We here analyse a sample of 17 GRBs (z > 0.34) and fit individual extinction curves of GRBs for the first time. Our sample is compiled on the basis that multi-band photometry is available around the X-shooter observations. The X-shooter data are combined with the Swift X-ray data, and a single or broken power law together with a parametric extinction law is used to model the individual SEDs. We find 10 cases with significant dust, where the derived extinction, AV, ranges from 0.1-1.0 mag. In four of those, the inferred extinction curves are consistent with the SMC curve. The GRB individual extinction curves have a flat RV distribution with an optimal weighted combined value of RV = 2.61 ± 0.08 (for seven broad-coverage cases). The 'average GRB extinction curve' is similar to, but slightly steeper than, the typical SMC curve, and consistent with the SMC Bar extinction curve at the ~95% confidence level. The resultant steeper extinction curves imply populations of small grains, where large dust grains may be destroyed by GRB activity. Another possibility could be that the young age and/or lower metallicities of GRB environments are responsible for the steeper curves.

  5. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torello, David; Kim, Jin-Yeon; Qu, Jianmin

    2015-03-31

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β11 is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β11(7075)/β11(2024) ratio of 1.363 agrees well with previous literature and earlier work.
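    Under strong simplifying assumptions (no diffraction or attenuation corrections, which are the paper's main contribution), the relative nonlinearity parameter can be read off the slope of second-harmonic growth with propagation distance. The sketch below uses synthetic measurements.

    ```python
    # Idealized sketch: A2/A1^2 grows linearly with distance x, and the
    # slope is proportional to the nonlinearity parameter beta_11.
    import numpy as np
    from scipy.optimize import curve_fit

    def harmonic_ratio(x, slope, offset):
        return slope * x + offset

    x_mm = np.linspace(10, 80, 8)  # propagation distances, mm
    ratio = 1e-4 * x_mm + 2e-4 + np.random.default_rng(1).normal(0, 5e-5, 8)

    (slope, offset), _ = curve_fit(harmonic_ratio, x_mm, ratio)
    # Comparing slopes for two specimens yields a relative measure such as
    # the beta_11(7075)/beta_11(2024) ratio quoted in the abstract.
    print(f"fitted slope (proportional to beta_11): {slope:.2e} per mm")
    ```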

  6. Use of regionalisation approach to develop fire frequency curves for Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Khastagir, Anirban; Jayasuriya, Niranjali; Bhuyian, Muhammed A.

    2017-11-01

    It is important to perform fire frequency analysis to obtain fire frequency curves (FFCs) based on fire intensity at different parts of Victoria. In this paper, FFCs were derived based on the forest fire danger index (FFDI), a measure related to fire initiation, spreading speed and containment difficulty. The mean temperature (T), relative humidity (RH) and areal extent of open water (LC2) during the summer months (Dec-Feb) were identified as the most important parameters for assessing the risk of bushfire occurrence. Based on these parameters, the Andrews' curve equation was applied to 40 selected meteorological stations to identify homogeneous stations forming unique clusters. A methodology using peak FFDI from cluster-averaged FFDIs was developed by applying the Log Pearson Type III (LPIII) distribution to generate FFCs. A total of nine homogeneous clusters across Victoria were identified, and their FFCs were subsequently developed in order to estimate regionalised fire occurrence characteristics.
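    The LPIII frequency-curve step can be sketched as follows: fit a Pearson Type III distribution to log-transformed annual-maximum FFDI values and read magnitudes for chosen return periods off the fitted curve. The record below is synthetic, and the use of scipy.stats.pearson3 is this sketch's assumption, not the authors' implementation.

    ```python
    # Log Pearson Type III sketch on a hypothetical 40-year FFDI record.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    annual_max_ffdi = rng.gumbel(60, 10, size=40)  # synthetic annual maxima

    log_ffdi = np.log10(annual_max_ffdi)
    skew, loc, scale = stats.pearson3.fit(log_ffdi)

    for T in (2, 10, 50, 100):                     # return periods, years
        q = stats.pearson3.ppf(1 - 1.0 / T, skew, loc=loc, scale=scale)
        print(f"T={T:>3} yr: FFDI ~ {10**q:.0f}")
    ```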

  7. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

    Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of certain end-member model distributions to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian and skewed Gaussian distributions have been used as end-member model distributions, with the fitting performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though smoothing or filtering can be applied to reduce the noise before differentiation, their potential to bias the component analysis has been only vaguely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curves, thereby avoiding the differentiation. The new model function can provide a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method both to model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components is more than two. It is, therefore, recommended to verify the reliability of component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks are analyzed with the new model distribution. Given the same component

  8. Characterizing core-periphery structure of complex network by h-core and fingerprint curve

    NASA Astrophysics Data System (ADS)

    Li, Simon S.; Ye, Adam Y.; Qi, Eric P.; Stanley, H. Eugene; Ye, Fred Y.

    2018-02-01

    It is proposed that the core-periphery structure of complex networks can be simulated by h-cores and fingerprint curves. While the features of the core structure are characterized by the h-core, the features of the periphery structure are visualized by a rose or spiral curve as the fingerprint curve linked to entire-network parameters. It is suggested that a complex network can be approximated by h-core and rose curves as a first-order Fourier-like approach, where the core-periphery structure is characterized by five parameters: network h-index, network radius, degree power, network density and average clustering coefficient. The simulation thus resembles a Fourier-like analysis.

  9. Aerobic Fitness for the Severely and Profoundly Mentally Retarded.

    ERIC Educational Resources Information Center

    Bauer, Dan

    1981-01-01

    The booklet discusses the aerobic fitness capacities of severely/profoundly retarded students and discusses approaches for improving their fitness. An initial section describes a method for determining the student's present fitness level on the basis of computations of height, weight, blood pressure, resting pulse, and Barach Index and Crampton…

  10. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    PubMed

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
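    A minimal rendition of the multiresolution idea, with scikit-learn's plain NMF standing in for the paper's constrained pure-component factorization: solve on a coarsened copy of the data first, then warm-start the full-resolution problem with the upsampled factors.

    ```python
    # Sketch: coarse solve -> upsample factors -> warm-start fine solve.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(3)
    D = np.abs(rng.normal(1.0, 0.3, size=(200, 1000)))  # times x channels

    coarse = D[:, ::4]                                  # every 4th channel
    model_c = NMF(n_components=3, init="nndsvd", max_iter=500)
    W = model_c.fit_transform(coarse)
    H_fine = np.repeat(model_c.components_, 4, axis=1)  # back to fine grid

    model_f = NMF(n_components=3, init="custom", max_iter=500)
    W_fine = model_f.fit_transform(D, W=W, H=H_fine)
    print("fine-level iterations needed:", model_f.n_iter_)
    ```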

  11. Practical implementation of the double linear damage rule and damage curve approach for treating cumulative fatigue damage

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1980-01-01

    Simple procedures are presented for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is provided for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are provided for determining the two phases of life. The procedure involves two steps, each similar to the conventional application of the commonly used linear damage rule. When the sum of cycle ratios based on phase 1 lives reaches unity, phase 1 is presumed complete, and further loadings are summed as cycle ratios on phase 2 lives. When the phase 2 sum reaches unity, failure is presumed to occur. No other physical properties or material constants than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons of both methods are discussed.
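    The two-phase bookkeeping described above is simple enough to state directly in code. The sketch assumes the phase 1 and phase 2 lives for each loading level are already known; the report supplies analytical expressions for them, and the numbers here are illustrative.

    ```python
    # Double linear damage rule: sum cycle ratios on phase 1 lives until
    # that sum reaches unity, then sum on phase 2 lives until failure.
    def double_linear_damage(blocks):
        """blocks: iterable of (cycles_applied, phase1_life, phase2_life)."""
        d1, d2 = 0.0, 0.0
        for cycles, n1, n2 in blocks:
            if d1 < 1.0:
                d1 += cycles / n1
                if d1 > 1.0:                  # overshoot spills into phase 2
                    d2 += (d1 - 1.0) * n1 / n2
                    d1 = 1.0
            else:
                d2 += cycles / n2
            if d2 >= 1.0:
                return "failure"
        return f"phase1 damage={d1:.2f}, phase2 damage={d2:.2f}"

    print(double_linear_damage([(5000, 20000, 8000), (3000, 4000, 6000),
                                (4000, 4000, 6000)]))
    ```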

  12. Practical implementation of the double linear damage rule and damage curve approach for treating cumulative fatigue damage

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1981-01-01

    Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.

  13. Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.

    ERIC Educational Resources Information Center

    Buchanan, Trey

    Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…

  14. On Correlated-noise Analyses Applied to Exoplanet Light Curves

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison

    2017-01-01

    Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound in estimating the uncertainty of parameter estimates and thus recommend refraining from this method altogether. We characterize the behavior of the time-averaging method's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect when retrieving the eclipse depth from Spitzer data sets: these methods covered the true (injected) depth within the 68% credible region in only ~45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions for the parameters of a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
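    The time-averaging diagnostic mentioned above can be sketched compactly: bin the light-curve residuals at increasing bin sizes and compare the rms of the binned means with the white-noise expectation, including the usual finite-bin-count correction. Red noise shows up as an excess over the expected curve.

    ```python
    # rms-versus-bin-size sketch on white-noise residuals.
    import numpy as np

    def rms_vs_binsize(res, max_bin):
        sigma1, n = res.std(), res.size
        out = []
        for m in range(1, max_bin + 1):
            nbins = n // m
            binned = res[:nbins * m].reshape(nbins, m).mean(axis=1)
            expected = sigma1 / np.sqrt(m) * np.sqrt(nbins / (nbins - 1.0))
            out.append((m, np.sqrt(np.mean(binned**2)), expected))
        return out

    res = np.random.default_rng(4).normal(0, 1e-3, 4096)
    m, rms, white = zip(*rms_vs_binsize(res, 64))
    print(f"excess over white noise at bin size 64: {rms[-1]/white[-1]:.2f}")
    ```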

  15. A novel approach to determine primary stability of acetabular press-fit cups.

    PubMed

    Weißmann, Volker; Boss, Christian; Bader, Rainer; Hansmann, Harald

    2018-04-01

    Today, hip cups are used in a wide variety of design variants and in increasing numbers, and their development is progressing steadily. In addition to conventional manufacturing methods for hip cups, additive methods play an increasingly important role as development progresses. The present paper describes a modified cup model developed on the basis of a commercially available press-fit cup (Allofit 54/JJ). The press-fit cup was designed in two variants and manufactured using selective laser melting (SLM). Variant 1 (Ti) was modeled on the Allofit cup using an adapted process technology. Variant 2 (Ti-S) was provided with a porous load-bearing structure on its surface. In addition to the typical (complete) geometry, both variants were also manufactured and tested in a reduced shape where only the press-fit area was formed. To assess the primary stability of the press-fit cups in the artificial bone cavity, pull-out and lever-out tests were carried out. Exact-fit conditions and a two-millimeter press-fit were investigated. The closed-cell PU foam used as an artificial bone cavity was mechanically characterized to exclude any influence on the results of the investigation. The pull-out forces of the Ti variant (complete-526 N, reduced-468 N) and the Ti-S variant (complete-548 N, reduced-526 N), as well as the lever-out moments of the Ti variant (complete-10 Nm, reduced-9.8 Nm) and the Ti-S variant (complete-9 Nm, reduced-7.9 Nm), show no significant differences between complete and reduced cups. The results show that the use of reduced cups in a press-fit design is possible within the scope of development work. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Temporal issues in person-organization fit, person-job fit and turnover: The role of leader-member exchange.

    PubMed

    Boon, Corine; Biron, Michal

    2016-12-01

    Person-environment fit has been found to have significant implications for employee attitudes and behaviors. Most research to date has approached person-environment fit as a static phenomenon, and without examining how different types of person-environment fit may affect each other. In particular, little is known about the conditions under which fit with one aspect of the environment influences another aspect, as well as subsequent behavior. To address this gap we examine the role of leader-member exchange in the relationship between two types of person-environment fit over time: person-organization and person-job fit, and subsequent turnover. Using data from two waves (T1 and T2, respectively) and turnover data collected two years later (T3) from a sample of 160 employees working in an elderly care organization in the Netherlands, we find that person-organization fit at T1 is positively associated with person-job fit at T2, but only for employees in high-quality leader-member exchange relationships. Higher needs-supplies fit at T2 is associated with lower turnover at T3. In contrast, among employees in high-quality leader-member exchange relationships, the demands-abilities dimension of person-job fit at T2 is associated with higher turnover at T3.

  17. Temporal issues in person–organization fit, person–job fit and turnover: The role of leader–member exchange

    PubMed Central

    Boon, Corine; Biron, Michal

    2016-01-01

    Person–environment fit has been found to have significant implications for employee attitudes and behaviors. Most research to date has approached person–environment fit as a static phenomenon, and without examining how different types of person–environment fit may affect each other. In particular, little is known about the conditions under which fit with one aspect of the environment influences another aspect, as well as subsequent behavior. To address this gap we examine the role of leader–member exchange in the relationship between two types of person–environment fit over time: person–organization and person–job fit, and subsequent turnover. Using data from two waves (T1 and T2, respectively) and turnover data collected two years later (T3) from a sample of 160 employees working in an elderly care organization in the Netherlands, we find that person–organization fit at T1 is positively associated with person–job fit at T2, but only for employees in high-quality leader–member exchange relationships. Higher needs–supplies fit at T2 is associated with lower turnover at T3. In contrast, among employees in high-quality leader–member exchange relationships, the demands–abilities dimension of person–job fit at T2 is associated with higher turnover at T3. PMID:27904171

  18. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1995-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock fitting and shock capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.

  19. Effect of driving experience on anticipatory look-ahead fixations in real curve driving.

    PubMed

    Lehtonen, Esko; Lappi, Otto; Koirikivi, Iivo; Summala, Heikki

    2014-09-01

    Anticipatory skills are a potential factor in novice drivers' curve accidents. Behavioural data show that steering and speed regulation are affected by forward planning of the trajectory. When approaching a curve, the relevant visual information for online steering control and for planning is located at different eccentricities, creating a need to disengage the gaze from the guidance of steering for anticipatory look-ahead fixations over curves. With experience, peripheral vision can be increasingly used in the visual guidance of steering. This could leave experienced drivers more gaze time to invest in look-ahead fixations over curves, facilitating trajectory planning. Eighteen drivers (nine novices, nine experienced) drove an instrumented vehicle on a rural road four times in both directions. Their eye movements were analyzed in six curves. The trajectory of the car was modelled and divided into approach, entry and exit phases. Experienced drivers spent less time on road-ahead fixations and more time on look-ahead fixations over the curves. Look-ahead fixations were also more common in the approach than in the entry phase of the curve. The results suggest that with experience, drivers allocate a greater part of their visual attention to trajectory planning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Beyond the Factor of Safety: Developing Fragility Curves to Characterize System Reliability

    DTIC Science & Technology

    2010-07-01

    Fragility curves are increasingly common components of flood risk assessments. This report introduces the concept of the fragility curve and shows how fragility curves are ... identified in the literature on structures and risk assessment to identify what methods have been used to develop fragility curves ... and the advantages and disadvantages of the various approaches are considered.

  1. On representing the prognostic value of continuous gene expression biomarkers with the restricted mean survival curve.

    PubMed

    Eng, Kevin H; Schiller, Emily; Morrell, Kayla

    2015-11-03

    Researchers developing biomarkers for cancer prognosis from quantitative gene expression data are often faced with an odd methodological discrepancy: while Cox's proportional hazards model, the appropriate and popular technique, produces a continuous and relative risk score, it is hard to cast the estimate in clear clinical terms like median months of survival and percent of patients affected. To produce a familiar Kaplan-Meier plot, researchers commonly make the decision to dichotomize a continuous (often unimodal and symmetric) score. It is well known in the statistical literature that this procedure induces significant bias. We illustrate the liabilities of common techniques for categorizing a risk score and discuss alternative approaches. We promote the use of the restricted mean survival (RMS) and the corresponding RMS curve that may be thought of as an analog to the best fit line from simple linear regression. Continuous biomarker workflows should be modified to include the more rigorous statistical techniques and descriptive plots described in this article. All statistics discussed can be computed via standard functions in the Survival package of the R statistical programming language. Example R language code for the RMS curve is presented in the appendix.
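    The quantity being advocated, the restricted mean survival up to a horizon t*, is simply the area under the survival curve up to t*. A sketch using the Python lifelines package (an assumption of this sketch; the article points to R's Survival package) with synthetic data:

    ```python
    # Restricted mean survival from a Kaplan-Meier fit, horizon 60 months.
    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.utils import restricted_mean_survival_time

    rng = np.random.default_rng(5)
    durations = rng.exponential(36.0, 120)   # follow-up, months (synthetic)
    events = rng.random(120) < 0.7           # roughly 30% censoring

    km = KaplanMeierFitter().fit(durations, event_observed=events)
    rms_60 = restricted_mean_survival_time(km, t=60)
    print(f"restricted mean survival up to 60 months: {rms_60:.1f} months")
    ```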

  2. SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Yang, K

    2016-06-15

    Purpose: Computed tomography (CT) exam rooms are shielded more quickly and accurately compared to manual calculations using RadShield, a semi-automated diagnostic shielding software package. Last year, we presented RadShield's approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor-provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield's isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield's semi-automated shielding recommendations were compared against a board-certified medical physicist's design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist's manual calculation and RadShield's semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist's selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering
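    The abstract gives the fitted form explicitly, K(r) = ar^b, so the radial-fit step reduces to a small power-law fit. The extracted kerma points below are hypothetical; a log-log linear fit supplies robust starting values.

    ```python
    # Power-law fit K(r) = a * r**b to kerma values along a radial line.
    import numpy as np
    from scipy.optimize import curve_fit

    r = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])    # metres, hypothetical
    K = np.array([40.0, 11.0, 5.2, 3.0, 1.4, 0.8])  # kerma, hypothetical

    b0, log_a0 = np.polyfit(np.log(r), np.log(K), 1)  # starting values
    (a, b), _ = curve_fit(lambda r, a, b: a * r**b, r, K,
                          p0=[np.exp(log_a0), b0])
    print(f"K(r) ~ {a:.1f} * r^{b:.2f}")  # b near -2 mimics inverse-square
    ```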

  3. SU-F-P-53: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Wu, D

    Purpose: Computed tomography (CT) exam rooms are shielded more quickly and accurately compared to manual calculations using RadShield, a semi-automated diagnostic shielding software package. Last year, we presented RadShield's approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor-provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield's isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield's semi-automated shielding recommendations were compared against a board-certified medical physicist's design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist's manual calculation and RadShield's semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist's selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering

  4. Is High Intensity Functional Training (HIFT)/CrossFit® Safe for Military Fitness Training?

    PubMed Central

    Poston, Walker S.C.; Haddock, Christopher K.; Heinrich, Katie M.; Jahnke, Sara A.; Jitnarin, Nattinee; Batchelor, David B.

    2016-01-01

    High-intensity functional training (HIFT) is a promising fitness paradigm that has gained popularity among military populations. Rather than biasing workouts toward maximizing fitness domains such as aerobic endurance, HIFT workouts are designed to promote general physical preparedness. HIFT programs have proliferated due to concerns about the relevance of traditional physical training (PT), which historically focused on aerobic conditioning via running. Other concerns about traditional PT include: 1) the relevance of service fitness tests given current combat demands; 2) the perception that military PT is geared toward passing service fitness tests; and 3) that training for combat requires more than just aerobic endurance. Despite its popularity in the military, concerns have been raised about HIFT's injury potential, leading to some approaches being labeled as "extreme conditioning programs" by several military and civilian experts. Given HIFT programs' popularity in the military and concerns about injury, a review of data on HIFT injury potential is needed to inform military policy. The purpose of this review is to: 1) provide an overview of scientific methods used to appropriately compare injury rates among fitness activities; and 2) evaluate scientific data regarding HIFT injury risk compared to traditional military PT and other accepted fitness activities. PMID:27391615

  5. The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling

    NASA Astrophysics Data System (ADS)

    van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.

    2017-12-01

    The mass distribution of stellar-mass black holes can provide important clues to supernova modelling, but observationally it is still ill constrained. Therefore, it is of importance to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.

  6. Tracing Personalized Health Curves during Infections

    PubMed Central

    Schneider, David S.

    2011-01-01

    It is difficult to describe host–microbe interactions in a manner that deals well with both pathogens and mutualists. Perhaps a way can be found using an ecological definition of tolerance, where tolerance is defined as the dose response curve of health versus parasite load. To plot tolerance, individual infections are summarized by reporting the maximum parasite load and the minimum health for a population of infected individuals and the slope of the resulting curve defines the tolerance of the population. We can borrow this method of plotting health versus microbe load in a population and make it apply to individuals; instead of plotting just one point that summarizes an infection in an individual, we can plot the values at many time points over the course of an infection for one individual. This produces curves that trace the course of an infection through phase space rather than over a more typical timeline. These curves highlight relationships like recovery and point out bifurcations that are difficult to visualize with standard plotting techniques. Only nine archetypical curves are needed to describe most pathogenic and mutualistic host–microbe interactions. The technique holds promise as both a qualitative and quantitative approach to dissect host–microbe interactions of all kinds. PMID:21957398

  7. Light-curve modelling constraints on the obliquities and aspect angles of the young Fermi pulsars

    NASA Astrophysics Data System (ADS)

    Pierbattista, M.; Harding, A. K.; Grenier, I. A.; Johnson, T. J.; Caraveo, P. A.; Kerr, M.; Gonthier, P. L.

    2015-03-01

    In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed γ-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity α and of the line of sight angle ζ, yielding estimates of the radiation beaming factor and radiated luminosity. Using different γ-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit γ-ray light curves for 76 young or middle-aged pulsars and we jointly fit their γ-ray plus radio light curves when possible. We find that a joint radio plus γ-ray fit strategy is important to obtain (α,ζ) estimates that can explain simultaneously detectable radio and γ-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (α,ζ) solutions. The most pronounced changes are observed for the Outer Gap and One Pole Caustic models, for which the γ-ray-only fit leads to underestimated α or ζ when the solution is found to the left or to the right of the main α-ζ plane diagonal, respectively. The intermediate-to-high altitude magnetosphere models, Slot Gap, Outer Gap, and One Pole Caustic, are favoured in explaining the observations. We find no apparent evolution of α on a time scale of 10^6 years. For all emission geometries our derived γ-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for the Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars. The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from radio-quiet to radio-loud solutions. For all

  8. Light-curve modelling constraints on the obliquities and aspect angles of the young Fermi pulsars

    DOE PAGES

    Pierbattista, M.; Harding, A. K.; Grenier, I. A.; ...

    2015-02-10

    In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed γ-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity α and of the line of sight angle ζ, yielding estimates of the radiation beaming factor and radiated luminosity. Using different γ-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit γ-ray light curves for 76 young or middle-aged pulsars and we jointly fit their γ-ray plus radio light curves when possible. We find that a joint radio plus γ-ray fit strategy is important to obtain (α,ζ) estimates that can explain simultaneously detectable radio and γ-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (α,ζ) solutions. The most pronounced changes are observed for the Outer Gap and One Pole Caustic models, for which the γ-ray-only fit leads to underestimated α or ζ when the solution is found to the left or to the right of the main α-ζ plane diagonal, respectively. The intermediate-to-high altitude magnetosphere models, Slot Gap, Outer Gap, and One Pole Caustic, are favoured in explaining the observations. We find no apparent evolution of α on a time scale of 10^6 years. For all emission geometries our derived γ-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for the Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars. The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from radio-quiet to radio-loud solutions. For

  9. Light-Curve Modelling Constraints on the Obliquities and Aspect Angles of the Young Fermi Pulsars

    NASA Technical Reports Server (NTRS)

    Pierbattista, M.; Harding, A. K.; Grenier, I. A.; Johnson, T. J.; Caraveo, P. A.; Kerr, M.; Gonthier, P. L.

    2015-01-01

    In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed gamma-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity alpha and of the line of sight angle zeta, yielding estimates of the radiation beaming factor and radiated luminosity. Using different gamma-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit gamma-ray light curves for 76 young or middle-aged pulsars and we jointly fit their gamma-ray plus radio light curves when possible. We find that a joint radio plus gamma-ray fit strategy is important to obtain (alpha, zeta) estimates that can explain simultaneously detectable radio and gamma-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (alpha, zeta) solutions. The most pronounced changes are observed for the Outer Gap and One Pole Caustic models, for which the gamma-ray-only fit leads to underestimated alpha or zeta when the solution is found to the left or to the right of the main alpha-zeta plane diagonal, respectively. The intermediate-to-high altitude magnetosphere models, Slot Gap, Outer Gap, and One Pole Caustic, are favored in explaining the observations. We find no apparent evolution of alpha on a time scale of 10^6 years. For all emission geometries our derived gamma-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for the Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars. The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from

  10. Universal approach to analysis of cavitation and liquid-impingement erosion data

    NASA Technical Reports Server (NTRS)

    Rao, P. V.; Young, S. G.

    1982-01-01

    Cavitation erosion experimental data was analyzed by using normalization and curve-fitting techniques. Data were taken from experiments on several materials tested in both a rotating disk device and a magnetostriction apparatus. Cumulative average volume loss rate and time data were normalized relative to the peak erosion rate and the time to peak erosion rate, respectively. From this process a universal approach was derived that can include data on specific materials from different test devices for liquid impingement and cavitation erosion studies.
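    The normalization itself is a two-line operation: divide the erosion-rate record by its peak value and the time axis by the time at which that peak occurs. Sketch with invented numbers:

    ```python
    # Normalize an erosion-rate record by peak rate and time-to-peak so
    # curves from different devices/materials can be overlaid.
    import numpy as np

    def normalize_erosion(time, rate):
        i = int(np.argmax(rate))
        return time / time[i], rate / rate[i]

    t = np.array([0.5, 1, 2, 4, 8, 16], dtype=float)  # hours, hypothetical
    rate = np.array([0.2, 0.9, 1.6, 1.2, 0.8, 0.5])   # mm^3/h, hypothetical
    t_norm, r_norm = normalize_erosion(t, rate)
    print(np.round(t_norm, 2), np.round(r_norm, 2))
    ```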

  11. Industry-Cost-Curve Approach for Modeling the Environmental Impact of Introducing New Technologies in Life Cycle Assessment.

    PubMed

    Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André

    2015-07-07

    The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.
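    The market logic can be made concrete with a toy merit-order dispatch: sort producers by unit cost, fill demand from the cheap end, and observe which incumbent supply a cheaper new technology pushes off the margin. All names and numbers are invented.

    ```python
    # Industry-cost-curve sketch: the post-entry production mix determines
    # the product's new environmental impact profile.
    def dispatch(producers, demand):
        """producers: list of (name, unit_cost, capacity); returns the mix."""
        mix, remaining = [], demand
        for name, cost, cap in sorted(producers, key=lambda p: p[1]):
            if remaining <= 0:
                break
            used = min(cap, remaining)
            mix.append((name, used, cost))
            remaining -= used
        return mix

    incumbents = [("plant_A", 50, 40), ("plant_B", 65, 30), ("plant_C", 80, 30)]
    print("before:", dispatch(incumbents, demand=90))
    print("after: ", dispatch(incumbents + [("new_tech", 55, 25)], demand=90))
    ```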

  12. VRF ("Visual RobFit") — nuclear spectral analysis with non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Lasche, George; Coldwell, Robert; Metzger, Robert

    2017-09-01

    A new application (known as "VRF", or "Visual RobFit") for analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak-search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, it adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks that are masked by larger, overlapping peaks that would not otherwise be possible. The application and method are briefly described and two examples are presented.

  13. Goodness-of-Fit Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  14. Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance.

    ERIC Educational Resources Information Center

    Cheung, Gordon W.; Rensvold, Roger B.

    2002-01-01

    Examined 20 goodness-of-fit indexes based on the minimum fit function using a simulation under the 2-group situation. Results support the use of the delta comparative fit index, delta Gamma hat, and delta McDonald's Noncentrality Index to evaluate measurement invariance. These three approaches are independent of model complexity and sample size.…

  15. Cardiorespiratory fitness and ideal cardiovascular health in European adolescents.

    PubMed

    Ruiz, Jonatan R; Huybrechts, Inge; Cuenca-García, Magdalena; Artero, Enrique G; Labayen, Idoia; Meirhaeghe, Aline; Vicente-Rodriguez, German; Polito, Angela; Manios, Yannis; González-Gross, Marcela; Marcos, Ascensión; Widhalm, Kurt; Molnar, Denes; Kafatos, Anthony; Sjöström, Michael; Moreno, Luis A; Castillo, Manuel J; Ortega, Francisco B

    2015-05-15

    We studied in European adolescents (i) the association between cardiorespiratory fitness and ideal cardiovascular health as defined by the American Heart Association and (ii) whether there is a cardiorespiratory fitness threshold associated with a more favourable cardiovascular health profile. Participants included 510 (n=259 girls) adolescents from 9 European countries. The 20 m shuttle run test was used to estimate cardiorespiratory fitness. Ideal cardiovascular health was defined as meeting ideal levels of the following components: four behaviours (smoking, body mass index, physical activity and diet) and three factors (total cholesterol, blood pressure and glucose). Higher levels of cardiorespiratory fitness were associated with a higher number of ideal cardiovascular health components in both boys and girls (both p for trend ≤0.001). Levels of cardiorespiratory fitness were significantly higher in adolescents meeting at least four ideal components (13% higher in boys, p<0.001; 6% higher in girls, p=0.008). Receiver operating characteristic curve analyses showed a significant discriminating accuracy of cardiorespiratory fitness in identifying the presence of at least four ideal cardiovascular health components (43.8 mL/kg/min in boys and 34.6 mL/kg/min in girls, both p<0.001). The results suggest a hypothetical cardiorespiratory fitness level associated with a healthier cardiovascular profile in adolescents. The fitness standards could be used in schools as part of surveillance and/or screening systems to identify youth with poor health behaviours who might benefit from intervention programmes. Published by the BMJ Publishing Group Limited.

  16. What Current Research Tells Us About Physical Fitness for Children.

    ERIC Educational Resources Information Center

    Cundiff, David E.

    The author distinguishes between the terms "physical fitness" and "motor performance," summarizes the health and physical status of adults, surveys the physical fitness status of children, and proposes a lifestyle approach to the development and lifetime maintenance of health and physical fitness. The distinctions between…

  17. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    PubMed

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R^2, while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
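    A simplified rendition of the translation idea (not the published VBA implementation): shift each new recession segment along the time axis until its vertex lies on the line interpolated through the master curve assembled so far, then merge the segment in.

    ```python
    # Master recession curve by horizontal translation of segments.
    import numpy as np

    def build_mrc(segments):
        """segments: list of (t, h) pairs; h strictly decreasing in each,
        and each new vertex assumed to fall within the master's range."""
        t_m, h_m = (np.asarray(a, float) for a in segments[0])
        for t, h in segments[1:]:
            t, h = np.asarray(t, float), np.asarray(h, float)
            # time at which the master passes through the new vertex h[0]
            # (np.interp needs ascending x, hence the flips)
            t_on_master = np.interp(h[0], h_m[::-1], t_m[::-1])
            t_m = np.concatenate([t_m, t + (t_on_master - t[0])])
            h_m = np.concatenate([h_m, h])
            order = np.argsort(t_m)       # keep the master time-ordered
            t_m, h_m = t_m[order], h_m[order]
        return t_m, h_m

    seg1 = ([0, 1, 2, 3], [10.0, 8.0, 6.5, 5.5])
    seg2 = ([0, 1, 2], [7.0, 5.8, 5.0])
    t_mrc, h_mrc = build_mrc([seg1, seg2])
    print(np.round(t_mrc, 2), h_mrc)
    ```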

  18. Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach

    PubMed Central

    Hawkley, Louise C.; Capitanio, John P.

    2015-01-01

    Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationship—as indexed by perceived social isolation (i.e. loneliness)—with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic pituitary–adrenocortical activity, a pro-inflammatory gene expression profile and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and its physiological and psychological underpinnings, in non-human species may be a valuable direction to better understand the persistence of a ‘lonely’ phenotype in social species, and its consequences for health and fitness. PMID:25870400

  19. Anatomical curve identification

    PubMed Central

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

    Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  20. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
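
    The modelling idea, a log-log rating curve whose residuals follow an ARMA(1,2) process, can be sketched with statsmodels as below; the synthetic series and coefficients are illustrative, not DEP's operational model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
log_q = np.cumsum(rng.normal(0.0, 0.1, n))         # synthetic log streamflow
arma_noise = sm.tsa.arma_generate_sample(ar=[1, -0.7], ma=[1, 0.3, 0.2],
                                         nsample=n, scale=0.2)
log_turb = 0.5 + 1.2 * log_q + arma_noise          # synthetic log turbidity

# linear rating curve with ARMA(1,2) errors, as a regression-with-ARMA model
X = sm.add_constant(log_q)
fit = sm.tsa.SARIMAX(log_turb, exog=X, order=(1, 0, 2)).fit(disp=False)

# 30-day-ahead forecast; real use would supply forecast flows as future exog
future_X = sm.add_constant(np.full(30, log_q[-1]), has_constant='add')
pred = fit.get_forecast(steps=30, exog=future_X).predicted_mean
```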

  1. Comparison of two correlated ROC curves at a given specificity or sensitivity level

    PubMed Central

    Bantis, Leonidas E.; Feng, Ziding

    2017-01-01

    The receiver operating characteristic (ROC) curve is the most popular statistical tool for evaluating the discriminatory capability of a given continuous biomarker. The need to compare two correlated ROC curves arises when individuals are measured with two biomarkers, which induces paired and thus correlated measurements. Many researchers have focused on comparing two correlated ROC curves in terms of the area under the curve (AUC), which summarizes the overall performance of the marker. However, particular values of specificity may be of interest. We focus on comparing two correlated ROC curves at a given specificity level. We propose parametric approaches, transformations to normality, and nonparametric kernel-based approaches. Our methods can be straightforwardly extended for inference in terms of ROC⁻¹(t). This is of particular interest for comparing the accuracy of two correlated biomarkers at a given sensitivity level. Extensions also involve inference for the AUC and accommodating covariates. We evaluate the robustness of our techniques through simulations, compare them with other known approaches, and present a real-data application involving prostate cancer screening. PMID:27324068
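
    For orientation, a nonparametric paired-bootstrap comparison at a fixed specificity level might look like the following sketch; it illustrates the problem setting rather than the authors' parametric, normality-transformation, or kernel estimators.

```python
import numpy as np

def sens_at_spec(controls, cases, spec=0.9):
    thr = np.quantile(controls, spec)     # threshold fixing the specificity
    return np.mean(cases > thr)

def compare_at_spec(ctrl1, ctrl2, case1, case2, spec=0.9, B=2000, seed=1):
    """95% bootstrap CI for the sensitivity difference of two paired markers."""
    rng = np.random.default_rng(seed)
    n0, n1 = len(ctrl1), len(case1)
    diffs = np.empty(B)
    for b in range(B):
        i0 = rng.integers(0, n0, n0)      # resample the same subjects for
        i1 = rng.integers(0, n1, n1)      # both markers, keeping correlation
        diffs[b] = (sens_at_spec(ctrl1[i0], case1[i1], spec)
                    - sens_at_spec(ctrl2[i0], case2[i1], spec))
    return np.quantile(diffs, [0.025, 0.975])

# toy usage: two correlated markers measured on the same subjects
rng = np.random.default_rng(0)
z_ctrl, z_case = rng.normal(0, 1, (2, 200))
ctrl1, ctrl2 = z_ctrl, 0.8 * z_ctrl + 0.6 * rng.normal(0, 1, 200)
case1, case2 = z_case + 1.2, 0.8 * (z_case + 1.0) + 0.6 * rng.normal(0, 1, 200)
print(compare_at_spec(ctrl1, ctrl2, case1, case2))
```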

  2. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decisions.

  3. Growthcurver: an R package for obtaining interpretable metrics from microbial growth curves.

    PubMed

    Sprouffske, Kathleen; Wagner, Andreas

    2016-04-19

    Plate readers can measure the growth curves of many microbial strains in a high-throughput fashion. The hundreds of absorbance readings collected simultaneously for hundreds of samples create technical hurdles for data analysis. Growthcurver summarizes the growth characteristics of microbial growth curve experiments conducted in a plate reader. The data are fitted to a standard form of the logistic equation, and the parameters have clear interpretations on population-level characteristics, like doubling time, carrying capacity, and growth rate. Growthcurver is an easy-to-use R package available for installation from the Comprehensive R Archive Network (CRAN). The source code is available under the GNU General Public License and can be obtained from Github (Sprouffske K, Growthcurver sourcecode, 2016).
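
    Growthcurver itself is an R package; a minimal Python analogue of the fit it performs, using the standard logistic form, might look like this (data and parameter values are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, n0, r):
    # k: carrying capacity, n0: initial size, r: intrinsic growth rate
    return k / (1 + ((k - n0) / n0) * np.exp(-r * t))

rng = np.random.default_rng(0)
t = np.arange(0, 24, 0.5)                            # hours
od = logistic(t, 1.2, 0.02, 0.45) + rng.normal(0, 0.01, t.size)

(k, n0, r), _ = curve_fit(logistic, t, od, p0=[od.max(), 0.05, 0.1])
doubling_time = np.log(2) / r                        # early-phase doubling time
```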

  4. Applying Biometric Growth Curve Models to Developmental Synchronies in Cognitive Development: The Louisville Twin Study.

    PubMed

    Finkel, Deborah; Davis, Deborah Winders; Turkheimer, Eric; Dickens, William T

    2015-11-01

    Biometric latent growth curve models were applied to data from the LTS in order to replicate and extend Wilson's (Child Dev 54:298-316, 1983) findings. Assessments of cognitive development were available from 8 measurement occasions covering the period 4-15 years for 1032 individuals. Latent growth curve models were fit to percent correct for 7 subscales: information, similarities, arithmetic, vocabulary, comprehension, picture completion, and block design. Models were fit separately to WPPSI (ages 4-6 years) and WISC-R (ages 7-15). Results indicated the expected increases in heritability in younger childhood, and plateaus in heritability as children reached age 10 years. Heritability of change, per se (slope estimates), varied dramatically across domains. Significant genetic influences on slope parameters that were independent of initial levels of performance were found for only information and picture completion subscales. Thus evidence for both genetic continuity and genetic innovation in the development of cognitive abilities in childhood were found.

  5. A Unified Conformational Selection and Induced Fit Approach to Protein-Peptide Docking

    PubMed Central

    Trellet, Mikael; Melquiond, Adrien S. J.; Bonvin, Alexandre M. J. J.

    2013-01-01

    Protein-peptide interactions are vital for the cell. They mediate, inhibit or serve as structural components in nearly 40% of all macromolecular interactions, and are often associated with diseases, making them interesting leads for protein drug design. In recent years, large-scale technologies have enabled exhaustive studies on the peptide recognition preferences for a number of peptide-binding domain families. Yet, the paucity of data regarding their molecular binding mechanisms together with their inherent flexibility makes the structural prediction of protein-peptide interactions very challenging. This leaves flexible docking as one of the few amenable computational techniques to model these complexes. We present here an ensemble, flexible protein-peptide docking protocol that combines conformational selection and induced fit mechanisms. Starting from an ensemble of three peptide conformations (extended, α-helix, polyproline-II), flexible docking with HADDOCK generates 79.4% of high quality models for bound/unbound and 69.4% for unbound/unbound docking when tested against the largest protein-peptide complexes benchmark dataset available to date. Conformational selection at the rigid-body docking stage successfully recovers the most relevant conformation for a given protein-peptide complex and the subsequent flexible refinement further improves the interface by up to 4.5 Å interface RMSD. Cluster-based scoring of the models results in a selection of near-native solutions in the top three for ∼75% of the successfully predicted cases. This unified conformational selection and induced fit approach to protein-peptide docking should open the route to the modeling of challenging systems such as disorder-order transitions taking place upon binding, significantly expanding the applicability limit of biomolecular interaction modeling by docking. PMID:23516555

  6. A unified conformational selection and induced fit approach to protein-peptide docking.

    PubMed

    Trellet, Mikael; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2013-01-01

    Protein-peptide interactions are vital for the cell. They mediate, inhibit or serve as structural components in nearly 40% of all macromolecular interactions, and are often associated with diseases, making them interesting leads for protein drug design. In recent years, large-scale technologies have enabled exhaustive studies on the peptide recognition preferences for a number of peptide-binding domain families. Yet, the paucity of data regarding their molecular binding mechanisms together with their inherent flexibility makes the structural prediction of protein-peptide interactions very challenging. This leaves flexible docking as one of the few amenable computational techniques to model these complexes. We present here an ensemble, flexible protein-peptide docking protocol that combines conformational selection and induced fit mechanisms. Starting from an ensemble of three peptide conformations (extended, α-helix, polyproline-II), flexible docking with HADDOCK generates 79.4% of high quality models for bound/unbound and 69.4% for unbound/unbound docking when tested against the largest protein-peptide complexes benchmark dataset available to date. Conformational selection at the rigid-body docking stage successfully recovers the most relevant conformation for a given protein-peptide complex and the subsequent flexible refinement further improves the interface by up to 4.5 Å interface RMSD. Cluster-based scoring of the models results in a selection of near-native solutions in the top three for ∼75% of the successfully predicted cases. This unified conformational selection and induced fit approach to protein-peptide docking should open the route to the modeling of challenging systems such as disorder-order transitions taking place upon binding, significantly expanding the applicability limit of biomolecular interaction modeling by docking.

  7. Phase Curve Analysis of Super-Earth 55 Cancri e

    NASA Astrophysics Data System (ADS)

    Angelo, Isabel; Hu, Renyu

    2018-01-01

    One of the primary questions when characterizing Earth-sized and super-Earth-sized exoplanets is whether they have a substantial atmosphere like Earth and Venus, or a bare-rock surface that may come with a tenuous atmosphere like Mercury. Phase curves of the planets in thermal emission provide clues to this question, because a substantial atmosphere would transport heat more efficiently than a bare-rock surface. Analyzing phase curve photometric data around secondary eclipse has previously been used to study energy transport in the atmospheres of hot Jupiters. Here we use Spitzer phase-curve time-series photometry to study the thermal emission properties of the super-Earth exoplanet 55 Cancri e. We utilize a previously developed semi-analytical framework to fit a physical model to infrared photometric data of host star 55 Cancri from the Spitzer telescope IRAC 2 band at 4.5 μm. The model uses various parameters of planetary properties including Bond albedo, heat redistribution efficiency (i.e., the ratio between the radiative timescale and advective timescale of the photosphere), and atmospheric greenhouse factor. The phase curve of 55 Cancri e is dominated by thermal emission with an eastward-shifted hot spot located on the planet surface. We determine the heat redistribution efficiency to be ≈1.47, which implies that the advective timescale is on the same order as the radiative timescale. This requirement from the phase curve cannot be met by the bare-rock planet scenario, because heat transport by currents of molten lava would be too slow. The phase curve thus favors the scenario with a substantial atmosphere. Our constraints on the heat redistribution efficiency translate to a photosphere pressure of ~1.4 bar. The Spitzer IRAC 2 band is thus a window into the deep atmosphere of the planet 55 Cancri e.

  8. Ring polymer dynamics in curved spaces

    NASA Astrophysics Data System (ADS)

    Wolf, S.; Curotto, E.

    2012-07-01

    We formulate an extension of the ring polymer dynamics approach to curved spaces using stereographic projection coordinates. We test the theory by simulating the particle in a ring, T^1, mapped by a stereographic projection using three potentials. Two of these are quadratic, and one is a nonconfining sinusoidal model. We propose a new class of algorithms for the integration of the ring polymer Hamilton equations in curved spaces. These are designed to improve the energy conservation of symplectic integrators based on the split operator approach. For manifolds, the position-position autocorrelation function can be formulated in numerous ways. We find that the position-position autocorrelation function computed from configurations in the Euclidean space R^2 that contains T^1 as a submanifold has the best statistical properties. The agreement with exact results obtained with vector space methods is excellent for all three potentials, for all values of time in the interval simulated, and for a relatively broad range of temperatures.

  9. Approach-Avoidance Motivational Profiles in Early Adolescents to the PACER Fitness Test

    ERIC Educational Resources Information Center

    Garn, Alex; Sun, Haichun

    2009-01-01

    The use of fitness testing is a practical means for measuring components of health-related fitness, but there is currently substantial debate over the motivating effects of these tests. Therefore, the purpose of this study was to examine the cross-fertilization of achievement and friendship goal profiles for early adolescents involved in the…

  10. In Search of Golden Rules: Comment on Hypothesis-Testing Approaches to Setting Cutoff Values for Fit Indexes and Dangers in Overgeneralizing Hu and Bentler's (1999) Findings

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Hau, Kit-Tai; Wen, Zhonglin

    2004-01-01

    Goodness-of-fit (GOF) indexes provide "rules of thumb": recommended cutoff values for assessing fit in structural equation modeling. Hu and Bentler (1999) proposed a more rigorous approach to evaluating decision rules based on GOF indexes and, on this basis, proposed new and more stringent cutoff values for many indexes. This article discusses…

  11. Growth curves for ostriches (Struthio camelus) in a Brazilian population.

    PubMed

    Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P

    2013-01-01

    The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R² and the Akaike information criterion. The R² calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels and for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R² of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
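
    The model comparison described above can be sketched as follows, with synthetic body weights standing in for the ostrich records; the AIC expression assumes Gaussian errors in a least-squares fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    return a * np.exp(-b * np.exp(-k * t))

def logistic(t, a, b, k):
    return a / (1 + b * np.exp(-k * t))

def aic_ls(resid, n_par):
    n = resid.size                       # least-squares AIC (Gaussian errors)
    return n * np.log(np.sum(resid**2) / n) + 2 * n_par

rng = np.random.default_rng(1)
t = np.linspace(0, 383, 50)              # days of age
w = gompertz(t, 110, 4.0, 0.012) + rng.normal(0, 3, t.size)  # body weight, kg

for name, f, p0 in [("Gompertz", gompertz, [100, 4, 0.01]),
                    ("logistic", logistic, [100, 50, 0.02])]:
    p, _ = curve_fit(f, t, w, p0=p0, maxfev=10000)
    print(name, aic_ls(w - f(t, *p), 3))

poly = np.polyfit(t, w, 3)               # third-order polynomial, 4 parameters
print("cubic", aic_ls(w - np.polyval(poly, t), 4))
```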

  12. Inferring genetic interactions from comparative fitness data.

    PubMed

    Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko

    2017-12-20

    Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM-family of β-lactamase associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations.

  13. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    PubMed Central

    2010-01-01

    Background The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/. PMID:20482791

  14. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.

    PubMed

    Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin

    2010-05-18

    The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
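
    Stripped of vFitness's measurement-error models and dilution-factor handling, the core regression idea is that the log ratio of variant frequencies is approximately linear in time, with slope equal to the relative fitness; a sketch with invented frequencies:

```python
import numpy as np

days = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
freq_a = np.array([0.50, 0.38, 0.27, 0.18, 0.12])   # variant A frequency
freq_b = 1.0 - freq_a                               # two-variant competition

log_ratio = np.log(freq_a / freq_b)
slope, intercept = np.polyfit(days, log_ratio, 1)   # all points, not two-point
print("relative fitness (per day):", slope)         # negative: A is less fit
```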

  15. Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach.

    PubMed

    Hawkley, Louise C; Capitanio, John P

    2015-05-26

    Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationship--as indexed by perceived social isolation (i.e. loneliness)--with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic pituitary-adrenocortical activity, a pro-inflammatory gene expression profile and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and its physiological and psychological underpinnings, in non-human species may be a valuable direction to better understand the persistence of a 'lonely' phenotype in social species, and its consequences for health and fitness. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. A study of Lusitano mare lactation curve with Wood's model.

    PubMed

    Santos, A S; Silvestre, A M

    2008-02-01

    Milk yield and composition data from 7 nursing Lusitano mares (450 to 580 kg of body weight and 2 to 9 parities) were used in this study (5 measurements per mare for milk yield and 8 measurements for composition). Wood's lactation model was used to describe milk fat, protein, and lactose lactation curves. Mean values for the concentration of major milk components across the lactation period (180 d) were 5.9 g/kg of fat, 18.4 g/kg of protein, and 60.8 g/kg of lactose. Milk fat and protein (g/kg) decreased and lactose (g/kg) increased during the 180 d of lactation. Curves for milk protein and lactose yields (g) were similar in shape to the milk yield curve; protein yield peaked at 307 g on d 10 and lactose peaked at 816 g on d 45. The fat (g) curve was different in shape compared with milk, protein, and lactose yields. Total production of the major milk constituents throughout the 180 d of lactation was estimated to be 12.0, 36.1, and 124 kg for fat, protein, and lactose, respectively. The algebraic model fitted by a nonlinear regression procedure to the data resulted in reasonable prediction curves for milk yield (adjusted R² of 0.89) and the major constituents (adjusted R² ranged from 0.89 to 0.95). The lactation curves of major milk constituents in Lusitano mares were similar, both in shape and values, to those found in other horse breeds. The established curves facilitate the estimation of milk yield and variation of milk constituents at different stages of lactation for both nursing and dairy mares, providing important information relative to weaning time and foal supplementation.
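
    A minimal sketch of fitting Wood's incomplete-gamma lactation function y(t) = a·t^b·exp(-c·t) by nonlinear regression; the daily yields below are placeholders, not the Lusitano measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

day = np.array([5, 15, 30, 60, 90, 120, 150, 180], dtype=float)
milk = np.array([14.0, 16.5, 17.2, 16.0, 14.1, 12.3, 10.8, 9.5])  # kg/day

(a, b, c), _ = curve_fit(wood, day, milk, p0=[10.0, 0.2, 0.01])
t_peak = b / c                  # for Wood's model, yield peaks at t = b/c
y_peak = wood(t_peak, a, b, c)
```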

  17. Cardiovascular health and fitness after stroke.

    PubMed

    Ivey, F M; Macko, R F; Ryan, A S; Hafer-Macko, C E

    2005-01-01

    Stroke patients have profound cardiovascular and muscular deconditioning, with metabolic fitness levels that are about half those found in age-matched sedentary controls. Physical deconditioning, along with the elevated energy demands of hemiparetic gait, defines a detrimental combination, termed diminished physiological fitness reserve, that can greatly limit performance of activities of daily living. The physiological features that underlie worsening metabolic fitness in the chronic phase of stroke include gross muscular atrophy, altered muscle molecular phenotype, increased intramuscular fat area, elevated tissue inflammatory markers, and diminished peripheral blood flow dynamics. Epidemiological evidence further suggests that the reduced cardiovascular fitness and secondary biological changes in muscle may propagate components of the metabolic syndrome, conferring added morbidity and mortality risk. This article reviews some of the consequences of poor fitness in chronic stroke and the potential biological underpinnings that support a rationale for more aggressive approaches to exercise therapy in this population.

  18. Interior Temperature Measurement Using Curved Mercury Capillary Sensor Based on X-ray Radiography

    NASA Astrophysics Data System (ADS)

    Chen, Shuyue; Jiang, Xing; Lu, Guirong

    2017-07-01

    A method was presented for measuring the interior temperature of objects using a curved mercury capillary sensor based on X-ray radiography. The sensor is composed of a mercury bubble, a capillary and a fixed support. X-ray digital radiography was employed to capture images of the mercury column in the capillary, and a temperature control system was designed for the sensor calibration. We adopted livewire algorithms and mathematical morphology to calculate the mercury length. A measurement model relating mercury length to temperature was established, and the measurement uncertainty associated with the mercury column length and the linear model fitted by the least-squares method were analyzed. To verify the system, interior temperature measurements of a fully closed autoclave were taken from 29.53°C to 67.34°C. The experimental results show that the response of the system is approximately linear, with a maximum uncertainty of 0.79°C. This technique provides a new approach to measuring the interior temperature of objects.
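
    The calibration step amounts to a straight-line least-squares fit with an uncertainty estimate from the parameter covariance; a sketch with invented column lengths and temperatures:

```python
import numpy as np

length_mm = np.array([12.1, 18.4, 25.0, 31.2, 37.9, 44.3])  # from radiographs
temp_c = np.array([30.0, 37.5, 45.1, 52.4, 60.2, 67.3])

coef, cov = np.polyfit(length_mm, temp_c, 1, cov=True)      # linear model
predict = np.poly1d(coef)

x = 28.0                        # a new column length to convert to temperature
J = np.array([x, 1.0])          # gradient of a*x + b w.r.t. (a, b)
sigma = float(np.sqrt(J @ cov @ J))   # parameter-covariance part of uncertainty
print(predict(x), "+/-", sigma)
```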

  19. Constraining brane tension using rotation curves of galaxies

    NASA Astrophysics Data System (ADS)

    García-Aspeitia, Miguel A.; Rodríguez-Meza, Mario A.

    2018-04-01

    We present in this work a study of brane theory phenomenology focusing on the brane tension parameter, which is the main observable of the theory. We show the modifications stemming from the presence of branes in the rotation curves of spiral galaxies for three well known dark matter density profiles: pseudo-isothermal, Navarro-Frenk-White and Burkert dark matter density profiles. We estimate the brane tension parameter using a sample of high resolution observed rotation curves of low surface brightness spiral galaxies and a synthetic rotation curve for the three density profiles. Also, the fittings of the rotation curves using the brane theory model are compared with standard Newtonian models. We found that the Navarro-Frenk-White model prefers lower values of the brane tension parameter, on average λ ∼ 0.73 × 10⁻³ eV⁴, therefore showing clear brane effects. The Burkert case prefers higher values of the tension parameter, on average λ ∼ 0.93–46 eV⁴, i.e., negligible brane effects, whereas the pseudo-isothermal profile is an intermediate case. Due to the low densities found in the galactic medium it is almost impossible to find evidence of the presence of extra dimensions. In this context, we found that our results place weaker bounds on the brane tension values in comparison with bounds found previously, such as the lower value found for dwarf stars described by a polytropic equation of state, λ ≈ 10⁴ MeV⁴.

  20. Pre-Service Music Teachers' Satisfaction: Person-Environment Fit Approach

    ERIC Educational Resources Information Center

    Perkmen, Serkan; Cevik, Beste; Alkan, Mahir

    2012-01-01

    Guided by three theoretical frameworks in vocational psychology, (i) theory of work adjustment, (ii) two factor theory, and (iii) value discrepancy theory, the purpose of this study was to investigate Turkish pre-service music teachers' values and the role of fit between person and environment in understanding vocational satisfaction. Participants…

  1. Fitness and Individuality in Complex Life Cycles.

    PubMed

    Herron, Matthew D

    2016-12-01

    Complex life cycles are common in the eukaryotic world, and they complicate the question of how to define individuality. Using a bottom-up, gene-centric approach, I consider the concept of fitness in the context of complex life cycles. I analyze the fitness effects of an allele (or a trait) on different biological units within a complex life history and how these effects drive evolutionary change within populations. Based on these effects, I attempt to construct a concept of fitness that accurately predicts evolutionary change in the context of complex life cycles.

  2. Bragg Curve, Biological Bragg Curve and Biological Issues in Space Radiation Protection with Shielding

    NASA Technical Reports Server (NTRS)

    Honglu, Wu; Cucinotta, F.A.; Durante, M.; Lin, Z.; Rusek, A.

    2006-01-01

    The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X-rays, the presence of shielding does not always reduce the radiation risks for energetic charged particle exposure. Since the dose delivered by the charged particle increases sharply as the particle approaches the end of its range, a position known as the Bragg peak, the Bragg curve does not necessarily represent the biological damage along the particle traversal since biological effects are influenced by the track structure of both primary and secondary particles. Therefore, the biological Bragg curve is dependent on the energy and the type of the primary particle, and may vary for different biological endpoints. To achieve a Bragg curve distribution, we exposed cells to energetic heavy ions with the beam geometry parallel to a monolayer of fibroblasts. Qualitative analyses of gamma-H2AX fluorescence, a known marker of DNA double-strand breaks (DSBs), indicated increased clustering of DNA damage before the Bragg peak, enhanced homogenous distribution at the peak, and provided visual evidence of high linear energy transfer (LET) particle traversal of cells beyond the Bragg peak. A quantitative biological response curve generated for micronuclei (MN) induction across the Bragg curve did not reveal an increased yield of MN at the location of the Bragg peak. However, the ratio of mono- to bi-nucleated cells, which indicates inhibition in cell progression, increased at the Bragg peak location. These results, along with other biological concerns, show that space radiation protection with shielding can be a complicated issue.

  3. Nonlinear Growth Curves in Developmental Research

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki

    2011-01-01

    Developmentalists are often interested in understanding change processes and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and asymptotic levels can be estimated. A variety of growth models are described beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models are fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131

  4. Computational methods for yeast prion curing curves.

    PubMed

    Ridout, Martin S

    2008-10-01

    If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
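
    A toy demonstration of the computational point, numerical Laplace-transform inversion (here via mpmath's Talbot method, an assumption of this sketch) checked against a known closed form; the actual propagon-curing transforms are model-specific and not reproduced here.

```python
import mpmath as mp

lam = 0.8
F = lambda s: 1 / (s + lam)        # toy transform; exact inverse is exp(-lam*t)
for t in [0.5, 1.0, 2.0, 4.0]:
    numeric = mp.invertlaplace(F, t, method='talbot')
    exact = mp.exp(-lam * t)
    print(t, float(numeric), float(exact))
```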

  5. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum-likelihood-based and rarefaction-curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
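
    Schematically, the extrapolation fits the growth of the richness estimate with library size and reads off the asymptote; the saturating functional form and all numbers below, apart from the final ML-based value quoted above, are our own assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

lib_size = np.array([500, 1000, 2000, 4000, 8000, 13001], dtype=float)
richness = np.array([4200, 6900, 9800, 12400, 14200, 15009], dtype=float)

def saturating(n, s_max, b):          # assumed Michaelis-Menten-like form
    return s_max * n / (b + n)

(s_max, b), _ = curve_fit(saturating, lib_size, richness, p0=[20000, 2000])
print("size-unbiased richness estimate (asymptote):", s_max)
```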

  6. CONSTRAINING THE STRING GAUGE FIELD BY GALAXY ROTATION CURVES AND PERIHELION PRECESSION OF PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Yeuk-Kwan E.; Xu Feng, E-mail: cheung@nju.edu.cn

    2013-09-01

    We discuss a cosmological model in which the string gauge field coupled universally to matter gives rise to an extra centripetal force and will have observable signatures on cosmological and astronomical observations. Several tests are performed using data including galaxy rotation curves of 22 spiral galaxies of varied luminosities and sizes and perihelion precessions of planets in the solar system. The rotation curves of the same group of galaxies are independently fit using a dark matter model with the generalized Navarro-Frenk-White (NFW) profile and the string model. A remarkable fit of galaxy rotation curves is achieved using the one-parameter string model as compared to the three-parameter dark matter model with the NFW profile. The average χ² value of the NFW fit is 9% better than that of the string model at a price of two more free parameters. Furthermore, from the string model, we can give a dynamical explanation for the phenomenological Tully-Fisher relation. We are able to derive a relation between field strength, galaxy size, and luminosity, which can be verified with data from the 22 galaxies. To further test the hypothesis of the universal existence of the string gauge field, we apply our string model to the solar system. Constraint on the magnitude of the string field in the solar system is deduced from the current ranges for any anomalous perihelion precession of planets allowed by the latest observations. The field distribution resembles a dipole field originating from the Sun. The string field strength deduced from the solar system observations is of a similar magnitude as the field strength needed to sustain the rotational speed of the Sun inside the Milky Way. This hypothesis can be tested further by future observations with higher precision.

  7. Constraining the String Gauge Field by Galaxy Rotation Curves and Perihelion Precession of Planets

    NASA Astrophysics Data System (ADS)

    Cheung, Yeuk-Kwan E.; Xu, Feng

    2013-09-01

    We discuss a cosmological model in which the string gauge field coupled universally to matter gives rise to an extra centripetal force and will have observable signatures on cosmological and astronomical observations. Several tests are performed using data including galaxy rotation curves of 22 spiral galaxies of varied luminosities and sizes and perihelion precessions of planets in the solar system. The rotation curves of the same group of galaxies are independently fit using a dark matter model with the generalized Navarro-Frenk-White (NFW) profile and the string model. A remarkable fit of galaxy rotation curves is achieved using the one-parameter string model as compared to the three-parameter dark matter model with the NFW profile. The average χ2 value of the NFW fit is 9% better than that of the string model at a price of two more free parameters. Furthermore, from the string model, we can give a dynamical explanation for the phenomenological Tully-Fisher relation. We are able to derive a relation between field strength, galaxy size, and luminosity, which can be verified with data from the 22 galaxies. To further test the hypothesis of the universal existence of the string gauge field, we apply our string model to the solar system. Constraint on the magnitude of the string field in the solar system is deduced from the current ranges for any anomalous perihelion precession of planets allowed by the latest observations. The field distribution resembles a dipole field originating from the Sun. The string field strength deduced from the solar system observations is of a similar magnitude as the field strength needed to sustain the rotational speed of the Sun inside the Milky Way. This hypothesis can be tested further by future observations with higher precision.

  8. Semi-exact concentric atomic density fitting: Reduced cost and increased accuracy compared to standard density fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollman, David S.; Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061; Schaefer, Henry F.

    2014-02-14

    A local density fitting scheme is considered in which atomic orbital (AO) products are approximated using only auxiliary AOs located on one of the nuclei in that product. The possibility of variational collapse to an unphysical “attractive electron” state that can affect such density fitting [P. Merlot, T. Kjærgaard, T. Helgaker, R. Lindh, F. Aquilante, S. Reine, and T. B. Pedersen, J. Comput. Chem. 34, 1486 (2013)] is alleviated by including atom-wise semidiagonal integrals exactly. Our approach leads to a significant decrease in the computational cost of density fitting for Hartree–Fock theory while still producing results with errors 2–5 times smaller than standard, nonlocal density fitting. Our method allows for large Hartree–Fock and density functional theory computations with exact exchange to be carried out efficiently on large molecules, which we demonstrate by benchmarking our method on 200 of the most widely used prescription drug molecules. Our new fitting scheme leads to smooth and artifact-free potential energy surfaces and the possibility of relatively simple analytic gradients.

  9. Optimization of Composite Structures with Curved Fiber Trajectories

    NASA Astrophysics Data System (ADS)

    Lemaire, Etienne; Zein, Samih; Bruyneel, Michael

    2014-06-01

    This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generate equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows determining the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, so the problem involves very few design variables. Thanks to this efficient parameterization, numerical applications to maximum-stiffness optimization are proposed. The shape of the design space is discussed, regarding local and global optimal solutions.

  10. Testing modified gravity at large distances with the HI Nearby Galaxy Survey's rotation curves

    NASA Astrophysics Data System (ADS)

    Mastache, Jorge; Cervantes-Cota, Jorge L.; de la Macorra, Axel

    2013-03-01

    Recently a new, quantum-motivated, theory of gravity has been proposed that modifies the standard Newtonian potential at large distances when spherical symmetry is considered. Accordingly, Newtonian gravity is altered by adding an extra Rindler acceleration term that has to be phenomenologically determined. Here we consider a standard and a power-law generalization of the Rindler modified Newtonian potential. The new terms in the gravitational potential are hypothesized to play the role of dark matter in galaxies. Our galactic model includes the mass of the integrated gas and stars, for which we consider three stellar mass functions (Kroupa, diet-Salpeter, and free mass model). We test this idea by fitting rotation curves of seventeen low surface brightness galaxies from the HI Nearby Galaxy Survey (THINGS). We find that the Rindler parameters do not provide a suitable fit to the rotation curves in comparison to standard dark matter profiles (Navarro-Frenk-White and Burkert) and, in addition, the computed parameters of the Rindler gravity show a high spread, making the model an unacceptable alternative to dark matter.
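
    For the standard (non-power-law) case, adding a constant Rindler acceleration a to Newtonian gravity gives circular velocities v²(r) = GM/r + a·r; fitting that law to a rotation curve can be sketched as below, treating the baryons as a point mass for simplicity and using synthetic data in place of the THINGS curves.

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6                     # kpc (km/s)^2 / Msun

def v_rindler(r, m, a):
    # m: enclosed baryonic mass (Msun); a: Rindler acceleration, (km/s)^2/kpc
    return np.sqrt(G * m / r + a * r)

rng = np.random.default_rng(2)
r = np.linspace(1.0, 30.0, 25)     # kpc
v_obs = v_rindler(r, 5e10, 120.0) + rng.normal(0, 3.0, r.size)

(m, a), _ = curve_fit(v_rindler, r, v_obs, p0=[1e10, 50.0])
```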

  11. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    NASA Astrophysics Data System (ADS)

    Louden, Tom; Kreidberg, Laura

    2018-06-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  12. Inferring genetic interactions from comparative fitness data

    PubMed Central

    2017-01-01

    Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM-family of β-lactamase associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations. PMID:29260711

  13. Integrating the Levels of Person-Environment Fit: The Roles of Vocational Fit and Group Fit

    ERIC Educational Resources Information Center

    Vogel, Ryan M.; Feldman, Daniel C.

    2009-01-01

    Previous research on fit has largely focused on person-organization (P-O) fit and person-job (P-J) fit. However, little research has examined the interplay of person-vocation (P-V) fit and person-group (P-G) fit with P-O fit and P-J fit in the same study. This article advances the fit literature by examining these relationships with data collected…

  14. Toward a Conceptualization of Perceived Work-Family Fit and Balance: A Demands and Resources Approach

    ERIC Educational Resources Information Center

    Voydanoff, Patricia

    2005-01-01

    Using person-environment fit theory, this article formulates a conceptual model that links work, family, and boundary-spanning demands and resources to work and family role performance and quality. Linking mechanisms include 2 dimensions of perceived work-family fit (work demands--family resources fit and family demands--work resources fit) and a…

  15. Exploration and extension of an improved Riemann track fitting algorithm

    NASA Astrophysics Data System (ADS)

    Strandlie, A.; Frühwirth, R.

    2017-09-01

    Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
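
    The core of a plain Riemann fit, without the translation/scaling refinement or the error propagation developed in the paper, lifts the measurements onto the paraboloid w = x² + y², fits a plane, and maps the plane coefficients back to circle parameters:

```python
import numpy as np

def riemann_circle_fit(x, y):
    w = x**2 + y**2                    # lift onto the paraboloid
    pts = np.column_stack([x, y, w])
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                         # normal of best-fit plane (n1, n2, n3)
    c = -n @ centroid
    # n[2] near zero would mean a straight track; no guard here for brevity
    cx = -n[0] / (2 * n[2])
    cy = -n[1] / (2 * n[2])
    r = np.sqrt((n[0]**2 + n[1]**2 - 4 * n[2] * c) / (4 * n[2]**2))
    return cx, cy, r

rng = np.random.default_rng(3)
theta = np.linspace(0.2, 1.8, 12)      # a short arc, as in a tracking detector
x = 3.0 + 10.0 * np.cos(theta) + rng.normal(0, 0.02, theta.size)
y = -1.0 + 10.0 * np.sin(theta) + rng.normal(0, 0.02, theta.size)
print(riemann_circle_fit(x, y))        # approximately (3, -1, 10)
```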

  16. Random-growth urban model with geographical fitness

    NASA Astrophysics Data System (ADS)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
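
    Under our reading of the abstract, a schematic of such a model lets randomness enter only when a new city is founded, while existing cities grow deterministically in proportion to size times geographical fitness:

```python
import numpy as np

rng = np.random.default_rng(5)
sizes = np.array([1.0])
fitness = np.array([rng.uniform(0.5, 1.5)])
growth_per_step = 1.0              # total population added per time step

for step in range(2000):
    if rng.random() < 0.05:        # randomness only when a city is created
        sizes = np.append(sizes, 1.0)
        fitness = np.append(fitness, rng.uniform(0.5, 1.5))
    w = sizes * fitness            # fitness-weighted attachment strength
    sizes = sizes + growth_per_step * w / w.sum()   # smooth deterministic growth

print(np.sort(sizes)[::-1][:10])   # the largest cities; the tail is Pareto-like
```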

  17. Neutron/Gamma-ray discrimination through measures of fit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek

    2015-07-01

    Statistical tests and their underlying measures of fit can be utilized to separate neutron/gamma-ray pulses in a mixed radiation field. In this article, first the application of a sample statistical test is explained. Fit measurement-based methods require true pulse shapes to be used as reference for discrimination. This requirement makes practical implementation of these methods difficult; typically another discrimination approach should be employed to capture samples of neutrons and gamma-rays before running the fit-based technique. In this article, we also propose a technique to eliminate this requirement. These approaches are applied to several sets of mixed neutron and gamma-ray pulsesmore » obtained through different digitizers using stilbene scintillator in order to analyze them and measure their discrimination quality. (authors)« less

  18. Level-Specific Evaluation of Model Fit in Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Ryu, Ehri; West, Stephen G.

    2009-01-01

    In multilevel structural equation modeling, the "standard" approach to evaluating the goodness of model fit has a potential limitation in detecting the lack of fit at the higher level. Level-specific model fit evaluation can address this limitation and is more informative in locating the source of lack of model fit. We proposed level-specific test…

  19. Melting Curve of Molecular Crystal GeI4

    NASA Astrophysics Data System (ADS)

    Fuchizaki, Kazuhiro; Hamaya, Nozomu

    2014-07-01

    In situ synchrotron x-ray diffraction measurements were carried out to determine the melting curve of the molecular crystal GeI4. We found that the melting line rises rapidly with pressure up to about 3 GPa, at which point it abruptly breaks. Such a strongly nonlinear shape of the melting curve can be approximately captured by the Kumari-Dass-Kechin equation. The parameters involved in the equation could be determined from the equation of state for the crystalline phase, which was also established in the present study. The melting curve predicted from the equation approaches the actual melting curve as the degree of approximation involved in obtaining the equation is improved. However, the treatment is justifiable only if the slope of the melting curve is everywhere continuous. We believe that this is not the case for GeI4's melting line at the breakpoint, as inferred from the nature of the breakdown of the Kraut-Kennedy and the Magalinskii-Zubov relationships. The breakpoint may then be a triple point among the crystalline phase and two possible liquid phases.

  20. Physical fitness and performance. Cardiorespiratory fitness in girls-change from middle to high school.

    PubMed

    Pfeiffer, Karin A; Dowda, Marsha; Dishman, Rod K; Sirard, John R; Pate, Russell R

    2007-12-01

    To determine how factors are related to change in cardiorespiratory fitness (CRF) across time in middle school girls followed through high school. Adolescent girls (N = 274, 59% African American, baseline age = 13.6 +/- 0.6 yr) performed a submaximal fitness test (PWC170) in 8th, 9th, and 12th grades. Height, weight, sports participation, and physical activity were also measured. Moderate-to-vigorous physical activity (MVPA) and vigorous physical activity (VPA) were determined by the number of blocks reported on the 3-Day Physical Activity Recall (3DPAR). Individual differences and developmental change in CRF were assessed simultaneously by calculating individual growth curves for each participant, using growth curve modeling. Both weight-relative and absolute CRF increased from 8th to 9th grade and decreased from 9th to 12th grade. On average, girls lost 0.16 kg·m·min⁻¹·kg⁻¹·yr⁻¹ in weight-relative PWC170 scores (P < 0.01) and gained 10.3 kg·m·min⁻¹·yr⁻¹ in absolute PWC170 scores. Girls reporting two or more blocks of MVPA or one or more blocks of VPA at baseline showed an average increase in PWC170 scores of 0.40-0.52 kg·m·min⁻¹·kg⁻¹·yr⁻¹ (weight-relative) and 22-28 kg·m·min⁻¹·yr⁻¹ (absolute) in CRF. In weight-relative models, girls with higher BMI showed lower CRF (approximately 0.37 kg·m·min⁻¹·kg⁻¹·yr⁻¹), but this was not shown in absolute models. In absolute models, white girls (approximately 40 kg·m·min⁻¹·yr⁻¹) and sport participants (approximately 28 kg·m·min⁻¹·yr⁻¹) showed an increase in CRF over time. Although there were fluctuations in PWC170 scores across time, average scores decreased during the 4 yr. Physical activity was related to change in CRF over time; BMI, race, and sport participation were also important factors related to change over time in CRF (depending on whether CRF is expressed in weight-relative or absolute terms). Subsequent research should focus on explaining the complex longitudinal interactions between CRF, physical activity, race, BMI, and sports participation.

  1. Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Sen, S.

    2016-12-01

    Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are critical for river engineers and hydrologists in water resources planning and management, flood forecasting, reservoir operation, and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious, and frequently dangerous for operators during flood events. Currently, in India, discharge estimation relies on the stage-discharge relationship known as the rating curve. This relationship is affected by a high degree of uncertainty, and estimating that uncertainty remains a relevant challenge because the curve is not easy to parameterize. The main sources of rating-curve uncertainty are errors arising from incorrect discharge measurement, variation in hydraulic conditions, and depth measurement. In this study our objectives are to obtain the rating-curve parameters that best fit the limited record of observations and to estimate the uncertainty at different depths obtained from the rating curve. The rating-curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the lesser Himalayas, by a maximum-likelihood estimator. Quantification of the uncertainties in the developed rating curves is obtained from the estimated variances and covariances of the rating-curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors ranging between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolating by more than 15% beyond the maximum or 5% below the minimum observed discharge is not recommended for these mountainous gauging sites.
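    As a rough illustration of this workflow, the sketch below fits the standard power-law rating curve Q = a(h - h0)^b to invented stage-discharge pairs and propagates the fitted parameter covariance to a predictive uncertainty by the delta method; the data, starting values, and error model are assumptions of this example, not details taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Standard power-law rating curve Q = a * (h - h0)**b."""
    return a * np.clip(h - h0, 1e-9, None) ** b

# Hypothetical stage (m) / discharge (m^3/s) observations for illustration
rng = np.random.default_rng(0)
h_obs = np.linspace(0.3, 1.3, 8)
q_obs = rating_curve(h_obs, 2.0, 0.10, 1.8) * rng.lognormal(0.0, 0.03, 8)

popt, pcov = curve_fit(rating_curve, h_obs, q_obs, p0=[1.0, 0.0, 1.5])

# Delta-method predictive uncertainty at new stages from the parameter covariance
h_new = np.linspace(0.3, 1.6, 6)
eps = 1e-6
jac = np.array([(rating_curve(h_new, *(popt + eps * np.eye(3)[i]))
                 - rating_curve(h_new, *popt)) / eps for i in range(3)]).T
sd_q = np.sqrt(np.einsum('ij,jk,ik->i', jac, pcov, jac))
for h, q, s in zip(h_new, rating_curve(h_new, *popt), sd_q):
    print(f"h = {h:.2f} m  Q = {q:.3f} +/- {s:.3f} m^3/s")
```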

  2. Plant Photosynthesis-Irradiance Curve Responses to Pollution Show Non-Competitive Inhibited Michaelis Kinetics

    PubMed Central

    Lin, Maozi; Wang, Zhiwei; He, Lingchao; Xu, Kang; Cheng, Dongliang; Wang, Genxuan

    2015-01-01

    Photosynthesis-irradiance (PI) curves are extensively used in field and laboratory research to evaluate the photon-use efficiency of plants. However, most existing models for PI curves focus on the relationship between the photosynthetic rate (Pn) and photosynthetically active radiation (PAR), and do not take into account the influence of environmental factors on the curve. In the present study, we used a new non-competitive inhibited Michaelis-Menten model (NIMM) to predict the co-variation of Pn, PAR, and the relative pollution index (I). We then evaluated the model with published data and our own experimental data. The results indicate that plant Pn decreased with increasing I in the environment and, as predicted, all data sets were fitted well by the NIMM model. Therefore, our model provides a robust basis for evaluating and understanding the influence of environmental pollution on plant photosynthesis. PMID:26561863
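    The abstract does not spell out the NIMM functional form; a plausible reading, sketched below, multiplies the Michaelis-Menten light response by the textbook non-competitive inhibition factor 1/(1 + I/Ki). All data and parameter values here are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def nimm(X, pmax, km, ki):
    """Michaelis-Menten light response damped by non-competitive inhibition."""
    par, pollution = X
    return pmax * par / (km + par) / (1.0 + pollution / ki)

# Hypothetical joint observations: PAR, pollution index I, net photosynthesis Pn
par = np.tile([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0], 2)
idx = np.repeat([0.0, 1.0], 6)                 # clean site vs polluted site
pn = nimm((par, idx), 20.0, 150.0, 2.0) \
     + np.random.default_rng(0).normal(0.0, 0.3, par.size)

popt, _ = curve_fit(nimm, (par, idx), pn, p0=[15.0, 100.0, 1.0])
print("Pmax, Km, Ki =", popt)
```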

  3. Comparison of two correlated ROC curves at a given specificity or sensitivity level.

    PubMed

    Bantis, Leonidas E; Feng, Ziding

    2016-10-30

    The receiver operating characteristic (ROC) curve is the most popular statistical tool for evaluating the discriminatory capability of a given continuous biomarker. The need to compare two correlated ROC curves arises when individuals are measured with two biomarkers, which induces paired and thus correlated measurements. Many researchers have focused on comparing two correlated ROC curves in terms of the area under the curve (AUC), which summarizes the overall performance of the marker. However, particular values of specificity may be of interest. We focus on comparing two correlated ROC curves at a given specificity level. We propose parametric approaches, transformations to normality, and nonparametric kernel-based approaches. Our methods can be straightforwardly extended for inference in terms of ROC^-1(t). This is of particular interest for comparing the accuracy of two correlated biomarkers at a given sensitivity level. Extensions also involve inference for the AUC and accommodating covariates. We evaluate the robustness of our techniques through simulations, compare them with other known approaches, and present a real-data application involving prostate cancer screening. Copyright © 2016 John Wiley & Sons, Ltd.
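    A crude nonparametric analogue of this comparison (not the authors' estimators): compute each paired marker's sensitivity at a fixed specificity and bootstrap the difference, resampling subjects so the within-subject pairing is preserved. The data below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Paired biomarker values per subject: column 0 = marker 1, column 1 = marker 2
controls = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], n)
cases    = rng.multivariate_normal([1.2, 0.9], [[1.0, 0.6], [0.6, 1.0]], n)

def sens_at_spec(ctrl, case, spec=0.90):
    """Sensitivity at the threshold giving the target specificity."""
    return np.mean(case > np.quantile(ctrl, spec))

def sens_diff(ctrl, case):
    return (sens_at_spec(ctrl[:, 0], case[:, 0])
            - sens_at_spec(ctrl[:, 1], case[:, 1]))

# Paired bootstrap: resample subjects, keeping both markers of a subject together
boot = [sens_diff(controls[rng.integers(0, n, n)], cases[rng.integers(0, n, n)])
        for _ in range(2000)]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"sens diff = {sens_diff(controls, cases):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```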

  4. CONFIRMATION OF HOT JUPITER KEPLER-41b VIA PHASE CURVE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, Elisa V.; Rowe, Jason F.; Caldwell, Douglas A.

    We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming, and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density, combined with stellar evolution models, rule out stellar blends and provide a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.

  5. Gamma-Ray Pulsar Light Curves in Offset Polar Cap Geometry

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.; DeCesar, Megan; Miller, M. Coleman

    2011-01-01

    Recent studies have shown that gamma-ray pulsar light curves are very sensitive to the geometry of the pulsar magnetic field. Pulsar magnetic field geometries, such as the retarded vacuum dipole and force-free magnetospheres, used to model high-energy light curves have distorted polar caps that are offset from the magnetic axis in the direction opposite to rotation. Since this effect is due to the sweepback of field lines near the light cylinder, offset polar caps are a generic property of pulsar magnetospheres and their effects should be included in gamma-ray pulsar light curve modeling. In slot gap models (having two-pole caustic geometry), the offset polar caps cause a strong azimuthal asymmetry of the particle acceleration around the magnetic axis. We have studied the effect of the offset polar caps in both retarded vacuum dipole and force-free geometry on the model high-energy pulse profile. We find that, compared to the profiles derived from symmetric caps, the flux in the pulse peaks, which are caustics formed along the trailing magnetic field lines, increases significantly relative to the off-peak emission formed along leading field lines. The enhanced contrast produces greatly improved slot gap model fits to Fermi pulsar light curves like Vela, which show very little off-peak emission.

  6. On curve veering and flutter of rotating blades

    NASA Technical Reports Server (NTRS)

    Afolabi, Dare; Mehmed, Oral

    1993-01-01

    The eigenvalues of rotating blades usually change with rotation speed according to the Stodola-Southwell criterion. Under certain circumstances, the loci of eigenvalues belonging to two distinct modes of vibration approach each other very closely, and it may appear as if the loci cross each other. However, our study indicates that the observable frequency loci of an undamped rotating blade do not cross, but must either repel each other (leading to 'curve veering'), or attract each other (leading to 'frequency coalescence'). Our results are reached by using standard arguments from algebraic geometry--the theory of algebraic curves and catastrophe theory. We conclude that it is important to resolve an apparent crossing of eigenvalue loci into either a frequency coalescence or a curve veering, because frequency coalescence is dangerous since it leads to flutter, whereas curve veering does not precipitate flutter and is, therefore, harmless with respect to elastic stability.

  7. The Personalized System of Instruction in Fitness Education

    ERIC Educational Resources Information Center

    Colquitt, Gavin; Pritchard, Tony; McCollum, Starla

    2011-01-01

    Fitness education is an important component in the promotion of health-enhancing behaviors that lead to physically active lifestyles. However, using a teacher-centered approach to teach fitness may be boring and may limit the application of the content to the individual student. The personalized system of instruction (PSI) is an instructional…

  8. Accelerated pharmacokinetic map determination for dynamic contrast enhanced MRI using frequency-domain based Tofts model.

    PubMed

    Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam

    2014-01-01

    Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high-resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the curve fitting used to obtain the pharmacokinetic parameters. The results show that, with the frequency-domain approach, curve fitting is computationally more efficient than with the time-domain approach.
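    The speed-up rests on the fact that the standard Tofts model is a convolution of the arterial input function with an exponential kernel, which can be evaluated in O(n log n) via the FFT. The sketch below compares a direct time-domain convolution with an FFT-based one on a toy input function; the grid and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np

def tofts_time(t, cp, ktrans, kep):
    """Direct O(n^2) evaluation: Ct = Ktrans * (Cp convolved with exp(-kep*t))."""
    dt = t[1] - t[0]
    return ktrans * np.convolve(cp, np.exp(-kep * t))[:t.size] * dt

def tofts_freq(t, cp, ktrans, kep):
    """Same model in O(n log n): zero-padded FFT convolution."""
    dt = t[1] - t[0]
    n = 2 * t.size
    spec = np.fft.rfft(cp, n) * np.fft.rfft(np.exp(-kep * t), n)
    return ktrans * np.fft.irfft(spec, n)[:t.size] * dt

t = np.linspace(0.0, 5.0, 512)                 # minutes
cp = 5.0 * t * np.exp(-t / 0.8)                # toy arterial input function
print(np.allclose(tofts_time(t, cp, 0.25, 0.6),
                  tofts_freq(t, cp, 0.25, 0.6)))
```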

  9. A New Approach for Obtaining Cosmological Constraints from Type Ia Supernovae using Approximate Bayesian Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Elise; Wolf, Rachel; Sako, Masao

    2016-11-09

    Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α, β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) − w0(best fit) = −0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = −0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = −0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
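    A toy ABC rejection sampler conveys the core idea: draw parameters from the prior, forward-simulate data, and keep draws whose distance metric to the observations is small. The forward model and distance below are invented stand-ins for the SN Ia simulation and the Tripp/Light Curve metrics, not the superABC implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.05, 1.0, 100)

def simulate(theta):
    """Toy forward model standing in for the SN light-curve simulation."""
    return theta * x + rng.normal(0.0, 0.1, x.size)

y_obs = simulate(-1.0)                       # pretend the true parameter is -1

def distance(y_sim):
    """Toy summary metric standing in for the Tripp / Light Curve metrics."""
    return np.sqrt(np.mean((y_sim - y_obs) ** 2))

# ABC rejection: sample the prior, forward-simulate, keep the close draws
draws = rng.uniform(-2.0, 0.0, 20000)
accepted = [th for th in draws if distance(simulate(th)) < 0.16]
print(len(accepted), np.mean(accepted), np.std(accepted))
```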

  10. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    PubMed

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by a short resting time between increased concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nM) is higher than the classically interpreted EC50 (46-191 nM). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
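    For reference, the "classical interpretation" the authors improve on amounts to fitting a sigmoid (Hill) curve and reading off EC50, roughly as below; the concentrations and responses are invented, and the paper's model-based steady-state correction is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter sigmoid (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Hypothetical adrenaline concentrations (nM) and normalized force responses
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])
resp = np.array([0.05, 0.12, 0.33, 0.61, 0.86, 0.95, 0.98])

popt, pcov = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 50.0, 1.0])
print(f"EC50 ~ {popt[2]:.1f} nM (SE {np.sqrt(pcov[2, 2]):.1f})")
```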

  11. [Fitting Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lenses in keratoconus patients: a prospective randomized comparative clinical trial].

    PubMed

    Coral-Ghanem, Cleusa; Alves, Milton Ruiz

    2008-01-01

    To evaluate the clinical performance of Monocurve and Bicurve (Soper-McGuire design) rigid gas-permeable contact lens fitting in patients with keratoconus. A prospective and randomized comparative clinical trial was conducted with a minimum follow-up of six months in two groups of 63 patients. One group was fitted with Monocurve contact lenses and the other with Bicurve Soper-McGuire design. Study variables included fluoresceinic pattern of lens-to-cornea fitting relationship, location and morphology of the cone, presence and degree of punctate keratitis and other corneal surface alterations, topographic changes, visual acuity for distance corrected with contact lenses and survival analysis for remaining with the same contact lens design during the study. During the follow-up there was a decrease in the number of eyes with advanced and central cones fitted with Monocurve lenses, and an increase in those fitted with Soper-McGuire design. In the Monocurve group, a flattening of both the steepest and the flattest keratometric curve was observed. In the Soper-McGuire group, a steepening of the flattest keratometric curve and a flattening of the steepest keratometric curve were observed. There was a decrease in best-corrected visual acuity with contact lens in the Monocurve group. Survival analysis for the Monocurve lens was 60.32% and for the Soper-McGuire was 71.43% at a mean follow-up of six months. This study showed that due to the changes observed in corneal topography, the same contact lens design did not provide an ideal fitting for all patients during the follow-up period. The Soper-McGuire lenses had a better performance than the Monocurve lenses in advanced and central keratoconus.

  12. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute polynomial equations for a surface fitted through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, is included.
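    A compact modern equivalent of such a two-variable polynomial surface fit, using an ordinary least-squares design matrix; the degree, data, and output format are illustrative and unrelated to FIT's actual interface.

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.uniform(-1.0, 1.0, (2, 200))          # two independent variables
z = (1.0 + 2.0 * x - 0.5 * y + 0.8 * x * y + 0.3 * y ** 2
     + rng.normal(0.0, 0.01, 200))               # dependent variable + noise

deg = 2
terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
A = np.column_stack([x ** i * y ** j for i, j in terms])   # design matrix
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

for (i, j), c in zip(terms, coef):
    print(f"x^{i} y^{j}: {c:+.3f}")
```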

  13. A Tangent Bundle Theory for Visual Curve Completion.

    PubMed

    Ben-Yosef, Guy; Ben-Shahar, Ohad

    2012-07-01

    Visual curve completion is a fundamental perceptual mechanism that completes the missing parts (e.g., due to occlusion) between observed contour fragments. Previous research into the shape of completed curves has generally followed an "axiomatic" approach, where desired perceptual/geometrical properties are first defined as axioms, followed by mathematical investigation into curves that satisfy them. However, determining such desired properties psychophysically is difficult, and researchers still debate what they should be in the first place. Instead, here we exploit the observation that curve completion is an early visual process to formalize the problem in the unit tangent bundle R^2 × S^1, which abstracts the primary visual cortex (V1) and facilitates exploration of basic principles from which perceptual properties are later derived rather than imposed. Exploring here the elementary principle of least action in V1, we show how the problem becomes one of finding minimum-length admissible curves in R^2 × S^1. We formalize the problem in variational terms, we analyze it theoretically, and we formulate practical algorithms for the reconstruction of these completed curves. We then explore their induced visual properties vis-à-vis popular perceptual axioms and show how our theory predicts many perceptual properties reported in the corresponding perceptual literature. Finally, we demonstrate a variety of curve completions and report comparisons to psychophysical data and other completion models.

  14. Molecular dynamics study of the melting curve of NiTi alloy under pressure

    NASA Astrophysics Data System (ADS)

    Zeng, Zhao-Yi; Hu, Cui-E.; Cai, Ling-Cang; Chen, Xiang-Rong; Jing, Fu-Qian

    2011-02-01

    The melting curve of NiTi alloy was predicted using molecular dynamics simulations combined with an embedded-atom-model potential. The calculated thermal equation of state is consistent with our previous results obtained from the quasiharmonic Debye approximation. Fitting the well-known Simon form to our Tm data yields the melting curves for NiTi: Tm = 1850(1 + P/21.938)^0.328 (one-phase method) and Tm = 1575(1 + P/7.476)^0.305 (two-phase method). The two-phase simulations can effectively eliminate the superheating seen in one-phase simulations. At 1 bar, the melting temperature of NiTi is 1575 ± 25 K and the corresponding melting slope is 64 K/GPa.
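    Reproducing the quoted two-phase fit is a small nonlinear least-squares exercise on the Simon form T_m(P) = T0(1 + P/a)^c; the data points below are synthetic values generated near the reported curve, so only the functional form and parameter values trace back to the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def simon(P, T0, a, c):
    """Simon-Glatzel melting equation T_m(P) = T0 * (1 + P/a)**c."""
    return T0 * (1.0 + P / a) ** c

# Synthetic melting points generated near the reported two-phase fit
P = np.linspace(0.0, 30.0, 8)                                  # GPa
Tm = simon(P, 1575.0, 7.476, 0.305) \
     + np.random.default_rng(4).normal(0.0, 10.0, P.size)      # K

popt, _ = curve_fit(simon, P, Tm, p0=[1500.0, 5.0, 0.3])
T0, a, c = popt
print(f"Tm(P) = {T0:.0f}(1 + P/{a:.3f})^{c:.3f}")
print(f"initial slope dTm/dP at 1 bar = {T0 * c / a:.0f} K/GPa")   # ~64
```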

  15. Physical fitness reference standards in European children: the IDEFICS study.

    PubMed

    De Miguel-Etayo, P; Gracia-Marco, L; Ortega, F B; Intemann, T; Foraita, R; Lissner, L; Oja, L; Barba, G; Michels, N; Tornaritis, M; Molnár, D; Pitsiladis, Y; Ahrens, W; Moreno, L A

    2014-09-01

    A low fitness status during childhood and adolescence is associated with important health-related outcomes, such as increased future risk for obesity and cardiovascular diseases, impaired skeletal health, reduced quality of life and poor mental health. Fitness reference values for adolescents from different countries have been published, but there is a scarcity of reference values for pre-pubertal children in Europe based on harmonised fitness measures. The IDEFICS study offers a good opportunity to establish normative values of a large set of fitness components from eight European countries using common and well-standardised methods in a large sample of children. Therefore, the aim of this study is to report sex- and age-specific fitness reference standards in European children. Children (10,302) aged 6-10.9 years (50.7% girls) were examined. The test battery included: the flamingo balance test, back-saver sit-and-reach test (flexibility), handgrip strength test, standing long jump test (lower-limb explosive strength) and 40-m sprint test (speed). Moreover, cardiorespiratory fitness was assessed by a 20-m shuttle run test. Percentile curves for the 1st, 3rd, 10th, 25th, 50th, 75th, 90th, 97th and 99th percentiles were calculated using the Generalized Additive Model for Location, Scale and Shape (GAMLSS). Our results show that boys performed better than girls in speed, lower- and upper-limb strength and cardiorespiratory fitness, and girls performed better in balance and flexibility. Older children performed better than younger children, except for cardiorespiratory fitness in boys and flexibility in girls. Our results provide for the first time sex- and age-specific physical fitness reference standards in European children aged 6-10.9 years.

  16. Is the learning curve endless? One surgeon's experience with robotic prostatectomy

    NASA Astrophysics Data System (ADS)

    Patel, Vipul; Thaly, Rahul; Shah, Ketul

    2007-02-01

    Introduction: After performing 1,000 robotic prostatectomies, we reflected on our experience to determine what defined the learning curve and the essential elements that were key to surmounting it. Method: We retrospectively assessed our experience to define the learning curve(s), key elements of the procedure, technical refinements, and changes in technology that facilitated our progress. Result: The initial learning curve to achieve basic competence and the ability to perform the procedure smoothly in less than 4 hours with acceptable outcomes was approximately 25 cases. A second learning curve was present between 75-100 cases as we approached more complicated patients. At 200 cases we were comfortably able to complete the procedure routinely in less than 2.5 hours, with no specific step of the procedure hindering our progression. At 500 cases the introduction of new instrumentation (4th arm, bipolar Maryland, monopolar scissors) changed our approach to the bladder neck and neurovascular bundle dissection. The most challenging part of the procedure was the bladder neck dissection. Conclusion: There is no single parameter that can be used to assess or define the learning curve. We used a combination of factors to make our subjective definition; these included operative time, smoothness of technical progression during the case, and clinical outcomes. As our case experience progressed, we expected more of our outcomes; thus we continually modified our technique and embarked upon yet another learning curve.

  17. After Six Decades: Applying the U-Curve Hypothesis to the Adjustment of International Postgraduate Students

    ERIC Educational Resources Information Center

    Chien, Yu-Yi Grace

    2016-01-01

    The research described in this article concludes that the widely cited U-curve hypothesis is no longer supported by research data because the adjustment of international postgraduate students is a complex phenomenon that does not fit easily with attempts to define and categorize it. Methodological issues, different internal and external factors,…

  18. Brief Report: Examining children’s disruptive behavior in the wake of trauma - A two-piece growth curve model before and after a school shooting

    PubMed Central

    Liao, Yue; Shonkoff, Eleanor T.; Barnett, Elizabeth; Wen, CK Fred; Miller, Kimberly A.; Eddy, J. Mark

    2015-01-01

    School shootings may have serious negative impacts on children years after the event. Previous research suggests that children exposed to traumatic events experience heightened fear, anxiety, and feelings of vulnerability, but little research has examined potential aggressive and disruptive behavioral reactions. Utilizing a longitudinal dataset in which a local school shooting occurred during the course of data collection, this study sought to investigate whether the trajectory of disruptive behaviors was affected by the shooting. A two-piece growth curve model was used to examine the trajectory of disruptive behaviors during the pre-shooting years (i.e., piece one) and post-shooting years (i.e., piece two). Results indicated that the two-piece growth curve model fit the data better than the one-piece model and that the school shooting precipitated a faster decline in aggressive behaviors. This study demonstrated a novel approach to examining effects of an unexpected traumatic event on behavioral trajectories using an existing longitudinal data set. PMID:26298676
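    A single-subject caricature of a two-piece growth model (separate linear slopes before and after the event); a real analysis, as in the report, would embed this in a mixed-effects growth curve model over many children. The event time and data below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

T_EVENT = 3.0          # hypothetical time of the shooting (years into the study)

def two_piece(t, intercept, slope_pre, slope_post):
    """Piecewise-linear growth: one slope before T_EVENT, another after."""
    return (intercept + slope_pre * np.minimum(t, T_EVENT)
            + slope_post * np.maximum(t - T_EVENT, 0.0))

years = np.arange(0.0, 7.0)          # seven annual assessment waves
y = two_piece(years, 10.0, 0.5, -1.2) \
    + np.random.default_rng(5).normal(0.0, 0.3, years.size)

popt, _ = curve_fit(two_piece, years, y, p0=[8.0, 0.0, 0.0])
print("intercept, pre-slope, post-slope:", popt)
```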

  19. Understanding the Scalability of Bayesian Network Inference using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob

    2009-01-01

    Bayesian networks (BNs) are used to represent and efficiently compute with multi-variate probability distributions in a wide range of disciplines. One of the main approaches to performing computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well-known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. It is believed that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms, and presents an aid for analytical trade-off studies of clique tree clustering using
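    Fitting a Gompertz curve y = A·exp(-b·exp(-k·x)) to growth data is itself a small nonlinear least-squares exercise; the sketch below fits in log space for numerical stability, with invented values standing in for (root-node degree, root-clique size).

```python
import numpy as np
from scipy.optimize import curve_fit

def log_gompertz(x, logA, b, k):
    """log of the Gompertz curve A*exp(-b*exp(-k*x)); log-space fitting is stabler."""
    return logA - b * np.exp(-k * x)

x = np.arange(1.0, 21.0)             # stand-in for increasing root-node degree
y = np.exp(log_gompertz(x, np.log(1e6), 8.0, 0.45)) \
    * np.exp(np.random.default_rng(6).normal(0.0, 0.05, x.size))

popt, _ = curve_fit(log_gompertz, x, np.log(y), p0=[14.0, 5.0, 0.3], maxfev=10000)
logA, b, k = popt
print(f"A = {np.exp(logA):.3g}, b = {b:.2f}, k = {k:.3f}")
```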

  20. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    USGS Publications Warehouse

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  1. Spherical images and inextensible curved folding

    NASA Astrophysics Data System (ADS)

    Seffen, Keith A.

    2018-02-01

    In their study, Duncan and Duncan [Proc. R. Soc. London A 383, 191 (1982), 10.1098/rspa.1982.0126] calculate the shape of an inextensible surface folded in two about a general curve. They find the analytical relationships between pairs of generators linked across the fold curve, the shape of the original path, and the fold angle variation along it. They present two special cases of generator layouts for which the fold angle is uniform or the folded curve remains planar, for simplifying practical folding in sheet-metal processes. We verify their special cases by a graphical treatment according to a method of Gauss. We replace the fold curve by a piecewise linear path, which connects vertices of intersecting pairs of hinge lines. Inspired by the d-cone analysis by Farmer and Calladine [Int. J. Mech. Sci. 47, 509 (2005), 10.1016/j.ijmecsci.2005.02.013], we construct the spherical images for developable folding of successive vertices: the operating conditions of the special cases in Duncan and Duncan are then revealed straightforwardly by the geometric relationships between the images. Our approach may be used to synthesize folding patterns for novel deployable and shape-changing surfaces without need of complex calculation.

  2. Interaction Analysis of Longevity Interventions Using Survival Curves.

    PubMed

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-06

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise.
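    The paper derives its own independence prediction for the combined survival curve; as a simpler stand-in, the sketch below computes the common multiplicative-hazard null from control and single-intervention curves and integrates the deviation of the observed combination from it. All curves here are toy parametric forms, and the null model is this example's substitute, not the authors' construction.

```python
import numpy as np

t = np.linspace(0.0, 40.0, 400)                 # time (days)
S0 = np.exp(-((t / 15.0) ** 2))                 # toy control survival curve
SA = np.exp(-((t / 20.0) ** 2))                 # intervention A alone
SB = np.exp(-((t / 18.0) ** 2))                 # intervention B alone
S_AB_obs = np.exp(-((t / 22.0) ** 2))           # observed combination

# Multiplicative-hazard independence null (stand-in for the paper's prediction)
S_AB_null = S0 * (SA / S0) * (SB / S0)

# Signed area between observed and predicted curves as an interaction measure
interaction = np.trapz(S_AB_obs - S_AB_null, t)
print(f"integrated deviation from the null: {interaction:+.2f} days")
```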

  3. Modeling Phase-Aligned Gamma-Ray And Radio Millisecond Pulsar Light Curves

    DOE PAGES

    Venter, C.; Johnson, T. J.; Harding, A. K.

    2011-12-12

    The gamma-ray population of millisecond pulsars (MSPs) detected by the Fermi Large Area Telescope (LAT) has been steadily increasing. A number of the more recent detections, including PSR J0034-0534, PSR J1939+2134 (B1937+21; the first MSP ever discovered), PSR J1959+2048 (B1957+20; the first black widow system), and PSR J2214+3000, exhibit an unusual phenomenon: nearly phase-aligned radio and gamma-ray light curves (LCs). To account for the phase alignment, we explore geometric models where both the radio and gamma-ray emission originate either in the outer magnetosphere near the light cylinder (R_LC) or near the polar caps (PCs). We obtain reasonable fits for the first three of these MSPs in the context of “altitude-limited” outer gap (alOG) and two-pole caustic (alTPC) geometries. The outer magnetosphere phase-aligned models differ from the standard outer gap (OG) / two-pole caustic (TPC) models in two respects: first, the radio emission originates in caustics at relatively high altitudes compared to the usual low-altitude conal radio beams; second, we allow the maximum altitude of the gamma-ray emission region as well as both the minimum and maximum altitudes of the radio emission region to vary within a limited range. Alternatively, there also exist phase-aligned LC solutions for emission originating near the stellar surface in a slot gap (SG) scenario (“low-altitude slot gap” (laSG) models). We find best-fit LCs using a Markov chain Monte Carlo (MCMC) maximum likelihood approach [30]. Our fits imply that the phase-aligned LCs are likely of caustic origin, produced in the outer magnetosphere, and that the radio emission may come from close to R_LC. We lastly constrain the emission altitudes with typical uncertainties of ~0.3 R_LC. Our results describe a third gamma-ray MSP subclass, in addition to the two (with non-aligned LCs) previously found [50]: those with LCs fit by standard OG / TPC models, and those with LCs fit by pair-starved polar

  4. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
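    A bare-bones version of GLLA-style derivative estimation, one of the first-stage options above: embed the series in a delay matrix and project each window onto a local polynomial basis, so the projection coefficients estimate the signal and its derivatives. Window length and test signal below are arbitrary choices for this sketch.

```python
import math
import numpy as np

def glla(x, dt, embed=5, order=2):
    """Estimate x and its first `order` derivatives by local polynomial projection."""
    mid = (embed - 1) / 2.0
    offsets = (np.arange(embed) - mid) * dt
    # Basis columns 1, t, t^2/2!, ... so coefficients estimate x, x', x'', ...
    L = np.column_stack([offsets ** k / math.factorial(k)
                         for k in range(order + 1)])
    W = L @ np.linalg.inv(L.T @ L)
    X = np.column_stack([x[i:len(x) - embed + i + 1] for i in range(embed)])
    return X @ W          # columns: smoothed x, dx/dt, d2x/dt2

t = np.linspace(0.0, 10.0, 200)
est = glla(np.sin(t), dt=t[1] - t[0])
print("max first-derivative error:", np.abs(est[:, 1] - np.cos(t[2:-2])).max())
```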

  5. A spectral-Tchebychev solution for three-dimensional dynamics of curved beams under mixed boundary conditions

    NASA Astrophysics Data System (ADS)

    Bediz, Bekir; Aksoy, Serdar

    2018-01-01

    This paper presents the application of the spectral-Tchebychev (ST) technique for solution of three-dimensional dynamics of curved beams/structures having variable and arbitrary cross-section under mixed boundary conditions. To accurately capture the vibrational behavior of curved structures, a three-dimensional (3D) solution approach is required since these structures generally exhibit coupled motions. In this study, the integral boundary value problem (IBVP) governing the dynamics of the curved structures is found using extended Hamilton's principle where the strain energy is expressed using 3D linear elasticity equation. To solve the IBVP numerically, the 3D spectral Tchebychev (3D-ST) approach is used. To evaluate the integral and derivative operations defined by the IBVP and to render the complex geometry into an equivalent straight beam with rectangular cross-section, a series of coordinate transformations are applied. To validate and assess the performance of the presented solution approach, two case studies are performed: (i) curved beam with rectangular cross-section, (ii) curved and pretwisted beam with airfoil cross-section. In both cases, the results (natural frequencies and mode shapes) are also found using a finite element (FE) solution approach. It is shown that the difference in predicted natural frequencies are less than 1%, and the mode shapes are in excellent agreement based on the modal assurance criteria (MAC) analyses; however, the presented spectral-Tchebychev solution approach significantly reduces the computational burden. Therefore, it can be concluded that the presented solution approach can capture the 3D vibrational behavior of curved beams as accurately as an FE solution, but for a fraction of the computational cost.

  6. From Experiment to Theory: What Can We Learn from Growth Curves?

    PubMed

    Kareva, Irina; Karev, Georgy

    2018-01-01

    Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about the intrinsic properties of a population (i.e., its degree of heterogeneity, or its dependence on external resources) from which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether a population best fitted by a particular curve is more likely to be homogeneous or heterogeneous, to grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.

  7. Genetic parameters of Legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
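    Constructing the Legendre covariables used in such a random regression model is simple; the sketch below builds a third-order basis over standardized days in milk (DIM) with numpy. The Gibbs-sampling machinery for the variance components is well beyond a snippet, and the DIM grid is an assumption of this example.

```python
import numpy as np
from numpy.polynomial import legendre

dim = np.arange(5.0, 306.0, 10.0)                 # test days 5..305 (DIM)
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0   # map onto [-1, 1]

order = 3
# Column k holds P_k(x): the covariable multiplying the k-th random regression
Phi = np.column_stack([legendre.legval(x, np.eye(order + 1)[k])
                       for k in range(order + 1)])
print(Phi.shape)          # (n_test_days, order + 1)
```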

  8. Water retention curves and thermal insulating properties of Thermosand

    NASA Astrophysics Data System (ADS)

    Leibniz, Otto; Winkler, Gerfried; Birk, Steffen

    2010-05-01

    The heat loss and the efficiency of insulating material surrounding heat supply pipes are essential issues for the energy budget of heat supply pipelines. Until now, heat loss from the pipe has been minimized by enlarging the polyurethane (PU) insulation thickness around the pipe. As a new approach to minimizing heat loss, a thermally insulating bedding material was developed and investigated. Conventional bedding sands cover all necessary soil-mechanical properties but have a high thermal conductivity of λ = 1.5 to 1.7 W/(m K). The newly developed embedding material 'Thermosand' shows thermal conductivities from λ = 0.18 W/(m K) (dry) up to 0.88 W/(m K) (wet). The raw material originates from the waste rock stockpiles of a coal mine near Fohnsdorf, Austria. With high temperatures up to nearly 1000 °C and a special mineral mixture, a naturally burned reddish material resembling clinker arises. The soil-mechanical properties of Thermosand have been thoroughly investigated with laboratory testing and in situ investigations to determine compaction, permeability and shear behaviour, stiffness and the corresponding physical parameters. Test trenches along operational heat pipes, with temperature measurement along several cross-sections, were constructed to compare conventional embedding materials with 'Thermosand'. To investigate the influence of varying moisture content on thermal conductivity, a 1:1 large-scale laboratory model test simulating real in situ conditions was established. Based on this model, it is planned to develop numerical simulations of varying moisture contents and unsaturated soil mechanics with heat propagation, including the drying out of the soil during heat input. These simulations require knowledge of the water retention properties of the material. Thus, water retention curves were measured using both steady-state tension and pressure techniques and the simplified evaporation method. The steady-state method employs a tension table (sand

  9. QUENCH: A software package for the determination of quenching curves in Liquid Scintillation counting.

    PubMed

    Cassette, Philippe

    2016-03-01

    In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system, and its detection efficiency varies with the scintillator used, the vial, the volume, and the chemistry of the sample. The detection efficiency is generally determined using a quenching curve, describing, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. Then a simple formula is fitted to the experimental points to define the quenching curve function. The paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled, and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times, and eventually the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between these parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms. Copyright © 2015 Elsevier Ltd. All rights reserved.
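    The Monte Carlo scheme the abstract outlines can be paraphrased in a few lines: jitter each (quenching index, efficiency) point by its uncertainty, refit the curve, and summarize the resulting parameter distribution. The data, uncertainties, and polynomial order below are invented, not taken from QUENCH.

```python
import numpy as np

rng = np.random.default_rng(7)
q   = np.array([5.0, 6.0, 7.0, 8.0, 9.0, 10.0])        # quenching index
eff = np.array([0.55, 0.67, 0.76, 0.83, 0.88, 0.91])   # detection efficiency
u_q, u_eff = 0.05, 0.01                                # assumed 1-sigma uncertainties

# Monte Carlo: jitter the points, refit a quadratic, collect the coefficients
params = np.array([np.polyfit(q + rng.normal(0.0, u_q, q.size),
                              eff + rng.normal(0.0, u_eff, eff.size), deg=2)
                   for _ in range(5000)])
mean, cov = params.mean(axis=0), np.cov(params.T)
print("mean coefficients:", mean)
print("coefficient sd:   ", np.sqrt(np.diag(cov)))

# Efficiency (with uncertainty) at an arbitrary quenching index in range
eff_new = np.array([np.polyval(p, 7.5) for p in params])
print(f"eff(7.5) = {eff_new.mean():.3f} +/- {eff_new.std(ddof=1):.3f}")
```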

  10. Assessment of two theoretical methods to estimate potentiometric-titration curves of peptides: comparison with experiment

    PubMed Central

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A.

    2008-01-01

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric-titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH) and dimethylsulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the Electrostatically Driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, MD simulations are run with the AMBER force field and the Generalized-Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethyl amine and propyl amine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach, and the titration curve in water calculated using the MD-based approach, have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the

  11. Noninvasive evaluation of left ventricular elastance according to pressure-volume curves modeling in arterial hypertension.

    PubMed

    Bonnet, Benjamin; Jourdan, Franck; du Cailar, Guilhem; Fesler, Pierre

    2017-08-01

    End-systolic left ventricular (LV) elastance (Ees) has been previously calculated and validated invasively using LV pressure-volume (P-V) loops. Noninvasive methods have been proposed, but clinical application remains complex. The aims of the present study were to 1) estimate Ees according to modeling of the LV P-V curve during ejection (the "ejection P-V curve" method) and validate our method with existing published LV P-V loop data, and 2) test the clinical applicability of noninvasively detecting a difference in Ees between normotensive and hypertensive subjects. On the basis of the ejection P-V curve and a linear relationship between elastance and time during ejection, we used a nonlinear least-squares method to fit the pressure waveform. We then computed the slope and intercept of time-varying elastance as well as the volume intercept (V0). As a validation, 22 P-V loops obtained from previous invasive studies were digitized and analyzed using the ejection P-V curve method. To test clinical applicability, ejection P-V curves were obtained from 33 hypertensive subjects and 32 normotensive subjects with carotid tonometry and real-time three-dimensional echocardiography during the same procedure. A good univariate relationship (r² = 0.92, P < 0.005) and good limits of agreement were found between the invasive calculation of Ees and our newly proposed ejection P-V curve method. In hypertensive patients, an increase in arterial elastance (Ea) was compensated by a parallel increase in Ees without change in Ea/Ees. In addition, the clinical reproducibility of our method was similar to that of another noninvasive method. In conclusion, Ees and V0 can be estimated noninvasively from modeling of the P-V curve during ejection. This approach was found to be reproducible and sensitive enough to detect an expected increase in LV contractility in hypertensive patients. Because of its noninvasive nature, this methodology may have clinical implications in
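    A schematic of the ejection P-V fit under the stated linear-elastance assumption, P(t) = (e0 + s·t)(V(t) - V0): recover (e0, s, V0) by nonlinear least squares and read Ees from end-ejection elastance. The waveforms below are synthetic, and the parameterization is this example's guess at the details rather than the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 0.30, 60)                  # ejection period (s)
V = 120.0 - 200.0 * t * (1.0 - t / 0.6)         # toy ejection-phase volume (mL)
e0, s, v0 = 0.8, 6.0, 15.0                      # "true" values for the demo
P = (e0 + s * t) * (V - v0)                     # pressure from P = E(t)(V - V0)

def residuals(theta):
    e0_, s_, v0_ = theta
    return (e0_ + s_ * t) * (V - v0_) - P

fit = least_squares(residuals, [0.5, 3.0, 5.0])
e0_f, s_f, v0_f = fit.x
print(f"Ees ~ E(end ejection) = {e0_f + s_f * t[-1]:.2f} mmHg/mL, "
      f"V0 = {v0_f:.1f} mL")
```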

  12. Understanding the Scalability of Bayesian Network Inference Using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.

    2010-01-01

    One of the main approaches to performing computation in Bayesian networks (BNs) is clique tree clustering and propagation. The clique tree approach consists of propagation in a clique tree compiled from a Bayesian network, and while it was introduced in the 1980s, there is still a lack of understanding of how clique tree computation time depends on variations in BN size and structure. In this article, we improve this understanding by developing an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, and (ii) the expected number of moral edges in their moral graphs. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for the total size of each set. For the special case of bipartite BNs, there are two sets and two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, where random bipartite BNs generated using the BPART algorithm are studied, we systematically increase the out-degree of the root nodes in bipartite Bayesian networks, by increasing the number of leaf nodes. Surprisingly, root clique growth is well-approximated by Gompertz growth curves, an S-shaped family of curves that has previously been used to describe growth processes in biology, medicine, and neuroscience. We believe that this research improves the understanding of the scaling behavior of clique tree clustering for a certain class of Bayesian networks; presents an aid for trade-off studies of clique tree clustering using growth curves; and ultimately provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms.

  13. Aerobic Fitness Level Affects Cardiovascular and Salivary Alpha Amylase Responses to Acute Psychosocial Stress.

    PubMed

    Wyss, Thomas; Boesch, Maria; Roos, Lilian; Tschopp, Céline; Frei, Klaus M; Annen, Hubert; La Marca, Roberto

    2016-12-01

    Good physical fitness seems to help the individual to buffer the potentially harmful impact of psychosocial stress on somatic and mental health. The aim of the present study is to investigate the role of physical fitness levels in the autonomic nervous system (ANS; i.e. heart rate and salivary alpha amylase) responses to acute psychosocial stress, while controlling for established factors influencing individual stress reactions. The Trier Social Stress Test for Groups (TSST-G) was executed with 302 male recruits during their first week of Swiss Army basic training. Heart rate was measured continuously, and salivary alpha amylase was measured twice, before and after the stress intervention. In the same week, all volunteers participated in a physical fitness test and they responded to questionnaires on lifestyle factors and personal traits. A multiple linear regression analysis was conducted to determine ANS responses to acute psychosocial stress from physical fitness test performances, controlling for personal traits, behavioural factors, and socioeconomic data. Multiple linear regression revealed three variables predicting 15 % of the variance in heart rate response (area under the individual heart rate response curve during TSST-G) and four variables predicting 12 % of the variance in salivary alpha amylase response (salivary alpha amylase level immediately after the TSST-G) to acute psychosocial stress. A strong performance at the progressive endurance run (high maximal oxygen consumption) was a significant predictor of ANS response in both models: low area under the heart rate response curve during TSST-G as well as low salivary alpha amylase level after TSST-G. Further, high muscle power, non-smoking, high extraversion, and low agreeableness were predictors of a favourable ANS response in either one of the two dependent variables. Good physical fitness, especially good aerobic endurance capacity, is an important protective factor against health
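    The outcome "area under the individual heart rate response curve during TSST-G" can be computed with the trapezoid rule; the sketch shows both ground- and baseline-referenced variants on a toy recording (the study's exact definition may differ).

```python
import numpy as np

time_min = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])   # minutes
hr = np.array([72.0, 85.0, 98.0, 104.0, 101.0, 96.0, 88.0, 80.0])  # beats/min

auc_ground = np.trapz(hr, time_min)              # AUC with respect to zero
auc_baseline = np.trapz(hr - hr[0], time_min)    # AUC with respect to baseline
print(f"AUCg = {auc_ground:.0f} bpm*min, AUCb = {auc_baseline:.0f} bpm*min")
```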

  14. An investigation of motivational variables in CrossFit facilities.

    PubMed

    Partridge, Julie A; Knapp, Bobbi A; Massengale, Brittany D

    2014-06-01

    CrossFit is a growing fitness trend in the United States; however, little systematic research has addressed specific motivational principles within this unique exercise environment. The purpose of the study was to explore the influence of gender and membership time on perceptions of motivational climate and goals within the CrossFit environment. Specifically, people may set goals related to self-improvement (i.e., mastery) or focus on their performance in comparison to others (i.e., performance). Motivational climate refers to an individual's perception of being encouraged to focus on either mastery or performance goals from CrossFit trainers. A total of 144 members (88 females; 56 males) completed questionnaires to assess participants' perceptions of CrossFit goal structures and perceptions of the motivational climate encouraged by the trainer within their CrossFit box. Results indicated a significant main effect for gender on preferred goals (p ≤ 0.05), with males reporting higher levels of performance approach goals and females reporting higher levels of mastery avoidance goals. Participants who reported shorter membership times were found to have significantly higher mastery-related goals than individuals who reported longer membership times (p ≤ 0.05). The results from the study suggest that practitioners should consider how perceptions of the motivational climate and goals in group-based exercise settings such as CrossFit may vary based on demographic variables, and that these differences may impact how to most effectively motivate, encourage, and instruct group members, particularly with regard to helping members set goals that most effectively address their approach to the CrossFit regimen.

  16. Light curve variations of the eclipsing binary V367 Cygni

    NASA Astrophysics Data System (ADS)

    Akan, M. C.

    1987-07-01

    The long-period eclipsing binary star V367 Cygni has been observed photoelectrically in two colours, B and V, in 1984, 1985, and 1986. These new light curves of the system are discussed and their light variability is compared with the earlier curves presented by Heiser (1962). Using some of the previously published photoelectric light curves together with the present ones, several times of primary minimum have been derived to calculate the light elements. Any attempt to obtain a photometric solution of the binary is complicated by the peculiar nature of the light curve, caused by the presence of circumstellar matter in the system. Despite this difficulty, several approaches to solving the light curves are being pursued and are briefly discussed.

  17. Prediction of Fitness to Drive in Patients with Alzheimer's Dementia

    PubMed Central

    Piersma, Dafne; Fuermaier, Anselm B. M.; de Waard, Dick; Davidse, Ragnhild J.; de Groot, Jolieke; Doumen, Michelle J. A.; Bredewoud, Ruud A.; Claesen, René; Lemstra, Afina W.; Vermeeren, Annemiek; Ponds, Rudolf; Verhey, Frans; Brouwer, Wiebo H.; Tucha, Oliver

    2016-01-01

    The number of patients with Alzheimer’s disease (AD) is increasing and so is the number of patients driving a car. To enable patients to retain their mobility while at the same time not endangering public safety, each patient should be assessed for fitness to drive. The aim of this study is to develop a method to assess fitness to drive in a clinical setting, using three types of assessments, i.e. clinical interviews, neuropsychological assessment and driving simulator rides. The goals are (1) to determine for each type of assessment which combination of measures is most predictive for on-road driving performance, (2) to compare the predictive value of clinical interviews, neuropsychological assessment and driving simulator evaluation and (3) to determine which combination of these assessments provides the best prediction of fitness to drive. Eighty-one patients with AD and 45 healthy individuals participated. All completed a clinical interview and were administered a neuropsychological test battery and a driving simulator ride (predictors). The criterion fitness to drive was determined in an on-road driving assessment by experts of the CBR Dutch driving test organisation according to their official protocol. The validity of the predictors to determine fitness to drive was explored by means of logistic regression analyses, discriminant function analyses, as well as receiver operating characteristic curve analyses. We found that all three types of assessments are predictive of on-road driving performance. Neuropsychological assessment had the highest classification accuracy followed by driving simulator rides and clinical interviews. However, combining all three types of assessments yielded the best prediction for fitness to drive in patients with AD with an overall accuracy of 92.7%, which makes this method highly valid for assessing fitness to drive in AD. This method may be used to advise patients with AD and their family members about fitness to drive. PMID:26910535
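
    As an illustration of the prediction-and-validation design described above, the following hedged sketch combines three assessment scores in a logistic regression and summarizes discriminative power with ROC analysis. The score names and synthetic data are assumptions, not the study's measures.

```python
# Illustrative sketch: predict a pass/fail on-road criterion from several
# assessment composites with logistic regression, then compute ROC/AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n = 126  # e.g. 81 patients + 45 controls
neuropsych = rng.normal(0, 1, n)   # composite neuropsychological score (assumed)
simulator = rng.normal(0, 1, n)    # composite driving simulator score (assumed)
interview = rng.normal(0, 1, n)    # clinical interview rating (assumed)

# Synthetic criterion: expert on-road pass (True) / fail (False)
logit = 1.5 * neuropsych + 1.0 * simulator + 0.5 * interview
passed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([neuropsych, simulator, interview])
clf = LogisticRegression().fit(X, passed)
probs = clf.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(passed, probs))   # discrimination of combined model
fpr, tpr, _ = roc_curve(passed, probs)        # points on the ROC curve
```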

  18. New Risk Curves for NHTSA's Brain Injury Criterion (BrIC): Derivations and Assessments.

    PubMed

    Laituri, Tony R; Henry, Scott; Pline, Kevin; Li, Guosong; Frankstein, Michael; Weerappuli, Para

    2016-11-01

    were used. The assessments were conducted for minor, moderate, and serious injury levels for both Newer Vehicles (airbag-fitted) and Older Vehicles (not airbag-fitted). Curve-based injury rates and NASS-based injury rates were compared via average percent difference (AvgPctDiff). The new risk curves demonstrated significantly better fidelity than those from NHTSA. For example, for the aggregate perspective (n=12 assessments), the results were as follows: AvgPctDiff (present risk curves) = +67 versus AvgPctDiff (NHTSA risk curves) = +9378. Part 2 also contained a more comprehensive assessment. Specifically, BrIC-based risk curves were used to estimate brain-related injury probabilities, HIC15-based risk curves from NHTSA were used to estimate bone/other injury probabilities, and the maximum of the two resulting probabilities was used to represent the attendant head-injury probabilities. (Those HIC15-based risk curves yielded AvgPctDiff=+85 for that application.) Subject to the resulting 21 assessments, similar results were observed: AvgPctDiff (present risk curves) = +42 versus AvgPctDiff (NHTSA risk curves) = +5783. Therefore, based on the results from Part 2, if the existing BrIC metric is to be applied by NHTSA in vehicle assessment, we recommend that the corresponding risk curves derived in the present study be considered. Part 3 pertained to the assessment of various other candidate brain-injury metrics. Specifically, Parts 1 and 2 were revisited for HIC15, translation acceleration (TA), rotational acceleration (RA), rotational velocity (RV), and a different rotational brain injury criterion from NHTSA (BRIC). The rank-ordered results for the 21 assessments for each metric were as follows: RA, HIC15, BRIC, TA, BrIC, and RV. Therefore, of the six studied sets of OLR-based risk curves, the set for rotational acceleration demonstrated the best performance relative to NASS.
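
    The headline numbers above are average percent differences between curve-based and field (NASS-based) injury rates. A toy computation of that comparison metric, under the assumption that it is a simple mean of per-assessment percent differences (the paper's exact definition is not reproduced here), might look like this:

```python
# Hedged sketch of the AvgPctDiff comparison metric; rates are hypothetical.
import numpy as np

nass_rates  = np.array([0.020, 0.050, 0.110])   # hypothetical field injury rates
curve_rates = np.array([0.034, 0.083, 0.180])   # hypothetical curve-based rates

avg_pct_diff = 100.0 * np.mean((curve_rates - nass_rates) / nass_rates)
print(f"AvgPctDiff = {avg_pct_diff:+.0f}%")     # positive = curves over-predict
```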

  19. A computational model-based validation of Guyton's analysis of cardiac output and venous return curves

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.; Mark, R. G.

    2002-01-01

    Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
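
    The core of Guyton's graphical analysis is finding where the cardiac output and venous return curves intersect. The sketch below numerically reproduces that step only (it does not model the paper's pulsatile versus constant RAP distinction); the curve shapes and parameter values are simplified assumptions.

```python
# Minimal numerical illustration of Guyton's graphical analysis: the operating
# point of the circulation is the intersection of a cardiac output (CO) curve
# and a venous return (VR) curve, both functions of right atrial pressure (RAP).
import numpy as np
from scipy.optimize import brentq

def cardiac_output(rap):
    # Frank-Starling-like saturating CO curve (L/min); parameters assumed
    return 10.0 / (1.0 + np.exp(-(rap - 2.0)))

def venous_return(rap, msfp=7.0, rvr=1.4):
    # Linear VR curve that plateaus below RAP = 0 (systemic venous collapse)
    return (msfp - max(rap, 0.0)) / rvr

# Solve CO(RAP) = VR(RAP) for the intact-circulation operating point
rap_star = brentq(lambda p: cardiac_output(p) - venous_return(p), -2.0, 7.0)
print(f"RAP* = {rap_star:.2f} mmHg, CO* = {cardiac_output(rap_star):.2f} L/min")
```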

  20. Bayesian semiparametric estimation of covariate-dependent ROC curves

    PubMed Central

    Rodríguez, Abel; Martínez, Julissa C.

    2014-01-01

    Receiver operating characteristic (ROC) curves are widely used to measure the discriminating power of medical tests and other classification procedures. In many practical applications, the performance of these procedures can depend on covariates such as age, naturally leading to a collection of curves associated with different covariate levels. This paper develops a Bayesian heteroscedastic semiparametric regression model and applies it to the estimation of covariate-dependent ROC curves. More specifically, our approach uses Gaussian process priors to model the conditional mean and conditional variance of the biomarker of interest for each of the populations under study. The model is illustrated through an application to the evaluation of prostate-specific antigen for the diagnosis of prostate cancer, which contrasts the performance of our model against alternative models. PMID:24174579

  1. Hierarchical data-driven approach to fitting numerical relativity data for nonprecessing binary black holes with an application to final spin and radiated energy

    NASA Astrophysics Data System (ADS)

    Jiménez-Forteza, Xisco; Keitel, David; Husa, Sascha; Hannam, Mark; Khan, Sebastian; Pürrer, Michael

    2017-03-01

    Numerical relativity is an essential tool in studying the coalescence of binary black holes (BBHs). It is still computationally prohibitive to cover the BBH parameter space exhaustively, making phenomenological fitting formulas for BBH waveforms and final-state properties important for practical applications. We describe a general hierarchical bottom-up fitting methodology to design and calibrate fits to numerical relativity simulations for the three-dimensional parameter space of quasicircular nonprecessing merging BBHs, spanned by mass ratio and by the individual spin components orthogonal to the orbital plane. Particular attention is paid to incorporating the extreme-mass-ratio limit and to the subdominant unequal-spin effects. As an illustration of the method, we provide two applications, to the final spin and final mass (or equivalently: radiated energy) of the remnant black hole. Fitting to 427 numerical relativity simulations, we obtain results broadly consistent with previously published fits, but improving in overall accuracy and particularly in the approach to extremal limits and for unequal-spin configurations. We also discuss the importance of data quality studies when combining simulations from diverse sources, how detailed error budgets will be necessary for further improvements of these already highly accurate fits, and how this first detailed study of unequal-spin effects helps in choosing the most informative parameters for future numerical relativity runs.

  2. Short- and Mid-Term Effects of Violent Victimization on Delinquency: A Multilevel Growth-Curve Modeling Approach.

    PubMed

    Kim, Young S; Lo, Celia C

    2016-10-01

    The present study investigates how adolescents' experiences of violent victimization exert short- and mid-term effects on their involvement in delinquency. The study compares and contrasts delinquency trajectories of youths whose experiences of violent victimization differ. A multilevel growth-curve modeling approach is applied to analyze data from five waves of the National Youth Survey. The results show that, although delinquency involvement increases as youths experience violent victimization, delinquency trajectories differ with the type of violent victimization, specifically, parental versus non-parental victimization. Violent victimization by parents produced a sharp initial decline in delinquency (short-term effect) followed by a rapid acceleration (mid-term effect). In turn, non-parental violence showed a stable trend over time. The findings have important implications for prevention and treatment services. © The Author(s) 2015.
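
    A multilevel growth-curve model of this kind can be expressed as a mixed-effects regression with a random intercept and slope over survey waves. The sketch below is a generic illustration with synthetic data and assumed variable names, not the authors' specification.

```python
# Hedged sketch of a multilevel growth-curve model: repeated delinquency
# measures nested within youths, random intercept and slope over waves.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_youth, n_waves = 200, 5
df = pd.DataFrame({
    "youth": np.repeat(np.arange(n_youth), n_waves),
    "wave": np.tile(np.arange(n_waves), n_youth),
})
df["victimized"] = np.repeat(rng.integers(0, 2, n_youth), n_waves)
u = np.repeat(rng.normal(0, 0.5, n_youth), n_waves)   # youth-level intercepts
df["delinquency"] = (1.0 + u + 0.2 * df["wave"]
                     + 0.3 * df["victimized"] * df["wave"]
                     + rng.normal(0, 0.5, len(df)))

# Random intercept and random slope for wave, grouped by youth
model = smf.mixedlm("delinquency ~ wave * victimized", df,
                    groups=df["youth"], re_formula="~wave").fit()
print(model.summary())
```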

  3. Linking the Climate and Thermal Phase Curve of 55 Cancri e

    NASA Astrophysics Data System (ADS)

    Hammond, Mark; Pierrehumbert, Raymond T.

    2017-11-01

    The thermal phase curve of 55 Cancri e is the first measurement of the temperature distribution of a tidally locked super-Earth, but raises a number of puzzling questions about the planet’s climate. The phase curve has a high amplitude and peak offset, suggesting that it has a significant eastward hot-spot shift as well as a large day-night temperature contrast. We use a general circulation model to model potential climates, and investigate the relation between bulk atmospheric composition and the magnitude of these seemingly contradictory features. We confirm that theoretical models of tidally locked circulation are consistent with our numerical model of 55 Cnc e, and rule out certain atmospheric compositions based on their thermodynamic properties. Our best-fitting atmosphere has a significant hot-spot shift and day-night contrast, although these are not as large as the observed phase curve. We discuss possible physical processes that could explain the observations, and show that night-side cloud formation from species such as SiO from a day-side magma ocean could potentially increase the phase curve amplitude and explain the observations. We conclude that the observations could be explained by an optically thick atmosphere with a low mean molecular weight, a surface pressure of several bars, and a strong eastward circulation, with night-side cloud formation a possible explanation for the difference between our model and the observations.

  4. Corvettes, Curve Fitting, and Calculus

    ERIC Educational Resources Information Center

    Murawska, Jaclyn M.; Nabb, Keith A.

    2015-01-01

    Sometimes the best mathematics problems come from the most unexpected situations. Last summer, a Corvette raced down a local quarter-mile drag strip. The driver, a family member, provided the spectators with time and distance-traveled data from his time slip and asked "Can you calculate how many seconds it took me to go from 0 to 60…

  5. Regionalisation of low flow frequency curves for the Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Mamun, Abdullah A.; Hashim, Alias; Daoud, Jamal I.

    2010-02-01

    Regional maps and equations for the magnitude and frequency of 1-, 7-, and 30-day low flows were derived and are presented in this paper. The river gauging stations of neighbouring catchments that produced similar low flow frequency curves were grouped together. On this basis, Peninsular Malaysia was divided into seven low flow regions. Regional equations were developed using the multivariate regression technique. An empirical relationship was developed for mean annual minimum flow as a function of catchment area, mean annual rainfall, and mean annual evaporation. The regional equations exhibited good coefficients of determination (R2 > 0.90). Three low flow frequency curves showing the low, mean, and high limits for each region were proposed based on a graphical best-fit technique. Knowing the catchment area, mean annual rainfall, and evaporation in the region, design low flows of different durations can be easily estimated for ungauged catchments. This procedure is expected to overcome the problem of data unavailability when estimating low flows in Peninsular Malaysia.
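
    A regression of this kind is commonly fitted as a power law in the catchment descriptors, estimated by ordinary least squares in log space. The sketch below illustrates that form; the exponents, units, and synthetic data are assumptions, not the paper's regional equations.

```python
# Hedged sketch: mean annual minimum flow (MAM) as a power-law function of
# catchment area, mean annual rainfall, and mean annual evaporation.
import numpy as np

rng = np.random.default_rng(3)
n = 40
area = rng.uniform(50, 2000, n)        # km^2
rain = rng.uniform(1800, 3200, n)      # mm/yr
evap = rng.uniform(1200, 1700, n)      # mm/yr
mam = 1e-6 * area**0.9 * rain**1.5 * evap**-0.8 * rng.lognormal(0, 0.1, n)

# log(MAM) = log a + b log A + c log P + d log E, solved by least squares
X = np.column_stack([np.ones(n), np.log(area), np.log(rain), np.log(evap)])
coef, *_ = np.linalg.lstsq(X, np.log(mam), rcond=None)
pred = X @ coef
resid = np.log(mam) - pred
r2 = 1 - np.sum(resid**2) / np.sum((np.log(mam) - np.log(mam).mean())**2)
print(f"exponents b, c, d = {coef[1]:.2f}, {coef[2]:.2f}, {coef[3]:.2f}; R^2 = {r2:.3f}")
```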

  6. Longitudinal associations between exercise identity and exercise motivation: A multilevel growth curve model approach.

    PubMed

    Ntoumanis, N; Stenling, A; Thøgersen-Ntoumani, C; Vlachopoulos, S; Lindwall, M; Gucciardi, D F; Tsakonitis, C

    2018-02-01

    Past work linking exercise identity and exercise motivation has been cross-sectional. This is the first study to model the relations between different types of exercise identity and exercise motivation longitudinally. Understanding the dynamic associations between these sets of variables has implications for theory development and applied research. This was a longitudinal survey study. Participants were 180 exercisers (79 men, 101 women) from Greece, who were recruited from fitness centers and were asked to complete questionnaires assessing exercise identity (exercise beliefs and role-identity) and exercise motivation (intrinsic, identified, introjected, external motivation, and amotivation) three times within a 6 month period. Multilevel growth curve modeling examined the role of motivational regulations as within- and between-level predictors of exercise identity, and a model in which exercise identity predicted exercise motivation at the within- and between-person levels. Results showed that within-person changes in intrinsic motivation, introjected, and identified regulations were positively and reciprocally related to within-person changes in exercise beliefs; intrinsic motivation was also a positive predictor of within-person changes in role-identity but not vice versa. Between-person differences in the means of predictor variables were predictive of initial levels and average rates of change in the outcome variables. The findings support the proposition that a strong exercise identity (particularly exercise beliefs) can foster motivation for behaviors that reinforce this identity. We also demonstrate that such relations can be reciprocal over time and can depend on the type of motivation in question as well as between-person differences in absolute levels of these variables. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Efficient occupancy model-fitting for extensive citizen-science data.

    PubMed

    Dennis, Emily B; Morgan, Byron J T; Freeman, Stephen N; Ridout, Martin S; Brereton, Tom M; Fox, Richard; Powney, Gary D; Roy, David B

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species' range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
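
    The classical approach described above amounts to maximizing an occupancy-detection likelihood in which the occupancy probability is a logistic function of site covariates. The sketch below is a minimal generic illustration of that idea (occupancy ψ logistic in one covariate, constant detection probability p), with synthetic data, not the authors' model for the butterfly data.

```python
# Hedged sketch of classical occupancy model fitting by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
n_sites, n_visits = 300, 4
cov = rng.normal(0, 1, n_sites)                     # environmental covariate
z = rng.random(n_sites) < expit(-0.3 + 1.2 * cov)   # true occupancy states
y = rng.random((n_sites, n_visits)) < (0.4 * z[:, None])  # detection histories
det = y.sum(axis=1)                                 # detections per site

def negloglik(theta):
    b0, b1, logit_p = theta
    psi, p = expit(b0 + b1 * cov), expit(logit_p)
    # Occupied sites give binomial detections; a site never detected is either
    # occupied-but-missed or genuinely unoccupied.
    l_det = psi * p**det * (1 - p)**(n_visits - det)
    l_none = psi * (1 - p)**n_visits + (1 - psi)
    return -np.sum(np.where(det > 0, np.log(l_det), np.log(l_none)))

fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
print(fit.x)  # recovers intercept, covariate slope, logit detection probability
```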

  8. Efficient occupancy model-fitting for extensive citizen-science data

    PubMed Central

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen

  9. Gamma-Ray Light Curves from Pulsar Magnetospheres with Finite Conductivity

    NASA Technical Reports Server (NTRS)

    Harding, A. K.; Kalapotharakos, C.; Kazanas, D.; Contopoulos, I.

    2012-01-01

    The Fermi Large Area Telescope has provided an unprecedented database for pulsar emission studies that includes gamma-ray light curves for over 100 pulsars. Modeling these light curves can reveal and constrain the geometry of the particle accelerator, as well as the pulsar magnetic field structure. We have constructed 3D magnetosphere models with finite conductivity that bridge the extreme vacuum and force-free solutions used in previous light curve modeling. We are investigating the shapes of pulsar gamma-ray light curves using these dissipative solutions with two different approaches: (1) assuming geometric emission patterns of the slot gap and outer gap, and (2) using the parallel electric field provided by the resistive models to compute the trajectories and emission of the radiating particles. The light curves using geometric emission patterns show a systematic increase in gamma-ray peak phase with increasing conductivity, introducing a new diagnostic of these solutions. The light curves using the model electric fields are very sensitive to the conductivity but do not resemble the observed Fermi light curves, suggesting that some screening of the parallel electric field, by pair cascades not included in the models, is necessary.

  10. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    PubMed

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
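
    The authors use monotone splines; as a simpler stand-in, the sketch below shows the issue they address: a kernel-smoothed ROC estimate need not be monotone, and a monotone projection (here isotonic regression, which lacks the spline smoothness) restores a valid curve. All parameters and data are assumptions.

```python
# Hedged sketch: smooth an empirical ROC, then enforce monotonicity.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(5)
healthy = rng.normal(0.0, 1.0, 150)
diseased = rng.normal(1.2, 1.5, 150)

grid = np.linspace(0.001, 0.999, 200)            # FPR grid
thresholds = np.quantile(healthy, 1 - grid)      # thresholds giving those FPRs
tpr_emp = (diseased[:, None] > thresholds).mean(axis=0)

# Nadaraya-Watson smoothing of TPR over the FPR grid (can be non-monotone)
h = 0.05
w = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / h) ** 2)
tpr_smooth = (w @ tpr_emp) / w.sum(axis=1)

# Project onto monotone functions to recover a valid ROC curve
tpr_monotone = IsotonicRegression(y_min=0, y_max=1).fit_transform(grid, tpr_smooth)
```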

  11. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
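
    The paper's FPCA is designed for sparse, irregular sampling; the sketch below shows only the dense-grid analogue of the decomposition it describes: each light curve is a mean function plus a linear combination of principal component functions, with per-object scores as shape parameters. The template, modes, and data are synthetic assumptions.

```python
# Hedged sketch of functional PCA on a common phase grid (dense-data analogue).
import numpy as np

rng = np.random.default_rng(6)
phase = np.linspace(-10, 40, 60)            # days from peak (common grid)
n_sne = 120
# Synthetic light curves: shared template plus two modes of shape variation
template = -19.0 + 0.004 * phase**2
mode1 = np.exp(-0.5 * ((phase - 15) / 8) ** 2)   # late-time decline mode
mode2 = phase / 40.0                             # tilt mode
scores_true = rng.normal(0, 1, (n_sne, 2))
curves = (template
          + scores_true @ np.vstack([mode1, mode2])
          + rng.normal(0, 0.02, (n_sne, len(phase))))

mean_curve = curves.mean(axis=0)
u, s, vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
pc_functions = vt[:2]                                 # leading PC functions
pc_scores = (curves - mean_curve) @ pc_functions.T    # per-SN shape parameters
# pc_scores can now be regressed against color, dust, or spectral indicators
```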

  12. Virus fitness differences observed between two naturally occurring isolates of Ebola virus Makona variant using a reverse genetics approach.

    PubMed

    Albariño, César G; Guerrero, Lisa Wiggleton; Chakrabarti, Ayan K; Kainulainen, Markus H; Whitmer, Shannon L M; Welch, Stephen R; Nichol, Stuart T

    2016-09-01

    During the large outbreak of Ebola virus disease that occurred in Western Africa from late 2013 to early 2016, several hundred Ebola virus (EBOV) genomes have been sequenced and the virus genetic drift analyzed. In a previous report, we described an efficient reverse genetics system designed to generate recombinant EBOV based on a Makona variant isolate obtained in 2014. Using this system, we characterized the replication and fitness of 2 isolates of the Makona variant. These virus isolates are nearly identical at the genetic level, but have single amino acid differences in the VP30 and L proteins. The potential effects of these differences were tested using minigenomes and recombinant viruses. The results obtained with this approach are consistent with the role of VP30 and L as components of the EBOV RNA replication machinery. Moreover, the 2 isolates exhibited clear fitness differences in competitive growth assays. Published by Elsevier Inc.

  13. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
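
    The paper fits closed-form firing-rate expressions derived from an AdEx approximation; as a simpler stand-in, the sketch below fits the classic closed-form leaky integrate-and-fire f-I curve to (current, rate) data by least squares. All parameter values and the synthetic protocol are assumptions.

```python
# Hedged sketch: fit a closed-form LIF f-I curve to synthetic current-clamp data.
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, tau, R, v_th, t_ref):
    # Closed-form LIF rate (Hz) for constant current I (nA): zero below
    # threshold, 1/(t_ref + tau*ln(RI/(RI - v_th))) above it.
    drive = R * I
    rate = np.zeros_like(I, dtype=float)
    supra = drive > v_th
    rate[supra] = 1.0 / (t_ref + tau * np.log(drive[supra] / (drive[supra] - v_th)))
    return rate

# Synthetic f-I data standing in for an in vitro protocol
I = np.linspace(0.0, 1.0, 25)                           # nA
true = dict(tau=0.02, R=40.0, v_th=15.0, t_ref=0.002)   # s, MOhm, mV, s
rng = np.random.default_rng(7)
rates = lif_rate(I, **true) + rng.normal(0.0, 1.0, I.size)

popt, _ = curve_fit(lif_rate, I, rates, p0=[0.01, 30.0, 10.0, 0.001],
                    bounds=([1e-3, 1.0, 1.0, 1e-4], [0.1, 100.0, 30.0, 0.01]))
print(dict(zip(["tau", "R", "v_th", "t_ref"], np.round(popt, 4))))
```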

  14. Online Influence and Sentiment of Fitness Tweets: Analysis of Two Million Fitness Tweets.

    PubMed

    Vickey, Theodore; Breslin, John G

    2017-10-31

    Publicly available fitness tweets may provide useful and in-depth insights into the real-time sentiment of a person's physical activity and provide motivation to others through online influence. The goal of this experimental approach using the fitness Twitter dataset is two-fold: (1) to determine if there is a correlation between the type of activity tweet (either workout or workout+, which contains the same information as a workout tweet but has additional user-generated information), gender, and one's online influence as measured by Klout Score and (2) to examine the sentiment of the activity-coded fitness tweets by looking at real-time shared thoughts via Twitter regarding their experiences with physical activity and the associated mobile fitness app. The fitness tweet dataset includes demographic and activity data points, including minutes of activity, Klout Score, classification of each fitness tweet, the first name of each fitness tweet user, and the tweet itself. Gender for each fitness tweet user was determined by a first name comparison with the US Social Security Administration database of first names and gender. Over 184 days, 2,856,534 tweets were collected in 23 different languages. However, for the purposes of this study, only the English-language tweets were analyzed from the activity tweets, resulting in a total of 583,252 tweets. After assigning gender to Twitter usernames based on the Social Security Administration database of first names, analysis of minutes of activity by both gender and Klout influence was determined. The mean Klout Score for those who shared their workout data from within four mobile apps was 20.50 (13.78 SD), less than the general Klout Score mean of 40, as was the Klout Score at the 95th percentile (40 vs 63). As Klout Score increased, there was a decrease in the number of overall workout+ tweets. With regards to sentiment, fitness-related tweets identified as workout+ reflected a positive sentiment toward physical activity

  15. Online Influence and Sentiment of Fitness Tweets: Analysis of Two Million Fitness Tweets

    PubMed Central

    2017-01-01

    Background Publicly available fitness tweets may provide useful and in-depth insights into the real-time sentiment of a person’s physical activity and provide motivation to others through online influence. Objective The goal of this experimental approach using the fitness Twitter dataset is two-fold: (1) to determine if there is a correlation between the type of activity tweet (either workout or workout+, which contains the same information as a workout tweet but has additional user-generated information), gender, and one’s online influence as measured by Klout Score and (2) to examine the sentiment of the activity-coded fitness tweets by looking at real-time shared thoughts via Twitter regarding their experiences with physical activity and the associated mobile fitness app. Methods The fitness tweet dataset includes demographic and activity data points, including minutes of activity, Klout Score, classification of each fitness tweet, the first name of each fitness tweet user, and the tweet itself. Gender for each fitness tweet user was determined by a first name comparison with the US Social Security Administration database of first names and gender. Results Over 184 days, 2,856,534 tweets were collected in 23 different languages. However, for the purposes of this study, only the English-language tweets were analyzed from the activity tweets, resulting in a total of 583,252 tweets. After assigning gender to Twitter usernames based on the Social Security Administration database of first names, analysis of minutes of activity by both gender and Klout influence was determined. The mean Klout Score for those who shared their workout data from within four mobile apps was 20.50 (13.78 SD), less than the general Klout Score mean of 40, as was the Klout Score at the 95th percentile (40 vs 63). As Klout Score increased, there was a decrease in the number of overall workout+ tweets. With regards to sentiment, fitness-related tweets identified as workout+ reflected a

  16. Modeling Patterns of Activities using Activity Curves

    PubMed Central

    Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen

    2016-01-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual’s normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics. PMID:27346990
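
    One simple way to realize the activity-curve comparison described above is to summarize each time slot as a probability distribution over activities and score changes between two time windows with a symmetric divergence. The sketch below uses a symmetric KL divergence; the names and numbers are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: compare activity distributions per time slot across windows.
import numpy as np

def sym_kl(p, q, eps=1e-9):
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Rows: time slots; columns: activity proportions (sleep, cook, relax, other)
baseline = np.array([[0.9, 0.0, 0.05, 0.05],
                     [0.1, 0.4, 0.30, 0.20],
                     [0.2, 0.1, 0.50, 0.20]])
current  = np.array([[0.7, 0.0, 0.10, 0.20],   # more night-time activity
                     [0.1, 0.1, 0.40, 0.40],   # less cooking
                     [0.2, 0.1, 0.50, 0.20]])

change_per_slot = [sym_kl(b, c) for b, c in zip(baseline, current)]
print(np.round(change_per_slot, 3))   # large values flag routine changes
```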

  17. Modeling Patterns of Activities using Activity Curves.

    PubMed

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2016-06-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual's normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics.

  18. The Exoplanet Simple Orbit Fitting Toolbox (ExoSOFT): An Open-source Tool for Efficient Fitting of Astrometric and Radial Velocity Data

    NASA Astrophysics Data System (ADS)

    Mede, Kyle; Brandt, Timothy D.

    2017-03-01

    We present the Exoplanet Simple Orbit Fitting Toolbox (ExoSOFT), a new, open-source suite to fit the orbital elements of planetary or stellar-mass companions to any combination of radial velocity and astrometric data. To explore the parameter space of Keplerian models, ExoSOFT may be operated with its own multistage sampling approach or interfaced with third-party tools such as emcee. In addition, ExoSOFT is packaged with a collection of post-processing tools to analyze and summarize the results. Although only a few systems have been observed with both radial velocity and direct imaging techniques, this number will increase, thanks to upcoming spacecraft and ground-based surveys. Providing both forms of data enables simultaneous fitting that can help break degeneracies in the orbital elements that arise when only one data type is available. The dynamical mass estimates this approach can produce are important when investigating the formation mechanisms and subsequent evolution of substellar companions. ExoSOFT was verified through fitting to artificial data and was implemented using the Python and Cython programming languages; it is available for public download at https://github.com/kylemede/ExoSOFT under GNU General Public License v3.
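
    ExoSOFT fits full Keplerian models to combined radial velocity and astrometric data; the sketch below illustrates only the simplest core idea, fitting a circular-orbit radial velocity model v(t) = γ + K·sin(2π(t − t0)/P) to synthetic RV data. It is not ExoSOFT code, and the parameter values are assumptions.

```python
# Hedged sketch: least-squares fit of a circular-orbit RV model.
import numpy as np
from scipy.optimize import curve_fit

def rv_model(t, P, K, t0, gamma):
    return gamma + K * np.sin(2.0 * np.pi * (t - t0) / P)

rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0, 300, 40))                  # observation epochs (days)
truth = dict(P=61.0, K=35.0, t0=12.0, gamma=-3.0)     # m/s for K and gamma
rv = rv_model(t, **truth) + rng.normal(0, 3.0, t.size)

popt, pcov = curve_fit(rv_model, t, rv, p0=[60.0, 30.0, 10.0, 0.0])
print(dict(zip(["P", "K", "t0", "gamma"], np.round(popt, 2))))
```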

  19. Modal testing with Asher's method using a Fourier analyzer and curve fitting

    NASA Technical Reports Server (NTRS)

    Gold, R. R.; Hallauer, W. L., Jr.

    1979-01-01

    An unusual application of the method proposed by Asher (1958) for structural dynamic and modal testing is discussed. Asher's method has the capability, using the admittance matrix and multiple-shaker sinusoidal excitation, of separating structural modes having indefinitely close natural frequencies. The present application uses Asher's method in conjunction with a modern Fourier analyzer system but eliminates the necessity of exciting the test structure simultaneously with several shakers. Evaluation of this approach with numerically simulated data demonstrated its effectiveness; the parameters of two modes having almost identical natural frequencies were accurately identified. Laboratory evaluation of this approach was inconclusive because of poor experimental input data.

  20. The Effect of an Offset Polar Cap Dipolar Magnetic Field on the Modeling of the Vela Pulsar's Gamma-Ray Light Curves

    NASA Technical Reports Server (NTRS)

    Barnard, M.; Venter, C.; Harding, A. K.

    2016-01-01

    We performed geometric pulsar light curve modeling using static, retarded vacuum, and offset polar cap (PC) dipole B-fields (the latter characterized by a parameter ε), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole B-field mimics deviations from the static dipole (which corresponds to ε = 0). In addition to constant-emissivity geometric models, we also considered a slot gap (SG) E-field associated with the offset-PC dipole B-field and found that its inclusion leads to qualitatively different light curves. Solving the particle transport equation shows that the particle energy only becomes large enough to yield significant curvature radiation at large altitudes above the stellar surface, given this relatively low E-field. Therefore, particles do not always attain the radiation-reaction limit. Our overall optimal light curve fit is for the retarded vacuum dipole field and OG model, at an inclination angle α = 78° ± 1° and observer angle ζ = 69°(+2°/−1°). For this B-field, the TPC model is statistically disfavored compared to the OG model. For the static dipole field, neither model is significantly preferred. We found that smaller values of ε are favored for the offset-PC dipole field when assuming constant emissivity, and larger ε values are favored for variable emissivity, but not significantly so. When multiplying the SG E-field by a factor of 100, we found improved light curve fits, with α and ζ being closer to the best fits from independent studies, as well as curvature radiation reaction at lower altitudes.