ERIC Educational Resources Information Center
Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong
2013-01-01
Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e., four years) in young adolescents aged 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…
ERIC Educational Resources Information Center
Harper, Suzanne R.; Driskell, Shannon
2005-01-01
Graphic tips for using the Geometer's Sketchpad (GSP) are described. The authors demonstrate how to import an image into GSP, define a coordinate system, plot points, and fit a curve to a function using a graphing calculator; the graphic features of GSP allow teachers to expand the use of the technology beyond the classroom.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in FORTRAN 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
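The statistically weighted, chi-square-minimizing fit that NLINEAR performs can be sketched in modern terms with SciPy's curve_fit. This is an illustration of the technique, not the Fortran program itself; the exponential model, data, and per-point uncertainties below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fitting function: a two-parameter exponential decay.
def model(t, a, b):
    return a * np.exp(-b * t)

t = np.linspace(0.0, 4.0, 20)
rng = np.random.default_rng(0)
sigma = np.full_like(t, 0.05)            # per-point uncertainties (weights)
y = model(t, 2.5, 1.3) + rng.normal(0.0, 0.05, t.size)

# As with NLINEAR, meaningful initial estimates aid convergence.
p0 = [1.0, 1.0]
popt, pcov = curve_fit(model, t, y, p0=p0, sigma=sigma, absolute_sigma=True)

# Chi-square of the weighted fit and parameter standard errors.
chi2 = np.sum(((y - model(t, *popt)) / sigma) ** 2)
perr = np.sqrt(np.diag(pcov))
```

For a good fit, chi2 should be comparable to the number of degrees of freedom (here 18), which serves as the goodness-of-fit check the abstract mentions.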
NASA Astrophysics Data System (ADS)
Tasel, Serdar F.; Hassanpour, Reza; Mumcuoglu, Erkan U.; Perkins, Guy C.; Martone, Maryann
2014-03-01
Mitochondria are sub-cellular components which are mainly responsible for the synthesis of adenosine triphosphate (ATP) and are involved in the regulation of several cellular activities such as apoptosis. The relationship between some common diseases of aging and the morphological structure of mitochondria is supported by an increasing number of studies. Electron microscope tomography (EMT) provides high-resolution images of the 3D structure and internal arrangement of mitochondria. Studies that aim to reveal the correlation between mitochondrial structure and function require the aid of special software tools for manual segmentation of mitochondria from EMT images. Automated detection and segmentation of mitochondria is a challenging problem due to the variety of mitochondrial structures and the presence of noise, artifacts, and other sub-cellular structures. Segmentation methods reported in the literature require human interaction to initialize the algorithms. In our previous study, we focused on 2D detection and segmentation of mitochondria using an ellipse detection method. In this study, we propose a new approach for automatic detection of mitochondria from EMT images. First, a preprocessing step is applied to reduce the effect of nonmitochondrial sub-cellular structures. Then, a curve fitting approach is applied, using a Hessian-based ridge detector to extract membrane-like structures and a curve-growing scheme. Finally, an automatic algorithm is employed to detect mitochondria, which are represented by a subset of the detected curves. The results show that the proposed method is more robust in detecting mitochondria in consecutive EMT slices than our previous automatic method.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit of up to a 100th-degree polynomial. All computations in the program are carried out in double-precision format for real numbers and long-integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
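AKLSQF's error-tolerance mode, raising the polynomial degree until the least-squares error meets the user's criterion, can be sketched as follows. This is a NumPy illustration of the idea, not a port of the BASIC program, and the data are invented:

```python
import numpy as np

def least_squares_poly(x, y, tol):
    """Increase the polynomial degree until the least-squares error
    meets tol, mimicking AKLSQF's error-tolerance mode (a sketch)."""
    for degree in range(1, min(len(x) - 1, 100) + 1):
        coeffs = np.polynomial.polynomial.polyfit(x, y, degree)
        err = np.sum((np.polynomial.polynomial.polyval(x, coeffs) - y) ** 2)
        if err <= tol:
            break
    return coeffs, err

x = np.linspace(0.0, 1.0, 11)        # uniformly spaced data, as AKLSQF expects
y = 1.0 + 2.0 * x - 3.0 * x**2       # an exact quadratic, for illustration
coeffs, err = least_squares_poly(x, y, tol=1e-10)
```

On this quadratic the loop stops at degree 2, returning three coefficients (lowest power first) and a near-zero residual error.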
Cubic spline functions for curve fitting
NASA Technical Reports Server (NTRS)
Young, J. D.
1972-01-01
A FORTRAN cubic spline routine fits a curve through a given ordered set of points so that the fitted curve closely approximates the curve generated by passing an infinitely thin spline through the set of points. The generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.
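A comparable fit is available today through SciPy; the sketch below passes a natural cubic spline through an ordered set of points. The data are illustrative, and this is not the FORTRAN routine itself:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Ordered data points to pass the spline through (illustrative values).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)

# A natural cubic spline approximates the shape a thin physical spline
# would take when forced through the points.
cs = CubicSpline(x, y, bc_type='natural')

# The spline reproduces the data exactly at the knots and evaluates
# smoothly between them.
mid = cs(1.5)
```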
NASA Astrophysics Data System (ADS)
Martin, Y. L.
The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
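The uniqueness claim can be demonstrated directly: solving the Vandermonde system for three noncollinear points recovers the single quadratic through them. The points below are invented for illustration:

```python
import numpy as np

# Three noncollinear points determine a unique quadratic: solve the
# Vandermonde system for its coefficients.
pts_x = np.array([0.0, 1.0, 3.0])
pts_y = np.array([1.0, 0.0, 4.0])

V = np.vander(pts_x, 3)               # columns: x^2, x, 1
coeffs = np.linalg.solve(V, pts_y)    # exact solve; V is invertible

# The quadratic passes through all three points.
assert np.allclose(np.polyval(coeffs, pts_x), pts_y)
```

Here the recovered polynomial is x² − 2x + 1 = (x − 1)², and the same construction generalizes to n + 1 points and degree n whenever the Vandermonde matrix is nonsingular.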
Fast curve fitting using neural networks
NASA Astrophysics Data System (ADS)
Bishop, C. M.; Roach, C. M.
1992-10-01
Neural networks provide a new tool for the fast solution of repetitive nonlinear curve fitting problems. In this article we introduce the concept of a neural network, and we show how such networks can be used for fitting functional forms to experimental data. The neural network algorithm is typically much faster than conventional iterative approaches. In addition, further substantial improvements in speed can be obtained by using special-purpose hardware implementations of the network, thus making the technique suitable for use in fast real-time applications. The basic concepts are illustrated using a simple example from fusion research, involving the determination of spectral line parameters from measurements of B IV impurity radiation in the COMPASS-C tokamak.
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
Curve Fit Technique for a Smooth Curve Using Gaussian Sections.
1983-08-01
curve-fitting. Furthermore, the algorithm that does the fitting is simple enough to be used on a programmable calculator.
NASA Astrophysics Data System (ADS)
Ogren, Paul; Davis, Brian; Guy, Nick
2001-06-01
A spreadsheet approach is used to fit multilinear functions with three adjustable parameters: ƒ = a1X1(x) + a2X2(x) + a3X3(x). Results are illustrated for three familiar examples: IR analysis of gaseous DCl, the electronic/vibrational spectrum of gaseous I2, and van Deemter plots of chromatographic data. These cases are simple enough for students in upper-level physical or advanced analytical courses to write and modify their own spreadsheets. In addition to the original x, y, and s_y values, 12 columns are required: three for Xn(xi) values, six for Xn(xi)Xk(xi) product sums for the curvature matrix [a], and three for yi Xn(xi) sums for (b) in the vector equation (b) = [a](a). The Excel spreadsheet MINVERSE function provides the [e] error matrix from [a]. The [e] elements are then used to determine best-fit parameter values contained in (a). These spreadsheets also use a "dimensionless" or "reduced parameter" approach in calculating parameter weights, uncertainties, and correlations. Students can later enter data sets and fit parameters into a larger spreadsheet that uses Monte Carlo techniques to produce two-dimensional scatter plots. These correspond to Δχ² ellipsoidal cross-sections or projections and provide visual depictions of parameter uncertainties and correlations. The Monte Carlo results can also be used to estimate confidence envelopes for fitting plots.
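The spreadsheet construction — product sums forming the curvature matrix [a], the vector (b), and a matrix inverse — maps directly onto linear algebra. Below is a sketch using an invented van Deemter-style basis (1, 1/x, x) and noise-free data, not the article's spreadsheets:

```python
import numpy as np

# Normal equations for f = a1*X1(x) + a2*X2(x) + a3*X3(x), mirroring the
# spreadsheet's curvature matrix [a] and vector (b).
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = 1.2 + 0.8 / x + 0.3 * x                        # noise-free, for clarity
X = np.column_stack([np.ones_like(x), 1.0 / x, x]) # the Xn(xi) columns

A = X.T @ X                      # sums of Xn*Xk products -> [a]
b = X.T @ y                      # sums of y*Xn            -> (b)
params = np.linalg.solve(A, b)   # (a) = [a]^-1 (b)

# The error matrix [e] = [a]^-1 (what MINVERSE provides) carries the
# parameter variances and covariances.
e = np.linalg.inv(A)
```

With noise-free data the solve recovers the generating coefficients (1.2, 0.8, 0.3) exactly.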
GPU accelerated curve fitting with IDL
NASA Astrophysics Data System (ADS)
Galloy, M.
2012-12-01
Curve fitting is a common mathematical calculation done in all scientific areas. The Interactive Data Language (IDL) is also widely used in this community for data analysis and visualization. We are creating a general-purpose, GPU accelerated curve fitting library for use from within IDL. We have developed GPULib, a library of routines in IDL for accelerating common scientific operations including arithmetic, FFTs, interpolation, and others. These routines are accelerated on modern GPUs using NVIDIA's CUDA architecture. We will add curve fitting routines to the GPULib library suite, making curve fitting much faster. In addition, library routines required for efficient curve fitting will also be generally useful to other users of GPULib. In particular, a GPU accelerated LAPACK implementation such as MAGMA is required for Levenberg-Marquardt curve fitting and is commonly used in many other scientific computations. Furthermore, the ability to evaluate custom expressions at runtime, necessary for specifying a function model, will be useful for users in all areas.
Least-squares fitting Gompertz curve
NASA Astrophysics Data System (ADS)
Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf
2004-08-01
In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (pi,ti,yi), i=1,...,m, m≥3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
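A practical LS fit of the Gompertz curve, with the initial approximation chosen from the data as the paper recommends, might look like the following SciPy sketch. The parameterization, data, and unweighted residuals are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import curve_fit

# One common Gompertz parameterization; parameter values are invented.
def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

t = np.linspace(0.0, 10.0, 15)
y = gompertz(t, 5.0, 3.0, 0.7)       # noise-free synthetic data

# A sensible starting point (a near max(y)) echoes the paper's emphasis
# on choosing a good initial approximation for existence/convergence.
p0 = [y.max(), 1.0, 0.5]
popt, _ = curve_fit(gompertz, t, y, p0=p0)
```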
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Spectral curve fitting of dielectric constants
NASA Astrophysics Data System (ADS)
Ruzi, M.; Ennis, C.; Robertson, E. G.
2017-01-01
Optical constants are important properties governing the response of a material to incident light. It follows that they are often extracted from spectra measured by absorbance, transmittance or reflectance. One convenient method to obtain optical constants is by curve fitting. Here, model curves should satisfy the Kramers-Kronig relations, and preferably can be expressed in closed form or be easily calculable. In this study we use dielectric constants of three different molecular ices in the infrared region to evaluate four different model curves that are generally used for fitting optical constants: (1) the classical damped harmonic oscillator, (2) the Voigt line shape, (3) a Fourier series, and (4) a triangular basis. Among these, only the classical damped harmonic oscillator model strictly satisfies the Kramers-Kronig relation. Considering the trade-off between accuracy and speed, Fourier series fitting is the best option when spectral bands are broad, while for narrow peaks the classical damped harmonic oscillator and the triangular basis models are the best choice.
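The classical damped harmonic (Lorentz) oscillator, the only one of the four models that strictly satisfies the Kramers-Kronig relations, can be sketched for a single band as follows. All parameter values are invented for illustration, not taken from the molecular-ice data:

```python
import numpy as np

# Lorentz oscillator model for the complex dielectric function of one band.
# eps_inf, oscillator strength, resonance position nu0, and damping gamma
# are all illustrative values.
def lorentz_eps(nu, eps_inf, strength, nu0, gamma):
    return eps_inf + strength * nu0**2 / (nu0**2 - nu**2 - 1j * gamma * nu)

nu = np.linspace(500.0, 4000.0, 500)           # wavenumber axis (cm^-1)
eps = lorentz_eps(nu, 1.7, 0.01, 2100.0, 15.0)

# Optical constants follow from the complex refractive index sqrt(eps).
n_real = np.sqrt(eps).real                     # refractive index
k_imag = np.sqrt(eps).imag                     # extinction coefficient
```

Because the real and imaginary parts come from one analytic response function, they are Kramers-Kronig consistent by construction, which is the property the paper highlights.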
Modeling and Fitting Exoplanet Transit Light Curves
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, G. T.
2013-01-01
We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model is unique in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the
Multivariate curve-fitting in GAUSS
Bunck, C.M.; Pendleton, G.W.
1988-01-01
Multivariate curve-fitting techniques for repeated measures have been developed and an interactive program has been written in GAUSS. The program implements not only the one-factor design described in Morrison (1967) but also includes pairwise comparisons of curves and rates, a two-factor design, and other options. Strategies for selecting the appropriate degree for the polynomial are provided. The methods and program are illustrated with data from studies of the effects of environmental contaminants on ducklings, nesting kestrels and quail.
Basic Searching, Interpolating, and Curve-Fitting Algorithms in C++
2015-01-01
Basic Searching, Interpolating, and Curve-Fitting Algorithms in C++, by Robert J Yager. ARL-TN-0657, January 2015. Weapons and Materials Research Directorate, ARL.
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 10^3 amagats.
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-03-28
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) for lactation curves under two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making on a dairy farm. Knowledge of the model of milk production progress along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made on a group of high production and reproduction dairy farms, for first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remaining ones third lactations (19 382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggested selecting MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect was considered, which is related to the magnitude of production. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
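Wood's model, y(t) = a·t^b·e^(−ct), and the production indicators derived from it (e.g., days in milk at peak yield, b/c) can be sketched at the mean-population level as follows. The parameter values are invented, and no cow random effect is modeled here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Wood's lactation model; parameter values are illustrative.
def wood(t, a, b, c):
    return a * np.power(t, b) * np.exp(-c * t)

dim = np.linspace(5.0, 305.0, 30)            # days in milk
y = wood(dim, 20.0, 0.25, 0.004)             # synthetic mean lactation curve

popt, _ = curve_fit(wood, dim, y, p0=[15.0, 0.2, 0.003])

# Production indicators follow from the fitted parameters.
peak_day = popt[1] / popt[2]                 # days in milk at peak yield
peak_yield = wood(peak_day, *popt)
```

Fitting a per-cow random effect, as the paper does with PROC NLMIXED, would require a nonlinear mixed-model framework rather than this single-curve sketch.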
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make-up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used and the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
Simplified curve fits for the transport properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.
1987-01-01
New, improved curve fits for the transport properties of equilibrium air have been developed. The curve fits are for viscosity and Prandtl number as functions of temperature and density, and viscosity and thermal conductivity as functions of internal energy and density. The curve fits were constructed using Grabau-type transition functions to model the transport properties of Peng and Pindroh. The resulting curve fits are sufficiently accurate and self-contained that they can be readily incorporated into new or existing computational fluid dynamics codes. The ranges of validity of the new curve fits are temperatures up to 15,000 K and densities from 10^-5 to 10 amagats (ρ/ρ₀).
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to prenatal growth and shown to be appropriate only for about 10 days prior to birth.
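Richards' curve can be fitted with its shape parameter left free, as the article describes. The sketch below uses one common parameterization with invented data (not the fox-pup measurements) and bounds to keep the shape parameter in a valid range:

```python
import numpy as np
from scipy.optimize import curve_fit

# One common Richards parameterization; the shape parameter m selects the
# curve form (logistic-like, Gompertz-like, ...). All values are invented.
def richards(t, A, k, t0, m):
    return A * (1.0 + (m - 1.0) * np.exp(-k * (t - t0))) ** (1.0 / (1.0 - m))

t = np.linspace(0.0, 80.0, 40)           # age in days (synthetic)
foot = richards(t, 150.0, 0.08, 20.0, 1.5)

# Bounds keep m > 1 so the base of the power stays positive during fitting.
popt, _ = curve_fit(richards, t, foot, p0=[140.0, 0.1, 15.0, 1.3],
                    bounds=([0.0, 0.0, 0.0, 1.01],
                            [1000.0, 1.0, 100.0, 5.0]))
```

Leaving m free is what lets the data choose the curve shape, the feature the article emphasizes when the shape is unknown in advance.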
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (ρ/ρ₀).
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method is used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
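A two-term Gaussian fit of the kind the study found best can be sketched with SciPy. The hour-of-day data and coefficients below are invented for illustration, not the UTP measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-term Gaussian model (the form MATLAB-style 'gauss2' fitting uses);
# all coefficients here are made up.
def gauss2(x, a1, b1, c1, a2, b2, c2):
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

hour = np.linspace(7.0, 19.0, 25)                      # time of day
radiation = gauss2(hour, 800.0, 13.0, 3.0, 150.0, 10.0, 1.5)

popt, _ = curve_fit(gauss2, hour, radiation,
                    p0=[750.0, 12.8, 2.8, 120.0, 10.2, 1.6])

# RMSE, one of the goodness-of-fit statistics the paper reports.
rmse = np.sqrt(np.mean((gauss2(hour, *popt) - radiation) ** 2))
```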
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas: speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
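The flavor of the non-iterative approach can be sketched from the identity dy/dt = B(y − C): a straight-line fit of discrete derivatives against y yields B and C, and A follows from a second linear step. This is an illustration of the idea with invented data, not the article's exact algorithm:

```python
import numpy as np

# Synthetic data from y = A*exp(B*t) + C with A=3, B=1.5, C=2.
t = np.linspace(0.0, 2.0, 200)
y = 3.0 * np.exp(1.5 * t) + 2.0

# Discrete derivative; endpoints are dropped to avoid one-sided-difference
# error in the straight-line fit of dy/dt against y.
dy = np.gradient(y, t)
slope, intercept = np.polyfit(y[1:-1], dy[1:-1], 1)

B = slope                         # dy/dt = B*y - B*C
C = -intercept / B

# With B and C known, y - C = A*exp(B*t) is linear in A.
A = np.mean((y - C) / np.exp(B * t))
```

No iteration or initial guess is needed, which is the stability and speed advantage the abstract describes.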
Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm.
Jang, Daeho; Chae, Geunhyoung; Shin, Sehyun
2015-09-30
The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present curve-fitting sigmoid-asymmetric equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.
Curve fitting for RHB Islamic Bank annual net profit
NASA Astrophysics Data System (ADS)
Nadarajan, Dineswary; Noor, Noor Fadiya Mohd
2015-05-01
The RHB Islamic Bank net profit data were obtained for 2004 to 2012. Curve fitting is performed treating the data either as exact or as experimental data requiring smoothing. Higher-order Lagrange polynomial and cubic spline fits are constructed using Maple software. A normality test is performed to check data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level by ANOVA. Residual error and absolute relative true error are calculated and compared. The optimal model, based on the minimum average error, is proposed.
Mössbauer spectral curve fitting combining fundamentally different techniques
NASA Astrophysics Data System (ADS)
Susanto, Ferry; de Souza, Paulo
2016-10-01
We propose the use of fundamentally distinctive techniques to solve the problem of curve fitting a Mössbauer spectrum. The techniques we investigated are: evolutionary algorithms, basin hopping, and hill climbing. These techniques were applied in isolation and in combination to fit different shapes of Mössbauer spectra. The results indicate that complex Mössbauer spectra can be automatically curve fitted with minimal user input, and a combination of these techniques achieved the best performance (lowest statistical error). The software and sample Mössbauer spectra have been made available through a link in the reference.
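Basin hopping, one of the three techniques, can be applied to a toy spectrum as follows. The single Lorentzian absorption dip and all parameter values are invented, and a real Mössbauer spectrum is far more complex:

```python
import numpy as np
from scipy.optimize import basinhopping

# Toy Mössbauer-like spectrum: one Lorentzian absorption dip on a flat
# baseline. Parameters (depth, center, width, baseline) are illustrative.
def lorentzian_dip(v, depth, center, width, baseline):
    return baseline - depth * width**2 / ((v - center) ** 2 + width**2)

v = np.linspace(-4.0, 4.0, 200)          # velocity axis (mm/s)
spectrum = lorentzian_dip(v, 0.3, 0.5, 0.4, 1.0)

# Objective: sum of squared residuals, minimized by basin hopping, which
# alternates random perturbations with local minimizations.
def chi2(p):
    return np.sum((lorentzian_dip(v, *p) - spectrum) ** 2)

result = basinhopping(chi2, x0=[0.1, 0.0, 0.5, 0.9], niter=25)
depth, center, width, baseline = result.x
```

The random hops give the method a chance to escape local minima, which is why combining it with other global techniques helped on complex spectra.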
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
Viscosity Coefficient Curve Fits for Ionized Gas Species
NASA Technical Reports Server (NTRS)
Palmer, Grant; Arnold, James O. (Technical Monitor)
2001-01-01
Viscosity coefficient curve fits for neutral gas species are available from many sources. Many do a good job of reproducing experimental and computational chemistry data. The curve fits are usually expressed as a function of temperature only. This is consistent with the governing equations used to derive an expression for the neutral species viscosity coefficient. Ionized species pose a more complicated problem. They are subject to electrostatic as well as intermolecular forces. The electrostatic forces are affected by a shielding phenomenon, in which electrons screen the electrostatic forces of positively charged ions beyond a certain distance. The viscosity coefficient for an ionized gas species is therefore a function of both temperature and local electron number density. Currently available curve fits for ionized gas species, such as those presented by Gupta/Yos, are a function of temperature only: they assume a fixed electron number density, and the value assumed was unrealistically high. The purpose of this paper is two-fold. First, the proper expression for determining the viscosity coefficient of an ionized species as a function of both temperature and electron number density is presented. Then curve fit coefficients are developed using the more realistic assumption of an equilibrium electron number density. The results are compared against previous curve fits and against highly accurate computational chemistry data.
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n×n covariance matrix, W. The inverse of W has a closed-form solution, and W⁻¹ is tridiagonal. The closed form and tridiagonal structure allow a simpler expression of the objective function χ²_n; minimizing this simpler expression provides the optimal parameters for the fitted curve.
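The computational payoff of a tridiagonal W⁻¹ is that the quadratic form in χ²_n can be evaluated in O(n) rather than O(n²). A minimal numpy sketch; the diagonals below are arbitrary stand-ins, not the report's actual closed form for W⁻¹:

```python
import numpy as np

def chi2_tridiagonal(r, d, e):
    """Evaluate r^T Winv r in O(n) when Winv is tridiagonal.

    r : residual vector (data minus fitted curve)
    d : main diagonal of Winv (length n)
    e : first off-diagonal of Winv (length n-1)
    """
    quad = np.sum(d * r * r)                   # diagonal terms
    quad += 2.0 * np.sum(e * r[:-1] * r[1:])   # symmetric off-diagonal terms
    return quad

# Check the O(n) evaluation against the dense computation on random data.
rng = np.random.default_rng(0)
n = 6
r = rng.normal(size=n)
d = rng.uniform(1.0, 2.0, size=n)
e = rng.uniform(-0.1, 0.1, size=n - 1)
Winv = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
dense = r @ Winv @ r
fast = chi2_tridiagonal(r, d, e)
```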
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve-fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near the short-circuit current, Isc. By treating such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often lowers the computed uncertainty; however, adding points can make the fit uncertainty arbitrarily small while missing the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits of Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
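The localized straight-line fit that the standards describe can be sketched in a few lines. The device values and window size below are hypothetical, and the paper's evidence-based window selection is not reproduced:

```python
import numpy as np

# Synthetic I-V points near short circuit: I ≈ Isc - V/Rsh
# (hypothetical device values, for illustration only).
true_isc, rsh = 5.0, 200.0
v = np.linspace(0.0, 0.5, 11)
rng = np.random.default_rng(1)
i = true_isc - v / rsh + rng.normal(scale=1e-4, size=v.size)

def isc_from_window(v, i, n_points):
    """Straight-line fit over the first n_points; the V=0 intercept estimates Isc."""
    slope, intercept = np.polyfit(v[:n_points], i[:n_points], 1)
    return intercept

isc_est = isc_from_window(v, i, 5)  # 5-point window; the paper selects windows by evidence
```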
[Curve-fit with hybrid logistic function for intracellular calcium transient].
Mizuno, Ju; Morita, Shigeho; Araki, Junichi; Otsuji, Mikiya; Hanaoka, Kazuo; Kurihara, Satoshi
2009-01-01
As the left ventricular (LV) pressure curve and myocardial tension curve are composed of contraction and relaxation processes, we have found that a hybrid logistic (HL) function, computed as the difference between two logistic functions, fits the isovolumic LV pressure curve and the isometric twitch tension curve better than conventional polynomial, exponential, and sinusoidal functions. Increases and decreases in intracellular Ca2+ concentration regulate myocardial contraction and relaxation. Recently, we reported that intracellular Ca2+ transient (CaT) curves measured using the calcium-sensitive bioluminescent protein aequorin were better fitted by the HL function than by the polynomial exponential function in isolated rabbit RV and mouse LV papillary muscles. We speculate that the first logistic component of the HL fit represents the time courses of Ca2+ inflow into the cytoplasmic space, Ca2+ release from the sarcoplasmic reticulum (SR), Ca2+ binding to troponin C (TnC), and cross-bridge (CB) attachment, and that the second logistic component represents the time courses of Ca2+ sequestration into the SR, Ca2+ removal from the cytoplasmic space, Ca2+ release from TnC, and CB detachment. This HL approach to the CaT curve may provide a more useful model for investigating Ca2+ handling, Ca2+-TnC interaction, and CB cycling.
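A minimal sketch of fitting a difference-of-two-logistics (HL-type) function to a synthetic transient with `scipy.optimize.curve_fit`; the parameterization and values are illustrative, not those of the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def hybrid_logistic(t, a1, k1, t1, a2, k2, t2):
    """Hybrid logistic: difference of two logistic functions."""
    rise = a1 / (1.0 + np.exp(-k1 * (t - t1)))
    fall = a2 / (1.0 + np.exp(-k2 * (t - t2)))
    return rise - fall

# Synthetic transient in arbitrary units (not measured aequorin data).
t = np.linspace(0.0, 1.0, 200)
truth = (1.0, 30.0, 0.15, 1.0, 15.0, 0.45)
y = hybrid_logistic(t, *truth)

p0 = (0.8, 25.0, 0.1, 0.8, 10.0, 0.5)   # rough initial guesses
popt, _ = curve_fit(hybrid_logistic, t, y, p0=p0)
residual = np.max(np.abs(hybrid_logistic(t, *popt) - y))
```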
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspective (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study, MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
BGFit: management and automated fitting of biological growth curves
2013-01-01
Background: Existing tools for modeling cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. Given the increase in experimental time series from microbiology and oncology, software that lets researchers easily organize experimental data and simultaneously extract relevant parameters efficiently is crucial. Results: BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, and the results can then be managed efficiently in a structured and hierarchical way. The data management system organizes projects, experiments, and measurement data, and supports teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, logistic, and live cell fraction models, and the user can easily add new models, expanding the current set. Conclusions: BGFit allows users to manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is applicable to any type of two-dimensional data, e.g., physical chemistry and macroeconomic time series, and is fully scalable to large numbers of projects, large datasets, and high model complexity. PMID:24067087
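As an example of the kind of model fitting BGFit automates, here is a sketch of fitting a Gompertz growth curve to synthetic data; the three-parameter form below is one common parameterization, and BGFit's internal forms may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz growth model, a * exp(-b * exp(-c * t)) (one common form)."""
    return a * np.exp(-b * np.exp(-c * t))

# Synthetic, noise-free growth curve (arbitrary units).
t = np.linspace(0.0, 10.0, 50)
y = gompertz(t, 2.0, 5.0, 0.8)

popt, _ = curve_fit(gompertz, t, y, p0=(1.5, 3.0, 0.5))
```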
High-order wide-band frequency domain identification using composite curve fitting
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A method is presented for curve fitting nonparametric frequency domain data so as to identify a parametric model composed of two models in parallel, where each model has dynamics in a specified portion of the frequency band. This decomposition overcomes the problem of numerical sensitivity, since lower-order polynomials can be used than in existing methods, which estimate the model as a single entity. Consequently, composite curve fitting is useful for frequency domain identification of high-order systems and/or systems whose dynamics are spread over a large bandwidth. The approach can be extended to identify an arbitrary number of parallel subsystems in specified frequency regimes.
An automated fitting procedure and software for dose-response curves with multiphasic features
Veroli, Giovanni Y. Di; Fornari, Chiara; Goldlust, Ian; Mills, Graham; Koh, Siang Boon; Bramhall, Jo L; Richards, Frances M.; Jodrell, Duncan I.
2015-01-01
In cancer pharmacology (and many other areas), most dose-response curves are satisfactorily described by a classical Hill equation (i.e., the four-parameter logistic). Nevertheless, there are instances where the marked presence of more than one point of inflection, or the presence of combined agonist and antagonist effects, prevents straightforward modelling of the data via a standard Hill equation. Here we propose a modified model and automated fitting procedure to describe dose-response curves with multiphasic features. The resulting general model enables interpreting each phase of the dose-response as an independent dose-dependent process. We developed an algorithm which automatically generates and ranks dose-response models with varying degrees of multiphasic features. The algorithm was implemented in the new, freely available Dr Fit software (sourceforge.net/projects/drfit/). We show how our approach succeeds in describing dose-response curves with multiphasic features. Additionally, we analysed a large cancer cell viability screen involving 11650 dose-response curves. Based on our algorithm, we found that 28% of cases were better described by a multiphasic model than by the Hill model. We thus provide a robust approach to fit dose-response curves with various degrees of complexity, which, together with the provided software implementation, should enable a wide audience to easily process their own data. PMID:26424192
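The classical four-parameter Hill (logistic) fit that the paper generalizes can be sketched as follows; a multiphasic model adds further Hill terms, one per phase. The doses and parameter values are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill4(x, bottom, top, logec50, h):
    """Four-parameter logistic (Hill) curve on a log10-dose axis."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((logec50 - x) * h))

# Synthetic single-phase dose-response (log10 molar doses, illustrative).
x = np.linspace(-9.0, -4.0, 12)
y = hill4(x, 0.05, 1.0, -6.5, 1.2)

popt, _ = curve_fit(hill4, x, y, p0=(0.0, 1.0, -6.0, 1.0))
# A multiphasic variant would sum several such terms, one per phase.
```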
Curve fitting using logarithmic function for sea bed logging data
NASA Astrophysics Data System (ADS)
Daud, Hanita; Razali, Radzuan; Zaki, M. Ridhwan O.; Shafie, Afza
2014-10-01
The aim of this research is to conduct curve fitting using mathematical equations that relate the location of the hydrocarbon (HC) at different depths to different frequencies. COMSOL Multiphysics software was used to generate models of the seabed logging technique, consisting of air, sea water, sediment, and HC layers. Seabed logging (SBL) is a technique for finding resistive layers under the seabed by transmitting low-frequency EM waves through sea water and sediment. Because HC is known to have high resistivity, about 30-500 Ωm, EM waves are guided and reflected back to receivers placed on the seafloor. In SBL, low frequencies are used to obtain greater wavelengths, which allow the EM waves to penetrate to greater distances; each frequency has a different skin depth. The frequencies used in this project were 0.5 Hz, 0.25 Hz, 0.125 Hz, and 0.0625 Hz, and the depth of the HC was varied from 1000 m to 3000 m in increments of 250 m. Data generated from the COMSOL simulations were extracted for setups with and without HC, trend lines were developed, and R² was calculated for each equation and curve. Comparing the calculated R² values for data with and without HC at each depth showed that the fitted curves became very similar at greater HC depths. This indicates that as the HC depth increases it becomes difficult to distinguish data with and without HC present, and perhaps a new technique can be explored.
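The trend-line-plus-R² step can be sketched with a logarithmic fit; the data below are illustrative placeholders, not the COMSOL outputs:

```python
import numpy as np

# Logarithmic trend line y = a*ln(x) + b with R²
# (illustrative values standing in for extracted field magnitudes).
x = np.array([1000., 1250., 1500., 1750., 2000., 2250., 2500.])
y = 3.2 * np.log(x) - 1.5 + np.array([0.02, -0.01, 0.03, 0.0, -0.02, 0.01, -0.01])

a, b = np.polyfit(np.log(x), y, 1)   # least-squares fit in ln(x)
pred = a * np.log(x) + b
ss_res = np.sum((y - pred) ** 2)               # residual sum of squares
ss_tot = np.sum((y - np.mean(y)) ** 2)         # total sum of squares
r_squared = 1.0 - ss_res / ss_tot
```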
Dose-response curve estimation: a semiparametric mixture approach.
Yuan, Ying; Yin, Guosheng
2011-12-01
In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples.
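The mixture idea (a weighted average of a parametric and a nonparametric curve estimate) can be sketched with a straight-line fit and a Nadaraya-Watson smoother. The fixed weight below is illustrative; the paper derives the weight from the relative model fit:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x + rng.normal(scale=0.05, size=x.size)  # truth is linear here

# Parametric estimate: straight-line least squares.
b, a = np.polyfit(x, y, 1)
f_par = a + b * x

# Nonparametric estimate: Nadaraya-Watson kernel smoother.
def nw_smooth(x, y, h):
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

f_np = nw_smooth(x, y, h=0.08)

# Mixture: weight toward the parametric fit when it fits well.
lam = 0.7  # illustrative fixed weight, not the paper's data-driven choice
f_mix = lam * f_par + (1.0 - lam) * f_np
```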
NASA Astrophysics Data System (ADS)
Liu, Chun-Hung; Ng, Hoi-Tou; Ng, Philip C. W.; Tsai, Kuen-Yu; Lin, Shy-Jay; Chen, Jeng-Homg
2008-11-01
Accelerating voltages as low as 5 kV are being considered for operating electron-beam micro-columns, and for solving the throughput problem, in high-throughput direct-write lithography at the 22-nm half-pitch node and beyond. The development of efficient proximity effect correction (PEC) techniques at low voltage is essential to the overall technology. Realizing this approach requires a thorough understanding of electron scattering in solids, as well as precise fits of the energy intensity distribution in the resist. Although electron scattering has been studied intensively, we found from simulation results that the conventional gradient-based curve-fitting algorithms, merit functions, and performance index (PI) of fit quality did not constitute a well-posed procedure. We therefore propose a new fitting procedure that adopts a direct-search fitting algorithm with a novel merit function. This procedure effectively mitigates the difficulties of conventional gradient-based curve fitting: it is less sensitive to the choice of trial parameters, avoids numerical problems, and reduces fitting errors. We also propose a new PI that describes the quality of the fit better than the conventional chi-square PI. An interesting result of applying the proposed procedure is that the absorbed electron energy density at 5 keV cannot be well represented by conventional multi-Gaussian models. Preliminary simulation shows that a combination of a single Gaussian and double exponential functions can better represent low-voltage electron scattering.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
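As the article notes, the vector and matrix equations reduce to a cubic with constant coefficients. A sketch of the standard formulation, which passes through the two middle control points:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom spline segment between p1 and p2, t in [0, 1].

    Standard 1974 formulation: interpolates p1 at t=0 and p2 at t=1,
    with tangents taken from the neighbouring control points.
    """
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t**2
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t**3
    )

# The segment interpolates its two middle control points.
pts = [np.array([0.0, 0.0]), np.array([1.0, 2.0]),
       np.array([2.0, 3.0]), np.array([3.0, 3.0])]
start = catmull_rom(*pts, 0.0)   # equals pts[1]
end = catmull_rom(*pts, 1.0)     # equals pts[2]
```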
Curve fitting of aeroelastic transient response data with exponential functions
NASA Technical Reports Server (NTRS)
Bennett, R. M.; Desmarais, R. N.
1976-01-01
The extraction of frequency, damping, amplitude, and phase information from unforced transient response data is considered. These quantities are obtained from the parameters determined by fitting the digitized time-history data in a least-squares sense with complex exponential functions. The highlights of the method are described, and the results of several test cases are presented. The effects of noise are considered both by using analytical examples with random noise and by estimating the standard deviation of the parameters from maximum-likelihood theory.
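A minimal sketch of the underlying idea: least-squares fitting of a damped sinusoid (a single complex-exponential pair) to a synthetic transient. The actual method fits sums of complex exponentials to digitized time histories; the mode parameters below are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_mode(t, amp, decay, omega, phase):
    """One damped sinusoid, i.e. a single complex-exponential pair."""
    return amp * np.exp(-decay * t) * np.cos(omega * t + phase)

# Synthetic single-mode transient (unforced, noise-free).
t = np.linspace(0.0, 2.0, 400)
y = damped_mode(t, 1.0, 1.5, 20.0, 0.3)

popt, _ = curve_fit(damped_mode, t, y, p0=(0.9, 1.2, 19.5, 0.2))
damping = popt[1]                       # recovered decay rate
frequency_hz = popt[2] / (2.0 * np.pi)  # recovered frequency
```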
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. Using the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model that minimize the sum of squares of the residuals. The first optimization engine implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
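MULTIVAR itself is Fortran 77, but its core operation, minimizing a sum of squared residuals with a BFGS engine, can be sketched in Python. The two-parameter exponential model below is an assumed stand-in, not one of MULTIVAR's actual six models:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed model y = p0 * exp(p1 * x), with synthetic data.
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(1.3 * x)

def sum_of_squares(p):
    """Objective the optimization engines minimize: sum of squared residuals."""
    return np.sum((y - p[0] * np.exp(p[1] * x)) ** 2)

res = minimize(sum_of_squares, x0=np.array([1.0, 1.0]), method="BFGS")
```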
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of the two-parameter logistic model (2PL) through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
Xu, Weiyin; Chen, Kejia; Liang, Dayang; Chew, Wee
2009-04-01
A soft-modeling multivariate numerical approach combining self-modeling curve resolution (SMCR) and mixed Lorentzian-Gaussian curve fitting was implemented for the first time to elucidate spatially and spectroscopically resolved information from infrared imaging data of oral mucosa cells. A novel variant of the robust band-target entropy minimization (BTEM) SMCR technique, termed hierarchical BTEM (hBTEM), was introduced to first cluster similar cellular infrared spectra using unsupervised hierarchical leader-follower cluster analysis (LFCA) and subsequently apply BTEM to clustered subsets of the data to reconstruct three protein secondary structure (PSS) pure component spectra (alpha-helix, beta-sheet, and ambiguous structures) that associate with spatially differentiated regions of the cell infrared image. The Pearson VII curve-fitting procedure, which approximates a mixed Lorentzian-Gaussian model of spectral band shape, was used to optimally fit the resolved amide I and II bands of the various hBTEM-reconstructed PSS pure component spectra. The optimized Pearson VII band-shape parameters and peak center positions serve to characterize the amide bands of PSS spectra found at various cell locations and to approximate their actual amide I/II intensity ratios. The hBTEM methodology can also potentially be applied to vibrational spectroscopic datasets with dynamic or spatial variations arising from chemical reactions, physical perturbations, pathological states, and the like.
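The Pearson VII band shape interpolates between Lorentzian (m = 1) and Gaussian (m → ∞) limits. A sketch of fitting one synthetic amide-I-like band, using one common parameterization in which w is the half-width at half-maximum (the paper's exact form may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson7(x, amp, x0, w, m):
    """Pearson VII band shape; w is the half-width at half-maximum here."""
    return amp * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

# Synthetic amide-I-like band (wavenumber axis, arbitrary intensity).
x = np.linspace(1580.0, 1720.0, 150)
y = pearson7(x, 1.0, 1655.0, 12.0, 2.0)

popt, _ = curve_fit(pearson7, x, y, p0=(0.8, 1650.0, 10.0, 1.5))
```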
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
A grid algorithm for high throughput fitting of dose-response curve data.
Wang, Yuhong; Jadhav, Ajit; Southal, Noel; Huang, Ruili; Nguyen, Dac-Trung
2010-10-21
We describe a novel algorithm, the Grid algorithm, and the corresponding computer program for high-throughput fitting of dose-response curves that are described by the four-parameter symmetric logistic dose-response model. The Grid algorithm searches through all points in a grid of four dimensions (parameters) and finds the optimum one corresponding to the best fit. Using simulated dose-response curves, we examined the Grid program's performance in reproducing the actual values that were used to generate the simulated data and compared it with the DRC package for the language and environment R and the XLfit add-in for Microsoft Excel. The Grid program was robust and consistently recovered the actual values for both complete and partial curves, with or without noise. Both DRC and XLfit performed well on data without noise, but they were sensitive to noise and their performance degraded rapidly as noise increased. The Grid program is automated and scalable to millions of dose-response curves, and it is able to process 100,000 dose-response curves from a high-throughput screening experiment per CPU hour. The Grid program has the potential to greatly increase the productivity of large-scale dose-response data analysis and early drug discovery processes, and it is also applicable to many other curve-fitting problems in the chemical, biological, and medical sciences.
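A toy version of an exhaustive grid search over the four logistic parameters; the grids below are coarse and illustrative, and the actual Grid program's gridding and refinement strategy is not reproduced:

```python
import numpy as np
from itertools import product

def logistic4(x, bottom, top, logec50, h):
    """Four-parameter symmetric logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((logec50 - x) * h))

# Synthetic noise-free curve on a log10-dose axis.
x = np.linspace(-9.0, -4.0, 10)
y = logistic4(x, 0.0, 1.0, -6.0, 1.0)

# Coarse illustrative grids over the four parameters.
bottoms = np.linspace(-0.2, 0.2, 5)
tops = np.linspace(0.8, 1.2, 5)
logec50s = np.linspace(-7.5, -4.5, 13)
slopes = np.linspace(0.5, 2.0, 7)

# Scan every grid point; keep the one with the smallest sum of squared errors.
best, best_sse = None, np.inf
for params in product(bottoms, tops, logec50s, slopes):
    sse = np.sum((y - logistic4(x, *params)) ** 2)
    if sse < best_sse:
        best, best_sse = params, sse
```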
An Algorithm for Obtaining Reliable Priors for Constrained-Curve Fits
Terrence Draper; Shao-Jing Dong; Ivan Horvath; Frank Lee; Nilmani Mathur; Jianbo Zhang
2004-03-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-chi-square fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass, where a priori values are not otherwise known. We illustrate the efficacy of the method with data from overlap fermions, on a quenched 16³ × 28 lattice with spatial size La = 3.2 fm and pion mass as low as ~180 MeV.
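The augmented-chi-square objective is a standard chi-square plus Gaussian prior terms on the parameters. A sketch on a toy single-exponential decay; the data, priors, and model are illustrative, not the lattice-QCD analysis or the Sequential Empirical Bayes procedure itself:

```python
import numpy as np
from scipy.optimize import minimize

# Toy decay data y = A * exp(-m * t), with a prior constraining (A, m).
t = np.arange(1, 12, dtype=float)
y = 1.2 * np.exp(-0.5 * t)
sigma = 0.01  # assumed uniform measurement error

prior = np.array([1.0, 0.6])      # prior central values (illustrative)
prior_sig = np.array([0.5, 0.2])  # prior widths (illustrative)

def augmented_chi2(p):
    """Standard chi-square plus the Gaussian prior (augmentation) term."""
    chi2 = np.sum(((y - p[0] * np.exp(-p[1] * t)) / sigma) ** 2)
    chi2 += np.sum(((p - prior) / prior_sig) ** 2)
    return chi2

res = minimize(augmented_chi2, x0=prior, method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 5000})
```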
Baushke, Samuel W; Stedtfeld, Robert D; Tourlousse, Dieter M; Ahmad, Farhan; Wick, Lukas M; Gulari, Erdogan; Tiedje, James M; Hashsham, Syed A
2012-01-01
Non-equilibrium dissociation curves (NEDCs) have the potential to identify non-specific hybridizations on high-throughput diagnostic microarrays. We report a simple method for identifying non-specific signals using a new parameter that does not rely on comparison of perfect-match and mismatch dissociations. The parameter is the ratio of the specific dissociation temperature (Td-w) to the theoretical melting temperature (Tm) and can be obtained by automated fitting of a four-parameter sigmoid empirical equation to the thousands of curves generated in a typical experiment. The equation fit perfect-match NEDCs from an initial experiment with an R² of 0.998±0.006 and a root mean square of 108±91 fluorescence units. Receiver operating characteristic curve analysis showed low-temperature hybridization signals (20–48 °C) to be as effective as area under the curve as primary data filters. Evaluation of three datasets that target 16S rRNA and functional genes with varying degrees of target sequence similarity showed that filtering out hybridizations with Td-w/Tm < 0.78 greatly reduced false positives. In conclusion, Td-w/Tm successfully screened out many non-specific hybridizations that could not be identified using single-temperature signal intensities alone, while the empirical modeling allowed a simplified approach to high-throughput analysis of thousands of NEDCs. PMID:22537822
Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis
ERIC Educational Resources Information Center
Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.
2010-01-01
The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo, Urbano
2000-01-01
Describes a program for computing different person-fit measures under different parametric item response models for binary items. The indexes can be computed for the Rasch model and the two- and three-parameter logistic models. The program can plot person response curves to allow the researchers to investigate the nonfitting response behavior of…
Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.
Mohammed, Goran Abdulrahman; Hou, Ming
2016-03-01
The objective of this paper is to propose an asymmetric Gaussian function as an alternative to existing active force-length models, and to optimize this model, along with several existing models, using the least-squares curve-fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least-squares fitting. Simulated sarcomere data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit the simulated and experimental data reasonably well, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function are better than the other functions in terms of the statistical scores root-mean-squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6% for simulated data and 0.2-5% for experimental data). The proposed asymmetric Gaussian model, and the method of parametrizing it and the other force-length models mentioned above, can be used in studies of the active force-length relationships of skeletal muscles that generate the forces causing movements of human and animal bodies.
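One common way to write an asymmetric Gaussian force-length curve is with separate widths on either side of the optimal length; the paper's exact parameterization may differ. A sketch with synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_gauss(L, fmax, lopt, w_left, w_right):
    """Asymmetric Gaussian: separate widths below and above optimal length."""
    L = np.asarray(L, dtype=float)
    w = np.where(L < lopt, w_left, w_right)
    return fmax * np.exp(-((L - lopt) ** 2) / (2.0 * w ** 2))

# Synthetic force-length data (normalized length and force, illustrative).
L = np.linspace(0.6, 1.6, 40)
F = asym_gauss(L, 1.0, 1.05, 0.15, 0.25)

popt, _ = curve_fit(asym_gauss, L, F, p0=(0.9, 1.0, 0.2, 0.2))
```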
Thermal performance curves of Paramecium caudatum: a model selection approach.
Krenek, Sascha; Berendonk, Thomas U; Petzoldt, Thomas
2011-05-01
The ongoing climate change has motivated numerous studies investigating the temperature response of various organisms, especially ectotherms. To correctly describe the thermal performance of these organisms, functions are needed that adequately fit the complete optimum curve. Surprisingly, model comparisons for the temperature dependence of population growth rates of an important ectothermic group, the protozoa, are still missing. In this study, temperature reaction norms of natural isolates of the freshwater protist Paramecium caudatum were investigated over nearly the entire temperature range. These reaction norms were used to estimate thermal performance curves by applying a set of commonly used model functions. An information-theoretic approach was used to compare the models and to identify the best ones for describing these data. Our results indicate that models which can describe negative growth at the high- and low-temperature branches of an optimum curve are preferable; this is a prerequisite for accurately calculating the critical upper and lower thermal limits. While we detected a temperature optimum of around 29 °C for all investigated clonal strains, the critical thermal limits differed considerably between individual clones, with the tropical clone showing the narrowest thermal tolerance and a shift of its critical thermal limits to higher temperatures.
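The information-theoretic comparison amounts to fitting each candidate function and ranking by a criterion such as AIC. A sketch using polynomial orders as stand-ins for the paper's candidate performance-curve models, with synthetic growth-rate data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = np.linspace(5.0, 35.0, 25)
# Synthetic thermal performance data: quadratic optimum curve plus noise.
rate = -0.01 * (T - 29.0) ** 2 + 1.0 + rng.normal(scale=0.05, size=T.size)

def aic(rss, n, k):
    """AIC for least-squares fits: n*log(RSS/n) + 2k (constant terms dropped)."""
    return n * np.log(rss / n) + 2 * k

models = {}
for deg in (1, 2, 3):  # stand-in candidate models of increasing complexity
    coef = np.polyfit(T, rate, deg)
    rss = np.sum((rate - np.polyval(coef, T)) ** 2)
    models[deg] = aic(rss, T.size, deg + 1)

best_deg = min(models, key=models.get)  # lowest AIC wins
```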
STRITERFIT, a least-squares pharmacokinetic curve-fitting package using a programmable calculator.
Thornhill, D P; Schwerzel, E
1985-05-01
A program is described that permits iterative least-squares nonlinear regression fitting of polyexponential curves using the Hewlett Packard HP 41 CV programmable calculator. The program enables the analysis of pharmacokinetic drug level profiles with a high degree of precision. Up to 15 data pairs can be used, and initial estimates of curve parameters are obtained with a stripping procedure. Up to four exponential terms can be accommodated by the program, and there is the option of weighting data according to their reciprocals. Initial slopes cannot be forced through zero. The program may be interrupted at any time in order to examine convergence.
Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers
NASA Astrophysics Data System (ADS)
Tananaev, N. I.
2015-03-01
Published suspended sediment data for Arctic rivers are scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as an optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.
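A power-law rating curve of this kind can be fit directly in linear space with the Levenberg-Marquardt algorithm, avoiding the log-transform bias mentioned above. The discharge and concentration values below are synthetic placeholders, not the rivers' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(q, a, b):
    """Power-law sediment rating curve: concentration = a * discharge**b."""
    return a * np.power(q, b)

# Synthetic discharge (m3/s) / concentration pairs with multiplicative scatter
rng = np.random.default_rng(1)
q = np.linspace(50.0, 2000.0, 60)
c = rating_curve(q, 0.02, 1.3) * rng.lognormal(0.0, 0.1, q.size)

# method='lm' selects the Levenberg-Marquardt algorithm in SciPy
popt, _ = curve_fit(rating_curve, q, c, p0=(0.1, 1.0), method='lm')
print(popt)
```

Swapping `method='lm'` for a Nelder-Mead minimization of the same sum of squares (via `scipy.optimize.minimize`) reproduces the kind of cross-algorithm comparison the study performs.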
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented along with a limited number of changes that are required to customize the program to a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
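The core of the quadratic chi-square expansion is a linear system solved once per iteration. The sketch below shows one such step in the Gauss-Newton form that the expansion reduces to; the exponential fitting function and all numbers are hypothetical examples, not from the report.

```python
import numpy as np

def quadratic_chi2_step(model, jac, p, x, y, sigma):
    """One iteration of the quadratic chi-square expansion: truncating the
    Taylor series of chi^2 after the quadratic term yields the linear system
    (J^T W J) delta = J^T W r, solved here by matrix algebra."""
    r = y - model(x, p)                    # residuals
    J = jac(x, p)                          # n_points x n_params Jacobian
    W = np.diag(1.0 / sigma ** 2)          # statistical weights
    delta = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return p + delta

# Hypothetical fitting function y = p0 * exp(p1 * x) with analytic Jacobian
model = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])

x = np.linspace(0.0, 1.0, 20)
y = model(x, np.array([2.0, -1.5]))        # noiseless target data

# Meaningful initial estimates are required for convergence, as noted above
p = np.array([1.0, -1.0])
for _ in range(50):
    p = quadratic_chi2_step(model, jac, p, x, y, np.ones_like(x))
print(p)
```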
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with Matlab. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.
A numerical method for biphasic curve fitting with a programmable calculator.
Ristanović, D; Ristanović, D; Malesević, J; Milutinović, B
1982-01-01
Elimination kinetics of bromsulphalein (BSP) after a single injection into the circulation of rats were examined by means of a four-compartment model. BSP plasma concentrations were measured colorimetrically. A program written for the Texas Instruments TI-59 programmable calculator is presented, which will calculate the fractional blood clearance of BSP using an iteration procedure. A simple method of fitting biphasic decay curves to experimental data is also proposed.
Liao, Fei; Tian, Kao-Cong; Yang, Xiao; Zhou, Qi-Xin; Zeng, Zhao-Chun; Zuo, Yu-Ping
2003-03-01
The reliability of kinetic substrate quantification by nonlinear fitting of the enzyme reaction curve to the integrated Michaelis-Menten equation was investigated by both simulation and preliminary experimentation. For simulation, product absorptivity epsilon was 3.00 mmol(-1) L cm(-1) and K(m) was 0.10 mmol L(-1), and uniform absorbance error sigma was randomly inserted into the error-free reaction curve of product absorbance A(i) versus reaction time t(i) calculated according to the integrated Michaelis-Menten equation. The experimental reaction curve of arylesterase acting on phenyl acetate was monitored by phenol absorbance at 270 nm. Maximal product absorbance A(m) was predicted by nonlinear fitting of the reaction curve to Eq. (1) with K(m) as constant. There were unique A(m) for best fitting of both the simulated and experimental reaction curves. Neither the error in reaction origin nor the variation of enzyme activity changed the background-corrected value of A(m). But the range of data under analysis, the background absorbance, and absorbance error sigma had an effect. By simulation, A(m) from 0.150 to 3.600 was predicted with reliability and linear response to substrate concentration when there was 80% consumption of substrate at sigma of 0.001. Restriction of absorbance to 0.700 enabled A(m) up to 1.800 to be predicted at sigma of 0.001. Detection limit reached A(m) of 0.090 at sigma of 0.001. By experimentation, the reproducibility was 4.6% at substrate concentration twice the K(m), and A(m) linearly responded to phenyl acetate with consistent absorptivity for phenol, and upper limit about twice the maximum of experimental absorbance. These results supported the reliability of this new kinetic method for enzymatic analysis with enhanced upper limit and precision.
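One convenient way to fit a reaction curve to the integrated Michaelis-Menten equation with K(m) held constant is to rearrange it as time versus product absorbance. The sketch below does this for a simulated error-free curve using the absorptivity and K(m) values quoted in the abstract; the rearrangement, parameter names, and remaining numbers are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

EPS = 3.00   # product absorptivity, mmol^-1 L cm^-1 (from the abstract)
KM = 0.10    # Michaelis constant, mmol L^-1, held fixed during fitting

def t_of_A(A, Am, v):
    """Integrated Michaelis-Menten equation rearranged as time vs product
    absorbance A: t = A/v + (Km*eps/v) * ln(Am/(Am - A)), where Am is the
    maximal product absorbance and v = Vmax*eps is the rate in absorbance
    units per unit time."""
    return A / v + (KM * EPS / v) * np.log(Am / (Am - A))

# Simulated error-free reaction curve, then recover Am by nonlinear fitting
Am_true, v_true = 0.600, 0.010
A = np.linspace(0.01, 0.80 * Am_true, 50)   # up to 80% substrate consumption
t = t_of_A(A, Am_true, v_true)

popt, _ = curve_fit(t_of_A, A, t, p0=(0.7, 0.02))
print(popt)   # recovered (Am, Vmax*eps); substrate conc. follows as Am/eps
```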
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
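A variance-based weighting of a second-order polynomial fit can be sketched as follows. The residual-function shape, the ragged region, and the window size are synthetic illustrations of the idea, not the tested structure's data.

```python
import numpy as np

# Synthetic residual-function samples: nearly flat at low frequency with a
# slight upward curvature, plus a deliberately ragged region
rng = np.random.default_rng(2)
f = np.linspace(1.0, 50.0, 100)
resid_true = 1e-6 * (5.0 + 0.002 * f ** 2)
noise = rng.normal(0.0, 1e-8, f.size)
noise[40:50] *= 20.0                       # ragged data, as in test residuals
data = resid_true + noise

# Generate weights by examining variances between neighboring data points:
# local variance of first differences is insensitive to the smooth trend
d = np.diff(data)
half = 3
var = np.array([np.var(d[max(0, i - half):i + half + 1])
                for i in range(data.size)])
w = 1.0 / np.sqrt(var + 1e-30)             # np.polyfit expects ~1/sigma weights

coeffs = np.polyfit(f, data, deg=2, w=w)   # highest power first
print(coeffs)   # the constant term approximates the residual flexibility
```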
Estimation of equilibrium formation temperature by the curve fitting method and its problems
Kenso Takai; Masami Hyodo; Shinji Takasugi
1994-01-20
Determination of the true formation temperature from measured bottom-hole temperature is important for geothermal reservoir evaluation after completion of well drilling. For estimation of the equilibrium formation temperature, we studied a non-linear least squares fitting method adopting the Middleton model (Chiba et al., 1988). This method proved to be a simple and relatively reliable way to estimate the equilibrium formation temperature after drilling. As a next step, we are studying the estimation of equilibrium formation temperature from bottom-hole temperature data measured by MWD (measurement-while-drilling) systems. In this study, we evaluated the suitability of the non-linear least squares curve fitting method and of the numerical simulator (GEOTEMP2) for estimating the equilibrium formation temperature while drilling.
Ying Chen; Shao-Jing Dong; Terrence Draper; Ivan Horvath; Keh-Fei Liu; Nilmani Mathur; Sonali Tamhankar; Cidambi Srinivasan; Frank X. Lee; Jianbo Zhang
2004-05-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-χ² fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass where a priori values are not otherwise known. Lessons learned (including caveats limiting the scope of the method) from studying artificial data are presented. As an illustration, from local-local two-point correlation functions, we obtain masses and spectral weights for ground and first-excited states of the pion, give preliminary fits for the a₀ where ghost states (a quenched artifact) must be dealt with, and elaborate on the details of fits of the Roper resonance and S₁₁(N^(1/2-)) previously presented elsewhere. The data are from overlap fermions on a quenched 16³ × 28 lattice with spatial size La = 3.2 fm and pion mass as low as ~180 MeV.
A Healthy Approach to Fitness Center Security.
ERIC Educational Resources Information Center
Sturgeon, Julie
2000-01-01
Examines techniques for keeping college fitness centers secure while maintaining an inviting atmosphere. Building access control, preventing locker room theft, and suppressing causes for physical violence are discussed. (GR)
Measurement of focused ultrasonic fields based on colour edge detection and curve fitting
NASA Astrophysics Data System (ADS)
Zhu, H.; Chang, S.; Yang, P.; He, L.
2016-03-01
This paper first uses a scanning device and an optic-fiber hydrophone to establish a measurement system, and then proposes parameter measurement of the focused transducer based on edge detection of the visualized acoustic data and curve fitting. The measurement system consists of a water tank with a wedge absorber, stepper motor drivers, a system controller, a focused transducer, an optic-fiber hydrophone and data-processing software. On the basis of the visualized processing of the original scanned data, the -3 dB beam width of the focused transducer is calculated using edge detection of the acoustic visualized image and a circle-fitting method that minimizes algebraic distance. Experiments on the visualized ultrasound data are implemented to verify the feasibility of the proposed method. The data obtained from the scanning device are used to reconstruct acoustic fields, and it is found that the -3 dB beam width of the focused transducer can be predicted accurately.
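Circle fitting by minimizing algebraic distance reduces to a linear least-squares problem. The sketch below uses the classic algebraic (Kasa-style) formulation on synthetic edge points; the contour values are invented for illustration.

```python
import numpy as np

def fit_circle_algebraic(x, y):
    """Circle fit minimizing algebraic distance: solve the linear
    least-squares system [x y 1] @ [D E F]^T = -(x^2 + y^2), then recover
    center (-D/2, -E/2) and radius sqrt(D^2/4 + E^2/4 - F)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Synthetic noisy points on a detected -3 dB contour (illustrative numbers)
rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
x = 1.5 + 2.0 * np.cos(theta) + rng.normal(0.0, 0.02, theta.size)
y = -0.5 + 2.0 * np.sin(theta) + rng.normal(0.0, 0.02, theta.size)

cx, cy, r = fit_circle_algebraic(x, y)
print(cx, cy, r)   # the -3 dB beam width would then be 2*r
```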
Combined use of Tikhonov deconvolution and curve fitting for spectrogram interpretation
Morawski, R.Z.; Miekina, A.; Barwicz, A.
1996-12-31
The problem of numerical correction of spectrograms is addressed. A new method of correction is developed which consists of sequential use of the Tikhonov deconvolution algorithm, for estimating the positions of spectral peaks, and a curve-fitting algorithm, for estimating their magnitudes. The metrological and numerical properties of the proposed method for spectrogram interpretation are assessed by means of spectrometry-based criteria, using synthetic and real-world spectrograms. Conclusions are drawn concerning computational complexity and accuracy of the proposed method and its metrological applicability.
A novel curve fitting method for AV optimisation of biventricular pacemakers.
Dehbi, Hakim-Moulay; Jones, Siana; Sohaib, S M Afzal; Finegold, Judith A; Siggers, Jennifer H; Stegemann, Berthold; Whinnett, Zachary I; Francis, Darrel P
2015-09-01
In this study, we designed and tested a new algorithm, which we call the 'restricted parabola', to identify the optimum atrioventricular (AV) delay in patients with biventricular pacemakers. This algorithm automatically restricts the hemodynamic data used for curve fitting to the parabolic zone in order to avoid inadvertently selecting an AV optimum that is too long. We used R, a programming language and software environment for statistical computing, to create an algorithm which applies multiple different cut-offs to partition curve fitting of a dataset into a parabolic and a plateau region and then selects the best cut-off using a least squares method. In 82 patients, AV delay was adjusted and beat-to-beat systolic blood pressure (SBP) was measured non-invasively using our multiple-repetition protocol. The novel algorithm was compared to fitting a parabola across the whole dataset to identify how many patients had a plateau region, and whether a higher hemodynamic response was achieved with one method. In 9/82 patients, the restricted parabola algorithm detected that the pattern was not parabolic at longer AV delays. For these patients, the optimal AV delay predicted by the restricted parabola algorithm increased SBP by 1.36 mmHg above that predicted by the conventional parabolic algorithm (95% confidence interval: 0.65 to 2.07 mmHg, p-value = 0.002). AV optima selected using our novel restricted parabola algorithm give a greater improvement in acute hemodynamics than fitting a parabola across all tested AV delays. Such an algorithm may assist the development of automated methods for biventricular pacemaker optimisation.
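The cut-off search described above can be sketched as follows. This is a minimal reconstruction of the idea (parabola below the cut-off, plateau beyond, best cut-off by total least squares) on synthetic data; it is not the authors' R implementation, and all names and numbers are assumptions.

```python
import numpy as np

def restricted_parabola_fit(av, sbp):
    """Try each candidate cut-off: fit a parabola to AV delays up to the
    cut-off and a constant plateau beyond it, keep the cut-off with the
    smallest total sum of squared residuals, and return the AV optimum
    (parabola vertex) plus the chosen cut-off."""
    best = None
    for k in range(4, av.size + 1):        # need >= 4 points for the parabola
        p = np.polyfit(av[:k], sbp[:k], 2)
        res_par = sbp[:k] - np.polyval(p, av[:k])
        res_plat = sbp[k:] - (sbp[k:].mean() if k < av.size else 0.0)
        sse = np.sum(res_par ** 2) + np.sum(np.atleast_1d(res_plat) ** 2)
        if best is None or sse < best[0]:
            best = (sse, k, p)
    _, k, p = best
    av_opt = -p[1] / (2.0 * p[0])          # vertex of the fitted parabola
    return av_opt, av[k - 1]

# Synthetic hemodynamic response: parabolic up to 200 ms, plateau beyond
av = np.arange(40.0, 321.0, 20.0)
sbp = np.where(av <= 200.0,
               120.0 - 0.001 * (av - 140.0) ** 2,
               120.0 - 0.001 * (200.0 - 140.0) ** 2)

av_opt, cutoff = restricted_parabola_fit(av, sbp)
print(av_opt, cutoff)
```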
Curve fitting toxicity test data: Which comes first, the dose response or the model?
Gully, J.; Baird, R.; Bottomley, J.
1995-12-31
The probit model frequently does not fit the concentration-response curve of NPDES toxicity test data and non-parametric models must be used instead. The non-parametric models, trimmed Spearman-Karber, ICp, and linear interpolation, all require a monotonic concentration-response. Any deviation from a monotonic response is smoothed to obtain the desired concentration-response characteristics. Inaccurate point estimates may result from such procedures and can contribute to imprecision in replicate tests. The following study analyzed reference toxicant and effluent data from giant kelp (Macrocystis pyrifera), purple sea urchin (Strongylocentrotus purpuratus), red abalone (Haliotis rufescens), and fathead minnow (Pimephales promelas) bioassays using commercially available curve fitting software. The purpose was to search for alternative parametric models which would reduce the use of non-parametric models for point estimate analysis of toxicity data. Two non-linear models, power and logistic dose-response, were selected as possible alternatives to the probit model based upon their toxicological plausibility and ability to model most data sets examined. Unlike non-parametric procedures, these and all parametric models can be statistically evaluated for fit and significance. The use of the power or logistic dose response models increased the percentage of parametric model fits for each protocol and toxicant combination examined. The precision of the selected non-linear models was also compared with the EPA recommended point estimation models at several effect levels. In general, precision of the alternative models was equal to or better than the traditional methods. Finally, use of the alternative models usually produced more plausible point estimates in data sets where the effects of smoothing and non-parametric modeling made the point estimate results suspect.
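A parametric logistic dose-response fit of the kind compared above can be sketched as follows. The two-parameter form, the concentration series, and the effect proportions are invented for illustration; unlike a non-parametric smoother, any effect-level point estimate follows by inverting the fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_dr(conc, ec50, slope):
    """Two-parameter logistic dose-response: proportion affected rises from
    0 to 1 with concentration; ec50 is the 50% effect concentration."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

# Synthetic toxicity-test data: proportion affected at each concentration
conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])
effect = np.array([0.05, 0.12, 0.45, 0.88, 0.97])

popt, _ = curve_fit(logistic_dr, conc, effect, p0=(25.0, 2.0))
ec50, slope = popt

# Point estimate at the 25% effect level, from inverting the fitted model
ec25 = ec50 * (0.25 / 0.75) ** (1.0 / slope)
print(ec50, ec25)
```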
NASA Astrophysics Data System (ADS)
Pineda, Juan C. B.; Hayward, Christopher C.; Springel, Volker; Mendes de Oliveira, Claudia
2017-04-01
We study the role of systematic effects in observational studies of the cusp-core problem under the minimum disc approximation using a suite of high-resolution (25-pc softening length) hydrodynamical simulations of dwarf galaxies. We mimic realistic kinematic observations and fit the mock rotation curves with two analytic models commonly used to differentiate cores from cusps in the dark matter distribution. We find that the cored pseudo-isothermal sphere (ISO) model is strongly favoured by the reduced χ²ν of the fits in spite of the fact that our simulations contain cuspy Navarro-Frenk-White (NFW) profiles. We show that even idealized measurements of the gas circular motions can lead to the incorrect answer if velocity underestimates induced by pressure support, with a typical size of order ~5 km s⁻¹ in the central kiloparsec, are neglected. Increasing the spatial resolution of the mock observations leads to more misleading results because the inner region, where the effect of pressure support is most significant, is better sampled. Fits to observations with a spatial resolution of 100 pc (2 arcsec at 10 Mpc) favour the ISO model in 78-90 per cent of the cases, while at 800-pc resolution, 41-77 per cent of the galaxies indicate the fictitious presence of a dark matter core. The coefficients of our best-fitting models agree well with those reported in observational studies; therefore, we conclude that NFW haloes cannot be ruled out reliably from this type of analysis.
Calculations and curve fits of thermodynamic and transport properties for equilibrium air to 30000 K
NASA Technical Reports Server (NTRS)
Gupta, Roop N.; Lee, Kam-Pui; Thompson, Richard A.; Yos, Jerrold M.
1991-01-01
A self-consistent set of equilibrium air values was computed for enthalpy, total specific heat at constant pressure, compressibility factor, viscosity, total thermal conductivity, and total Prandtl number from 500 to 30,000 K over a pressure range of 10⁻⁴ to 10² atm. The mixture values are calculated from the transport and thermodynamic properties of the individual species provided in a recent study by the authors. The concentrations of the individual species, required in the mixture relations, are obtained from a free-energy minimization calculation procedure. Present calculations are based on an 11-species air model. For pressures less than 10⁻² atm and temperatures of about 15,000 K and greater, the concentrations of N⁺⁺ and O⁺⁺ become important, and consequently they are included in the calculations determining the various properties. The computed properties are curve fitted as a function of temperature at a constant value of pressure. These curve fits reproduce the computed values within 5 percent for the entire temperature range considered here at specific pressures, and provide an efficient means for computing the flowfield properties of equilibrium air, provided the elemental composition remains constant at 0.24 for oxygen and 0.76 for nitrogen by mass.
Merlos Rodrigo, Miguel Angel; Molina-López, Jorge; Jimenez Jimenez, Ana Maria; Planells Del Pozo, Elena; Adam, Pavlina; Eckschlager, Tomas; Zitka, Ondrej; Richtera, Lukas; Adam, Vojtech
2017-01-01
The translation of metallothioneins (MTs) is one of the defense strategies by which organisms protect themselves from metal-induced toxicity. MTs belong to a family of proteins comprising the MT-1, MT-2, MT-3, and MT-4 classes, with multiple isoforms within each class. The main aim of this study was to determine the behavior of MTs under various externally modelled environments, using electrochemistry. In our study, the mass distribution of MTs was characterized using MALDI-TOF. After that, an adsorptive transfer stripping technique with differential pulse voltammetry was selected for optimization of electrochemical detection of MTs with regard to accumulation time and pH effects. Our results show that utilization of 0.5 M NaCl, pH 6.4, as the supporting electrolyte provides a highly complicated fingerprint, showing a number of non-resolved voltammograms. Hence, we further resolved the voltammograms exhibiting the broad and overlapping signals using curve fitting. The separated signals were assigned to the electrochemical responses of several MT complexes with zinc(II), cadmium(II), and copper(II), respectively. Our results show that electrochemistry could serve as a great tool for metalloproteomic applications to determine the ratio of metal ion bonds within the target protein structure; however, it provides highly complicated signals, which require further resolution using a proper statistical method, such as curve fitting. PMID:28287470
Bayesian fitting of a logistic dose-response curve with numerically derived priors.
Huson, L W; Kinnersley, N
2009-01-01
In this report we describe the Bayesian analysis of a logistic dose-response curve in a Phase I study, and we present two simple and intuitive numerical approaches to construction of prior probability distributions for the model parameters. We combine these priors with the expert prior opinion and compare the results of the analyses with those obtained from the use of alternative prior formulations.
NASA Astrophysics Data System (ADS)
Fu, W.; Gu, L.; Hoffman, F. M.
2013-12-01
The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely curves of net CO2 assimilation against intercellular CO2 concentration (A-Ci curves), made at saturating light conditions. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). To reliably estimate photosynthetic parameters, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural overparameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately. Using subsaturating light intensities allows more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve fitting methods. Some fitted parameters, such as the photocompensation point and day respiration, impose a significant limitation on modeling leaf CO2 exchange. Fitting multiple A-Ci curves can also improve on the so-called Laisk (1977) method, which recent publications have shown to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions that constrain measured A-Ci points to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of resistance of the chloroplast and cell wall-plasmalemma and explore the effect of variable mesophyll conductance.
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
High-resolution fiber optic temperature sensors using nonlinear spectral curve fitting technique.
Su, Z H; Gan, J; Yu, Q K; Zhang, Q H; Liu, Z H; Bao, J M
2013-04-01
A generic new data processing method is developed to accurately calculate the absolute optical path difference of a low-finesse Fabry-Perot cavity from its broadband interference fringes. The method combines Fast Fourier Transformation with nonlinear curve fitting of the entire spectrum. Modular functions of LabVIEW are employed for fast implementation of the data processing algorithm. The advantages of this technique are demonstrated through high performance fiber optic temperature sensors consisting of an infrared superluminescent diode and an infrared spectrometer. A high resolution of 0.01 °C is achieved over a large dynamic range from room temperature to 800 °C, limited only by the silica fiber used for the sensor.
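The two-stage scheme described (FFT for a coarse optical path difference, then nonlinear fitting of the entire spectrum) can be sketched as follows. The two-beam cosine fringe model and all numerical values are illustrative assumptions for a simulated low-finesse cavity, not the sensor's actual parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def fringes(k, a, b, opd, phi):
    """Low-finesse Fabry-Perot spectrum vs wavenumber k = 1/lambda, modeled
    as two-beam interference: I = a + b*cos(2*pi*opd*k + phi)."""
    return a + b * np.cos(2.0 * np.pi * opd * k + phi)

# Simulated broadband interferogram over an evenly sampled wavenumber axis
k = np.linspace(0.5, 1.0, 2048)        # 1/um, i.e. a 1-2 um wavelength band
opd_true = 80.0                        # optical path difference, um
I = fringes(k, 1.0, 0.3, opd_true, 0.4)

# Stage 1: FFT gives a coarse OPD (fringe frequency in cycles per wavenumber)
spec = np.abs(np.fft.rfft(I - I.mean()))
freqs = np.fft.rfftfreq(k.size, d=k[1] - k[0])
opd0 = freqs[np.argmax(spec)]

# Stage 2: nonlinear fit of the entire spectrum refines the estimate
popt, _ = curve_fit(fringes, k, I, p0=(1.0, 0.3, opd0, 0.0))
print(opd0, popt[2])
```

The FFT estimate is limited by the spectral span, while the full-spectrum fit recovers the absolute OPD to a small fraction of a fringe, which is what enables the high temperature resolution reported.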
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRFs) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRFs are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. Data acquisition and FFT processing, curve fitting, and modal tuning phases are described and examples are given to illustrate and evaluate them.
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
New Horizons approach photometry of Pluto and Charon: light curves and Solar phase curves
NASA Astrophysics Data System (ADS)
Zangari, A. M.; Buie, M. W.; Buratti, B. J.; Verbiscer, A.; Howett, C.; Weaver, H. A., Jr.; Olkin, C.; Ennico Smith, K.; Young, L. A.; Stern, S. A.
2015-12-01
While the most captivating images of Pluto and Charon were shot by NASA's New Horizons probe on July 14, 2015, the spacecraft also imaged Pluto with its LOng Range Reconnaissance Imager ("LORRI") during its Annual Checkouts and Approach Phases, with campaigns in July 2013, July 2014, January 2015, March 2015, April 2015, May 2015 and June 2015. All but the first campaign provided full coverage of Pluto's 6.4 day rotation. Even though many of these images were taken when surface features on Pluto and Charon were unresolved, these data provide a unique opportunity to study Pluto over a timescale of several months. Earth-based data from an entire apparition must be combined to create a single light curve, as Pluto is never otherwise continuously available for observing due to daylight, weather and scheduling. From the spacecraft, Pluto's sub-observer latitude remained constant to within 0.05 degrees of 43.15 degrees, comparable to a week's worth of change as seen from Earth near opposition. During the July 2013 to June 2015 period, Pluto's solar phase curve increased from 11 degrees to 15 degrees, a small range, but large compared to Earth's 2 degree limit. The slope of the solar phase curve hints at properties such as surface roughness. Using PSF photometry that takes into account the ever-increasing sizes of Pluto and Charon as seen from New Horizons, as well as surface features discovered at closest approach, we present rotational light curves and solar phase curves of Pluto and Charon. We will connect these observations to previous measurements of the system from Earth.
Open Versus Closed Hearing-Aid Fittings: A Literature Review of Both Fitting Approaches
Latzel, Matthias; Holube, Inga
2016-01-01
One of the main issues in hearing-aid fittings is the abnormal perception of the user’s own voice as too loud, “boomy,” or “hollow.” This phenomenon, known as the occlusion effect, can be reduced by large vents in the earmolds or by open-fit hearing aids. This review provides an overview of publications related to open and closed hearing-aid fittings. First, the occlusion effect and its consequences for perception while using hearing aids are described. Then, the advantages and disadvantages of open compared with closed fittings and their impact on the fitting process are addressed. The advantages include less occlusion, improved own-voice perception and sound quality, and increased localization performance. The disadvantages associated with open-fit hearing aids include reduced benefits of directional microphones and noise reduction, as well as less compression and less available gain before feedback. The final part of this review addresses the need for new approaches to combine the advantages of open and closed hearing-aid fittings. PMID:26879562
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
NASA Astrophysics Data System (ADS)
Zheng, WeiKang; Filippenko, Alexei V.
2017-03-01
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
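A broken-power-law light-curve fit of this general kind can be sketched as follows. The smoothly broken power law below is a generic form chosen for illustration, not necessarily the authors' exact variant, and the light-curve data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, A, tb, a1, a2, s):
    """Smoothly broken power law in time since first light: flux rises as
    t**a1, rolls over near break time tb, then follows t**a2; s controls
    the sharpness of the break (a generic illustrative form)."""
    x = t / tb
    return A * x ** a1 * (1.0 + x ** (s * (a1 - a2))) ** (-1.0 / s)

# Synthetic light curve (arbitrary flux units): t^2 fireball-like rise,
# then a power-law decline, with 2% multiplicative scatter
rng = np.random.default_rng(4)
t = np.linspace(1.0, 40.0, 80)            # days since first light
f_obs = broken_power_law(t, 10.0, 18.0, 2.0, -2.0, 2.0) \
        * rng.normal(1.0, 0.02, t.size)

popt, _ = curve_fit(broken_power_law, t, f_obs,
                    p0=(8.0, 15.0, 2.0, -2.0, 2.0))
print(popt)   # recovered (A, tb, a1, a2, s)
```

The early-time index a1 near 2 corresponds to the expanding-fireball (blackbody) assumption mentioned in the abstract; fitting the same function to many SNe Ia would give the population statistics the authors anticipate.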
Modified Sediment Rating Curve Approach for Supply-dependent Conditions
NASA Astrophysics Data System (ADS)
Wright, S. A.; Topping, D. J.; Rubin, D. M.; Melis, T. S.
2007-12-01
Reliable predictions of sediment transport and river morphology in response to driving forces, such as anthropogenic influences, are necessary for river engineering and management. Because engineering and management questions span a wide range of space and time scales, a broad spectrum of modeling approaches has been developed, ranging from sediment transport rating curves to complex three-dimensional, multiple grain-size morphodynamic models. Sediment transport rating curves assume a singular relation between sediment concentration and flow. This approach is attractive for evaluating long-term sediment budgets resulting from changes in flow regimes because it is simple to implement, computationally efficient, and the empirical parameters can be estimated from quantities that are commonly measured in the field (sediment concentration and flow). However, the assumption of a singular relation between sediment concentration and flow contains the following implicit assumptions: 1) that sediment transport is in equilibrium with sediment supply such that the grain-size distribution of the bed sediment is not changing, and 2) that the relation between flow and bed shear stress is constant. These assumptions present limitations that have led to the development of more complex numerical models of flow and morphodynamics. These models rely on momentum and mass conservation for water and sediment and thus have general applicability; however, this comes at a cost in terms of computations as well as the amount of data required for model set-up and testing. We present a hybrid approach that combines aspects of the standard sediment rating curve method and the more complex morphodynamic models. Our approach employs the idea of a shifting rating curve, whereby the relation between sediment concentration and flow changes as a function of the sediment budget in the reach. We have applied this alternative approach to the Colorado River below Glen Canyon Dam. This reach is
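As a minimal sketch of the standard rating-curve step that the hybrid method builds on (the supply-dependent shifting is not modeled here), a power-law relation C = a*Q**b can be fitted by linear least squares in log space; the discharge and concentration values below are synthetic.

```python
import numpy as np

# Synthetic discharge Q (m^3/s) and sediment concentration C (mg/L)
# following the rating-curve form C = a * Q**b with lognormal scatter.
rng = np.random.default_rng(1)
Q = rng.uniform(100.0, 1000.0, 200)
C = 0.05 * Q ** 1.8 * np.exp(0.05 * rng.standard_normal(Q.size))

# Linear least squares in log space: log C = log a + b * log Q
b_fit, log_a = np.polyfit(np.log(Q), np.log(C), 1)
a_fit = np.exp(log_a)
```

The shifting-curve extension described above would let a_fit (and possibly b_fit) evolve with the reach's sediment budget rather than stay fixed.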
A simple method for accurate liver volume estimation by use of curve-fitting: a pilot study.
Aoyama, Masahito; Nakayama, Yoshiharu; Awai, Kazuo; Inomata, Yukihiro; Yamashita, Yasuyuki
2013-01-01
In this paper, we describe the effectiveness of our curve-fitting method by comparing liver volumes estimated with our new technique to volumes obtained with the standard manual contour-tracing method. Hepatic parenchymal-phase images of 13 patients were obtained with multi-detector CT scanners after intravenous bolus administration of 120-150 mL of contrast material (300 mgI/mL). The liver contours of all sections were traced manually by an abdominal radiologist, and the liver volume was computed by summing the volumes inside the contours. The slice range between the first and last sections was then divided into 100 equal parts, and each volume was re-sampled by use of linear interpolation. We generated 13 model profile curves, each by averaging 12 cases with one case left out, and we estimated the profile curve for each patient by fitting the volume values at 4 points using a scale and translation transform. Finally, we determined the liver volume by integrating the sampling points of the profile curve. We used Bland-Altman analysis to evaluate the agreement between the volumes estimated with our curve-fitting method and the volumes measured by the manual contour-tracing method. The correlation between the volume measured by manual tracing and that estimated with our curve-fitting method was relatively high (r = 0.98; slope 0.97; p < 0.001). The mean difference between manual tracing and our method was -22.9 cm(3) (SD of the difference, 46.2 cm(3)). Our volume-estimating technique, which requires the tracing of only 4 images, exhibited a relatively high linear correlation with the manual tracing technique.
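A hedged sketch of the pipeline described above: resample a per-slice volume profile to 100 equal parts, fit a model profile by scale and translation using only 4 sampled points, then integrate. The profile shape and sample positions are invented stand-ins, not the study's data.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import least_squares

x = np.linspace(0.0, 1.0, 101)          # slice range normalized to 100 equal parts
model = np.sin(np.pi * x) ** 2          # invented stand-in for the mean profile curve

# A "patient" profile: the model under an unknown scale and translation
patient = 1.4 * np.interp(np.clip(x - 0.05, 0.0, 1.0), x, model)

# Fit scale s and translation t using volume values at only 4 positions
idx = [20, 40, 60, 80]
def resid(p):
    s, t = p
    return s * np.interp(np.clip(x[idx] - t, 0.0, 1.0), x, model) - patient[idx]

fit = least_squares(resid, x0=[1.0, 0.0])
s, t = fit.x

# "Liver volume": integrate the fitted profile over the normalized slice range
volume = trapezoid(s * np.interp(np.clip(x - t, 0.0, 1.0), x, model), x)
```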
Burchardt, Malte; Träuble, Markus; Wittstock, Gunther
2009-06-15
The formalism for simulating scanning electrochemical microscopy (SECM) experiments by boundary element methods in three space coordinates has been extended to allow consideration of nonlinear boundary conditions. This is achieved by iteratively refining the boundary conditions that are encoded in a boundary condition matrix. As an example, the simulations are compared to experimental approach curves in the SECM feedback mode toward samples modified with glucose oxidase (GOx). The GOx layer was prepared by the layer-by-layer assembly of polyelectrolytes using glucose oxidase as one of the polyelectrolytes. The comparison of the simulated and experimental curves showed that, under a wide range of experimentally accessible conditions, approximating the kinetics at the sample by first-order models yields misleading results. The approach curves also differ qualitatively from curves calculated with first-order models. As a consequence, severe deviations may arise when such curves are fitted to first-order kinetic models. The use of linear approximations to describe the enzymatic reaction in SECM feedback experiments is justified only if the ratio of the mediator concentration to the Michaelis-Menten constant is equal to or smaller than 0.1 (deviation less than 10%).
ERIC Educational Resources Information Center
Winsberg, Suzanne; And Others
In most item response theory models a particular mathematical form is assumed for all item characteristic curves, e.g., a logistic function. It could be desirable, however, to estimate the shape of the item characteristic curves without prior restrictive assumptions about its mathematical form. We have developed a practical method of estimating…
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there exist some demands, e.g. in reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is in the identification of the number of knots and their respective locations in non-uniform space in the most efficient computational cost. This paper presents a new strategy for fitting any forms of curve by B-spline functions via local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method to the existing methods in literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that, the proposed method can be applied for fitting any types of curves ranging from smooth ones to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
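The paper's two-step knot-calculation scheme is not reproduced here; the sketch below shows only the final stage it relies on, an ordinary least-squares B-spline fit with fixed interior knots, as available in SciPy.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 4.0, 400)
y = np.sin(2.0 * np.pi * x) + 0.01 * rng.standard_normal(x.size)   # sampled data

# Ordinary least-squares cubic B-spline fit with fixed interior knots;
# the paper's contribution is choosing these knots (number, location,
# continuity level) adaptively, which is not reproduced in this sketch.
knots = np.linspace(0.25, 3.75, 15)
spl = LSQUnivariateSpline(x, y, knots, k=3)
rmse = np.sqrt(np.mean((spl(x) - y) ** 2))
```

Interior knots must lie strictly inside the data range and satisfy the Schoenberg-Whitney conditions, which dense sampling guarantees here.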
ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro
2014-01-01
We propose an approach for estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on these equations, a parameter denoting the birthday information was added to model the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The predicted error difference and Akaike Information Criterion results showed that individual growth curves using birthday information fit the body weight and withers height data better than those without it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds during compensatory growth periods.
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS) is described. It was developed as a highly interactive computer graphics program capable of manipulating large quantities of data so that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved with ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least-squares polynomial fit up to seventh order.
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outermagnetospheric emission models assuming the retarded vacuum dipole magnetic field using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Mugge, P. H.
1974-01-01
Simplified curve fits for the thermodynamic properties of equilibrium air were devised for use in either time-dependent or shock-capturing computational methods. For the time-dependent method, curve fits were developed for p = p(e, rho), a = a(e, rho), and T = T(e, rho). For the shock-capturing method, curve fits were developed for h = h(p, rho) and T = T(p, rho). The ranges of validity for these curve fits were temperatures up to 25,000 K and densities from 10(-7) to 10(3) amagats. These approximate curve fits are considered particularly useful when employed on advanced computers such as the Burroughs ILLIAC 4 or the CDC STAR.
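The NASA curve fits themselves are tabulated piecewise functions and are not reproduced here; as a hedged sanity check of the h = h(p, rho) functional form, a one-parameter least-squares fit to perfect-gas data recovers the coefficient gamma/(gamma - 1).

```python
import numpy as np

# Perfect-gas limit of the h = h(p, rho) form: h = gamma/(gamma-1) * p/rho
gamma = 1.4
rng = np.random.default_rng(3)
p = rng.uniform(1e4, 1e6, 500)          # pressure, Pa (illustrative range)
rho = rng.uniform(0.01, 1.0, 500)       # density, kg/m^3

h = gamma / (gamma - 1.0) * p / rho     # "data" to fit

# One-parameter linear least squares: h ~ c * (p/rho)
z = p / rho
c = np.dot(z, h) / np.dot(z, z)         # recovers gamma/(gamma-1) = 3.5
```

Real equilibrium-air fits replace the constant c with piecewise polynomial functions of p and rho to capture dissociation and ionization.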
Reliability of temperature determination from curve-fitting in multi-wavelength pyrometry
Ni, P. A.; More, R. M.; Bieniosek, F. M.
2013-08-04
This paper examines the reliability of a widely used method for temperature determination by multi-wavelength pyrometry. In recent WDM experiments with ion-beam heated metal foils, we found that the statistical quality of the fit to the measured data is not necessarily a measure of the accuracy of the inferred temperature. We found a specific example where a second-best fit leads to a more realistic temperature value. The physics issue is the wavelength-dependent emissivity of the hot surface. We discuss improvements of the multi-frequency pyrometry technique, which will give a more reliable determination of the temperature from emission data.
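A minimal sketch of the underlying technique, assuming a gray (wavelength-independent) emissivity: fit emissivity and temperature jointly to Planck-law radiances at several wavelengths. The channel wavelengths and values are illustrative; the paper's point is precisely that a wavelength-dependent emissivity can make the statistically best such fit physically misleading.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k = 6.626e-34, 2.998e8, 1.381e-23     # Planck, light speed, Boltzmann (SI)

def gray_body(lam, eps, T):
    """Gray-body spectral radiance: a constant emissivity times the Planck law."""
    return eps * 2.0 * h * c ** 2 / lam ** 5 / np.expm1(h * c / (lam * k * T))

lam = np.linspace(500e-9, 1000e-9, 8)       # pyrometer channels, m (illustrative)
radiance = gray_body(lam, 0.3, 4000.0)      # noise-free synthetic measurement

popt, pcov = curve_fit(gray_body, lam, radiance, p0=(0.5, 3000.0))
```

With real data the emissivity-temperature trade-off can yield several near-equivalent minima, which is the failure mode the paper analyzes.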
Clarke, F H; Cahoon, N M
1987-08-01
A convenient procedure has been developed for the determination of partition and distribution coefficients. The method involves the potentiometric titration of the compound, first in water and then in a rapidly stirred mixture of water and octanol. An automatic titrator is used, and the data is collected and analyzed by curve fitting on a microcomputer with 64 K of memory. The method is rapid and accurate for compounds with pKa values between 4 and 10. Partition coefficients can be measured for monoprotic and diprotic acids and bases. The partition coefficients of the neutral compound and its ion(s) can be determined by varying the ratio of octanol to water. Distribution coefficients calculated over a wide range of pH values are presented graphically as "distribution profiles". It is shown that subtraction of the titration curve of solvent alone from that of the compound in the solvent offers advantages for pKa determination by curve fitting for compounds of low aqueous solubility.
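The "distribution profile" idea can be illustrated directly from the Henderson-Hasselbalch relation: for a monoprotic acid whose neutral form alone partitions into octanol, log D = log P - log10(1 + 10**(pH - pKa)). The constants below are illustrative, not measured values.

```python
import numpy as np

def log_D_acid(pH, logP, pKa):
    """Distribution coefficient of a monoprotic acid, assuming only the
    neutral species partitions into octanol."""
    return logP - np.log10(1.0 + 10.0 ** (pH - pKa))

pH = np.linspace(2.0, 12.0, 101)
profile = log_D_acid(pH, logP=3.0, pKa=4.5)   # illustrative constants
```

At low pH the profile plateaus at log P; above the pKa it falls by one log unit per pH unit, giving the characteristic shape of the graphical distribution profiles described.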
Montgomery, M. H.; Winget, D. E.; Provencal, J. L.; Thompson, S. E.; Kanaan, A.; Mukadam, Anjum S.; Dalessio, J.; Shipman, H. L.; Kepler, S. O.; Koester, D.
2010-06-10
Convective driving, the mechanism originally proposed by Brickhill for pulsating white dwarf stars, has gained general acceptance as the generic linear instability mechanism in DAV and DBV white dwarfs. This physical mechanism naturally leads to a nonlinear formulation, reproducing the observed light curves of many pulsating white dwarfs. This numerical model can also provide information on the average depth of a star's convection zone and the inclination angle of its pulsation axis. In this paper, we give two sets of results of nonlinear light curve fits to data on the DBV GD 358. Our first fit is based on data gathered in 2006 by the Whole Earth Telescope; this data set was multiperiodic containing at least 12 individual modes. Our second fit utilizes data obtained in 1996, when GD 358 underwent a dramatic change in excited frequencies accompanied by a rapid increase in fractional amplitude; during this event it was essentially monoperiodic. We argue that GD 358's convection zone was much thinner in 1996 than in 2006, and we interpret this as a result of a short-lived increase in its surface temperature. In addition, we find strong evidence of oblique pulsation using two sets of evenly split triplets in the 2006 data. This marks the first time that oblique pulsation has been identified in a variable white dwarf star.
Ferreira, Abílio G T; Henrique, Douglas S; Vieira, Ricardo A M; Maeda, Emilyn M; Valotto, Altair A
2015-03-01
The objective of this study was to evaluate four mathematical models with regards to their fit to lactation curves of Holstein cows from herds raised in the southwestern region of the state of Parana, Brazil. Initially, 42,281 milk production records from 2005 to 2011 were obtained from "Associação Paranaense de Criadores de Bovinos da Raça Holandesa (APCBRH)". Data lacking dates of drying and total milk production at 305 days of lactation were excluded, resulting in a remaining 15,142 records corresponding to 2,441 Holstein cows. Data were sorted according to the parity order (ranging from one to six), and within each parity order the animals were divided into quartiles (Q25%, Q50%, Q75% and Q100%) corresponding to 305-day lactation yield. Within each parity order, for each quartile, four mathematical models were adjusted, two of which were predominantly empirical (Brody and Wood) whereas the other two presented more mechanistic characteristics (models Dijkstra and Pollott). The quality of fit was evaluated by the corrected Akaike information criterion. The Wood model showed the best fit in almost all evaluated situations and, therefore, may be considered as the most suitable model to describe, at least empirically, the lactation curves of Holstein cows raised in Southwestern Parana.
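The Wood model referred to above is y(t) = a * t**b * exp(-c*t); a hedged sketch of fitting it and scoring the fit with a corrected Akaike information criterion (one common least-squares form of AICc) on synthetic data not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood's incomplete-gamma lactation curve: y = a * t**b * exp(-c*t)."""
    return a * t ** b * np.exp(-c * t)

rng = np.random.default_rng(4)
t = np.arange(5.0, 306.0, 10.0)                 # days in milk, test-day style
y = wood(t, 20.0, 0.25, 0.004)                  # illustrative true parameters
y *= 1.0 + 0.02 * rng.standard_normal(t.size)   # synthetic scatter

popt, pcov = curve_fit(wood, t, y, p0=(15.0, 0.2, 0.003), maxfev=20000)

# Corrected Akaike information criterion (one common least-squares form)
n, k = t.size, 3
rss = np.sum((y - wood(t, *popt)) ** 2)
aicc = n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
```

Comparing aicc across candidate models (Brody, Dijkstra, Pollott, etc.) is the model-selection step the study performs.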
NASA Astrophysics Data System (ADS)
Milani, G.; Milani, F.
A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from computations. Three kinetic constants must be determined in such a way to minimize the absolute error between normalized experimental data and numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. On the contrary, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders allows the assignment of a value for each kinetic constant and a visual comparison between numerical and experimental curves. Users will thus find optimal values of the constants by means of a classic trial and error strategy. An experimental case of technical relevance is shown as benchmark.
Binary 3D image interpolation algorithm based on global information and adaptive curve fitting
NASA Astrophysics Data System (ADS)
Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng
2013-08-01
Interpolation is a necessary processing step in 3-D reconstruction because of non-uniform resolution. Conventional interpolation methods simply use two slices to obtain the missing slices between them. When a key slice is missing, those methods may fail to recover it using only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation can hardly yield a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm that takes advantage of global information. It adaptively chooses the best curve from many candidate curves based on the complexity of the surface of the 3D object. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.
Learning Curves: Making Quality Online Health Information Available at a Fitness Center.
Dobbins, Montie T; Tarver, Talicia; Adams, Mararia; Jones, Dixie A
2012-01-01
Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center - Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges.
Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M
2012-03-01
With increasing popularity, growth curve modeling is more and more often considered as the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
The Carnegie Supernova Project: Light-curve Fitting with SNooPy
NASA Astrophysics Data System (ADS)
Burns, Christopher R.; Stritzinger, Maximilian; Phillips, M. M.; Kattner, ShiAnne; Persson, S. E.; Madore, Barry F.; Freedman, Wendy L.; Boldt, Luis; Campillay, Abdo; Contreras, Carlos; Folatelli, Gaston; Gonzalez, Sergio; Krzeminski, Wojtek; Morrell, Nidia; Salgado, Francisco; Suntzeff, Nicholas B.
2011-01-01
In providing an independent measure of the expansion history of the universe, the Carnegie Supernova Project (CSP) has observed 71 high-z Type Ia supernovae (SNe Ia) in the near-infrared bands Y and J. These can be used to construct rest-frame i-band light curves which, when compared to a low-z sample, yield distance moduli that are less sensitive to extinction and/or decline-rate corrections than in the optical. However, working with NIR observed and i-band rest-frame photometry presents unique challenges and has necessitated the development of a new set of observational tools in order to reduce and analyze both the low-z and high-z CSP sample. We present in this paper the methods used to generate uBVgriYJH light-curve templates based on a sample of 24 high-quality low-z CSP SNe. We also present two methods for determining the distances to the hosts of SN Ia events. A larger sample of 30 low-z SNe Ia in the Hubble flow is used to calibrate these methods. We then apply the method and derive distances to seven galaxies that are so nearby that their motions are not dominated by the Hubble flow.
A robust polynomial fitting approach for contact angle measurements.
Atefi, Ehsan; Mann, J Adin; Tavana, Hossein
2013-05-14
Polynomial fitting to drop profile offers an alternative to well-established drop shape techniques for contact angle measurements from sessile drops without a need for liquid physical properties. Here, we evaluate the accuracy of contact angles resulting from fitting polynomials of various orders to drop profiles in a Cartesian coordinate system, over a wide range of contact angles. We develop a differentiator mask to automatically find a range of required number of pixels from a drop profile over which a stable contact angle is obtained. The polynomial order that results in the longest stable regime and returns the lowest standard error and the highest correlation coefficient is selected to determine drop contact angles. We find that, unlike previous reports, a single polynomial order cannot be used to accurately estimate a wide range of contact angles and that a larger order polynomial is needed for drops with larger contact angles. Our method returns contact angles with an accuracy of <0.4° for solid-liquid systems with θ < ~60°. This compares well with the axisymmetric drop shape analysis-profile (ADSA-P) methodology results. Above about 60°, we observe significant deviations from ADSA-P results, most likely because a polynomial cannot trace the profile of drops with close-to-vertical and vertical segments. To overcome this limitation, we implement a new polynomial fitting scheme by transforming drop profiles into polar coordinate system. This eliminates the well-known problem with high curvature drops and enables estimating contact angles in a wide range with a fourth-order polynomial. We show that this approach returns dynamic contact angles with less than 0.7° error as compared to ADSA-P, for the solid-liquid systems tested. This new approach is a powerful alternative to drop shape techniques for estimating contact angles of drops regardless of drop symmetry and without a need for liquid properties.
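A hedged sketch of the core idea: fit a polynomial to the drop profile near the contact point and read the contact angle from its slope. The synthetic profile is a circular cap with a 30-degree contact angle, well inside the regime where the paper reports Cartesian fitting works.

```python
import numpy as np

# Synthetic edge of a sessile drop near its right contact point: a circular
# cap of radius 1 with a 30-degree contact angle resting on the baseline y=0.
theta_true = np.deg2rad(30.0)
x = np.linspace(np.sin(theta_true), 0.65, 80)   # contact point at x = sin(theta)
y = np.cos(theta_true) - np.sqrt(1.0 - x ** 2)  # lower-right arc, y = 0 at contact

# Fit a 4th-order polynomial (centered at the contact point for conditioning)
# and take its slope there; the contact angle is the arctangent of that slope.
u = x - x[0]
coef = np.polyfit(u, y, 4)
slope = np.polyval(np.polyder(coef), 0.0)
theta_est = np.degrees(np.arctan(slope))
```

For near-vertical profiles (large angles) this Cartesian fit breaks down, which is exactly why the paper switches to polar coordinates.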
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Flickner, M; Hafner, J; Rodriguez, E J; Sanz, J C
1996-01-01
Presents a new covariant basis, dubbed the quasi-orthogonal Q-spline basis, for the space of n-degree periodic uniform splines with k knots. This basis is obtained analogously to the B-spline basis by scaling and periodically translating a single spline function of bounded support. The construction hinges on an important theorem involving the asymptotic behavior (in the dimension) of the inverse of banded Toeplitz matrices. The authors show that the Gram matrix for this basis is nearly diagonal, hence, the name "quasi-orthogonal". The new basis is applied to the problem of approximating closed digital curves in 2D images by least-squares fitting. Since the new spline basis is almost orthogonal, the least-squares solution can be approximated by decimating a convolution between a resolution-dependent kernel and the given data. The approximating curve is expressed as a linear combination of the new spline functions and new "control points". Another convolution maps these control points to the classical B-spline control points. A generalization of the result has relevance to the solution of regularized fitting problems.
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discussed the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.
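A sketch of Levenberg-Marquardt fitting of a titration-derived curve, assuming only two independent Henderson-Hasselbalch acid sites rather than the five groups resolved in the paper; all constants are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def deprotonated(pH, n1, pKa1, n2, pKa2):
    """Deprotonated groups (mmol/g) for two independent Bronsted acid sites,
    each following a Henderson-Hasselbalch ionization curve."""
    return n1 / (1.0 + 10.0 ** (pKa1 - pH)) + n2 / (1.0 + 10.0 ** (pKa2 - pH))

rng = np.random.default_rng(5)
pH = np.linspace(2.5, 11.0, 60)
q = deprotonated(pH, 2.9, 4.4, 2.1, 8.6)        # illustrative "true" constants
q += 0.02 * rng.standard_normal(pH.size)        # synthetic noise

popt, pcov = curve_fit(deprotonated, pH, q,
                       p0=(2.0, 4.0, 2.0, 8.0), method='lm', maxfev=20000)
```

method='lm' selects the Levenberg-Marquardt algorithm used in the paper; resolving five overlapping sites, as the authors do, demands tighter data and careful statistical comparison of candidate models.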
NASA Astrophysics Data System (ADS)
Navascues, M. A.; Sebastian, M. V.
Fractal interpolants of Barnsley are defined for any continuous function defined on a real compact interval. The uniform distance between the function and its approximant is bounded in terms of the vertical scale factors. As a general result, the density of the affine fractal interpolation functions of Barnsley in the space of continuous functions in a compact interval is proved. A method of data fitting by means of fractal interpolation functions is proposed. The procedure is applied to the quantification of cognitive brain processes. In particular, the increase in the complexity of the electroencephalographic signal produced by the execution of a test of visual attention is studied. The experiment was performed on two types of children: a healthy control group and a set of children diagnosed with an attention deficit disorder.
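Barnsley's affine fractal interpolation function can be sketched directly: each map w_n(x, y) = (a_n x + e_n, c_n x + d_n y + f_n) sends the whole interval onto one subinterval, with the vertical scale factors d_n as free parameters, and the attractor of the iterated function system is the graph of the interpolant. The node values below are illustrative.

```python
import numpy as np

# Interpolation nodes and vertical scale factors |d_n| < 1 (one per subinterval)
X = np.array([0.0, 0.4, 1.0])
Y = np.array([0.0, 0.8, 0.3])
d = np.array([0.3, -0.3])

# Coefficients so that w_n sends (x0, y0) -> (x_{n-1}, y_{n-1}) and
# (xN, yN) -> (x_n, y_n): the standard endpoint conditions.
x0, xN, y0, yN = X[0], X[-1], Y[0], Y[-1]
a = (X[1:] - X[:-1]) / (xN - x0)
e = (xN * X[:-1] - x0 * X[1:]) / (xN - x0)
c = (Y[1:] - Y[:-1] - d * (yN - y0)) / (xN - x0)
f = (xN * Y[:-1] - x0 * Y[1:] - d * (xN * y0 - x0 * yN)) / (xN - x0)

# Chaos game: iterating randomly chosen maps draws the attractor, which is
# the graph of the fractal interpolation function through the nodes.
rng = np.random.default_rng(6)
pt = np.array([x0, y0])
pts = []
for i in range(20000):
    n = rng.integers(0, len(d))
    pt = np.array([a[n] * pt[0] + e[n], c[n] * pt[0] + d[n] * pt[1] + f[n]])
    if i > 100:                                  # discard the transient
        pts.append(pt)
pts = np.array(pts)
```

The magnitudes of the d_n control the roughness (and fractal dimension) of the interpolant, which is what makes these functions suitable for fitting irregular signals such as EEG traces.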
Lin, Shan-Yang; Hsu, Cheng-Hung; Sheu, Ming-Thau
2010-11-02
The formation steps of the inclusion complex formed by co-grinding loratadine (LOR) and hydroxypropyl-beta-cyclodextrin (HP-beta-CD) at a molar ratio of 1:1 or 1:2 were quantitatively investigated by Fourier transform infrared (FTIR) spectroscopy with curve-fitting analysis and differential scanning calorimetry (DSC). The phase solubility study and the co-evaporated solid products of the mixture of LOR and HP-beta-CD were also examined. The results indicate that the aqueous solubility of LOR increased linearly with HP-beta-CD concentration, and the phase solubility diagram was classified as A(L) type. The high apparent stability constant (2.22 x 10(4)M(-1)) reveals that the inclusion complex formed between LOR and HP-beta-CD was quite stable. The endothermic peak at 134.6 degrees C for the melting point of LOR gradually disappeared from the DSC curves of LOR/HP-beta-CD coground mixtures with increasing cogrinding time, as it did for the co-evaporated solid products. The disappearance of this endothermic peak from the LOR/HP-beta-CD coground mixture or the co-evaporated solid products was due to inclusion complex formation between LOR and HP-beta-CD after the cogrinding process or evaporation. Moreover, IR peaks at 1676 cm(-1) down-shifted from 1703 cm(-1) (C=O stretching) and at 1235 cm(-1) up-shifted from 1227 cm(-1) (C-O stretching), both related to LOR in the inclusion complex, were observed with increasing cogrinding time, while the peak at 1646 cm(-1) due to O-H stretching of HP-beta-CD was shifted to 1640 cm(-1). The IR spectrum of the 15 min-coground mixture was the same as that of the co-evaporated solid product, strongly indicating that the grinding process could cause inclusion complex formation between LOR and HP-beta-CD. Three components (1700, 1676, and 1640 cm(-1)) and their compositions were obtained in the 1740-1600 cm(-1) region of the FTIR spectra for the LOR/HP-beta-CD coground mixture and the co
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and generally offer few features for segmentation, so conventional approaches to identifying neuron structures in EM images are not successful. We therefore present a multi-scale fused structure boundary detection algorithm. The algorithm first generates a Gaussian pyramid from the EM image; at each pyramid level it applies the Laplacian of Gaussian (LoG) operator to obtain structure boundaries, and it finally assembles the detected boundaries with a fusion algorithm to produce a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this approach, a moving point starts from one end of an incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both variants of our approach, i.e., with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measures for structure segmentation, were reduced to very low values. Comparison with the ISBI 2012 benchmark method and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging
U-Shaped Curves in Development: A PDP Approach
ERIC Educational Resources Information Center
Rogers, Timothy T.; Rakison, David H.; McClelland, James L.
2004-01-01
As the articles in this issue attest, U-shaped curves in development have stimulated a wide spectrum of research across disparate task domains and age groups and have provoked a variety of ideas about their origins and theoretical significance. In the authors' view, the ubiquity of the general pattern suggests that U-shaped curves can arise from…
Mathematical Modeling of Allelopathy. III. A Model for Curve-Fitting Allelochemical Dose Responses
Liu, De Li; An, Min; Johnson, Ian R.; Lovett, John V.
2003-01-01
Bioassay techniques are often used to study the effects of allelochemicals on plant processes, and it is generally observed that the processes are stimulated at low allelochemical concentrations and inhibited as the concentrations increase. A simple empirical model is presented to analyze this type of response. The stimulation-inhibition properties of allelochemical dose responses can be described by the parameters in the model, and indices, the p% reductions, are calculated to assess the allelochemical effects. The model is compared with experimental data for the response of lettuce seedling growth to Centaurepensin, the olfactory response of weevil larvae to α-terpineol, and the responses of annual ryegrass (Lolium multiflorum Lam.), creeping red fescue (Festuca rubra L., cv. Ensylva), Kentucky bluegrass (Poa pratensis L., cv. Kenblue), perennial ryegrass (L. perenne L., cv. Manhattan), and Rebel tall fescue (F. arundinacea Schreb.) seedling growth to leachates of Rebel and Kentucky 31 tall fescue. The results show that the model gives a good description of the observations and can be used to fit a wide range of dose responses. Assessments of the effects of leachates of Rebel and Kentucky 31 tall fescue clearly differentiate the properties of the allelopathic sources and the relative sensitivities of indicators such as root and leaf length. PMID:19330111
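Curve fitting of this stimulation-inhibition pattern can be sketched with a generic hormetic form y(C) = y0(1 + aC)exp(-bC); this functional form and all numbers below are illustrative assumptions, not necessarily the exact model or data of the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def hormetic(conc, y0, a, b):
    """Illustrative stimulation-inhibition (hormetic) dose response:
    low doses stimulate via (1 + a*C), high doses inhibit via exp(-b*C).
    The form is an assumption for this sketch, not the paper's equation."""
    return y0 * (1.0 + a * conc) * np.exp(-b * conc)

# Synthetic bioassay data (hypothetical): root length vs allelochemical dose
rng = np.random.default_rng(0)
dose = np.linspace(0.0, 10.0, 25)
obs = hormetic(dose, 50.0, 0.4, 0.3) + rng.normal(0.0, 0.5, dose.size)

params, _ = curve_fit(hormetic, dose, obs, p0=(40.0, 0.1, 0.1))
```

For a > b the fitted curve peaks at C* = 1/b - 1/a, so the parameters directly quantify where stimulation gives way to inhibition, which is the kind of interpretation the abstract describes.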
Approaches to measure the fitness of Burkholderia cepacia complex isolates.
Pope, C F; Gillespie, S H; Moore, J E; McHugh, T D
2010-06-01
Members of the Burkholderia cepacia complex (Bcc) are highly resistant to many antibacterial agents and infection can be difficult to eradicate. A coordinated approach has been used to measure the fitness of Bcc bacteria isolated from cystic fibrosis (CF) patients with chronic Bcc infection using methods relevant to Bcc growth and survival conditions. Significant differences in growth rate were observed among isolates; slower growth rates were associated with isolates that exhibited higher MICs and were resistant to more antimicrobial classes. The nucleotide sequences of the quinolone resistance-determining region of gyrA in the isolates were determined and the ciprofloxacin MIC correlated with amino acid substitutions at codons 83 and 87. Biologically relevant methods for fitness measurement were developed and could be applied to investigate larger numbers of clinical isolates. These methods were determination of planktonic growth rate, biofilm formation, survival in water and survival during drying. We also describe a method to determine mutation rate in Bcc bacteria. Unlike in Pseudomonas aeruginosa where hypermutability has been detected in strains isolated from CF patients, we were unable to demonstrate hypermutability in this panel of Burkholderia cenocepacia and Burkholderia multivorans isolates.
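Of the fitness measures listed, planktonic growth rate is the most easily sketched: it is commonly estimated as the slope of ln(OD) against time over the exponential phase. A minimal illustration with hypothetical optical-density readings (not the paper's data):

```python
import numpy as np

def doubling_time_hours(times_h, od):
    """Estimate the specific growth rate mu as the slope of ln(OD) vs time
    (log-linear least squares over the exponential phase), then convert
    to doubling time t_d = ln(2) / mu."""
    mu = np.polyfit(times_h, np.log(od), 1)[0]   # specific growth rate, h^-1
    return np.log(2.0) / mu, mu

# Hypothetical exponential-phase optical density readings
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
od = 0.05 * np.exp(0.6 * t)          # mu = 0.6 h^-1
td, mu = doubling_time_hours(t, od)
```

In practice one would first trim the readings to the exponential phase (lag and stationary points would bias the slope), which is why the abstract stresses conditions relevant to Bcc growth.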
NASA Astrophysics Data System (ADS)
Thomas, Christian L.
2006-06-01
Analysis and results (Chapters 2-5) of the full 7 year Macho Project dataset toward the Galactic bulge are presented. A total of 450 high quality, relatively large signal-to-noise ratio, events are found, including several events exhibiting exotic effects, and lensing events on possible Sagittarius dwarf galaxy stars. We examine the problem of blending in our sample and conclude that the subset of red clump giants are minimally blended. Using 42 red clump giant events near the Galactic center we calculate the optical depth toward the Galactic bulge to be t = [Special characters omitted.] × 10 -6 at ( l, b ) = ([Special characters omitted.] ) with a gradient of (1.06 ± 0.71) × 10 -6 deg -1 in latitude, and (0.29±0.43) × 10 -6 deg -1 in longitude, bringing measurements into consistency with the models for the first time. In Chapter 6 we reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g. Wozniak & Paczynski) that this is a difficult proposition because of the degeneracy of blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and study the importance of non- Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction is discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth. In Chapter 7 we present work-in-progress on the possibility of correcting standard candle luminosities for the magnification due to weak lensing. We consider the importance of lenses in different mass ranges and look at the contribution
NASA Astrophysics Data System (ADS)
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital, and neglecting the randomness of its distribution may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in practice, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation to conclude that the Gaussian kernel density estimate better reproduces the distribution of bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
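The contrast between the two estimators can be sketched as follows; the sample here is synthetic and purely illustrative (a two-component mixture mimicking the bimodal pattern, not Moody's data):

```python
import numpy as np
from scipy import stats

# Hypothetical bimodal recovery-rate sample: many near-total losses and many
# near-full recoveries, the pattern the abstract attributes to Moody's data
rng = np.random.default_rng(1)
sample = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])

# Single Beta fit by the method of moments (enough for the sketch):
# for a bimodal sample the fitted shape parameters fall below 1, so the
# fitted Beta is U-shaped and puts its minimum where the data have a mode.
m, v = sample.mean(), sample.var()
common = m * (1.0 - m) / v - 1.0
a_hat, b_hat = m * common, (1.0 - m) * common

# Gaussian kernel density estimate: recovers the two interior humps
kde = stats.gaussian_kde(sample)

grid = np.linspace(0.01, 0.99, 99)
beta_pdf = stats.beta.pdf(grid, a_hat, b_hat)
kde_pdf = kde(grid)
```

Plotting beta_pdf against kde_pdf on the same grid makes the abstract's point visually: one Beta cannot produce two interior modes, while the KDE follows both.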
A Comprehensive Approach for Assessing Person Fit with Test-Retest Data
ERIC Educational Resources Information Center
Ferrando, Pere J.
2014-01-01
Item response theory (IRT) models allow model-data fit to be assessed at the individual level by using person-fit indices. This assessment is also feasible when IRT is used to model test-retest data. However, person-fit developments for this type of modeling are virtually nonexistent. This article proposes a general person-fit approach for…
Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.
ERIC Educational Resources Information Center
Buchanan, Trey
Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…
NASA Astrophysics Data System (ADS)
Hanafiah, Hazlenah; Jemain, Abdul Aziz
2013-11-01
In recent years the study of fertility has received considerable attention in research abroad, amid concern that rapid economic development is leading to fertility decline. This study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied, as it is an established and widely used model for analysing demographic data. A singular value decomposition approach is combined with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to assess the goodness of fit of the model. A fertility index forecast based on a random walk with drift is then used to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable fit to the data. In addition, there is an apparent and continuous decline in the age-specific fertility curves over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of related facilities and resources.
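The Lee-Carter machinery described above — log m(x,t) = a_x + b_x·k_t fitted by the first singular component, with the index k_t forecast by a random walk with drift — can be sketched on synthetic rates (all numbers below are invented for illustration):

```python
import numpy as np

# Hypothetical age-by-year matrix of log fertility rates
rng = np.random.default_rng(2)
n_age, n_year = 7, 30
a_x = np.linspace(-3.0, -1.5, n_age)    # age profile
b_x = np.linspace(0.2, 0.05, n_age)     # age-specific sensitivity to the index
k_t = -0.1 * np.arange(n_year)          # declining fertility index
log_rates = a_x[:, None] + np.outer(b_x, k_t) + rng.normal(0, 0.01, (n_age, n_year))

# Lee-Carter fit: a_x = row means; the first SVD component gives b_x and k_t
a_hat = log_rates.mean(axis=1)
U, s, Vt = np.linalg.svd(log_rates - a_hat[:, None], full_matrices=False)
b_hat, k_hat = U[:, 0], s[0] * Vt[0]
scale = b_hat.sum()                     # usual normalisation: sum(b_x) = 1
b_hat, k_hat = b_hat / scale, k_hat * scale

# Random walk with drift forecast of the index (10 years ahead)
drift = (k_hat[-1] - k_hat[0]) / (n_year - 1)
k_forecast = k_hat[-1] + drift * np.arange(1, 11)
```

Forecast age-specific rates are then exp(a_hat + b_hat·k_forecast); in the paper the drift model is replaced or supplemented by a full ARIMA specification for k_t.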
Beyond the Ubiquitous Relapse Curve: A Data-Informed Approach
Zywiak, William H.; Kenna, George A.; Westerberg, Verner S.
2011-01-01
Relapse to alcohol and other substances has generally been described by curves that resemble one another. However, these curves have been generated from the time to first use after a period of abstinence, without regard to the movement of individuals into and out of drug use. Instead of measuring continuous abstinence, we considered post-treatment functioning as a more complicated phenomenon, describing how people move in and out of drinking states on a monthly basis over the course of a year. When we looked at time to first drink, we observed the ubiquitous relapse curve. When we classified clients (N = 550) according to drinking state, however, they frequently moved from one state to another, with the abstinent and very heavy drinking states being rather stable and the light or moderate drinking and heavy drinking states being unstable. We found that clients with a family history of alcoholism were less likely to experience these unstable states. When we examined the distribution of cases crossed by the number of times clients switched states, we found that a power function explained 83% of that relationship. Some of the remaining variance seems to be explained by the stable states of very heavy drinking and abstinence acting as attractors. PMID:21556282
NASA Astrophysics Data System (ADS)
Marconi, M.; Molinaro, R.; Ripepi, V.; Cioni, M.-R. L.; Clementini, G.; Moretti, M. I.; Ragosta, F.; de Grijs, R.; Groenewegen, M. A. T.; Ivanov, V. D.
2017-04-01
We present the results of the χ² minimization model fitting technique applied to optical and near-infrared photometric and radial velocity data for a sample of nine fundamental and three first-overtone classical Cepheids in the Small Magellanic Cloud (SMC). The near-infrared photometry (J and Ks filters) was obtained by the European Southern Observatory (ESO) public survey 'VISTA near-infrared Y, J, Ks survey of the Magellanic Clouds system' (VMC). For each pulsator, isoperiodic model sequences have been computed by adopting a non-linear convective hydrodynamical code in order to reproduce the multifilter light and (when available) radial velocity curve amplitudes and morphological details. The inferred individual distances provide an intrinsic mean value for the SMC distance modulus of 19.01 mag with a standard deviation of 0.08 mag, in agreement with the literature. Moreover, the intrinsic masses and luminosities of the best-fitting models show that all these pulsators are brighter than the canonical evolutionary mass-luminosity relation (MLR) predicts, suggesting a significant efficiency of core overshooting and/or mass loss. Assuming that the inferred deviation from the canonical MLR is due to mass loss alone, we derive the expected distribution of percentage mass loss as a function of both the pulsation period and the canonical stellar mass. Finally, good agreement is found between the predicted mean radii and current period-radius (PR) relations for the SMC available in the literature. The results of this investigation support the predictive capabilities of the adopted theoretical scenario and pave the way for application to other extensive databases at various chemical compositions, including the VMC Large Magellanic Cloud pulsators and Galactic Cepheids with Gaia parallaxes.
Fitting of m*/m with Divergence Curve for He3 Fluid Monolayer using Hole-driven Mott Transition
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tak
2012-02-01
The electron-electron interaction in strongly correlated systems plays an important role in the formation of an energy gap in solids. The breakdown of the energy gap is called the Mott metal-insulator transition (MIT), which differs from the Peierls MIT induced by breakdown of the electron-phonon interaction generated by a change of the periodic lattice. It is known that correlated systems are inhomogeneous. In particular, the He3 fluid monolayer [1] and La1-xSrxTiO3 [2] are representative strongly correlated systems. The doping dependence of their effective carrier mass in the metal, m*/m, which indicates the magnitude of the correlation (Coulomb interaction) between electrons, shows divergent behavior; however, this divergence has not previously been fitted by a Mott-transition theory. In the case of He3, regarded as a Fermi system with one positive charge (2 electrons + 3 nucleons), the interaction between He3 atoms plays the role of the correlation in a strongly correlated system. In this presentation, we introduce a Hole-driven MIT with a divergence near the Mott transition [3] and fit the m*/m curves of the He3 [1] and La1-xSrxTiO3 [2] systems with the Hole-driven MIT using m*/m = 1/(1-ρ^4), where ρ is the band filling. Moreover, it is shown that the physical meaning of the effective mass divergence is percolation, in which m*/m increases with increasing doping concentration, and that the magnitude of m*/m is constant. [1] Phys. Rev. Lett. 90, 115301 (2003). [2] Phys. Rev. Lett. 70, 2126 (1993). [3] Physica C 341-348, 259 (2000); Physica C 460-462, 1076 (2007).
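The quoted fitting function can be evaluated directly; a minimal sketch showing how m*/m diverges as the band filling ρ approaches unity:

```python
import numpy as np

def effective_mass_ratio(rho):
    """Hole-driven MIT fit quoted in the abstract: m*/m = 1/(1 - rho**4),
    which diverges as the band filling rho approaches 1 (the Mott transition)."""
    rho = np.asarray(rho, dtype=float)
    return 1.0 / (1.0 - rho ** 4)

filling = np.array([0.5, 0.9, 0.99, 0.999])
ratios = effective_mass_ratio(filling)   # grows without bound as rho -> 1
```

The steep rise for ρ close to 1 is what allows the single-parameter curve to track the divergent experimental m*/m data cited above.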
Testing goodness of fit in regression: a general approach for specified alternatives.
Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J
2012-12-10
When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such a specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper makes it possible to treat different types of lack of fit within a unified general framework and to consider many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest.
MAPCLUS: A Mathematical Programming Approach to Fitting the ADCLUS Model.
ERIC Educational Resources Information Center
Arabie, Phipps
1980-01-01
A new computing algorithm, MAPCLUS (Mathematical Programming Clustering), for fitting the Shephard-Arabie ADCLUS (Additive Clustering) model is presented. Details and benefits of the algorithm are discussed. (Author/JKS)
Buttchereit, N; Stamer, E; Junge, W; Thaller, G
2010-04-01
Selection for milk yield increases the metabolic load of dairy cows. The fat:protein ratio of milk (FPR) could serve as a measure of the energy balance status and might be used as a selection criterion to improve metabolic stability. The fit of different fixed and random regression models describing FPR and daily energy balance was tested to establish appropriate models for further genetic analyses. In addition, the relationship between both traits was evaluated for the best fitting model. Data were collected on a dairy research farm running a bull dam performance test. Energy balance was calculated using information on milk yield, feed intake per day, and live weight. Weekly FPR measurements were available. Three data sets were created containing records of 577 primiparous cows with observations from lactation d 11 to 180 as well as records of 613 primiparous cows and 96 multiparous cows with observations from lactation d 11 to 305. Five well-established parametric functions of days in milk (Ali and Schaeffer, Guo and Swalve, Wilmink, Legendre polynomials of third and fourth degree) were chosen for modeling the lactation curves. Evaluation of goodness of fit was based on the corrected Akaike information criterion, the Bayesian information criterion, correlation between the real observation and the estimated value, and on inspection of the residuals plotted against days in milk. The best model was chosen for estimation of correlations between both traits at different lactation stages. Random regression models were superior compared with the fixed regression models. In general, the Ali and Schaeffer function appeared most suitable for modeling both the fixed and the random regression part of the mixed model. The FPR is greatest in the initial lactation period when energy deficit is most pronounced. Energy balance stabilizes at the same point as the decrease in FPR stops. The inverted patterns indicate a causal relationship between the 2 traits. A common pattern was
A new approach to the analysis of Mira light curves
NASA Technical Reports Server (NTRS)
Mennessier, M. O.; Barthes, D.; Mattei, J. A.
1990-01-01
Two different but complementary methods for predicting Mira luminosities are presented. One method is derived from Fourier analysis; it requires deconvolution, and its results are uncertain owing to the inherent instability of deconvolution problems. The other is a learning method using artificial intelligence techniques, in which a light curve is represented as an ordered sequence of pseudocycles and rules are learned that link the characteristics of several consecutive pseudocycles to a characteristic of the future cycle. Agreement between the two methods is obtainable when it is possible to eliminate similar false frequencies from the preliminary power spectrum and to improve the degree of confidence in the rules.
Larion, Mioara; Miller, Brian G
2010-10-19
Human pancreatic glucokinase is a monomeric enzyme that displays kinetic cooperativity, a feature that facilitates enzyme-mediated regulation of blood glucose levels in the body. Two theoretical models have been proposed to describe the non-Michaelis-Menten behavior of human glucokinase. The mnemonic mechanism postulates the existence of one thermodynamically favored enzyme conformation in the absence of glucose, whereas the ligand-induced slow transition model (LIST) requires a preexisting equilibrium between two enzyme species that interconvert with a rate constant slower than turnover. To investigate whether either of these mechanisms is sufficient to describe glucokinase cooperativity, a transient-state kinetic analysis of glucose binding to the enzyme was undertaken. A complex, time-dependent change in enzyme intrinsic fluorescence was observed upon exposure to glucose, which is best described by an analytical solution comprised of the sum of four exponential terms. Transient-state glucose binding experiments conducted in the presence of increasing glycerol concentrations demonstrate that three of the observed rate constants decrease with increasing viscosity. Global fit analyses of experimental glucose binding curves are consistent with a kinetic model that is an extension of the LIST mechanism with a total of four glucose-bound binary complexes. The kinetic model presented herein suggests that glucokinase samples multiple conformations in the absence of ligand and that this conformational heterogeneity persists even after the enzyme associates with glucose.
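The "sum of four exponential terms" is an empirical model that can be fitted by nonlinear least squares; a sketch on synthetic data (the rate constants, amplitudes, and noise level below are invented for illustration, not the paper's values):

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, c0, a1, k1, a2, k2, a3, k3, a4, k4):
    """Sum of four exponential phases plus an offset, the empirical form the
    abstract says best describes the glucose-induced fluorescence change."""
    return (c0 + a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)
               + a3 * np.exp(-k3 * t) + a4 * np.exp(-k4 * t))

# Synthetic transient with four well-separated observed rate constants
t = np.linspace(0.0, 50.0, 400)
true = (1.0, 0.30, 10.0, 0.20, 2.0, 0.15, 0.5, 0.10, 0.05)
rng = np.random.default_rng(3)
signal = multi_exp(t, *true) + rng.normal(0.0, 0.002, t.size)

p0 = (1.0, 0.25, 9.0, 0.2, 1.8, 0.15, 0.45, 0.1, 0.06)   # reasonable starting guesses
popt, _ = curve_fit(multi_exp, t, signal, p0=p0, maxfev=20000)
residual_rms = np.sqrt(np.mean((multi_exp(t, *popt) - signal) ** 2))
```

Multi-exponential fits are notoriously ill-conditioned when the rate constants are close together, which is one reason transient-state work of this kind relies on global fitting across conditions (here, viscosity) rather than a single trace.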
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2002-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
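The second step of the method — the Hilbert transform yielding instantaneous frequency — can be illustrated independently of the patented decomposition: for a well-behaved mono-component signal (such as an IMF), the analytic signal's phase derivative gives the instantaneous frequency. A sketch using scipy (this is not the patent's code):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Instantaneous frequency from the analytic signal (Hilbert transform),
    the quantity computed for each IMF in step two of the method."""
    analytic = hilbert(x)                       # x + i * H[x]
    phase = np.unwrap(np.angle(analytic))       # continuous instantaneous phase
    return np.diff(phase) * fs / (2.0 * np.pi)  # d(phase)/dt in Hz

# A 5 Hz tone sampled at 200 Hz: instantaneous frequency should sit near 5 Hz
fs = 200.0
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t)
inst_f = instantaneous_frequency(x, fs)
```

Edge effects distort the first and last few estimates, so in practice the ends of each IMF's frequency track are discarded or tapered before assembling the Hilbert spectrum.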
A computational approach to the twin paradox in curved spacetime
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng
2016-09-01
Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
NASA Astrophysics Data System (ADS)
Włosińska, M.; Niedzielski, T.; Priede, I. G.; Migoń, P.
2012-04-01
The poster reports ongoing investigations into hypsometric curve modelling and its implications for sea level change. Numerous large-scale geodynamic phenomena, including global tectonics and the related sea level changes, are well described by a hypsometric curve, which quantifies how the area of the sea floor varies with depth. Although the notion of a hypsometric curve is rather simple, it is difficult to provide a reasonable theoretical model that fits an empirical curve. An analytical equation for the hypsometric curve is well known, but its goodness of fit to empirical curves is far from perfect. This limited accuracy may result either from inadequate theoretical assumptions and concepts behind the theoretical hypsometric curve or from rather poorly modelled global bathymetry. Recent progress in obtaining accurate sea floor topography data comes from subsea surveying and remote sensing. Bathymetric datasets such as the General Bathymetric Chart of the Oceans (GEBCO) provide a global framework for hypsometric curve computation. The recent GEBCO bathymetry, a gridded dataset consisting of a sea floor topography raster with global coverage at a spatial resolution of 30 arc-seconds, can be analysed to verify the depth-area relationship and to re-evaluate classical models of sea level change over geological time. Processing of the geospatial data is feasible with the powerful tools of modern Geographic Information Systems (GIS) and can be automated with Python, a programming language that allows the user to drive the GIS geoprocessor.
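Computing an empirical hypsometric (cumulative depth-area) curve from a gridded bathymetry raster reduces to an area-weighted histogram of depth; a sketch on a toy grid (a real GEBCO workflow would read the 30 arc-second raster instead of this synthetic array):

```python
import numpy as np

def hypsometric_curve(depth_grid, lat, n_bins=50):
    """Sketch: cumulative area-depth (hypsometric) curve from a gridded
    bathymetry raster. Cell areas are weighted by cos(latitude) to account
    for meridian convergence; depths are negative, land cells are masked."""
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(depth_grid)
    sea = depth_grid < 0
    depths, w = depth_grid[sea], weights[sea]
    edges = np.linspace(depths.min(), 0.0, n_bins + 1)
    area, _ = np.histogram(depths, bins=edges, weights=w)
    cum_frac = np.cumsum(area) / w.sum()   # fraction of sea floor at or below each depth
    return edges[1:], cum_frac

# Toy grid: depth varies with latitude, with a strip of land along one edge
lat = np.linspace(-60, 60, 120)
lon = np.linspace(0, 360, 240)
depth = -4000.0 + 3000.0 * np.cos(np.deg2rad(lat))[:, None] * np.ones((120, 240))
depth[:5, :] = 100.0   # land cells, excluded from the curve
d, frac = hypsometric_curve(depth, lat)
```

The resulting (depth, cumulative area fraction) pairs are exactly what gets compared against the analytical hypsometric equation the poster discusses.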
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Integrated healthcare networks' performance: a growth curve modeling approach.
Wan, Thomas T H; Wang, Bill B L
2003-05-01
This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.
Guo, Lianping; Tian, Shulin; Jiang, Jun
2015-03-01
This paper proposes an algorithm to estimate the channel mismatches in a time-interleaved analog-to-digital converter (TIADC) based on fractional delay (FD) and sine curve fitting. One channel is chosen as the reference, and FD is applied to its output samples to obtain the ideal, mismatch-free samples of the non-reference channels. Using the least squares method, sine curves are fitted to both the ideal and the actual samples of the non-reference channels, and the mismatch parameters are then estimated by comparing the ideal sine curves with the actual ones. The principle of this algorithm is simple and easily understood. Moreover, its implementation needs no extra circuits, lowering the hardware cost. Simulation results show that the estimation error of this algorithm can be kept within 2%. Finally, the practicability of the algorithm is verified by measurements of the channel mismatch errors of a two-channel TIADC prototype.
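With the test frequency known, as is usual in ADC characterization, sine fitting reduces to a linear least-squares problem. The following is a generic three-parameter sine fit sketch (names and setup are ours), not the authors' mismatch-estimation code:

```python
import numpy as np

def fit_sine(t, x, freq):
    """Least-squares fit of x(t) ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c.

    With the test frequency f known, the fit is linear in (a, b, c) and
    is solved directly; amplitude and phase follow from (a, b).
    """
    w = 2 * np.pi * freq * t
    A = np.column_stack([np.sin(w), np.cos(w), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(A, x, rcond=None)
    amp = np.hypot(a, b)          # amplitude
    phase = np.arctan2(b, a)      # phase offset
    return amp, phase, c

# synthetic channel output: 37 Hz tone sampled at 1 kHz
t = np.arange(256) / 1000.0
x = 1.5 * np.sin(2 * np.pi * 37 * t + 0.4) + 0.1
amp, phase, off = fit_sine(t, x, 37.0)
```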
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2016-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this article we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from two popular econometric approaches:…
An interactive user-friendly approach to surface-fitting three-dimensional geometries
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1988-01-01
A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
NASA Technical Reports Server (NTRS)
Knox, Charles E.
1993-01-01
A piloted simulation study was conducted to examine the requirements for using electromechanical flight instrumentation to provide situation information and flight guidance for manually controlled flight along curved precision approach paths to a landing. Six pilots were used as test subjects. The data from these tests indicated that flight director guidance is required for the manually controlled flight of a jet transport airplane on curved approach paths. Acceptable path tracking performance was attained with each of the three situation information algorithms tested. Approach paths with both multiple sequential turns and short final path segments were evaluated. Pilot comments indicated that all the approach paths tested could be used in normal airline operations.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated using a technique that combines a one-strip integral method with a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed, and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economical means for making preliminary design studies for situations involving high-speed earth entry.
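The least-squares logarithmic fit mentioned above amounts to fitting a straight line in log-log space. A minimal single-variable sketch (the report's actual correlations involve several free-stream parameters, so this is only illustrative):

```python
import numpy as np

def powerlaw_fit(x, y):
    """Fit y = C * x**n by least squares in log-log space.

    ln y = ln C + n * ln x is linear in (ln C, n), which is the
    logarithmic least-squares device described in the abstract.
    """
    lx, ly = np.log(x), np.log(y)
    n, lnC = np.polyfit(lx, ly, 1)   # slope n, intercept ln C
    return np.exp(lnC), n

x = np.array([1.0, 2.0, 4.0, 8.0])
y = 3.0 * x ** 2.5                   # exact power law, no noise
C, n = powerlaw_fit(x, y)
```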
Physical fitness: An operator's approach to coping with shift work
Hanks, D.H.
1989-01-01
There is a strong correlation between a shift worker's ability to remain alert and the physical fitness of the individual. Alertness is a key element of a nuclear plant operator's ability to effectively monitor and control plant status. The constant changes in one's metabolism caused by the rotation of work (and sleep) hours can be devastating to his or her health. Many workers with longevity in the field, however, have found it beneficial to maintain some sort of workout or sport activity, feeling that this activity offsets the physical burden of backshift. The author's experience working shifts for 10 years and his reported increase in alertness through exercise and diet manipulation are described in this paper.
A Global Fitting Approach For Doppler Broadening Thermometry
NASA Astrophysics Data System (ADS)
Amodio, Pasquale; Moretti, Luigi; De Vizia, Maria Domenica; Gianfrani, Livio
2014-06-01
Very recently, a spectroscopic determination of the Boltzmann constant, kB, has been performed at the Second University of Naples by means of a rather sophisticated implementation of Doppler Broadening Thermometry (DBT) [1]. Performed on an 18O-enriched water sample, at a wavelength of 1.39 µm, the experiment has provided a value for kB with a combined uncertainty of 24 parts in 10^6, which is the best result obtained so far using an optical method. In the spectral analysis procedure, the partially correlated speed-dependent hard-collision (pC-SDHC) model was adopted. The uncertainty budget has clearly revealed that the major contributions come from the statistical uncertainty (type A) and from the uncertainty associated with the line-shape model (type B) [2]. In the present work, we present the first results of a theoretical and numerical effort aimed at reducing these uncertainty components. It is well known that molecular line shapes exhibit clear deviations from the time-honoured Voigt profile. Even in the case of a well isolated spectral line, under the influence of binary collisions, in the Doppler regime, the shape can be quite complicated by the joint occurrence of velocity-change collisions and speed-dependent effects. The partially correlated speed-dependent Keilson-Storer profile (pC-SDKS) has been recently proposed as a very realistic model, capable of reproducing very accurately the absorption spectra of self-colliding water molecules in the near infrared [3]. Unfortunately, the model is so complex that it cannot be implemented into a fitting routine for the analysis of experimental spectra. Therefore, we have developed a MATLAB code to simulate a variety of H2(18)O spectra in thermodynamic conditions identical to those of our DBT experiment, using the pC-SDKS model. The numerical calculations to determine such a profile have a very large computational cost, resulting from a very sophisticated iterative procedure. Hence, the numerically simulated spectra
Wavelet transform approach for fitting financial time series data
NASA Astrophysics Data System (ADS)
Ahmed, Amel Abdoullah; Ismail, Mohd Tahir
2015-10-01
This study investigates a newly developed technique, combining wavelet filtering with a vector error correction (VEC) model, to study the dynamic relationships among financial time series. A wavelet filter has been used to remove noise from daily data of the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the wavelet-filtered series and of the original series are then analysed by a cointegration test and the VEC model. The cointegration test affirms the existence of cointegration between the studied series: there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed and traditional models demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model alone) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
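As a hedged illustration of the wavelet filtering step, the toy sketch below applies a single-level Haar transform with soft thresholding in plain NumPy. The study itself uses a deeper discrete wavelet transform on daily return series; all names and parameter values here are ours:

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet soft-threshold denoising (toy version).

    Splits the signal into approximation and detail coefficients,
    soft-thresholds the noise-carrying details, and reconstructs.
    """
    n = len(x) - len(x) % 2
    a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)   # approximation coefficients
    d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)          # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.2 * rng.standard_normal(256)
den = haar_denoise(noisy, 0.3)
```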
A new approach for magnetic curves in 3D Riemannian manifolds
Bozkurt, Zehra; Gök, Ismail; Yaylı, Yusuf; Ekmekci, F. Nejat
2014-05-15
A magnetic field is defined by the property that its divergence is zero in a three-dimensional oriented Riemannian manifold. Each magnetic field generates a magnetic flow whose trajectories are curves called magnetic curves. In this paper, we give a new variational approach to study the magnetic flow associated with the Killing magnetic field in a three-dimensional oriented Riemannian manifold, (M^3, g). We then investigate the trajectories of the magnetic fields, called N-magnetic and B-magnetic curves.
Estimating the yield curve: the extended Svensson model using an L-BFGS-B method approach
NASA Astrophysics Data System (ADS)
Muslim, Rosadi, Dedi; Gunardi, Abdurakhman
2015-02-01
A yield curve describes the magnitude of the yield as a function of maturity. To describe this curve, we use the Svensson model. One extension of this model is due to Rezende-Ferreira. The Rezende-Ferreira extension has the weakness that several parameters take the same value, in which case it reduces to the Nelson-Siegel model. In this paper, we propose an expansion of the Svensson model. These models are non-linear and therefore more difficult to estimate. To overcome this problem, we propose a nonlinear least squares approach using the L-BFGS-B method.
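A minimal sketch of the estimation strategy named in the abstract: nonlinear least squares on the standard four-factor Svensson curve, minimized with SciPy's L-BFGS-B using bounds that keep the decay parameters positive. This is illustrative, not the authors' code, and the extended model they propose would add further terms.

```python
import numpy as np
from scipy.optimize import minimize

def svensson(tau, b0, b1, b2, b3, l1, l2):
    """Svensson (1994) yield curve; tau is time to maturity in years."""
    t1, t2 = tau / l1, tau / l2
    f1 = (1 - np.exp(-t1)) / t1
    f2 = f1 - np.exp(-t1)
    f3 = (1 - np.exp(-t2)) / t2 - np.exp(-t2)
    return b0 + b1 * f1 + b2 * f2 + b3 * f3

def fit_svensson(tau, y, p0):
    """Nonlinear least squares via L-BFGS-B, bounding l1, l2 > 0."""
    sse = lambda p: np.sum((svensson(tau, *p) - y) ** 2)
    bounds = [(None, None)] * 4 + [(1e-3, None)] * 2
    res = minimize(sse, p0, method="L-BFGS-B", bounds=bounds)
    return res.x

# synthetic yields from a known parameter set, then refit
tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30], float)
true = (0.04, -0.02, 0.01, 0.005, 1.5, 4.0)
y = svensson(tau, *true)
p = fit_svensson(tau, y, p0=(0.03, -0.01, 0.0, 0.0, 1.0, 3.0))
```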
ERIC Educational Resources Information Center
Chernyshenko, Oleksandr S.; Stark, Stephen; Williams, Alex
2009-01-01
The purpose of this article is to offer a new approach to measuring person-organization (P-O) fit, referred to here as "Latent fit." Respondents were administered unidimensional forced choice items and were asked to choose the statement in each pair that better reflected the correspondence between their values and those of the…
ERIC Educational Resources Information Center
Rousseau, Ronald
1994-01-01
Discussion of informetric distributions shows that generalized Leimkuhler functions give proper fits to a large variety of Bradford curves, including those exhibiting a Groos droop or a rising tail. The Kolmogorov-Smirnov test is used to test goodness of fit, and least-square fits are compared with Egghe's method. (Contains 53 references.) (LRW)
Cobaleda, C; García-Sastre, A; Villar, E
1994-01-01
The kinetics of fusion between Newcastle disease virus and erythrocyte ghosts has been investigated with the octadecyl Rhodamine B chloride assay [Hoekstra, De Boer, Klappe, and Wilschut (1984) Biochemistry 23, 5675-5681], and the data from the dequenching curves were fitted by non-linear regression to currently used kinetic models. We used direct computer-assisted fitting of the dequenching curves to the mathematical equations. Discrimination between models was performed by statistical analysis of the different fits. The experimental data fit the exponential model previously published [Nir, Klappe, and Hoekstra (1986) Biochemistry 25, 2155-2161], but we describe for the first time that the best fit was achieved for the sum of two exponential terms: A1[1-exp(-k1t)]+A2[1-exp(-k2t)]. The first exponential term represents a fast reaction and the second a slow dequenching reaction. These findings reveal the existence of two independent, but simultaneous, processes during the fusion assay. In order to challenge the model and to understand the meaning of both equations, fusion experiments were carried out under different conditions well known to affect viral fusion (changes in pH, temperature and ghost concentration, and the presence of disulphide-reducing agents or inhibitors of viral neuraminidase activity), and the same computer fitting scheme was followed. The first exponential term represents the viral protein-dependent fusion process itself, because it is affected by the assay conditions. The second exponential term accounts for a nonspecific reaction, because it is completely independent of the assay conditions and hence of the viral proteins. An interpretation of this second process is discussed in terms of probe transfer between vesicles. PMID:8002938
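The two-term model A1[1-exp(-k1t)] + A2[1-exp(-k2t)] can be fitted by standard nonlinear regression. A hedged sketch on synthetic data (parameter values and names are ours, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def dequench(t, A1, k1, A2, k2):
    """Sum of two saturating exponentials: fast plus slow process."""
    return A1 * (1 - np.exp(-k1 * t)) + A2 * (1 - np.exp(-k2 * t))

# synthetic dequenching trace standing in for a fusion assay curve
t = np.linspace(0.0, 30.0, 120)
true = (0.6, 0.8, 0.3, 0.05)       # fast amplitude/rate, slow amplitude/rate
y = dequench(t, *true)
popt, pcov = curve_fit(dequench, t, y, p0=(0.5, 0.5, 0.5, 0.1))
```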
Palumbo, Letizia; Ruta, Nicole; Bertamini, Marco
2015-01-01
Most people prefer smoothly curved shapes over more angular shapes. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task we tested with a stick figure (i.e., the manikin) approach and avoidance reactions to curved and angular polygons. We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference. PMID:26460610
An optimization approach for fitting canonical tensor decompositions.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2009-02-01
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
Effects of curved approach paths and advanced displays on pilot scan patterns
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Mixon, R. W.
1981-01-01
The effect on pilot scan behavior of both an advanced cockpit and advanced maneuvers was assessed. A series of straight-in and curved landing approaches were performed in the Terminal Configured Vehicle (TCV) simulator. Two comparisons of pilot scan behavior were made: (1) pilot scan behavior for straight-in approaches compared with scan behavior previously obtained in a conventionally equipped simulator, and (2) pilot scan behavior for straight-in approaches compared with scan behavior for curved approaches. The results indicate very similar scanning patterns during the straight-in approaches in the conventional and advanced cockpits. However, for the curved approaches pilot attention shifted to the electronic horizontal situation display (moving map), and a new eye scan path appeared between the map and the airspeed indicator. The very high dwell percentage and dwell times on the electronic displays in the TCV simulator during the final portions of the approaches suggest that the electronic attitude direction indicator was well designed for these landing approaches.
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-10-01
The hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of the CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to the fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of the CTA calibration curve would be of great practical value. In this paper, a novel approach based on a conventional neural network and the self-organizing map (SOM) method is proposed to extrapolate the CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach for extrapolation of the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, this approach for extrapolation of the CTA calibration curve below its lower limit produces a standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation over the whole measurement range (0.7-30 m/s) is about 1.5%.
Mougabure-Cueto, G; Sfara, V
2016-04-25
Dose-response relations can be obtained from systems at any structural level of biological matter, from the molecular to the organismic level. There are two types of approaches for analyzing dose-response curves: a deterministic approach, based on the law of mass action, and a statistical approach, based on the assumed probability distribution of phenotypic characters. Models based on the law of mass action have been proposed to analyze dose-response relations across the entire range of biological systems. The purpose of this paper is to discuss the principles that determine dose-response relations. Dose-response curves of simple systems are the result of chemical interactions between reacting molecules, and are therefore supported by the law of mass action. In consequence, the shape of these curves is fully sustained by physicochemical features. However, dose-response curves of bioassays with quantal response are not explained by the simple collision of molecules but by phenotypic variation among individuals, and can be interpreted in terms of individual tolerances. The expression of tolerance is the result of many genetic and environmental factors and can thus be considered a random variable. In consequence, the shape of the associated dose-response curve has no physicochemical bearing; instead, it originates from random biological variation. Given the randomness of tolerance, there is no reason to use deterministic equations for its analysis; on the contrary, statistical models are the appropriate tools for analyzing these dose-response relations.
NASA Astrophysics Data System (ADS)
Bhattacharya, Kolahal; Banerjee, Sudeshna; Mondal, Naba K.
2016-07-01
In the context of track fitting problems by a Kalman filter, the appropriate functional forms of the elements of the random process noise matrix are derived for tracking through thick layers of dense materials and magnetic field. This work complements the form of the process noise matrix obtained by Mankel [1].
Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Chotard, N.; Copin, Y.; Gangler, E.; and others
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
Ristanović, D; Ristanović, D; Malesević, J; Milutinović, B
1983-01-01
Plasma kinetics of bromsulphalein (BSP) after a single injection into the bloodstream of the rat with total obstruction of the common bile duct was examined. The concentrations of BSP were determined colorimetrically. A monoexponential plus a general first-degree function in time with four unknown parameters was fitted. Two programs were developed for the Texas Instruments 59 programmable calculator to estimate the values of all the parameters by an iteration procedure. The programs executed at about twice normal speed.
Liao, Fei; Zhu, Xiao-Yun; Wang, Yong-Mei; Zuo, Yu-Ping
2005-01-31
The estimation of enzyme kinetic parameters by nonlinear fitting of the reaction curve to the integrated Michaelis-Menten rate equation, ln(S0/S) + (S0 - S)/Km = (Vm/Km)·t (Eq. (1)), was investigated and compared to fitting to (S0 - S)/t = Vm - Km·[ln(S0/S)/t] (Eq. (2)) (Atkins GL, Nimmo IA. The reliability of Michaelis-Menten constants and maximum velocities estimated by using the integrated Michaelis-Menten equation. Biochem J 1973;135:779-84), with uricase as the model. The uricase reaction curve was simulated with a random absorbance error of 0.001 at 0.075 mmol/l uric acid. The experimental reaction curve was monitored by absorbance at 293 nm. For both CV and deviation <20% by simulation, Km from 5 to 100 micromol/l was estimated with Eq. (1), while Km from 5 to 50 micromol/l was estimated with Eq. (2). The background absorbance and the error in the lag time of the steady-state reaction resulted in negative Km with Eq. (2), but did not affect Km estimated with Eq. (1). Both equations gave better estimation of Vm. The computation time and the goodness of fit with Eq. (1) were 40-fold greater than those with Eq. (2). By experimentation, Eq. (1) yielded Km consistent with the Lineweaver-Burk plot analysis, but Eq. (2) gave many negative parameters. Apparent Km by Eq. (1) increased linearly with xanthine concentration, while Vm remained constant, and the inhibition constant was consistent with the Lineweaver-Burk plot analysis. These results suggest that the integrated rate equation using reaction time as the predictor variable is reliable for the estimation of enzyme kinetic parameters and applicable to the characterization of enzyme inhibitors.
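One convenient way to fit Eq. (1) is to treat reaction time as the dependent variable, since the integrated equation gives t explicitly in terms of the remaining substrate S. A hedged sketch on simulated data (parameter values are illustrative, not those of the uricase system):

```python
import numpy as np
from scipy.optimize import curve_fit

def t_of_S(S, Km, Vm, S0=0.075):
    """Reaction time from the integrated Michaelis-Menten equation,
    t = (Km/Vm) * [ln(S0/S) + (S0 - S)/Km],
    with S the remaining substrate (concentrations in mmol/l)."""
    return (Km / Vm) * (np.log(S0 / S) + (S0 - S) / Km)

# simulate a noiseless progress curve, then refit Km and Vm
Km_true, Vm_true = 0.020, 0.002          # illustrative values, mmol/l units
S = np.linspace(0.070, 0.010, 60)
t = t_of_S(S, Km_true, Vm_true)
(Km_fit, Vm_fit), _ = curve_fit(t_of_S, S, t, p0=(0.01, 0.001))
```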
NASA Technical Reports Server (NTRS)
Degnan, J. J.; Walker, H. E.; Mcelroy, J. H.; Mcavoy, N.; Zagwodski, T.
1972-01-01
A least squares curve-fitting algorithm is derived which allows the simultaneous estimation of the small signal gain and the saturation intensity from an arbitrary number of data points relating power output to the incidence angle of an internal coupling plate. The method is used to study the dependence of the two parameters on tube pressure and discharge current in a waveguide CO2 laser having a 2 mm diameter capillary. It is found that, at pressures greater than 28 torr, rising CO2 temperature degrades the small signal gain at current levels as low as three milliamperes.
Hill, K.
1988-06-01
The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.
Lebon, M; Reiche, I; Fröhlich, F; Bahain, J-J; Falguères, C
2008-12-01
Derivative Fourier transform infrared (FTIR) spectroscopy and curve fitting have been used to investigate the effect of a thermal treatment on the ν1ν3 PO4 domain of modern bones. This method was efficient for identifying mineral matter modifications during heating. In particular, the 961, 1022, 1061, and 1092 cm^-1 components show an important wavenumber shift between 120 and 700 degrees C, attributed to the decrease of the distortions induced by the removal of CO3^2- and HPO4^2- ions from the mineral lattice. The so-called 1030/1020 ratio was used to evaluate crystalline growth above 600 degrees C. The same analytical protocol was applied to Magdalenian fossil bones from the Bize-Tournal Cave (France). Although the band positions seem to have been affected by diagenetic processes, a wavenumber index, established by summing the 961, 1022, and 1061 cm^-1 peak positions, discriminated heated bones better than the 1030/1020 ratio and the splitting factor frequently used to identify burnt bones in an archaeological context. This study suggests that the combination of derivative and curve-fitting analysis may afford a sensitive evaluation of the maximum temperature reached, and thus contribute to the fossil-derived knowledge of human activities related to the use of fire.
Devereux, Mike; Gresh, Nohad; Piquemal, Jean-Philip; Meuwly, Markus
2014-08-05
A supervised, semiautomated approach to force field parameter fitting is described and applied to the SIBFA polarizable force field. The I-NoLLS interactive, nonlinear least squares fitting program is used as an engine for parameter refinement while keeping parameter values within a physical range. Interactive fitting is shown to avoid many of the stability problems that frequently afflict highly correlated, nonlinear fitting problems occurring in force field parametrizations. The method is used to obtain parameters for the H2O, formamide, and imidazole molecular fragments and their complexes with the Mg(2+) cation. Reference data obtained from ab initio calculations using an aug-cc-pVTZ basis set exploit advances in modern computer hardware to provide a more accurate parametrization of SIBFA than has previously been available.
Understanding the distribution of fitness effects of mutations by a biophysical-organismal approach
NASA Astrophysics Data System (ADS)
Bershtein, Shimon
2011-03-01
The distribution of fitness effects of mutations is central to many questions in evolutionary biology. However, it remains poorly understood, primarily because a fundamental connection that exists between the fitness of organisms and the molecular properties of proteins encoded by their genomes is largely overlooked by traditional research approaches. Past efforts to bridge this gap followed the ``evolution first'' paradigm, whereby populations were subjected to selection under certain conditions, and mutations which emerged in adapted populations were analyzed using genomic approaches. The results obtained in the framework of this approach, while often useful, are not easily interpretable because mutations become fixed due to a convolution of multiple causes. We have undertaken a conceptually opposite strategy: mutations with known biophysical and biochemical effects on E. coli's essential proteins (based on computational analysis and in vitro measurements) were introduced into the organism's chromosome and the resulting fitness effects were monitored. Studying the distribution of fitness effects of such fully controlled replacements revealed a very complex fitness landscape, where the impact of the microscopic properties of the mutated proteins (folding, stability, and function) is modulated on a macroscopic, whole-genome level. Furthermore, the magnitude of the cellular response to the introduced mutations seems to depend on the thermodynamic status of the mutant.
Duval, M; Guilarte Moreno, V; Grün, R
2013-12-01
This work studies three main sources of uncertainty in electron spin resonance (ESR) dosimetry/dating of fossil tooth enamel: (1) the precision of the ESR measurements, (2) the long-term signal fading and (3) the selection of the fitting function. They show a different influence on the equivalent dose (D(E)) estimates. Repeated ESR measurements were performed on 17 different samples: results show a mean coefficient of variation of the ESR intensities of 1.20 ± 0.23 %, inducing a mean relative variability of 3.05 ± 2.29 % in the D(E) values. ESR signal fading over 5 years was also observed: its magnitude seems to be quite sample dependent but is nevertheless especially important for the most irradiated aliquots. This fading has an apparently random effect on the D(E) estimates. Finally, the authors provide new insights and recommendations about the fitting of ESR dose-response curves of fossil enamel with a double saturating exponential (DSE) function. The potential of a new variation of the DSE was also explored. Results of this study also show that the choice of the fitting function is of major importance, perhaps more than the other sources previously mentioned, for obtaining accurate final D(E) values.
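The double saturating exponential fit mentioned in the abstract can be sketched as follows; the dose steps, parameter values, and use of SciPy's `curve_fit` are illustrative assumptions, not the authors' actual protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def dse(dose, i1, d1, i2, d2):
    """Double saturating exponential (DSE) dose-response: two saturating terms."""
    return i1 * (1.0 - np.exp(-dose / d1)) + i2 * (1.0 - np.exp(-dose / d2))

# synthetic additive-dose data (doses in Gy; all parameter values hypothetical)
doses = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])
intensities = dse(doses, 10.0, 300.0, 5.0, 4000.0)

popt, _ = curve_fit(dse, doses, intensities, p0=[8.0, 200.0, 4.0, 3000.0],
                    maxfev=20000)
fitted = dse(doses, *popt)
```

On real, noisy ESR intensities the choice of starting values and the weighting scheme would matter far more than in this noiseless sketch.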
Konzen, Kevin; Brey, Richard
2012-05-01
²²²Rn (radon) and ²²⁰Rn (thoron) progeny are known to interfere with determining the presence of long-lived transuranic radionuclides, such as plutonium and americium, and require from several hours up to several days for conclusive results. Methods are proposed that should expedite the analysis of air samples for determining the amount of transuranic radionuclides present using low-resolution alpha spectroscopy systems available from typical alpha continuous air monitors (CAMs) with multi-channel analyzer (MCA) capabilities. An alpha spectrum simulation program was developed in Microsoft Excel Visual Basic that employed Monte Carlo numerical methods and serial-decay differential equations to produce spectra resembling actual measurements. Transuranic radionuclides could be quantified with statistical certainty by applying peak-fitting equations using the method of least squares. Initial favorable results were achieved when samples containing radon progeny were decayed 15 to 30 min, and samples containing both radon and thoron progeny were decayed at least 60 min. The effort indicates that timely decisions can be made when determining transuranic activity using available alpha CAMs with alpha spectroscopy capabilities for counting retrospective air samples if accompanied by analyses that consider the characteristics of serial decay.
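The serial-decay differential equations underlying such a simulation have a closed-form (Bateman) solution for a two-member chain. The sketch below, using illustrative half-lives for the short-lived ²²²Rn progeny ²¹⁸Po and ²¹⁴Pb, cross-checks that solution against direct numerical integration; it is not the authors' Visual Basic code:

```python
import numpy as np

# Decay constants (per minute) from illustrative half-lives of the
# 222Rn progeny 218Po (3.05 min) and 214Pb (26.8 min).
LAM1 = np.log(2) / 3.05
LAM2 = np.log(2) / 26.8

def bateman_daughter(n1_0, t):
    """Bateman solution: daughter atoms at time t in a two-member decay chain."""
    return n1_0 * LAM1 / (LAM2 - LAM1) * (np.exp(-LAM1 * t) - np.exp(-LAM2 * t))

# cross-check the closed form against explicit Euler integration of the ODEs
n1, n2, dt = 1.0e6, 0.0, 1e-3
for _ in range(30_000):                      # 30 minutes at dt = 1e-3 min
    n1, n2 = n1 + dt * (-LAM1 * n1), n2 + dt * (LAM1 * n1 - LAM2 * n2)

analytic = bateman_daughter(1.0e6, 30.0)
```

Longer chains follow the same pattern, with one saturating/decaying exponential term per ancestor nuclide.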
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.
2014-01-01
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
CADAVERIC STUDY ON THE LEARNING CURVE OF THE TWO-APPROACH GANZ PERIACETABULAR OSTEOTOMY
Ferro, Fernando Portilho; Ejnisman, Leandro; Miyahara, Helder Souza; Trindade, Christiano Augusto de Castro; Faga, Antônio; Vicente, José Ricardo Negreiros
2016-01-01
Objective: The Bernese periacetabular osteotomy (PAO) is a widely used technique for the treatment of non-arthritic, dysplastic, painful hips. It is considered a highly complex procedure with a steep learning curve. In an attempt to minimize complications, a double anterior-posterior approach has been described. We report on our experience while performing this technique on cadaveric hips followed by meticulous dissection to verify possible complications. Methods: We operated on 15 fresh cadaveric hips using a combined posterior Kocher-Langenbeck and an anterior Smith-Petersen approach, without fluoroscopic control. The PAO cuts were performed and the acetabular fragment was mobilized. A meticulous dissection was carried out to verify the precision of the cuts. Results: Complications were observed in seven specimens (46%). They included a posterior column fracture, and posterior and anterior articular fractures. The incidence of complications decreased over time, from 60% in the first five procedures to 20% in the last five procedures. Conclusions: We concluded that PAO using a combined anterior-posterior approach is a reproducible technique that allows all cuts to be done under direct visualization. The steep learning curve described in the classic single-incision approach was also observed when using two approaches. Evidence Level: IV, Cadaveric Study. PMID:26981046
B-737 flight test of curved-path and steep-angle approaches using MLS guidance
NASA Technical Reports Server (NTRS)
Branstetter, J. R.; White, W. F.
1989-01-01
A series of flight tests was conducted to collect data for jet transport aircraft flying curved-path and steep-angle approaches using Microwave Landing System (MLS) guidance. During the test, 432 approaches comprising seven different curved paths and four glidepath angles varying from 3 to 4 degrees were flown in NASA Langley's Boeing 737 aircraft (Transport Systems Research Vehicle) using an MLS ground station at the NASA Wallops Flight Facility. Subject pilots from Piedmont Airlines flew the approaches using conventional cockpit instrumentation (flight director and Horizontal Situation Indicator (HSI)). The data collected will be used by FAA procedures specialists to develop standards and criteria for designing MLS terminal approach procedures (TERPS). The use of flight simulation techniques greatly aided the preliminary stages of approach development work and saved a significant amount of costly flight time. This report is intended to complement a data report to be issued by the FAA Office of Aviation Standards which will contain all detailed data analysis and statistics.
Fernández-Portales, Javier; Valdesuso, Raúl; Carreras, Raúl; Jiménez-Candil, Javier; Serrador, Ana; Romaní, Sebastián
2006-10-01
There are anatomical differences between right and left radial artery approaches for coronary catheterization that could influence application of the technique. We present the results of a randomized study that compared the effectiveness of the two approaches and identified factors associated with failure of the procedure. The study involved 351 consecutive patients: a left radial approach was used in 180, and a right radial approach, in 171. The procedure could not be completed using the initial approach selected in 15 patients (11 right radial vs. 4 left radial; P=.007). Use of a right radial approach, lack of catheterization experience, patient age >70 years, and the absence of hypertension were found to be independently associated with prolonged fluoroscopy duration and failure using the initial approach. Use of the right radial approach in patients aged over 70 years was associated with a 6-fold increase in the risk of an adverse event. Consequently, use of the right radial approach should be avoided in patients aged over 70 years when trainee practitioners are on the learning curve.
A new semi-empirical approach to performance curves of polymer electrolyte fuel cells
NASA Astrophysics Data System (ADS)
Pisani, L.; Murgia, G.; Valentini, M.; D'Aguanno, B.
We derive a semi-empirical equation to describe the performance curves of polymer electrolyte membrane fuel cells (PEMFCs). The derivation is based on the observation that the main non-linear contributions to the cell voltage deterioration of H2/air feed cells derive from the cathode reactive region. To evaluate such contributions we assumed that the diffusion region of the cathode is made of a network of pores able to transport gas and liquid mixtures, while the reactive region is made of a different network of pores for gas transport in a liquid-permeable matrix. The mathematical model is largely mechanistic, with most terms deriving from phenomenological mass transport and conservation equations. The only fully empirical term in the performance equation is the Ohmic overpotential, which is assumed to be linear in the cell current density. The resulting equation is similar to other published performance equations but with the advantage of having coefficients with a precise physical origin and a precise physical meaning. Our semi-empirical equation is used to fit several sets of published experimental data, and the fits always showed good agreement between the model results and the experimental data. The values of the fitting coefficients, together with their associated physical meaning, allow us to assess and quantify the phenomena that set in at the cathode as the cell current density is increased. More precisely, we observe the development of flooding and the local decrease of the oxygen concentration. Further developments of such a model for the cathode compartment of the fuel cell are discussed.
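The abstract does not give the equation itself. As a sketch of how such performance curves are fitted, the widely used generic semi-empirical polarization form with activation (Tafel), ohmic, and mass-transport terms can be treated the same way; all parameter values below are illustrative, and this is not the authors' equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def polarization(i, e0, b, r, m, n):
    """Generic semi-empirical cell voltage: open-circuit value minus Tafel
    (activation), linear ohmic, and exponential mass-transport losses."""
    return e0 - b * np.log(i) - r * i - m * np.exp(n * i)

current = np.linspace(0.05, 1.2, 40)          # A cm^-2, illustrative range
voltage = polarization(current, 1.0, 0.05, 0.20, 2e-4, 5.0)

popt, _ = curve_fit(polarization, current, voltage,
                    p0=[0.95, 0.04, 0.15, 1e-4, 4.5], maxfev=50000)
```

The paper's contribution is precisely that its coefficients, unlike the purely empirical ones here, each carry a physical meaning.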
Barton, Zachary J; Rodríguez-López, Joaquín
2017-03-07
We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.
Zhang, Gang-Chun; Lin, Hong-Liang; Lin, Shan-Yang
2012-07-01
The cocrystal formation of indomethacin (IMC) and saccharin (SAC) by mechanical cogrinding or thermal treatment was investigated. The formation mechanism and stability of IMC-SAC cocrystal prepared by the cogrinding process were explored. Typical IMC-SAC cocrystal was also prepared by the solvent evaporation method. All the samples were identified and characterized by using differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) microspectroscopy with curve-fitting analysis. The physical stability of different IMC-SAC ground mixtures before and after storage for 7 months was examined. Stepwise measurements carried out at specific intervals over a continuous cogrinding process showed a continuous growth in cocrystal formation between IMC and SAC. The main IR spectral shifts from 3371 to 3347 cm(-1) and 1693 to 1682 cm(-1) for IMC, as well as from 3094 to 3136 cm(-1) and 1718 to 1735 cm(-1) for SAC, suggested that the OH and NH groups in both chemical structures took part in hydrogen bonding, leading to the formation of IMC-SAC cocrystal. The melting at 184 °C for the 30-min IMC-SAC ground mixture was almost the same as the melting at 184 °C for the solvent-evaporated IMC-SAC cocrystal. The 30-min IMC-SAC ground mixture was also confirmed by curve-fitting analysis of the IR spectra to have components and contents similar to those of the solvent-evaporated IMC-SAC cocrystal. The thermally induced IMC-SAC cocrystal formation was also found to depend on the treatment temperature. Different IMC-SAC ground mixtures stored at 25 °C/40% RH for 7 months showed an increased tendency toward IMC-SAC cocrystallization.
NASA Astrophysics Data System (ADS)
Koohestani, Behrooz; Corne, David W.
2009-04-01
The Bandwidth Minimization Problem (BMP) is a graph layout problem which is known to be NP-complete. Since 1960, a considerable number of algorithms have been developed for addressing the BMP. At present, meta-heuristics (such as evolutionary algorithms and tabu search) are popular and successful approaches to the BMP. In such algorithms, the design of the fitness function (i.e. the metric that attempts to guide the search towards high-quality solutions) plays a key role in performance; the fitness function, along with the operators, induces the `search landscape', and careful attention to these issues may lead to landscapes that are more amenable to successful search. For example, rather than simply use the most obvious quality measure (in this case, the bandwidth itself), it is often helpful to design a more informative measure, indicating not only a solution's quality but also encapsulating (for example) an indication of how distant this particular solution is from even better solutions. In this paper, a new fitness function and an associated new mutation operator are presented for the BMP. These are incorporated within a simple Evolutionary Algorithm (EA), and evaluated on a set of 27 instances of the BMP (from the Harwell-Boeing sparse matrix collection). The results of this EA are compared with results obtained by using the standard fitness function (used in almost all previous research on meta-heuristics applied to the BMP). The results indicate clearly that the new fitness function and operator provide significantly superior results in the reduction of bandwidth.
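The contrast between the raw bandwidth objective and a more informative fitness can be illustrated with a toy example; the tie-breaking term below is a generic construction for illustration, not the paper's exact function:

```python
def bandwidth(edges, layout):
    """Bandwidth of a graph layout: the largest label distance across any edge."""
    pos = {v: i for i, v in enumerate(layout)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

def informative_fitness(edges, layout):
    """A richer measure than raw bandwidth: add the fraction of edges attaining
    the current bandwidth, so the search can rank otherwise-tied layouts."""
    pos = {v: i for i, v in enumerate(layout)}
    spans = [abs(pos[u] - pos[v]) for u, v in edges]
    bw = max(spans)
    return bw + spans.count(bw) / (len(spans) + 1.0)

# a path graph 0-1-2-3: the identity layout is optimal (bandwidth 1)
path = [(0, 1), (1, 2), (2, 3)]
```

Two layouts with equal bandwidth can still differ under `informative_fitness`, giving the evolutionary search a gradient where the raw objective is flat.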
Aerobic fitness ecological validity in elite soccer players: a metabolic power approach.
Manzi, Vincenzo; Impellizzeri, Franco; Castagna, Carlo
2014-04-01
The aim of this study was to examine the association between match metabolic power (MP) categories and aerobic fitness in elite-level male soccer players. Seventeen male professional soccer players were tested for VO2max, maximal aerobic speed (MAS), VO2 at ventilatory threshold (VO2VT and %VO2VT), and speed at a selected blood lactate concentration (4 mmol·L(-1), V(L4)). Aerobic fitness tests were performed at the end of preseason and after 12 and 24 weeks during the championship. For all calculations, aerobic fitness and MP variables were taken as the means of all seasonal tests and of 16 championship home matches, respectively. Results showed that VO2max (from 0.55 to 0.68), MAS (from 0.52 to 0.72), VO2VT (from 0.72 to 0.83), %VO2maxVT (from 0.62 to 0.65), and V(L4) (from 0.56 to 0.73) showed significant (p < 0.05 to 0.001), large to very large associations with MP variables. These results provide evidence for the ecological validity of aerobic fitness in male professional soccer. Strength and conditioning professionals should consider aerobic fitness in their training programs when dealing with professional male soccer players. The MP method proved an interesting approach for tracking external load in male professional soccer players.
Using a Space Filling Curve Approach for the Management of Dynamic Point Clouds
NASA Astrophysics Data System (ADS)
Psomadaki, S.; van Oosterom, P. J. M.; Tijssen, T. P. M.; Baart, F.
2016-10-01
Point cloud usage has increased over the years. The development of low-cost sensors now makes it possible to acquire frequent point cloud measurements over a short time period (day, hour, second). Based on requirements from the coastal monitoring domain, we have developed, implemented and benchmarked a spatio-temporal point cloud data management solution. For this reason, we make use of the flat model approach (one point per row) in an Index Organised Table within an RDBMS and an improved spatio-temporal organisation using a Space Filling Curve approach. Two variants coming from two extremes of the space-time continuum are also taken into account, along with two treatments of the z dimension: as attribute or as part of the space filling curve. Through executing a benchmark we elaborate on the performance (loading and querying time) and storage required by those different approaches. Finally, we validate the correctness and suitability of our method through an out-of-the-box way of managing dynamic point clouds.
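A common space filling curve for this kind of key is the Morton (Z-order) curve, built by interleaving the bits of the coordinates. The sketch below interleaves x, y, and time, treating z as an attribute (one of the two treatments the abstract mentions); it is a generic illustration, not the paper's implementation:

```python
def morton_key(x, y, t, bits=10):
    """Interleave the bits of integer cell indices x, y and time t into one
    space-filling-curve key; z would be stored as a plain attribute."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (3 * b)
        key |= ((y >> b) & 1) << (3 * b + 1)
        key |= ((t >> b) & 1) << (3 * b + 2)
    return key

# rows sorted by key cluster nearby (x, y, t) cells together on disk
points = [(3, 5, 1), (3, 4, 1), (30, 2, 9)]
keys = sorted(morton_key(*p) for p in points)
```

Storing the key as the primary key of an index-organised table then gives range queries over space and time a contiguous scan pattern.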
NASA Technical Reports Server (NTRS)
Benner, M. S.; Sawyer, R. H.; Mclaughlin, M. D.
1973-01-01
A real-time, fixed-base simulation study has been conducted to determine the curved, descending approach paths (within passenger-comfort limits) that would be acceptable to pilots, the flight-director-system logic requirements for curved-flight-path guidance, and the paths which can be flown within proposed microwave landing system (MLS) coverage angles. Two STOL aircraft configurations were used in the study. Generally, no differences in the results between the two STOL configurations were found. The investigation showed that paths with a 1828.8 meter turn radius and a 1828.8 meter final-approach distance were acceptable without winds and with winds up to at least 15 knots for airspeeds from 75 to 100 knots. The altitude at roll-out from the final turn determined which final-approach distances were acceptable. Pilots preferred to have an initial straight leg of about 1 n. mi. after MLS guidance acquisition before turn intercept. The size of the azimuth coverage angle necessary to meet passenger and pilot criteria depends on the size of the turn angle: plus or minus 60 deg was adequate to cover all paths except ones with a 180 deg turn.
Age-Infusion Approach to Derive Injury Risk Curves for Dummies from Human Cadaver Tests
Yoganandan, Narayan; Banerjee, Anjishnu; Pintar, Frank A.
2015-01-01
Injury criteria and risk curves are needed for anthropomorphic test devices (dummies) to assess injuries for improving human safety. The present state of knowledge is based on using injury outcomes and biomechanical metrics from post-mortem human subject (PMHS) and mechanical records from dummy tests. Data from these models are combined to develop dummy injury assessment risk curves (IARCs)/dummy injury assessment risk values (IARVs). This simple substitution approach involves duplicating dummy metrics for PMHS tested under similar conditions and pairing with PMHS injury outcomes. It does not directly account for the age of each specimen tested in the PMHS group. Current substitution methods for injury risk assessments use age as a covariate and dummy metrics (e.g., accelerations) are not modified so that age can be directly included in the model. The age-infusion methodology presented in this perspective article accommodates for an annual rate factor that modifies the dummy injury risk assessment responses to account for the age of the PMHS that the injury data were based on. The annual rate factor is determined using human injury risk curves. The dummy metrics are modulated based on individual PMHS age and rate factor, thus “infusing” age into the dummy data. Using PMHS injuries and accelerations from side-impact experiments, matched-pair dummy tests, and logistic regression techniques, the methodology demonstrates the process of age-infusion to derive the IARCs and IARVs. PMID:26697422
Bouabidi, A; Talbi, M; Bourichi, H; Bouklouze, A; El Karbane, M; Boulanger, B; Brik, Y; Hubert, Ph; Rozet, E
2012-12-01
An innovative, versatile strategy using Total Error has been proposed to decide on a method's validity, controlling the risk of accepting an unsuitable assay together with the ability to predict the reliability of future results. This strategy is based on the simultaneous combination of the systematic (bias) and random (imprecision) errors of analytical methods. Using validation standards, both types of error are combined through the use of a prediction interval or β-expectation tolerance interval. Finally, an accuracy profile is built by connecting, on one hand, all the upper tolerance limits and, on the other hand, all the lower tolerance limits. This profile, combined with pre-specified acceptance limits, allows the evaluation of the validity of any quantitative analytical method and thus its fitness for its intended purpose. In this work, the accuracy profile approach was evaluated on several types of analytical methods encountered in the pharmaceutical industrial field, covering different pharmaceutical matrices. The four studied examples illustrate the flexibility and applicability of this approach for different matrices ranging from tablets to syrups, different techniques such as liquid chromatography or UV spectrophotometry, and different categories of assays commonly encountered in the pharmaceutical industry, i.e., content assays, dissolution assays, and quantitative impurity assays. The accuracy profile approach assesses the fitness for purpose of these methods for their future routine application. It also allows selection of the most suitable calibration curve, adequate evaluation of a potential matrix effect with proposal of efficient solutions, and correct definition of the limits of quantification of the studied analytical procedures.
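Under a normality assumption, the β-expectation tolerance interval coincides numerically with a prediction interval for one future result. A minimal sketch of the accuracy-profile check at a single concentration level follows; the recovery values and acceptance limits are hypothetical:

```python
import numpy as np
from scipy import stats

def beta_expectation_interval(results, beta=0.8):
    """beta-expectation tolerance interval under normality: the prediction
    interval for one future result (Student t with n - 1 degrees of freedom)."""
    x = np.asarray(results, dtype=float)
    n, mean, sd = len(x), x.mean(), x.std(ddof=1)
    half = stats.t.ppf((1.0 + beta) / 2.0, n - 1) * sd * np.sqrt(1.0 + 1.0 / n)
    return mean - half, mean + half

# accuracy-profile check at one concentration level (hypothetical recoveries, %)
recoveries = [98.5, 101.2, 99.8, 100.4, 97.9, 100.9]
low, high = beta_expectation_interval(recoveries, beta=0.8)
valid = low >= 95.0 and high <= 105.0     # pre-specified +/-5% acceptance limits
```

Repeating this at each concentration level and joining the lower and upper limits traces out the accuracy profile described above.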
Lin, Shan-Yang; Lin, Hong-Liang; Chi, Ying-Ting; Huang, Yu-Ting; Kao, Chi-Yu; Hsieh, Wei-Hsien
2015-12-30
The amorphous form of a drug has higher water solubility and a faster dissolution rate than its crystalline form. However, the amorphous form is less thermodynamically stable and may recrystallize during manufacturing and storage. Maintaining the amorphous state of a drug in a solid dosage form is extremely important to ensure product quality. The purpose of this study was to quantitatively determine the amount of amorphous indomethacin (INDO) formed in Soluplus® solid dispersions using thermoanalytical and Fourier transform infrared (FTIR) spectral curve-fitting techniques. The INDO/Soluplus® solid dispersions with various weight ratios of both components were prepared by air-drying and heat-drying processes. A predominant IR peak at 1683 cm(-1) for amorphous INDO was selected as a marker for monitoring the solid state of INDO in the INDO/Soluplus® solid dispersions. The physical stability of amorphous INDO in the INDO/Soluplus® solid dispersions prepared by both drying processes was also studied under accelerated conditions. A typical endothermic peak at 161°C for the γ-form of INDO (γ-INDO) disappeared from all the differential scanning calorimetry (DSC) curves of the INDO/Soluplus® solid dispersions, suggesting the amorphization of INDO caused by Soluplus® after drying. In addition, two unique IR peaks at 1682 (1681) and 1593 (1591) cm(-1) corresponding to the amorphous form of INDO were observed in the FTIR spectra of all the INDO/Soluplus® solid dispersions. The quantitative amounts of amorphous INDO formed in the INDO/Soluplus® solid dispersions, determined by applying the curve-fitting technique, increased with the amount of γ-INDO loaded into the dispersions. However, an intermolecular hydrogen bonding interaction between Soluplus® and INDO was only observed in the samples prepared by the heat-drying process, as evidenced by a marked spectral shift from 1636 to 1628 cm(-1) in the INDO/Soluplus® solid dispersions.
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2 versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimating the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimation of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Jmax and/or TPU with Vcmax, Rd and/or gm held constant.
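As a minimal sketch of fitting one branch of the Farquhar model, the Rubisco-limited assimilation curve can be fit directly for Vcmax and Rd; the kinetic constants and the use of SciPy below are illustrative assumptions, not the authors' grid-search implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_STAR = 38.6   # CO2 compensation point, umol mol-1 (illustrative, ~25 C)
KM_EFF = 710.0      # effective Michaelis constant Kc*(1 + O/Ko), illustrative

def rubisco_limited(cc, vcmax, rd):
    """Rubisco-limited net assimilation A_c of the Farquhar model."""
    return vcmax * (cc - GAMMA_STAR) / (cc + KM_EFF) - rd

cc = np.linspace(50.0, 300.0, 20)          # chloroplastic CO2, umol mol-1
a_net = rubisco_limited(cc, 60.0, 1.5)     # synthetic 'measurements'

popt, _ = curve_fit(rubisco_limited, cc, a_net, p0=[40.0, 1.0])
vcmax_fit, rd_fit = popt
```

The full simultaneous approach recommended in the abstract would instead fit the minimum of the Ac, Aj and Ap branches over the whole curve, with gm linking Cc to the measured intercellular CO2.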
NASA Astrophysics Data System (ADS)
Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.
2015-12-01
Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling, such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use, and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy- and water-efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime-adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
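The construction of a cost abatement curve can be sketched as follows; the measure names, costs, and savings are invented for illustration and do not come from the study:

```python
# hypothetical residential measures: (name, annualised net cost $, water saved m3/yr)
measures = [
    ("low-flow showerhead",      -40.0, 20.0),   # negative cost = net money saver
    ("efficient dishwasher",      15.0,  8.0),
    ("efficient clothes washer",  30.0, 15.0),
]

# abatement curve: rank measures by cost per unit of water saved
curve = sorted(measures, key=lambda m: m[1] / m[2])

def measures_for_target(target_m3):
    """Walk up the curve until cumulative savings meet the target reduction."""
    chosen, total = [], 0.0
    for name, _cost, saved in curve:
        if total >= target_m3:
            break
        chosen.append(name)
        total += saved
    return chosen, total
```

Reading the curve left to right thus identifies the cheapest combination of measures for any target reduction, which is exactly how such curves guide investment decisions.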
Corvettes, Curve Fitting, and Calculus
ERIC Educational Resources Information Center
Murawska, Jaclyn M.; Nabb, Keith A.
2015-01-01
Sometimes the best mathematics problems come from the most unexpected situations. Last summer, a Corvette raced down a local quarter-mile drag strip. The driver, a family member, provided the spectators with time and distance-traveled data from his time slip and asked "Can you calculate how many seconds it took me to go from 0 to 60…
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2015-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…
NASA Astrophysics Data System (ADS)
Akhunov, T. A.; Wertz, O.; Elyiv, A.; Gaisin, R.; Artamonov, B. P.; Dudinov, V. N.; Nuritdinov, S. N.; Delvaux, C.; Sergeyev, A. V.; Gusev, A. S.; Bruevich, V. V.; Burkhonov, O.; Zheleznyak, A. P.; Ezhkova, O.; Surdej, J.
2017-03-01
We present new photometric observations of H1413+117 acquired during the seasons between 2001 and 2008 in order to estimate the time delays between the lensed quasar images and to characterize the on-going micro-lensing events. We propose a high-performing photometric method called adaptive point spread function fitting and have successfully tested this method on a large number of simulated frames. This has enabled us to estimate the photometric error bars affecting our observational results. We analysed the V- and R-band light curves and V-R colour variations of the A-D components, which show short- and long-term brightness variations correlated with colour variations. Using the χ2 and dispersion methods, we estimated the time delays on the basis of the R-band light curves over the seasons between 2003 and 2006. We have derived the new values ΔtAB = -17.4 ± 2.1, ΔtAC = -18.9 ± 2.8 and ΔtAD = 28.8 ± 0.7 d using the χ2 method (B and C are leading, D is trailing) with 1σ confidence intervals. We also used available observational constraints (the lensed image positions, the mid-IR flux ratios and the two sets of time delays derived in the present work) to update the lens redshift estimation. We obtained z_l = 1.95^{+0.06}_{-0.10}, which is in good agreement with previous estimations. We propose to characterize two kinds of micro-lensing events: micro-lensing for the A, B, C components corresponds to typical variations of ∼10-4 mag d-1 during all the seasons, while the D component shows an unusually strong micro-lensing effect with variations of up to ∼10-3 mag d-1 during 2004 and 2005.
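The χ2 time-delay idea can be sketched minimally: shift one light curve by each trial delay, remove the magnitude offset, and keep the delay with the smallest residuals. The synthetic sinusoidal curves below are for illustration only; this is not the authors' pipeline, which also handles irregular sampling and micro-lensing:

```python
import numpy as np

def chi2_delay(t, a, b, trial_delays):
    """chi^2 time-delay estimate: shift light curve B by each trial delay,
    interpolate onto A's epochs, remove the mean magnitude offset, and keep
    the delay with the smallest residual sum of squares."""
    best_d, best_chi2 = None, np.inf
    for d in trial_delays:
        b_shifted = np.interp(t + d, t, b)     # B evaluated at t + d
        offset = np.mean(a - b_shifted)
        chi2 = np.sum((a - b_shifted - offset) ** 2)
        if chi2 < best_chi2:
            best_d, best_chi2 = d, chi2
    return best_d

# synthetic pair: image B trails image A by 5 d with a 0.3 mag offset
t = np.linspace(0.0, 100.0, 400)
a = np.sin(2.0 * np.pi * t / 30.0)
b = np.sin(2.0 * np.pi * (t - 5.0) / 30.0) + 0.3
delay = chi2_delay(t, a, b, np.arange(0.0, 10.25, 0.25))
```

The dispersion method mentioned in the abstract replaces the interpolated χ2 with a pairwise dispersion statistic, avoiding explicit interpolation.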
New approach in the evaluation of a fitness program at a worksite.
Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T
1999-03-01
The most common methods for the economic evaluation of a worksite fitness program are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach to such evaluation. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function, where the optimal number is defined as the number that maximizes the profit of the program. A graph gives the optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program; for example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program whose health production function can be estimated.
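The profit-maximization idea can be sketched as follows; the cubic production function, willingness-to-pay, and per-class cost below are illustrative stand-ins, not the study's estimated values.

```python
def optimal_classes(wtp, cost_per_class, max_classes=60):
    """Return the class count maximizing profit = wtp * health(x) - cost * x.

    health(x) is a hypothetical cubic production function: effectiveness
    rises with diminishing returns and eventually declines.
    """
    def health(x):
        return 0.03 * x**2 - 0.0005 * x**3  # illustrative cubic

    def profit(x):
        return wtp * health(x) - cost_per_class * x

    return max(range(max_classes + 1), key=profit)
```

With these illustrative numbers, optimal_classes(800, 100) returns 38; the study's figure of 23 classes at $800 depends on its own estimated production function.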
Real time refractivity from clutter using a best fit approach improved with physical information
NASA Astrophysics Data System (ADS)
Douvenot, Rémi; Fabbro, Vincent; Gerstoft, Peter; Bourlier, Christophe; Saillard, Joseph
2010-02-01
Refractivity from clutter (RFC) retrieves the radio-frequency refractive conditions along a propagation path by inverting the measured radar sea clutter return. In this paper, a real-time RFC technique called "Improved Best Fit" (IBF) is proposed. It is based on finding the environment that best fits one of many precomputed, modeled radar returns stored in a database of different environments. The method is improved by considering the mean slope of the propagation factor, and physical considerations are added: smooth variations of refractive conditions with azimuth and smooth variations of duct height with range. The approach is tested on data from the 1998 Wallops Island, Virginia, measurement campaign, with good results on most of the data; questionable results are detected with a confidence criterion. A comparison between the refractivity structures measured during the campaign and those retrieved by inversion shows a good match. Radar coverage simulations obtained from the inverted refractivity structures demonstrate the potential utility of IBF.
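The core of such a best-fit inversion is a nearest-match search over the database of precomputed returns. A minimal sketch (hypothetical data layout and weighting, not the IBF implementation) that also includes a simple mean-slope term:

```python
import numpy as np

def best_fit_environment(observed, database, slope_weight=1.0):
    """Return the index of the modeled clutter return closest to the observation.

    The distance combines pointwise misfit with a mean-slope mismatch term,
    echoing the idea of matching the propagation factor's trend as well as
    its level (the weighting here is hypothetical).
    """
    def cost(modeled):
        misfit = np.mean((observed - modeled) ** 2)
        slope = (np.mean(np.diff(observed)) - np.mean(np.diff(modeled))) ** 2
        return misfit + slope_weight * slope

    return min(range(len(database)), key=lambda i: cost(database[i]))
```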
NASA Astrophysics Data System (ADS)
Khalil-Ur-Rehman; Malik, M. Y.; Bilal, S.; Bibi, M.; Ali, U.
The present analysis envisions the effects of thermal and solutal stratification on magneto-hydrodynamic mixed-convection boundary-layer stagnation-point flow of a non-Newtonian fluid over an inclined cylindrical stretching surface. The flow exploration includes a heat generation process. The temperature and concentration near the inclined cylindrical surface are supposed to be higher in strength than those of the ambient fluid. A suitable similarity transformation is applied to transform the mathematically modelled flow equations into a system of coupled non-linear ordinary differential equations. Numerical computations are made for these coupled equations by means of a shooting scheme charted with a fifth-order Runge-Kutta algorithm. A systematic study is executed to inspect the impact of the pertinent flow-controlling parameters on the dimensionless velocity, temperature and concentration distributions. Further, straight-line and parabolic curve fits are presented for the skin-friction coefficient and the heat and mass transfer rates. This appears to be the first step in this direction and will serve as a helping source for succeeding studies.
NASA Technical Reports Server (NTRS)
White, W. F. (Compiler)
1978-01-01
The Terminal Configured Vehicle (TCV) program operates a Boeing 737 modified to include a second cockpit and a large amount of experimental navigation, guidance and control equipment for research on advanced avionics systems. Demonstration flights, including curved approaches and automatic landings, were tracked by a phototheodolite system. For 50 approaches during the demonstration flights, the following results were obtained: the navigation system, using TRSB guidance, delivered the aircraft onto the 3-nautical-mile final approach leg with an average overshoot of 25 feet past centerline, subject to a 2-sigma dispersion of 90 feet. Lateral tracking data showed a mean error of 4.6 feet left of centerline at the category 1 decision height (200 feet) and 2.7 feet left of centerline at the category 2 decision height (100 feet); these values were subject to a sigma dispersion of about 10 feet. Finally, the glidepath tracking errors were 2.5 feet and 3.0 feet high at the category 1 and 2 decision heights, respectively, with a 2-sigma value of 6 feet.
IEFIT - An Interactive Approach to High Temperature Fusion Plasma Magnetic Equilibrium Fitting
Peng, Q.; Schachter, J.; Schissel, D.P.; Lao, L.L.
1999-06-01
An interactive IDL based wrapper, IEFIT, has been created for the magnetic equilibrium reconstruction code EFIT written in FORTRAN. It allows high temperature fusion physicists to rapidly optimize a plasma equilibrium reconstruction by eliminating the unnecessarily repeated initialization in the conventional approach along with the immediate display of the fitting results of each input variation. It uses a new IDL based graphics package, GaPlotObj, developed in cooperation with Fanning Software Consulting, that provides a unified interface with great flexibility in presenting and analyzing scientific data. The overall interactivity reduces the process to minutes from the usual hours.
The BestFIT trial: A SMART Approach to Developing Individualized Weight Loss Treatments
Sherwood, Nancy E.; Butryn, Meghan L.; Forman, Evan M.; Almirall, Daniel; Seburg, Elisabeth M.; Crain, A Lauren; Kunin-Batson, Alicia S; Hayes, Marcia G.; Levy, Rona L; Jeffery, Robert W.
2016-01-01
Behavioral weight loss programs help people achieve clinically meaningful weight losses (8–10% of starting body weight). Despite data showing that only half of participants achieve this goal, a "one size fits all" approach is normative. This weight loss intervention science gap calls for adaptive interventions that provide the "right treatment at the right time for the right person." Sequential Multiple Assignment Randomized Trials (SMART) use experimental design principles to answer questions for building adaptive interventions, including whether, how, or when to alter treatment intensity, type, or delivery. This paper describes the rationale and design of the BestFIT study, a SMART designed to evaluate the optimal timing for intervening with sub-optimal responders to weight loss treatment and the relative efficacy of two treatments that address self-regulation challenges which impede weight loss: 1) augmenting treatment with portion-controlled meals (PCM), which decrease the need for self-regulation; and 2) switching to acceptance-based behavior treatment (ABT), which boosts capacity for self-regulation. The primary aim is to evaluate the benefit of changing treatment with PCM versus ABT. The secondary aim is to evaluate the best time to intervene with sub-optimal responders. BestFIT results will lead to the empirically supported construction of an adaptive intervention that will optimize weight loss outcomes and associated health benefits. PMID:26825020
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automatize a number of tasks including contour recognition, optimization of fitting the contour via hill-climbing, derivation of the path curves, computation of Lambda and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees.
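The hill-climbing step used to optimize the contour fit can be illustrated generically: a one-parameter greedy search that halves its step size whenever neither neighboring value improves the loss. This is a sketch of the optimization pattern only, not the path-curve code; the quadratic loss in the test stands in for the contour-distance measure.

```python
def hill_climb(loss, x0, step=0.5, tol=1e-6):
    """Minimize loss(x) over one parameter (e.g. the shape parameter Lambda)
    by greedy stepping with step-size halving."""
    x, fx = x0, loss(x0)
    while step > tol:
        for cand in (x + step, x - step):
            fc = loss(cand)
            if fc < fx:
                x, fx = cand, fc
                break
        else:
            step /= 2  # no improvement in either direction: refine the step
    return x
```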
A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2015-01-01
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…
Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach
NASA Astrophysics Data System (ADS)
Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic
2015-04-01
Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are commonly used for calibrating hydrological models, managing water quality and classifying catchments, among other purposes. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions estimate each quantile separately from catchment attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemes, 1986) to a set of 600 French catchments. Half of the catchments are treated as gauged and used to calibrate the regression and compute its residuals. The QS approach is a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters and finally exploits the spatial correlation of the residuals. The innovation is the use of parameter continuity across the quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24
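The monotonicity problem that motivates QS can be seen in a toy repair: quantiles regressed independently may cross, and the simplest post-hoc fix (a sketch of the goal, not the QS regionalization itself) is a running maximum across the ordered quantiles.

```python
import numpy as np

def enforce_increasing(quantiles):
    """Force a reconstructed FDC's quantiles to be non-decreasing.

    Independent per-quantile regressions (one model per exceedance level)
    can produce crossings; a cumulative maximum removes them while leaving
    already-sorted reconstructions untouched.
    """
    return np.maximum.accumulate(np.asarray(quantiles, dtype=float))
```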
Aldridge, C.L.; Boyce, M.S.
2007-01-01
Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Secondly, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel
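The two-stage combination of occurrence and risk models can be reduced to a simple classification rule; the sketch below illustrates the source versus attractive-sink logic only (the thresholds and function are hypothetical, not the authors' logistic and Cox model outputs).

```python
def classify_habitat(p_use, p_fail, use_thresh=0.5, risk_thresh=0.5):
    """Combine a habitat-selection probability (occurrence model) with a
    failure risk (survival model) into qualitative habitat categories.

    High use + low risk -> "source"; high use + high risk -> "attractive sink";
    low use -> "avoided". Thresholds are illustrative only.
    """
    if p_use < use_thresh:
        return "avoided"
    return "source" if p_fail < risk_thresh else "attractive sink"
```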
Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use
Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil
2013-01-01
The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital-number histograms of image objects and to evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used to assess the utility of curve-matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule, and the highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
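A histogram-matching classifier of the general kind evaluated here can be sketched in a few lines: each object's normalized histogram is assigned to the class whose reference histogram is closest. The L1 distance below is a hypothetical choice; the study's matching measures may differ.

```python
import numpy as np

def classify_by_histogram(obj_hist, class_hists):
    """Assign an image object to the class whose reference histogram is
    closest in L1 distance (all histograms normalized to unit area first)."""
    def norm(h):
        h = np.asarray(h, dtype=float)
        return h / h.sum()

    o = norm(obj_hist)
    return min(class_hists, key=lambda c: np.abs(o - norm(class_hists[c])).sum())
```

Unlike a nearest-neighbor-to-mean rule, this compares the full shape of the distribution, so classes with similar means but different spreads remain separable.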
Reconstruction of Galaxy Star Formation Histories through SED Fitting:The Dense Basis Approach
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Gawiser, Eric
2017-04-01
We introduce the dense basis method for Spectral Energy Distribution (SED) fitting. It accurately recovers traditional SED parameters, including M*, SFR, and dust attenuation, and reveals previously inaccessible information about the number and duration of star formation episodes and the timing of stellar mass assembly, as well as uncertainties in these quantities. This is done using basis star formation histories (SFHs) chosen by comparing the goodness-of-fit of mock galaxy SEDs to the goodness-of-reconstruction of their SFHs. We train and validate the method using a sample of realistic SFHs at z = 1 drawn from stochastic realizations, semi-analytic models, and a cosmological hydrodynamical galaxy formation simulation. The method is then applied to a sample of 1100 CANDELS GOODS-S galaxies at 1 < z < 1.5 to illustrate its capabilities at moderate S/N with 15 photometric bands. Of the six parametrizations of SFHs considered, we adopt linear-exponential, bessel-exponential, log-normal, and Gaussian SFHs, and reject the traditional parametrizations of constant (top-hat) and exponential SFHs. We quantify the bias and scatter of each parametrization. 15% of galaxies in our CANDELS sample exhibit multiple episodes of star formation, with this fraction decreasing above M* > 10^9.5 M⊙. About 40% of the CANDELS galaxies have SFHs whose maximum occurs at or near the epoch of observation. The dense basis method is scalable and offers a general approach to a broad class of data-science problems.
A Probabilistic Approach to Fitting Period–luminosity Relations and Validating Gaia Parallaxes
NASA Astrophysics Data System (ADS)
Sesar, Branimir; Fouesneau, Morgan; Price-Whelan, Adrian M.; Bailer-Jones, Coryn A. L.; Gould, Andy; Rix, Hans-Walter
2017-04-01
Pulsating stars, such as Cepheids, Miras, and RR Lyrae stars, are important distance indicators and calibrators of the “cosmic distance ladder,” and yet their period–luminosity–metallicity (PLZ) relations are still constrained using simple statistical methods that cannot take full advantage of available data. To enable optimal usage of data provided by the Gaia mission, we present a probabilistic approach that simultaneously constrains parameters of PLZ relations and uncertainties in Gaia parallax measurements. We demonstrate this approach by constraining PLZ relations of type ab RR Lyrae stars in near-infrared W1 and W2 bands, using Tycho-Gaia Astrometric Solution (TGAS) parallax measurements for a sample of ≈100 type ab RR Lyrae stars located within 2.5 kpc of the Sun. The fitted PLZ relations are consistent with previous studies, and in combination with other data, deliver distances precise to 6% (once various sources of uncertainty are taken into account). To a precision of 0.05 mas (1σ), we do not find a statistically significant offset in TGAS parallaxes for this sample of distant RR Lyrae stars (median parallax of 0.8 mas and distance of 1.4 kpc). With only minor modifications, our probabilistic approach can be used to constrain PLZ relations of other pulsating stars, and we intend to apply it to Cepheid and Mira stars in the near future.
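The key trick of fitting in parallax space (rather than inverting noisy parallaxes into distances) can be sketched as follows, with a hypothetical two-parameter PL relation M = a·log10(P) + b and a simple grid search standing in for the paper's full probabilistic model.

```python
import numpy as np

def predicted_parallax_mas(m_app, M_abs):
    """Parallax (mas) implied by apparent magnitude m and absolute magnitude M."""
    d_pc = 10 ** ((m_app - M_abs + 5) / 5)
    return 1000.0 / d_pc

def fit_pl_relation(log_p, m_app, plx, plx_err, a_grid, b_grid):
    """Grid-search chi-square fit of M = a*log10(P) + b in parallax space."""
    best, best_chi2 = None, np.inf
    for a in a_grid:
        for b in b_grid:
            pred = predicted_parallax_mas(m_app, a * log_p + b)
            chi2 = np.sum(((plx - pred) / plx_err) ** 2)
            if chi2 < best_chi2:
                best, best_chi2 = (a, b), chi2
    return best
```

Comparing predicted to observed parallaxes keeps the Gaussian parallax errors Gaussian, which is what breaks down when noisy parallaxes are inverted to distances.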
Lifting a veil on diversity: a Bayesian approach to fitting relative-abundance models.
Golicher, Duncan J; O'Hara, Robert B; Ruíz-Montoya, Lorena; Cayuela, Luis
2006-02-01
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution, while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation.
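A minimal Bayesian treatment of the log-series model can be sketched with a grid posterior instead of MCMC (an illustration of the approach, not the authors' hierarchical model): the Fisher log-series gives P(n | x) = -x^n / (n·ln(1-x)), and a flat prior over a grid of x values yields a normalized posterior.

```python
import numpy as np

def log_series_posterior(counts, x_grid):
    """Normalized posterior over the log-series parameter x (flat prior).

    counts: observed abundance of each species (positive integers).
    The likelihood of each x is the product of log-series probabilities
    P(n | x) = -x**n / (n * ln(1 - x)) over the observed species.
    """
    counts = np.asarray(counts, dtype=float)
    loglik = np.array([
        np.sum(counts * np.log(x) - np.log(counts) - np.log(-np.log1p(-x)))
        for x in x_grid
    ])
    post = np.exp(loglik - loglik.max())  # stabilize before normalizing
    return post / post.sum()
```

Communities dominated by singletons pull the posterior toward small x; communities with large abundances pull it toward x near 1.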
Exploring Person Fit with an Approach Based on Multilevel Logistic Regression
ERIC Educational Resources Information Center
Walker, A. Adrienne; Engelhard, George, Jr.
2015-01-01
The idea that test scores may not be valid representations of what students know, can do, and should learn next is well known. Person fit provides an important aspect of validity evidence. Person fit analyses at the individual student level are not typically conducted and person fit information is not communicated to educational stakeholders. In…
Johann, C; Garidel, P; Mennicke, L; Blume, A
1996-01-01
A simulation program using least-squares minimization was developed to calculate and fit heat capacity (cp) curves to experimental thermograms of dilute aqueous dispersions of phospholipid mixtures determined by high-sensitivity differential scanning calorimetry. We analyzed cp curves and phase diagrams of the pseudobinary aqueous lipid systems 1,2-dimyristoyl-sn-glycero-3-phosphatidylglycerol/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPG/DPPC) and 1,2-dimyristoyl-sn-glycero-3-phosphatidic acid/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPA/DPPC) at pH 7. The simulation of the cp curves is based on regular solution theory using two nonideality parameters ρ_g and ρ_l for symmetric nonideal mixing in the gel and the liquid-crystalline phases. The broadening of the cp curves owing to limited cooperativity is incorporated into the simulation by convolution of the cp curves calculated for infinite cooperativity with a broadening function derived from a simple two-state transition model, with the cooperative unit size n = ΔH_vH/ΔH_cal as an adjustable parameter. The nonideality parameters and the cooperative unit size turn out to be functions of composition. In a second step, phase diagrams were calculated and fitted to the experimental data by use of regular solution theory with four different model assumptions. The best fits were obtained with a four-parameter model based on nonsymmetric, nonideal mixing in both phases. The simulations of the phase diagrams show that the absolute values of the nonideality parameters can be changed in a certain range without large effects on the shape of the phase diagram, as long as the difference between ρ_g for the gel and ρ_l for the liquid-crystalline phase remains constant. The miscibility in DMPG/DPPC and DMPA/DPPC mixtures differs remarkably because, for DMPG/DPPC, Δρ = ρ_l - ρ_g is negative, whereas for DMPA/DPPC this difference is positive. For DMPA/DPPC, this
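The broadening step (convolving the infinite-cooperativity cp curve with a broadening function) can be sketched generically. The Gaussian kernel below is a stand-in for the two-state broadening function; a normalized kernel preserves the calorimetric enthalpy, i.e. the area under the cp curve.

```python
import numpy as np

def broaden_cp(temps, cp_sharp, width):
    """Convolve a sharp heat-capacity curve with a normalized kernel.

    A unit-area kernel leaves the integral of cp over T (the transition
    enthalpy) unchanged while spreading the transition over a wider range.
    """
    dT = temps[1] - temps[0]
    kT = np.arange(-4 * width, 4 * width + dT, dT)
    kernel = np.exp(-0.5 * (kT / width) ** 2)
    kernel /= kernel.sum()  # unit area in grid units
    return np.convolve(cp_sharp, kernel, mode="same")
```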
NASA Astrophysics Data System (ADS)
Mikhasenko, Mikhail; Jackura, Andrew; Ketzer, Bernhard; Szczepaniak, Adam
2017-03-01
We derive a unitarized model for the peripheral production of the three-pion system in the isobar approximation. The production process takes into account long-range t-channel pion exchange. The K-matrix approach is chosen for the parameterization of the scattering amplitude. Five coupled channels are used to fit the COMPASS spin-density matrices for the J^{PC}M^ε = 2^{-+}0^{+} sector. Preliminary results of the fit are presented.
More basic approach to the analysis of multiple specimen R-curves for determination of J_c
Carlson, K.W.; Williams, J.A.
1980-02-01
Multiple specimen J-R curves were developed for groups of 1T compact specimens with different a/W values and depths of side grooving. The purpose of this investigation was to determine J_c (J at the onset of crack extension) for each group. Judicious selection of the points on the load versus load-line deflection record at which to unload and heat-tint specimens permitted direct observation of the approximate onset of crack extension. It was found that the presently recommended procedure for determining J_c from multiple specimen R-curves, which is being considered for standardization, consistently yielded nonconservative J_c values. A more basic approach to analyzing multiple specimen R-curves is presented, applied, and discussed. This analysis determined J_c values that closely corresponded to the actual observed onset of crack extension.
Morphing ab initio potential energy curve of beryllium monohydride
NASA Astrophysics Data System (ADS)
Špirko, Vladimír
2016-12-01
Effective (mass-dependent) potential energy curves of the ground electronic states of 9BeH, 9BeD, and 9BeT are constructed by morphing a very accurate MR-ACPF ab initio potential of Koput (2011) within the framework of the reduced potential energy curve approach of Jenč (1983). The morphing is performed by fitting the RPC parameters to available experimental ro-vibrational data. The resulting potential energy curves provide a fairly quantitative reproduction of the fitted data. This allows for a reliable prediction of the so-far unobserved molecular states in terms of only a small number of fitting parameters.
R-Curve Approach to Describe the Fracture Resistance of Tool Steels
NASA Astrophysics Data System (ADS)
Picas, Ingrid; Casellas, Daniel; Llanes, Luis
2016-06-01
This work addresses the events involved in the fracture of tool steels, aiming to understand the effect of primary carbides, inclusions, and the metallic matrix on their effective fracture toughness and strength. Microstructurally different steels were investigated. It is found that cracks nucleate on carbides or inclusions at stress values lower than the fracture resistance. It is experimentally evidenced that such cracks exhibit an increasing growth resistance as they progressively extend, i.e., R-curve behavior. Ingot cast steels present a rising R-curve, which implies that the effective toughness developed by small cracks is lower than that determined with long artificial cracks. On the other hand, cracks grow steadily in the powder metallurgy tool steel, yielding as a result a flat R-curve. Accordingly, effective toughness for this material is mostly independent of the crack size. Thus, differences in fracture toughness values measured using short and long cracks must be considered when assessing fracture resistance of tool steels, especially when tool performance is controlled by short cracks. Hence, material selection for tools or development of new steel grades should take into consideration R-curve concepts, in order to avoid unexpected tool failures or to optimize microstructural design of tool steels, respectively.
A Model-Based Approach to Goodness-of-Fit Evaluation in Item Response Theory
ERIC Educational Resources Information Center
Oberski, Daniel L.; Vermunt, Jeroen K.
2013-01-01
These authors congratulate Albert Maydeu-Olivares on his lucid and timely overview of goodness-of-fit assessment in IRT models, a field to which he himself has contributed considerably in the form of limited information statistics. In this commentary, Oberski and Vermunt focus on two aspects of model fit: (1) what causes there may be of misfit;…
Approach-Avoidance Motivational Profiles in Early Adolescents to the PACER Fitness Test
ERIC Educational Resources Information Center
Garn, Alex; Sun, Haichun
2009-01-01
The use of fitness testing is a practical means for measuring components of health-related fitness, but there is currently substantial debate over the motivating effects of these tests. Therefore, the purpose of this study was to examine the cross-fertilization of achievement and friendship goal profiles for early adolescents involved in the…
ERIC Educational Resources Information Center
Pargament, Kenneth I.; Sweeney, Patrick J.
2011-01-01
This article describes the development of the spiritual fitness component of the Army's Comprehensive Soldier Fitness (CSF) program. Spirituality is defined in the human sense as the journey people take to discover and realize their essential selves and higher order aspirations. Several theoretically and empirically based reasons are articulated…
Rather, Manzoor A; Bhat, Bilal A; Qurishi, Mushtaq A
2013-12-15
Natural product based drugs constitute a substantial proportion of the pharmaceutical market, particularly in the therapeutic areas of infectious diseases and oncology. The primary focus of any drug development program so far has been to design selective ligands (drugs) that act on single selective disease targets to obtain highly efficacious and safe drugs with minimal side effects. Although this approach has been successful for many diseases, there has been a significant decline in the number of new drug candidates being introduced into clinical practice over the past few decades. This serious innovation deficit that the pharmaceutical industries are facing is due primarily to the post-marketing failures of blockbuster drugs. Many analysts believe that the current capital-intensive "one drug to fit all" model will be unsustainable in the future and that a new "less investment, more drugs" model is necessary for further scientific growth. It is now well established that many diseases are multi-factorial in nature and that cellular pathways operate more like webs than highways. There are often multiple ways or alternate routes that may be switched on in response to the inhibition of a specific target. This gives rise to resistant cells or resistant organisms under the selective pressure of a targeted agent, resulting in drug resistance and clinical failure of the drug. Drugs designed to act against individual molecular targets cannot usually combat multifactorial diseases like cancer, or diseases that affect multiple tissues or cell types such as diabetes and immunoinflammatory diseases. Combination drugs that affect multiple targets simultaneously are better at controlling complex disease systems and are less prone to drug resistance. This multicomponent therapy forms the basis of phytotherapy or phytomedicine, where the holistic therapeutic effect arises as a result of complex positive (synergistic) or negative (antagonistic) interactions between
BayeSED: A GENERAL APPROACH TO FITTING THE SPECTRAL ENERGY DISTRIBUTION OF GALAXIES
Han, Yunkun; Han, Zhanwen E-mail: zhanwenhan@ynao.ac.cn
2014-11-01
We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual and Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
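Bayesian model comparison of the kind performed here reduces to comparing evidences Z = ∫ P(D|θ)P(θ)dθ between models. A toy sketch (a uniform prior grid average, not BayeSED's sampling machinery): the model whose parameter space contains a good fit to the data accumulates higher evidence.

```python
import numpy as np

def grid_evidence(x, y, sigma, model, param_grid):
    """Approximate the Bayesian evidence of `model` by averaging its
    Gaussian likelihood over a uniform prior grid of parameter tuples."""
    liks = [np.exp(-0.5 * np.sum(((y - model(x, *p)) / sigma) ** 2))
            for p in param_grid]
    return np.mean(liks)
```

Averaging (rather than maximizing) over the prior grid is what builds in the Occam penalty: a model with many wasted parameter combinations dilutes its evidence.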
3D Modeling of Spectra and Light Curves of Hot Jupiters with PHOENIX; a First Approach
NASA Astrophysics Data System (ADS)
Jiménez-Torres, J. J.
2016-04-01
A detailed global circulation model was used to feed the PHOENIX code and calculate 3D spectra and light curves of hot Jupiters. Cloud free and dusty radiative fluxes for the planet HD179949b were modeled to show differences between them. The PHOENIX simulations can explain the broad features of the observed 8 μm light curves, including the fact that the planet-star flux ratio peaks before the secondary eclipse. The PHOENIX reflection spectrum matches the Spitzer secondary-eclipse depth at 3.6 μm and underpredicts eclipse depths at 4.5, 5.8 and 8.0 μm. These discrepancies result from the chemical composition and suggest the incorporation of different metallicities in future studies.
Schumacher, Jonathan A; Scott Reading, N; Szankasi, Philippe; Matynia, Anna P; Kelley, Todd W
2015-08-01
Acute myeloid leukemia patients with recurrent cytogenetic abnormalities including inv(16);CBFB-MYH11 and t(15;17);PML-RARA may be assessed by monitoring the levels of the corresponding abnormal fusion transcripts by quantitative reverse transcription-PCR (qRT-PCR). Such testing is important for evaluating the response to therapy and for the detection of early relapse. Existing qRT-PCR methods are well established and in widespread use in clinical laboratories but they are laborious and require the generation of standard curves. Here, we describe a new method to quantitate fusion transcripts in acute myeloid leukemia by qRT-PCR without the need for standard curves. Our approach uses a plasmid calibrator containing both a fusion transcript sequence and a reference gene sequence, representing a perfect normalized copy number (fusion transcript copy number/reference gene transcript copy number; NCN) of 1.0. The NCN of patient specimens can be calculated relative to that of the single plasmid calibrator using experimentally derived PCR efficiency values. We compared the data obtained using the plasmid calibrator method to commercially available assays using standard curves and found that the results obtained by both methods are comparable over a broad range of values with similar sensitivities. Our method has the advantage of simplicity and is therefore lower in cost and may be less subject to errors that may be introduced during the generation of standard curves.
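The single-calibrator calculation described above reduces to a short efficiency-corrected ratio. The sketch below is a minimal illustration, not the authors' implementation: the function name and the delta-Cq formulation are assumptions, with amplification efficiencies supplied as experimentally derived per-cycle factors.

```python
def normalized_copy_number(cq_fusion, cq_ref, cq_fusion_cal, cq_ref_cal,
                           eff_fusion=2.0, eff_ref=2.0):
    """Normalized copy number (fusion transcript copies / reference gene
    transcript copies) of a patient specimen, expressed relative to a
    single plasmid calibrator whose NCN is by construction 1.0.

    Efficiency is the per-cycle amplification factor (2.0 = perfect
    doubling); experimentally derived values are typically 1.8-2.0.
    All names here are illustrative, not from the paper.
    """
    fusion_ratio = eff_fusion ** (cq_fusion_cal - cq_fusion)
    ref_ratio = eff_ref ** (cq_ref_cal - cq_ref)
    return fusion_ratio / ref_ratio
```

For example, a specimen whose fusion target crosses threshold 3.32 cycles later than the calibrator at perfect efficiency, with the reference gene unchanged, carries roughly a tenth of the calibrator's normalized copy number.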
Meites, L; Barry, D M
1973-11-01
A new technique for distinguishing diacidic from monoacidic weak bases (or dibasic from monobasic weak acids) is based on fitting the data obtained in a potentiometric acid-base titration to theoretical equations for the titration of a monoacidic base (or monobasic acid). If the substance titrated is not monofunctional the best fit to these equations will involve systematic deviations that, when plotted against the volume of reagent, yield a "deviation pattern" with a shape characteristic of polyfunctional behaviour. Ancillary criteria based on the values of the parameters obtained from the fit are also described. There is a range of uncertainty associated with each of these criteria in which the ratios of successive dissociation constants are so close to the statistical values that it is impossible in the face of the errors of measurement to decide whether the substance is monofunctional or polyfunctional. If the data from one titration prove to lie within that range, the decision may be based on the results of a second titration performed at a different ionic strength. Further fitting to the equations describing more complex behaviour provides a basis for distinguishing non-statistical difunctional substances from trifunctional ones, trifunctional ones from tetrafunctional ones, and so on.
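The deviation-pattern idea lends itself to a simple numerical check. The sketch below is a generic illustration under assumed names, not the authors' procedure: residuals of the monofunctional fit are ordered by reagent volume and the number of sign runs is counted, since a few long runs signal a systematic pattern while frequent alternation is consistent with random error.

```python
import numpy as np

def deviation_pattern(volumes, observed, fitted):
    """Residuals of a monofunctional-model fit ordered by reagent volume,
    plus the number of sign runs: few long runs indicate systematic
    deviations (polyfunctional behaviour), many short runs random error."""
    order = np.argsort(volumes)
    residuals = (np.asarray(observed, float) - np.asarray(fitted, float))[order]
    signs = np.sign(residuals)
    signs = signs[signs != 0]                     # ignore exact zeros
    runs = 1 + int(np.count_nonzero(np.diff(signs)))
    return residuals, runs
```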
NASA Astrophysics Data System (ADS)
Menafoglio, A.; Guadagnini, A.; Secchi, P.
2016-08-01
We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curves and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) effectively deal with the data dimensionality and constraints and (b) develop a simulation method for PSCs based upon a suitable and well-defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.
NASA Astrophysics Data System (ADS)
Chavanis, Pierre-Henri; Matos, Tonatiuh
2017-01-01
We develop a hydrodynamic representation of the Klein-Gordon-Maxwell-Einstein equations. These equations combine quantum mechanics, electromagnetism, and general relativity. We consider the case of an arbitrary curved spacetime, the case of weak gravitational fields in a static or expanding background, and the nonrelativistic (Newtonian) limit. The Klein-Gordon-Maxwell-Einstein equations govern the evolution of a complex scalar field, possibly describing self-gravitating Bose-Einstein condensates, coupled to an electromagnetic field. They may find applications in the context of dark matter, boson stars, and neutron stars with a superfluid core.
SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH
Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook
2012-04-10
The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (~90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
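The FM forecast itself is a few lines of linear algebra. The following is a hedged sketch, not the paper's pipeline: the function name and finite-difference step are assumptions, and a real SED model would replace the toy callable. It builds F_ij = sum_k (df_k/dtheta_i)(df_k/dtheta_j)/sigma_k^2 from numerical derivatives of the model fluxes and reads the predicted 1-sigma uncertainties off the square roots of the diagonal of the inverse.

```python
import numpy as np

def fisher_uncertainties(model, theta, sigma, eps=1e-5):
    """Forecast 1-sigma parameter uncertainties via the Fisher matrix
    F_ij = sum_k (df_k/dtheta_i)(df_k/dtheta_j) / sigma_k**2.

    model : callable mapping a parameter vector to predicted fluxes
    theta : fiducial parameter values
    sigma : per-band flux uncertainties (Gaussian likelihood assumed)
    """
    theta = np.asarray(theta, float)
    n_bands = len(model(theta))
    # central finite-difference derivatives of the fluxes w.r.t. each parameter
    jac = np.empty((n_bands, len(theta)))
    for i in range(len(theta)):
        step = np.zeros_like(theta)
        step[i] = eps
        jac[:, i] = (model(theta + step) - model(theta - step)) / (2 * eps)
    fisher = jac.T @ (jac / np.asarray(sigma, float)[:, None] ** 2)
    return np.sqrt(np.diag(np.linalg.inv(fisher)))
```

For a one-parameter linear model the result matches the textbook value 1/sqrt(sum(x_k^2/sigma_k^2)), which is a quick sanity check on any implementation.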
Yap, John Stephen; Wang, Chenguang; Wu, Rongling
2007-01-01
Whether and how thermal reaction norm is under genetic control is fundamental to understanding the mechanistic basis of adaptation to novel thermal environments. However, the genetic study of thermal reaction norm is difficult because it is often expressed as a continuous function or curve. Here we derive a statistical model for dissecting thermal performance curves into individual quantitative trait loci (QTL) with the aid of a genetic linkage map. The model is constructed within the maximum likelihood context and implemented with the EM algorithm. It integrates the biological principle of responses to temperature into a framework for genetic mapping through rigorous mathematical functions established to describe the pattern and shape of thermal reaction norms. The biological advantages of the model lie in the decomposition of the genetic causes for thermal reaction norm into its biologically interpretable modes, such as hotter-colder, faster-slower and generalist-specialist, as well as the formulation of a series of hypotheses at the interface between genetic actions/interactions and temperature-dependent sensitivity. The model is also meritorious in statistics because the precision of parameter estimation and power of QTL detection can be increased by modeling the mean-covariance structure with a small set of parameters. The results from simulation studies suggest that the model displays favorable statistical properties and can be robust in practical genetic applications. The model provides a conceptual platform for testing many ecologically relevant hypotheses regarding organismic adaptation within the Eco-Devo paradigm. PMID:17579725
Srivastava, Aneesh; Sureka, Sanjoy Kumar; Vashishtha, Saurabh; Agarwal, Shikhar; Ansari, Md Saleh; Kumar, Manoj
2016-01-01
CONTEXT: The retroperitoneoscopic or retroperitoneal (RP) surgical approach has not become as popular as the transperitoneal (TP) one due to the steeper learning curve. AIMS: Our single-institution experience focuses on the feasibility, advantages and complications of retroperitoneoscopic surgeries (RS) performed over the past 10 years. Tips and tricks have been discussed to overcome the steep learning curve and these are emphasised. SETTINGS AND DESIGN: This study made a retrospective analysis of computerised hospital data of patients who underwent RP urological procedures from 2003 to 2013 at a tertiary care centre. PATIENTS AND METHODS: Between 2003 and 2013, 314 cases of RS were performed for various urological procedures. We analysed the operative time, peri-operative complications, time to return of bowel sound, length of hospital stay, and advantages and difficulties involved. Post-operative complications were stratified into five grades using modified Clavien classification (MCC). RESULTS: RS were successfully completed in 95.5% of patients, with 4% of the procedures electively performed by the combined approach (both RP and TP); 3.2% required open conversion and 1.3% were converted to the TP approach. The most common cause for conversion was bleeding. Mean hospital stay was 3.2 ± 1.2 days and the mean time for returning of bowel sounds was 16.5 ± 5.4 h. Of the patients, 1.4% required peri-operative blood transfusion. A total of 16 patients (5%) had post-operative complications and the majority were grades I and II as per MCC. The rates of intra-operative and post-operative complications depended on the difficulty of the procedure, but the complications diminished over the years with the increasing experience of surgeons. CONCLUSION: Retroperitoneoscopy has proven an excellent approach, with certain advantages. The tips and tricks that have been provided and emphasised should definitely help to minimise the steep learning curve. PMID:27073300
Gabrieli, Andrea; Sant, Marco; Demontis, Pierfranco; Suffritti, Giuseppe B
2015-08-11
Two major improvements to the state-of-the-art Repeating Electrostatic Potential Extracted Atomic (REPEAT) method, for generating accurate partial charges for molecular simulations of periodic structures, are here developed. The first, D-REPEAT, consists of the simultaneous fit of the electrostatic potential (ESP) together with the total dipole fluctuations (TDF) of the framework. The second, M-REPEAT, allows the fit of multiple ESP configurations at once. When both techniques are fused into a single DM-REPEAT method, the resulting charges become remarkably stable over a large set of fitting regions, giving a robust and physically sound solution to the buried-atoms problem. The method's capabilities are extensively studied in the ZIF-8 framework, and subsequently applied to the IRMOF-1 and ITQ-29 crystal structures. To our knowledge, this is the first time that this approach has been proposed in the context of periodic systems.
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account.
Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach
Hawkley, Louise C.; Capitanio, John P.
2015-01-01
Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationship—as indexed by perceived social isolation (i.e. loneliness)—with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic pituitary–adrenocortical activity, a pro-inflammatory gene expression profile and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and its physiological and psychological underpinnings, in non-human species may be a valuable direction to better understand the persistence of a ‘lonely’ phenotype in social species, and its consequences for health and fitness. PMID:25870400
Ferreiro-González, Marta; Barbero, Gerardo F; Álvarez, José A; Ruiz, Antonio; Palma, Miguel; Ayuso, Jesús
2017-04-01
Adulteration of olive oil is not only a major economic fraud but can also have major health implications for consumers. In this study, a combination of visible spectroscopy with a novel multivariate curve resolution method (CR), principal component analysis (PCA) and linear discriminant analysis (LDA) is proposed for the authentication of virgin olive oil (VOO) samples. VOOs are well-known products with the typical properties of a two-component system due to the two main groups of compounds that contribute to the visible spectra (chlorophylls and carotenoids). Application of the proposed CR method to VOO samples provided the two pure-component spectra for the aforementioned families of compounds. A correlation study of the real spectra and the resolved component spectra was carried out for different types of oil samples (n=118). LDA using the correlation coefficients as variables to discriminate samples allowed the authentication of 95% of virgin olive oil samples.
Factors influencing community health centers' efficiency: a latent growth curve modeling approach.
Marathe, Shriram; Wan, Thomas T H; Zhang, Jackie; Sherin, Kevin
2007-10-01
The objective of this study is to examine factors affecting the variation in technical and cost efficiency of community health centers (CHCs). A panel study design was formulated to examine the relationships among the contextual, organizational structural, and performance variables. Data Envelopment Analysis (DEA) of technical efficiency and latent growth curve modeling of multi-wave technical and cost efficiency were performed. Regardless of the efficiency measure, CHC efficiency was influenced more by contextual factors than organizational structural factors. The study confirms the independent and additive influences of contextual and organizational predictors on efficiency. The change in CHC technical efficiency positively affects the change in CHC cost efficiency. The practical implication of this finding is that healthcare managers can simultaneously optimize both technical and cost efficiency through appropriate use of inputs to generate optimal outputs. An innovative solution is to employ decision support software to prepare an expert system to assist poorly performing CHCs to achieve better cost efficiency through optimizing technical efficiency.
Searching events in AFM force-extension curves: A wavelet approach.
Benítez, R; Bolós, V J
2017-01-01
An algorithm, based on the wavelet scalogram energy, for automatically detecting events in force-extension AFM force spectroscopy experiments is introduced. The events to be detected are characterized by a discontinuity in the signal. It is shown how the wavelet scalogram energy has different decay rates at different points depending on the degree of regularity of the signal, showing faster decay rates at regular points and slower rates at singular points (jumps). It is shown that these differences produce peaks in the scalogram energy plot at the event points. Finally, the algorithm is illustrated in a tether analysis experiment by using it for the detection of events in the AFM force-extension curves susceptible to being considered tethers. Microsc. Res. Tech. 80:153-159, 2017. © 2016 Wiley Periodicals, Inc.
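The scalogram-energy idea can be demonstrated with a numpy-only sketch. This is a simplified stand-in for the authors' algorithm, with Haar wavelets and the set of scales chosen as assumptions: wavelet coefficients decay quickly with scale at regular points but slowly at jumps, so summing squared coefficients over several scales produces a peak at each discontinuity.

```python
import numpy as np

def haar_coefficients(x, scale):
    """Haar wavelet coefficients at one integer scale: the difference
    between the means of the right and left half-windows, normalized so
    coefficient magnitudes are comparable across scales."""
    kernel = np.concatenate([-np.ones(scale), np.ones(scale)]) / np.sqrt(2 * scale)
    return np.convolve(x, kernel[::-1], mode="same")

def scalogram_energy(x, scales=(2, 4, 8, 16)):
    """Sum of squared wavelet coefficients over scales; the slow decay of
    coefficients at a singular point (a jump) makes the energy peak there."""
    return sum(haar_coefficients(x, s) ** 2 for s in scales)
```

On a force-extension trace, local maxima of this energy above a threshold would be reported as candidate events, such as tether detachments.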
Boerrigter-Eenling, Rita; Alewijn, Martin; Weesepoel, Yannick; van Ruth, Saskia
2017-04-01
Fresh/chilled chicken breasts retail at a higher price than their frozen/thawed counterparts. The fresh/thawed status of chicken meat is verified by spectrophotometrically measuring β-hydroxyacyl-Coenzyme A-hydrogenase (HADH) activity present in meat intra-cellular liquids. However, considerable numbers of reference samples are required for the current arithmetic method, adding to laboratory costs. Therefore, two alternative mathematical approaches which do not require such reference samples were developed and evaluated: curve fitting and multivariate classification. The approaches were developed using 55 fresh/thawed fillet samples. The performance of the methods was examined on an independent validation set which consisted of 16 samples. Finally, the approaches were tested in practice in a market study. With the exception of two minor false classifications, both newly proposed methods performed as well as the classical method. All three methods were able to identify two apparent fraudulent cases in the market study. The experiments therefore showed that the costs of HADH measurements can be reduced by adopting these alternative mathematical approaches.
Health and Fitness Courses in Higher Education: A Historical Perspective and Contemporary Approach
ERIC Educational Resources Information Center
Bjerke, Wendy
2013-01-01
The prevalence of obesity among 18- to 24-year-olds has steadily increased. Given that the majority of young American adults are enrolled in colleges and universities, the higher education setting could be an appropriate environment for health promotion programs. Historically, health and fitness in higher education have been provided via…
Lu, Yehu; Song, Guowen; Li, Jun
2014-11-01
Garment fit plays an important role in protective performance, comfort and mobility. The purpose of this study is to quantify the air gap in order to characterize three-dimensional (3-D) garment fit using a 3-D body scanning technique. A method for processing the scanned data was developed to investigate the air gap size and distribution between the clothing and the human body. The mesh models formed from the nude and clothed body were aligned, superimposed and sectioned using Rapidform software. The air gap size and distribution over the body surface were analyzed, and the total air volume was also calculated. The effects of fabric properties and garment size on air gap distribution were explored. The results indicated that the average air gap of fitted clothing was around 25-30 mm and the overall air gap distribution was similar. The air gap was unevenly distributed over the body and was strongly associated with body part, fabric properties and garment size. The research will help in understanding overall clothing fit and its association with protection, thermal and movement comfort, and provide guidelines for clothing engineers to improve thermal performance and reduce physiological burden.
Pre-Service Music Teachers' Satisfaction: Person-Environment Fit Approach
ERIC Educational Resources Information Center
Perkmen, Serkan; Cevik, Beste; Alkan, Mahir
2012-01-01
Guided by three theoretical frameworks in vocational psychology, (i) theory of work adjustment, (ii) two factor theory, and (iii) value discrepancy theory, the purpose of this study was to investigate Turkish pre-service music teachers' values and the role of fit between person and environment in understanding vocational satisfaction. Participants…
On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis
ERIC Educational Resources Information Center
Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas
2011-01-01
The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…
ERIC Educational Resources Information Center
Beheshti, Behzad; Desmarais, Michel C.
2015-01-01
This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…
ERIC Educational Resources Information Center
Pellicer-Chenoll, Maite; Garcia-Massó, Xavier; Morales, Jose; Serra-Añó, Pilar; Solana-Tramunt, Mònica; González, Luis-Millán; Toca-Herrera, José-Luis
2015-01-01
The relationship among physical activity, physical fitness and academic achievement in adolescents has been widely studied; however, controversy concerning this topic persists. The methods used thus far to analyse the relationship between these variables have included mostly traditional lineal analysis according to the available literature. The…
Hydrothermal germination models: comparison of two data-fitting approaches with probit optimization
Technology Transfer Automated Retrieval System (TEKTRAN)
Probit models for estimating hydrothermal germination rate yield model parameters that have been associated with specific physiological processes. The desirability of linking germination response to seed physiology must be weighed against expectations of model fit and the relative accuracy of predi...
Pellicer-Chenoll, Maite; Garcia-Massó, Xavier; Morales, Jose; Serra-Añó, Pilar; Solana-Tramunt, Mònica; González, Luis-Millán; Toca-Herrera, José-Luis
2015-06-01
The relationship among physical activity, physical fitness and academic achievement in adolescents has been widely studied; however, controversy concerning this topic persists. The methods used thus far to analyse the relationship between these variables have included mostly traditional lineal analysis according to the available literature. The aim of this study was to perform a visual analysis of this relationship with self-organizing maps and to monitor the subject's evolution during the 4 years of secondary school. Four hundred and forty-four students participated in the study. The physical activity and physical fitness of the participants were measured, and the participants' grade point averages were obtained from the five participant institutions. Four main clusters representing two primary student profiles with few differences between boys and girls were observed. The clustering demonstrated that students with higher energy expenditure and better physical fitness exhibited lower body mass index (BMI) and higher academic performance, whereas those adolescents with lower energy expenditure exhibited worse physical fitness, higher BMI and lower academic performance. With respect to the evolution of the students during the 4 years, ∼25% of the students originally clustered in a negative profile moved to a positive profile, and there was no movement in the opposite direction.
[Methodic approaches to determining the level of occupational fitness in jobs prone to trauma].
Iushkova, O I; Matiukhin, V V; Poroshenko, A S; Iampol'skaia, E G
2006-01-01
The article deals with main methods to evaluate functional state of various body systems and with the methods-based criteria of occupational fitness for miscellaneous activities. Examination of 11 types of jobs prone to trauma helped to specify integral parameter for occupational selection.
Computerized detection of retina blood vessel using a piecewise line fitting approach
NASA Astrophysics Data System (ADS)
Gu, Suicheng; Zhen, Yi; Wang, Ningli; Pu, Jiantao
2013-03-01
Retina vessels are important landmarks in fundus images; an accurate segmentation of the vessels may be useful for automated screening for several eye diseases or systemic diseases, such as diabetes. A new method is presented for automated segmentation of blood vessels in two-dimensional color fundus images. First, a coherence filter followed by a mean filter is applied to the green channel of the image. The green channel is selected because the vessels have maximal contrast there. The coherence filter enhances the line strength of the original image, and the mean filter discards the intensity variance among different regions. Since the vessels are darker than the surrounding tissue depicted in the image, pixels with small intensity are then retained as points of interest (POI). A new line-fitting algorithm is proposed to identify line-like structures in each local circle of the POI. The proposed line-fitting method is less sensitive to noise than least-squares fitting. The fitted lines with higher scores are regarded as vessels. To evaluate the performance of the proposed method, the publicly available DRIVE database with 20 test images was selected for experiments. The mean accuracy on these images is 95.7%, which is comparable to the state of the art.
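As a hedged sketch of the pre-processing stage described above (the function name, window size, and darkness offset are assumptions, and the coherence filter and the line-fitting score are omitted), the candidate-pixel step can be written as:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def points_of_interest(rgb, window=15, offset=5.0):
    """Candidate vessel pixels in a color fundus image: work on the green
    channel (where vessels have maximal contrast) and keep pixels darker
    than the local mean by at least `offset`, since vessels are darker
    than the surrounding tissue. Parameter values are illustrative."""
    green = rgb[..., 1].astype(float)
    local_mean = uniform_filter(green, size=window)  # mean filter
    return green < local_mean - offset
```

Each retained pixel's neighbourhood would then be scored by the noise-tolerant line-fitting step, keeping high-scoring line-like structures as vessels.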
Robust statistical approaches for local planar surface fitting in 3D laser scanning data
NASA Astrophysics Data System (ADS)
Nurunnabi, Abdul; Belton, David; West, Geoff
2014-10-01
This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. Searching through the literature revealed that many authors frequently used Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals), necessary for many point cloud processing tasks. Consider one example data set consisting of 100 points with 20% outliers representing a plane. The proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (angle between the fitted planes with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average for fitting a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC and the two other robust statistical methods, respectively. The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature
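The bias-angle metric quoted above is easy to reproduce for the classical, outlier-sensitive PCA fit. The sketch below uses assumed function names, and the robust DetRD-PCA/DetRPCA estimators themselves are beyond a few lines: the plane normal is taken as the right singular vector of the centered point cloud with the smallest singular value.

```python
import numpy as np

def pca_plane_normal(points):
    """Classical PCA plane fit: the normal is the right singular vector of
    the centered (n x 3) point cloud with the smallest singular value."""
    centered = np.asarray(points, float) - np.mean(points, axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def bias_angle_deg(n1, n2):
    """Angle in degrees between two plane normals (sign-insensitive), as
    used to compare fits with and without outliers."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))
```

On clean data PCA recovers the plane exactly, while a single gross outlier already tilts the fitted normal noticeably; the robust estimators are designed to bound that tilt.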
Effect of motion cues during complex curved approach and landing tasks: A piloted simulation study
NASA Technical Reports Server (NTRS)
Scanlon, Charles H.
1987-01-01
A piloted simulation study was conducted to examine the effect of motion cues using a high fidelity simulation of commercial aircraft during the performance of complex approach and landing tasks in the Microwave Landing System (MLS) signal environment. The data from these tests indicate that in a high complexity MLS approach task with moderate turbulence and wind, the pilot uses motion cues to improve path tracking performance. No significant differences in tracking accuracy were noted for the low and medium complexity tasks, regardless of the presence of motion cues. Higher control input rates were measured for all tasks when motion was used. Pilot eye scan, as measured by instrument dwell time, was faster when motion cues were used regardless of the complexity of the approach tasks. Pilot comments indicated a preference for motion. With motion cues, pilots appeared to work harder in all levels of task complexity and to improve tracking performance in the most complex approach task.
An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.
Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard
2016-09-01
Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially, that is, time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUClogd and AUCord, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUClogd and AUCord. Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting.
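The trapezoid sum and the two delay transforms described above can be sketched in a few lines. The normalization details and the example delays/indifference points below are illustrative assumptions, not the authors' exact procedure.

```python
import math

def auc(delays, indiff, amount, transform=None):
    """Normalized area under the discounting curve via trapezoids.

    transform=None approximates the conventional AUC on raw delays;
    'log10' and 'ordinal' are the two corrections described above.
    Delays are assumed positive and strictly increasing.
    """
    if transform == "log10":
        x = [math.log10(d) for d in delays]
    elif transform == "ordinal":
        x = list(range(1, len(delays) + 1))
    else:
        x = list(delays)
    x = [(v - x[0]) / (x[-1] - x[0]) for v in x]  # delays scaled to [0, 1]
    y = [p / amount for p in indiff]              # indifference points scaled
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(x) - 1))

# Pseudoexponential delays (days) and indifference points for a $100 reward.
delays = [1, 7, 30, 180, 365]
indiff = [95, 85, 60, 30, 15]
auc_raw = auc(delays, indiff, 100)
auc_log = auc(delays, indiff, 100, "log10")
auc_ord = auc(delays, indiff, 100, "ordinal")
```

On this toy profile the raw index is pulled down by the long-delay points (auc_raw < auc_ord), which is precisely the imbalance the transforms correct.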
NASA Astrophysics Data System (ADS)
Zhou, Yuhong
Light detection and ranging (LiDAR) waveform data have been increasingly available for performing land cover classification. To utilize waveforms, numerous studies have focused on either discretizing waveforms into multiple returns or extracting metrics from waveforms to characterize their shapes. The direct use of the waveform curve itself, which contains more comprehensive and accurate information on the vertical structure of ground features, has been scarcely investigated. The first objective of this study was to utilize the complete waveform curve directly to differentiate among objects having distinct vertical structures using different curve matching approaches. Six curve matching approaches were developed, including curve root sum squared differential area (CRSSDA), curve angle mapper (CAM), Kolmogorov-Smirnov (KS) distance, Kullback-Leibler (KL) divergence, cumulative curve root sum squared differential area (CCRSSDA), and cumulative curve angle mapper (CCAM), to quantify the similarity between two full-waveforms. To evaluate the performances of curve matching approaches, a widely adopted metrics-based method was also implemented. The second objective of this study was to further incorporate spectral information from hyperspatial resolution imagery with waveforms from LiDAR at the object level to achieve a more detailed land cover classification using the same curve matching approaches. To fuse LiDAR waveform and image objects, object-level pseudo-waveforms were first synthesized using discrete-return LiDAR data and then fused with the object-level spectral histograms from hyperspatial resolution WorldView-2 imagery to classify image objects using one of the curve matching approaches. Results showed that the use of the full-waveform curve to discriminate between objects with distinct vertical structures over level terrain provided an alternative to existing metrics-based methods using a limited number of parameters derived from the waveforms. By taking the
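Two of the named similarity measures, the KS distance and the KL divergence, can be sketched directly on amplitude-normalized waveforms. The toy "canopy" and "ground" return pulses below are invented for illustration, not taken from the study.

```python
import numpy as np

def ks_distance(w1, w2):
    """Kolmogorov-Smirnov distance between two waveforms: the maximum gap
    between their cumulative, amplitude-normalized curves."""
    c1 = np.cumsum(w1) / np.sum(w1)
    c2 = np.cumsum(w2) / np.sum(w2)
    return float(np.max(np.abs(c1 - c2)))

def kl_divergence(w1, w2, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) of the normalized waveforms;
    eps avoids log(0) on empty bins."""
    p = w1 / np.sum(w1) + eps
    q = w2 / np.sum(w2) + eps
    return float(np.sum(p * np.log(p / q)))

# Toy return waveforms over 10 height bins: energy concentrated high up
# ("canopy") versus near the surface ("ground").
bins = np.arange(10.0)
canopy = np.exp(-0.5 * ((bins - 7.0) / 1.5) ** 2)
ground = np.exp(-0.5 * ((bins - 1.0) / 1.0) ** 2)
```

Distinct vertical structures yield large values of both measures, while a waveform compared with itself scores zero, which is what makes them usable as classification distances.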
Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André
2015-07-07
The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
AGNfitter: A Bayesian MCMC Approach to Fitting Spectral Energy Distributions of AGNs
NASA Astrophysics Data System (ADS)
Calistro Rivera, Gabriela; Lusso, Elisabeta; Hennawi, Joseph F.; Hogg, David W.
2016-12-01
We present AGNfitter, a publicly available open-source algorithm implementing a fully Bayesian Markov Chain Monte Carlo method to fit the spectral energy distributions (SEDs) of active galactic nuclei (AGNs) from the sub-millimeter to the UV, allowing one to robustly disentangle the physical processes responsible for their emission. AGNfitter makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the nuclear and host galaxy emission simultaneously. The model consists of four physical emission components: an accretion disk, a torus of AGN heated dust, stellar populations, and cold dust in star-forming regions. AGNfitter determines the posterior distributions of numerous parameters that govern the physics of AGNs with a fully Bayesian treatment of errors and parameter degeneracies, allowing one to infer integrated luminosities, dust attenuation parameters, stellar masses, and star-formation rates. We tested AGNfitter’s performance on real data by fitting the SEDs of a sample of 714 X-ray selected AGNs from the XMM-COSMOS survey, spectroscopically classified as Type1 (unobscured) and Type2 (obscured) AGNs by their optical-UV emission lines. We find that two independent model parameters, namely the reddening of the accretion disk and the column density of the dusty torus, are good proxies for AGN obscuration, allowing us to develop a strategy for classifying AGNs as Type1 or Type2, based solely on an SED-fitting analysis. Our classification scheme is in excellent agreement with the spectroscopic classification, giving a completeness fraction of ~86% and ~70%, and an efficiency of ~80% and ~77%, for Type1 and Type2 AGNs, respectively.
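As a hedged illustration of the Markov Chain Monte Carlo machinery underlying such fitters (and nothing more: this toy two-parameter model does not reproduce AGNfitter's SED library or its ensemble sampler), here is a minimal random-walk Metropolis sampler recovering the parameters of a mock linear model from noisy data.

```python
import numpy as np

def log_posterior(theta, x, y, sigma):
    """Gaussian log-likelihood with a flat prior box on the two parameters."""
    a, b = theta
    if not (-10.0 < a < 10.0 and -10.0 < b < 10.0):
        return -np.inf
    resid = y - (a * x + b)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(x, y, sigma, n_steps=5000, step=0.05, seed=1):
    """Random-walk Metropolis sampler over (a, b)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    lp = log_posterior(theta, x, y, sigma)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, step, 2)
        lp_prop = log_posterior(prop, x, y, sigma)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept rule
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Mock two-parameter "SED" y = a*x + b observed with Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 1.5 * x + 0.3 + rng.normal(0.0, 0.05, 30)
chain = metropolis(x, y, 0.05)
a_hat, b_hat = chain[2500:].mean(axis=0)  # posterior means after burn-in
```

The retained chain approximates the joint posterior, so parameter degeneracies and credible intervals come out of the same run as the point estimates.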
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
Computer-aided fit testing: an approach for examining the user/equipment interface
NASA Astrophysics Data System (ADS)
Corner, Brian D.; Beecher, Robert M.; Paquette, Steven
1997-03-01
Developments in laser digitizing technology now make it possible to capture very accurate 3D images of the surface of the human body in less than 20 seconds. Applications for the images range from animation of movie characters to the design and visualization of clothing and individual equipment (CIE). In this paper we focus on modeling the user/equipment interface. Defining the relative geometry between user and equipment provides a better understanding of equipment performance, and can make the design cycle more efficient. Computer-aided fit testing (CAFT) is the application of graphical and statistical techniques to visualize and quantify the human/equipment interface in virtual space. In short, CAFT looks to measure the relative geometry between a user and his or her equipment. The design cycle changes with the introduction of CAFT: some evaluation may now be done in the CAD environment prior to prototyping. CAFT may be applied in two general ways: (1) to aid in the creation of new equipment designs and (2) to evaluate current designs for compliance with performance specifications. We demonstrate the application of CAFT with two examples. First, we show how a prototype helmet may be evaluated for fit, and second we demonstrate how CAFT may be used to measure body armor coverage.
Pargament, Kenneth I; Sweeney, Patrick J
2011-01-01
This article describes the development of the spiritual fitness component of the Army's Comprehensive Soldier Fitness (CSF) program. Spirituality is defined in the human sense as the journey people take to discover and realize their essential selves and higher order aspirations. Several theoretically and empirically based reasons are articulated for why spirituality is a necessary component of the CSF program: Human spirituality is a significant motivating force, spirituality is a vital resource for human development, and spirituality is a source of struggle that can lead to growth or decline. A conceptual model developed by Sweeney, Hannah, and Snider (2007) is used to identify several psychological structures and processes that facilitate the development of the human spirit. From this model, an educational, computer-based program has been developed to promote spiritual resilience. This program consists of three tiers: (a) building awareness of the self and the human spirit, (b) building awareness of resources to cultivate the human spirit, and (c) building awareness of the human spirit of others. Further research will be needed to evaluate the effectiveness of this innovative and potentially important program.
A learning curve approach to projecting cost and performance for photovoltaic technologies
NASA Astrophysics Data System (ADS)
Cody, George D.; Tiedje, Thomas
1997-04-01
The current cost of electricity generated by PV power is still extremely high with respect to power supplied by the utility grid, and there remain questions as to whether PV power can ever be competitive with electricity generated by fossil fuels. An objective approach to this important question was given in a previous paper by the authors which introduced analytical tools to define and project the technical/economic status of PV power from 1988 through the year 2010. In this paper, we apply these same tools to update the conclusions of our earlier study in the context of recent announcements by Amoco/Enron-Solarex of projected sales of PV power at rates significantly less than the US utility average.
Learning curve approach to projecting cost and performance for photovoltaic technologies
NASA Astrophysics Data System (ADS)
Cody, George D.; Tiedje, Thomas
1997-10-01
The current cost of electricity generated by PV power is still extremely high with respect to power supplied by the utility grid, and there remain questions as to whether PV power can ever be competitive with electricity generated by fossil fuels. An objective approach to this important question was given in a previous paper by the authors which introduced analytical tools to define and project the technical/economic status of PV power from 1988 through the year 2010. In this paper, we apply these same tools to update the conclusions of our earlier study in the context of recent announcements by Amoco/Enron-Solar of projected sales of PV power at rates significantly less than the U.S. utility average.
Inward leakage variability between respirator fit test panels - Part II. Probabilistic approach.
Liu, Yuewei; Zhuang, Ziqing; Coffey, Christopher C; Rengasamy, Samy; Niezgoda, George
2016-08-01
This study aimed to quantify the variability between different anthropometric panels in determining the inward leakage (IL) of N95 filtering facepiece respirators (FFRs) and elastomeric half-mask respirators (EHRs). We enrolled 144 experienced and non-experienced users as subjects in this study. Each subject was assigned five randomly selected FFRs and five EHRs, and performed quantitative fit tests to measure IL. Based on the NIOSH bivariate fit test panel, we randomly sampled 10,000 pairs of anthropometric 35- and 25-member panels without replacement from the 144 study subjects. For each pair of the sampled panels, a Chi-Square test was used to test the hypothesis that the passing rates for the two panels were not different. The probability of passing the IL test for each respirator was also determined from the 20,000 panels and by using binomial calculation. We also randomly sampled 500,000 panels with replacement to estimate the coefficient of variation (CV) for inter-panel variability. For both 35- and 25-member panels, the probability that passing rates were not significantly different between two randomly sampled pairs of panels was higher than 95% for all respirators. All efficient (passing rate ≥80%) and inefficient (passing rate ≤60%) respirators yielded consistent results (probability >90%) for two randomly sampled panels. Somewhat efficient respirators (passing rate between 60% and 80%) yielded inconsistent results. The passing probabilities and error rates were found to be significantly different between the simulation and binomial calculation. The CV for the 35-member panel was 16.7%, which was slightly lower than that for the 25-member panel (19.8%). Our results suggested that IL inter-panel variability exists, but is relatively small. The variability may be affected by passing level and passing rate. Facial dimension-based fit test panel stratification was also found to have significant impact on inter-panel variability, i.e., it can reduce alpha
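The per-pair comparison can be sketched as a 2x2 Pearson chi-square on two panels' pass/fail counts. The counts below are hypothetical, and 3.841 is the standard df = 1, alpha = 0.05 critical value; the study's actual procedure may differ in details such as continuity correction.

```python
def chi_square_2x2(pass1, n1, pass2, n2):
    """Pearson chi-square statistic for comparing two passing rates
    (2x2 table: pass/fail x panel), without continuity correction."""
    table = [[pass1, n1 - pass1], [pass2, n2 - pass2]]
    total = n1 + n2
    stat = 0.0
    for i in range(2):
        for j in range(2):
            row = sum(table[i])
            col = table[0][j] + table[1][j]
            expected = row * col / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Two hypothetical 35-member panels with similar passing rates (29/35 vs 27/35).
stat = chi_square_2x2(29, 35, 27, 35)
rates_not_different = stat < 3.841  # chi-square critical value, df=1, alpha=0.05
```

Repeating this over many resampled panel pairs and tallying how often the hypothesis is retained gives the >95% agreement probabilities reported above.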
A-Track: A New Approach for Detection of Moving Objects in FITS Images
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat
2016-07-01
Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, where we have made use of various scientific modules in Python to process the FITS images. We tested the code on photometric data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
AGNfitter: An MCMC Approach to Fitting SEDs of AGN and galaxies
NASA Astrophysics Data System (ADS)
Calistro Rivera, Gabriela; Lusso, Elisabeta; Hennawi, Joseph; Hogg, David W.
2016-08-01
I will present AGNfitter: a tool to robustly disentangle the physical processes responsible for the emission of active galactic nuclei (AGN). AGNfitter is the first open-source algorithm based on a Markov Chain Monte Carlo method to fit the spectral energy distributions of AGN from the sub-mm to the UV. The code makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the host galaxy and the nuclear emission simultaneously. The model consists of four physical components comprising stellar populations, cold dust distributions in star-forming regions, accretion disk, and hot dust torus emissions. AGNfitter is well suited to infer the numerous parameters that rule the physics of AGN, with a proper handling of their confidence levels through the sampling and assumption-free calculation of their posterior probability distributions. The resulting parameters are, among many others, accretion disk luminosities, dust attenuation for both galaxy and accretion disk, stellar masses, and star formation rates. We describe the relevance of this fitting machinery, the technicalities of the code, and show its capabilities in the context of unobscured and obscured AGN. The analyzed data comprise a sample of 714 X-ray selected AGN of the XMM-COSMOS survey, spectroscopically classified as Type1 and Type2 sources by their optical emission lines. The inference of independent obscuration parameters allows AGNfitter to achieve a classification strategy in close agreement with the spectroscopic classification: ~86% and ~70% for Type1 and Type2 AGNs, respectively. The variety and large number of physical properties inferred by AGNfitter has the potential of contributing to a wide scope of science cases related to studies of both active and quiescent galaxies.
NASA Astrophysics Data System (ADS)
Ruta, Sergiu; Hovorka, Ondrej; Huang, Pin-Wei; Wang, Kangkang; Ju, Ganping; Chantrell, Roy
2017-03-01
The generic problem of extracting information on intrinsic particle properties from the whole class of interacting magnetic fine particle systems is a long-standing and difficult inverse problem. As an example, the Switching Field Distribution (SFD) is an important quantity in the characterization of magnetic systems, and its determination in many technological applications, such as recording media, is especially challenging. Techniques such as the first order reversal curve (FORC) methods were developed to extract the SFD from macroscopic measurements. However, all methods rely on separating the contributions to the measurements of the intrinsic SFD and the extrinsic effects of magnetostatic and exchange interactions. We investigate the underlying physics of the FORC method by applying it to the output predictions of a kinetic Monte-Carlo model with known input parameters. We show that the FORC method is valid only in cases of weak spatial correlation of the magnetisation and suggest a more general approach.
Lloyd, Graeme T
2012-02-23
Modelling has been underdeveloped with respect to constructing palaeobiodiversity curves, but it offers an additional tool for removing sampling from their estimation. Here, an alternative to subsampling approaches, which often require large sample sizes, is explored by the extension and refinement of a pre-existing modelling technique that uses a geological proxy for sampling. Application of the model to the three main clades of dinosaurs suggests that much of their diversity fluctuations cannot be explained by sampling alone. Furthermore, there is new support for a long-term decline in their diversity leading up to the Cretaceous-Paleogene (K-Pg) extinction event. At present, use of this method with data that includes either Lagerstätten or 'Pull of the Recent' biases is inappropriate, although partial solutions are offered.
Ruta, Sergiu; Hovorka, Ondrej; Huang, Pin-Wei; Wang, Kangkang; Ju, Ganping; Chantrell, Roy
2017-01-01
The generic problem of extracting information on intrinsic particle properties from the whole class of interacting magnetic fine particle systems is a long-standing and difficult inverse problem. As an example, the Switching Field Distribution (SFD) is an important quantity in the characterization of magnetic systems, and its determination in many technological applications, such as recording media, is especially challenging. Techniques such as the first order reversal curve (FORC) methods were developed to extract the SFD from macroscopic measurements. However, all methods rely on separating the contributions to the measurements of the intrinsic SFD and the extrinsic effects of magnetostatic and exchange interactions. We investigate the underlying physics of the FORC method by applying it to the output predictions of a kinetic Monte-Carlo model with known input parameters. We show that the FORC method is valid only in cases of weak spatial correlation of the magnetisation and suggest a more general approach. PMID:28338056
Geo-fit Approach to the Analysis of Limb-Scanning Satellite Measurements.
Carlotti, M; Dinelli, B M; Raspollini, P; Ridolfi, M
2001-04-20
We propose a new approach to the analysis of limb-scanning measurements of the atmosphere that are continually recorded from an orbiting platform. The retrieval is based on the simultaneous analysis of observations taken along the whole orbit. This approach accounts for the horizontal variability of the atmosphere, hence avoiding the errors caused by the assumption of horizontal homogeneity along the line of sight of the observations. A computer program that implements the proposed approach has been designed; its performance is shown with a simulated retrieval analysis based on a satellite experiment planned to fly during 2001. This program has also been used for determining the size and the character of the errors that are associated with the assumption of horizontal homogeneity. A computational strategy that reduces the large amount of computational resources apparently demanded by the proposed inversion algorithm is described.
Reconstruction of Galaxy Star Formation Histories through SED Fitting: The Dense Basis Approach
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Gawiser, Eric J.
2017-01-01
The standard assumption of a simplified parametric form for galaxy Star Formation Histories (SFHs) during Spectral Energy Distribution (SED) fitting biases estimations of physical quantities (Stellar Mass, SFR, age) and underestimates their true uncertainties. Here, we describe the Dense Basis formalism, which uses an atlas of well-motivated basis SFHs to provide robust reconstructions of galaxy SFHs and provides estimates of previously inaccessible quantities like the number of star formation episodes in a galaxy's past. We train and validate the method using a sample of realistic SFHs at z=1 drawn from current Semi Analytic Models and Hydrodynamical simulations, as well as SFHs generated using a stochastic prescription. We then apply the method on ~1100 CANDELS galaxies at 1
Electrically detected magnetic resonance modeling and fitting: An equivalent circuit approach
Leite, D. M. G.; Batagin-Neto, A.; Nunes-Neto, O.; Gómez, J. A.; Graeff, C. F. O.
2014-01-21
The physics of electrically detected magnetic resonance (EDMR) quadrature spectra is investigated. An equivalent circuit model is proposed in order to retrieve crucial information in a variety of different situations. This model allows the discrimination and determination of spectroscopic parameters associated with distinct resonant spin lines responsible for the total signal. The model considers not just the electrical response of the sample but also features of the measuring circuit and their influence on the resulting spectral lines. As a consequence, from our model, it is possible to separate different regimes, which depend basically on the modulation frequency and the RC constant of the circuit. In what is called the high frequency regime, it is shown that the sign of the signal can be determined. Recent EDMR spectra from Alq3-based organic light emitting diodes, as well as from a-Si:H, reported in the literature were successfully fitted by the model. Accurate values of g-factor and linewidth of the resonant lines were obtained.
Imprecision Medicine: A One-Size-Fits-Many Approach for Muscle Dystrophy.
Breitbart, Astrid; Murry, Charles E
2016-04-07
There is still no curative treatment for Duchenne muscular dystrophy (DMD). In this issue of Cell Stem Cell, Young et al. (2016) demonstrate a genome editing approach applicable to 60% of DMD patients with CRISPR/Cas9 using one pair of guide RNAs.
NASA Astrophysics Data System (ADS)
Xia, Qiangwei; Wang, Tiansong; Park, Yoonsuk; Lamont, Richard J.; Hackett, Murray
2007-01-01
Differential analysis of whole cell proteomes by mass spectrometry has largely been applied using various forms of stable isotope labeling. While metabolic stable isotope labeling has been the method of choice, it is often not possible to apply such an approach. Four different label free ways of calculating expression ratios in a classic "two-state" experiment are compared: signal intensity at the peptide level, signal intensity at the protein level, spectral counting at the peptide level, and spectral counting at the protein level. The quantitative data were mined from a dataset of 1245 qualitatively identified proteins, about 56% of the protein encoding open reading frames from Porphyromonas gingivalis, a Gram-negative intracellular pathogen being studied under extracellular and intracellular conditions. Two different control populations were compared against P. gingivalis internalized within a model human target cell line. The q-value statistic, a measure of false discovery rate previously applied to transcription microarrays, was applied to proteomics data. For spectral counting, the most logically consistent estimate of random error came from applying the locally weighted scatter plot smoothing procedure (LOWESS) to the most extreme ratios generated from a control technical replicate, thus setting upper and lower bounds for the region of experimentally observed random error.
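A protein-level spectral-counting ratio of the kind compared above can be sketched as follows. The pseudocount and the example counts are illustrative assumptions, not the paper's exact procedure.

```python
import math

def log2_ratio(counts_state_a, counts_state_b, pseudo=0.5):
    """Log2 expression ratio from spectral counts of one protein in two
    states; counts are summed over its peptides (protein-level counting),
    and a small pseudocount guards against zero counts."""
    a = sum(counts_state_a) + pseudo
    b = sum(counts_state_b) + pseudo
    return math.log2(a / b)

# Hypothetical peptide-level spectral counts for one protein under
# intracellular (a) versus extracellular (b) growth conditions.
ratio = log2_ratio([4, 3, 2], [1, 1, 0])
```

Computing the same ratio at the peptide level (one ratio per peptide, then combined) versus the protein level (counts summed first, as here) is exactly the methodological contrast the study evaluates.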
ERIC Educational Resources Information Center
Marsh, Herbert W.; Hau, Kit-Tai; Wen, Zhonglin
2004-01-01
Goodness-of-fit (GOF) indexes provide "rules of thumb" (recommended cutoff values) for assessing fit in structural equation modeling. Hu and Bentler (1999) proposed a more rigorous approach to evaluating decision rules based on GOF indexes and, on this basis, proposed new and more stringent cutoff values for many indexes. This article discusses…
Differences in Adolescent Physical Fitness: A Multivariate Approach and Meta-analysis.
Schutte, Nienke M; Nederend, Ineke; Hudziak, James J; de Geus, Eco J C; Bartels, Meike
2016-03-01
Physical fitness can be defined as a set of components that determine exercise ability and influence performance in sports. This study investigates the genetic and environmental influences on individual differences in explosive leg strength (vertical jump), handgrip strength, balance, and flexibility (sit-and-reach) in 227 healthy monozygotic and dizygotic twin pairs and 38 of their singleton siblings (mean age 17.2 ± 1.2). Heritability estimates were 49% (95% CI 35-60%) for vertical jump, 59% (95% CI 46-69%) for handgrip strength, 38% (95% CI 22-52%) for balance, and 77% (95% CI 69-83%) for flexibility. In addition, a meta-analysis was performed on all twin studies in children, adolescents and young adults reporting heritability estimates for these phenotypes. Fifteen studies, including results from our own study, were meta-analyzed by computing the weighted average heritability. This showed that genetic factors explained most of the variance in vertical jump (62%; 95% CI 47-77%, N = 874), handgrip strength (63%; 95% CI 47-73%, N = 4516) and flexibility (50%; 95% CI 38-61%, N = 1130) in children and young adults. For balance this was 35% (95% CI 19-51%, N = 978). Finally, multivariate modeling showed that the phenotypic correlations between the phenotypes in current study (0.07 < r < 0.27) were mostly driven by genetic factors. It is concluded that genetic factors contribute significantly to the variance in muscle strength, flexibility and balance; factors that may play a key role in the individual differences in adolescent exercise ability and sports performance.
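The weighted average heritability used in the meta-analysis above can be sketched as a sample-size-weighted mean. The per-study numbers below are hypothetical, chosen only to mirror the scale of the reported estimates.

```python
def pooled_heritability(estimates, sample_sizes):
    """Sample-size-weighted average of per-study heritability estimates,
    a simple stand-in for the meta-analytic pooling described above."""
    total = sum(sample_sizes)
    return sum(h * n for h, n in zip(estimates, sample_sizes)) / total

# Hypothetical per-study h2 estimates for one phenotype and their Ns.
h2_pooled = pooled_heritability([0.62, 0.55, 0.70], [874, 300, 150])
```

Weighting by N means large studies dominate the pooled value, which is why the meta-analytic estimates track the biggest samples in each phenotype.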
NASA Astrophysics Data System (ADS)
Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.
2016-10-01
Using a 4D grid of ~2 million model parameters (Δz = 0.005) adapted from Cosmological Origins Survey photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov Chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384 662 objects, we show our approach samples ~40 times more efficiently compared to a 'brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.
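As a hedged illustration of the simulated-annealing ingredient only (not the ensemble MCMC scheme, and on an invented 1D surface rather than a real photo-z likelihood), here is a toy chi-square landscape with local minima that a pure downhill search could get stuck in.

```python
import math
import random

def simulated_annealing(chi2, z0, n_steps=20000, t0=5.0, step=0.1, seed=3):
    """Random-walk simulated annealing on a 1D chi-square surface: uphill
    moves are accepted with probability exp(-dchi2 / T), and the
    temperature T is cooled geometrically toward zero."""
    rng = random.Random(seed)
    z, best = z0, z0
    for i in range(n_steps):
        t = t0 * (0.999 ** i)
        cand = z + rng.gauss(0.0, step)
        d = chi2(cand) - chi2(z)
        if d < 0 or rng.random() < math.exp(-d / max(t, 1e-9)):
            z = cand
        if chi2(z) < chi2(best):
            best = z
    return best

# Toy surface: a broad quadratic bowl plus an oscillation that creates
# local minima; the global minimum sits near z ~ 1.2.
chi2 = lambda z: (z - 1.2) ** 2 * 10 + 2 * math.cos(8 * z)
z_best = simulated_annealing(chi2, z0=0.3)
```

Early high-temperature steps let the walker hop between basins; the geometric cooling then freezes it into the deepest one, which is the escape mechanism the paper combines with ensemble sampling.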
Beyond one-size-fits-all: Tailoring diversity approaches to the representation of social groups.
Apfelbaum, Evan P; Stephens, Nicole M; Reagans, Ray E
2016-10-01
When and why do organizational diversity approaches that highlight the importance of social group differences (vs. equality) help stigmatized groups succeed? We theorize that social group members' numerical representation in an organization, compared with the majority group, influences concerns about their distinctiveness, and consequently, whether diversity approaches are effective. We combine laboratory and field methods to evaluate this theory in a professional setting, in which White women are moderately represented and Black individuals are represented in very small numbers. We expect that focusing on differences (vs. equality) will lead to greater performance and persistence among White women, yet less among Black individuals. First, we demonstrate that Black individuals report greater representation-based concerns than White women (Study 1). Next, we observe that tailoring diversity approaches to these concerns yields greater performance and persistence (Studies 2 and 3). We then manipulate social groups' perceived representation and find that highlighting differences (vs. equality) is more effective when groups' representation is moderate, but less effective when groups' representation is very low (Study 4). Finally, we content-code the diversity statements of 151 major U.S. law firms and find that firms that emphasize differences have lower attrition rates among White women, whereas firms that emphasize equality have lower attrition rates among racial minorities (Study 5).
NASA Astrophysics Data System (ADS)
Staśkiewicz, B.; Okrasiński, W.
2012-04-01
We propose a simple analytical form of the vapor-liquid equilibrium curve near the critical point for Lennard-Jones fluids. Coexistence density curves and the vapor pressure have been determined using the Van der Waals and Dieterici equations of state. In the described method, Bernoulli differential equations, critical-exponent theory and a form of Maxwell's criterion are used. The presented approach has not previously been used to determine the analytical form of phase curves as is done in this Letter. Lennard-Jones fluids have been considered for the analysis. A comparison with experimental data is made. The accuracy of the method is described.
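For reference, any near-critical expansion of the Van der Waals equation of state is anchored at its critical point, which follows from the standard textbook relations Vc = 3b, Tc = 8a/(27Rb), Pc = a/(27b²). The CO2 constants below are common literature values, used here only as a numerical check.

```python
def vdw_critical_point(a, b, R=8.314):
    """Critical constants of a Van der Waals fluid,
    (P + a/V^2)(V - b) = RT, from the standard textbook relations."""
    v_c = 3.0 * b                      # critical molar volume, m^3/mol
    t_c = 8.0 * a / (27.0 * R * b)     # critical temperature, K
    p_c = a / (27.0 * b ** 2)          # critical pressure, Pa
    return p_c, v_c, t_c

# CO2: a = 0.3640 Pa m^6 mol^-2, b = 4.267e-5 m^3 mol^-1 (literature values)
p_c, v_c, t_c = vdw_critical_point(0.3640, 4.267e-5)
```

For CO2 this gives Tc near the measured 304 K and Pc near 74 bar, which is why the Van der Waals form remains a useful starting point for near-critical coexistence curves.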
Jalali-Heravi, Mehdi; Parastar, Hadi
2011-08-15
Essential oils (EOs) are valuable natural products that are popular worldwide due to their effects on human health and their role in preventing and curing diseases. In addition, EOs have a broad range of applications in foods, perfumes, cosmetics and human nutrition. Among the different techniques for the analysis of EOs, gas chromatography-mass spectrometry (GC-MS) has been the most important one in recent years. However, there are some fundamental problems in GC-MS analysis, including baseline drift, spectral background, noise, low S/N (signal-to-noise) ratio, changes in peak shapes and co-elution. Multivariate curve resolution (MCR) approaches are able to handle these problems. This review focuses on the application of MCR techniques for improving GC-MS analysis of EOs published between January 2000 and December 2010. In the first part, the importance of EOs in human life and their relevance to analytical chemistry is discussed. In the second part, an insight into some basics needed to understand the prospects and limitations of the MCR techniques is given. In the third part, the significance of combining MCR approaches with GC-MS analysis of EOs is highlighted. Furthermore, the commonly used algorithms for preprocessing, chemical rank determination, local rank analysis and multivariate resolution in the field of EO analysis are reviewed.
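A minimal sketch of the alternating-least-squares idea behind many MCR algorithms, with non-negativity imposed by clipping and a purest-variable-style initialization, on invented two-component data (this is a toy, not any specific published MCR-ALS implementation):

```python
import numpy as np

def mcr_als(D, C0, n_iter=50):
    """Minimal multivariate curve resolution by alternating least squares:
    factor the data matrix D (time x m/z channel) as D ~ C @ S.T, where C
    holds elution profiles and S spectra, clipping negatives after each
    unconstrained least-squares update."""
    C = C0.astype(float).copy()
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

# Two co-eluting Gaussian "peaks" with distinct 3-channel spectra
# (spectra stored as rows of S_true, so D = C_true @ S_true).
t = np.linspace(0.0, 1.0, 50)[:, None]
C_true = np.hstack([np.exp(-((t - 0.40) / 0.08) ** 2),
                    np.exp(-((t - 0.55) / 0.08) ** 2)])
S_true = np.array([[1.0, 0.2, 0.0],
                   [0.0, 0.3, 1.0]])
D = C_true @ S_true
# Initialize C from the two (near-)pure channels, as purest-variable
# selection schemes do in practice.
C_fit, S_fit = mcr_als(D, D[:, [0, 2]])
residual = np.linalg.norm(D - C_fit @ S_fit.T) / np.linalg.norm(D)
```

On this noiseless, well-initialized example the factorization reproduces the data essentially exactly; on real GC-MS data, rank estimation, noise and rotational ambiguity make the initialization and constraint choices reviewed above decisive.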
Hiltrop, Dennis; Masa, Justus; Botz, Alexander J R; Lindner, Armin; Schuhmann, Wolfgang; Muhler, Martin
2017-03-31
A spectroelectrochemical cell is presented that allows investigations of electrochemical reactions by means of attenuated total reflection infrared (ATR-IR) spectroscopy. The holder for the working (WE), counter and reference electrodes, as mounted in the IR spectrometer, causes the formation of a thin electrolyte layer between the internal reflection element (IRE) and the surface of the WE. The thickness of this thin electrolyte layer (dTL) was estimated by performing a scanning electrochemical microscopy (SECM)-like approach of a Pt microelectrode (ME), which was leveled with the WE toward the IRE surface. The precise lowering of the ME/WE plane toward the IRE was enabled by a micrometer screw. The approach curve was recorded in the negative feedback mode of SECM and revealed the contact point of the ME and WE on the IRE, which was used as a reference point to perform the electro-oxidation of ethanol over a drop-casted Pd/NCNT catalyst on the WE at different thin-layer thicknesses by cyclic voltammetry. The reaction products were detected in the liquid electrolyte by IR spectroscopy, and the effect of variations in dTL on the current densities and IR spectra was analyzed and discussed. The obtained data identify dTL as an important variable in thin-layer experiments combining electrochemical reactions and FTIR readout.
Ecosystems Biology Approaches To Determine Key Fitness Traits of Soil Microorganisms
NASA Astrophysics Data System (ADS)
Brodie, E.; Zhalnina, K.; Karaoz, U.; Cho, H.; Nuccio, E. E.; Shi, S.; Lipton, M. S.; Zhou, J.; Pett-Ridge, J.; Northen, T.; Firestone, M.
2014-12-01
The application of theoretical approaches such as trait-based modeling represents a powerful tool to explain, and perhaps predict, complex patterns in microbial distribution and function across environmental gradients in space and time. These models are mostly deterministic and, where available, are built upon a detailed understanding of microbial physiology and response to environmental factors. However, as most soil microorganisms have not been cultivated, our understanding of the majority is limited to insights from environmental 'omic information. Information gleaned from 'omic studies of complex systems should be regarded as providing hypotheses, and these hypotheses should be tested under controlled laboratory conditions if they are to be propagated into deterministic models. In a semi-arid Mediterranean grassland system we are attempting to dissect microbial communities into functional guilds with defined physiological traits and are using a range of 'omics approaches to characterize their metabolic potential and niche preference. Initially, two physiologically relevant time points (peak plant activity and prior to wet-up) were sampled and metagenomes sequenced deeply (600-900 Gbp). Following assembly, differential coverage and nucleotide frequency binning were carried out to yield draft genomes. In addition, using a range of cultivation media, we have isolated a broad range of bacteria representing abundant bacterial genotypes and, with genome sequences of almost 40 isolates, are testing genomic predictions regarding growth rate, temperature and substrate utilization in vitro. This presentation will discuss the opportunities and challenges in parameterizing microbial functional guilds from environmental 'omic information for use in trait-based models.
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
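The local-approximation idea behind the GLLA-style derivative estimation compared above can be sketched in a few lines: each time-delay-embedded window of the series is regressed on Taylor-series basis functions centred in the window, so the regression coefficients estimate the signal and its derivatives. The embedding dimension, polynomial order, and test signal below are illustrative choices, not the settings evaluated in the study.

```python
import math
import numpy as np

def glla_derivatives(x, dt, embed=5, order=2):
    """GLLA-style derivative estimates from a time-delay embedding of x.

    Column k of the result estimates the k-th derivative of x at the
    centre of each embedding window.
    """
    n = len(x) - embed + 1
    X = np.column_stack([x[i:i + n] for i in range(embed)])  # embedding matrix
    t = (np.arange(embed) - (embed - 1) / 2.0) * dt          # centred time offsets
    # Taylor basis t^k / k!: least-squares coefficients are then derivatives.
    L = np.column_stack([t ** k / math.factorial(k) for k in range(order + 1)])
    W = L @ np.linalg.inv(L.T @ L)                           # fixed weight matrix
    return X @ W

# Recover the first derivative of sin(t), which should track cos(t).
t = np.linspace(0.0, 10.0, 1001)
d = glla_derivatives(np.sin(t), dt=t[1] - t[0])
```

In the two-stage workflow the abstract describes, columns of `d` (estimated levels and derivatives) would then be fed into a mixed effects ODE regression.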
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
Gregg, Evan O.; Minet, Emmanuel
2013-01-01
There are established guidelines for bioanalytical assay validation and qualification of biomarkers. In this review, they were applied to a panel of urinary biomarkers of tobacco smoke exposure as part of a “fit for purpose” approach to the assessment of smoke constituent exposure in groups of tobacco product smokers. Clinical studies have allowed the identification of a group of tobacco exposure biomarkers demonstrating a good dose-response relationship, whilst others, such as dihydroxybutyl mercapturic acid and 2-carboxy-1-methylethylmercapturic acid, did not reproducibly discriminate smokers from non-smokers. Furthermore, there are currently no agreed common reference standards to measure absolute concentrations, and few inter-laboratory trials have been performed to establish consensus values for interim standards. Thus, we also discuss in this review additional requirements for the generation of robust data on urinary biomarkers, including toxicant metabolism and disposition, method validation and qualification for use in tobacco product comparison studies. PMID:23902266
Gregg, Evan O; Minet, Emmanuel; McEwan, Michael
2013-09-01
There are established guidelines for bioanalytical assay validation and qualification of biomarkers. In this review, they were applied to a panel of urinary biomarkers of tobacco smoke exposure as part of a "fit for purpose" approach to the assessment of smoke constituent exposure in groups of tobacco product smokers. Clinical studies have allowed the identification of a group of tobacco exposure biomarkers demonstrating a good dose-response relationship, whilst others, such as dihydroxybutyl mercapturic acid and 2-carboxy-1-methylethylmercapturic acid, did not reproducibly discriminate smokers from non-smokers. Furthermore, there are currently no agreed common reference standards to measure absolute concentrations, and few inter-laboratory trials have been performed to establish consensus values for interim standards. Thus, we also discuss in this review additional requirements for the generation of robust data on urinary biomarkers, including toxicant metabolism and disposition, method validation and qualification for use in tobacco product comparison studies.
NASA Astrophysics Data System (ADS)
Nigmatullin, R.; Rakhmatullin, R.
2014-12-01
yields the description of the identified QP process, and to suggest a computational algorithm for fitting the QP data to the analytical function that follows from the solution of the corresponding functional equation. The content of this paper is organized as follows. In Section 2 we try to find answers to the problem posed in this introductory section; it also contains the mathematical description of the QP process and an interpretation of the meaning of the generalized Prony spectrum (GPS). The GPS includes the conventional Fourier decomposition as a partial case. Section 3 contains the experimental details associated with acquiring the desired data. Section 4 includes some important details explaining specific features of the application of the general algorithm to concrete data. In Section 5 we summarize the results and outline the perspectives of this approach for the quantitative description of time-dependent random data registered in different complex systems and experimental devices. Here we should note that by a complex system we mean a system for which a conventional model is absent [6]. By simplicity of the acceptable model we mean the proper hypothesis (the "best fit" model) containing the minimal number of fitting parameters that describes the behavior of the considered system quantitatively. The different approaches that exist nowadays for the description of these systems are collected in the recent review [7].
2014-01-01
Background: The prevalence of obesity increased while certain measures of physical fitness deteriorated in preschool children in China over the past decade. This study tested the effectiveness of a multifaceted intervention that integrated childcare centers, families, and the community to promote healthy growth and physical fitness in preschool Chinese children. Methods: This 12-month study was conducted using a quasi-experimental pretest/posttest design with a comparison group. The participants were 357 children (mean age = 4.5 years) enrolled in three grade levels in two childcare centers in Beijing, China. The intervention included: 1) childcare center intervention (physical activity policy changes, teacher training, physical education curriculum, and food services training), 2) family intervention (parent education, an internet website for support, and family events), and 3) community intervention (playground renovation and community health promotion events). The outcome measures included body composition (percent body fat, fat mass, and muscle mass), Body Mass Index (BMI) and BMI z-score, and physical fitness scores in the 20-meter agility run (20M-AR), broad jump for distance (BJ), timed 10-jumps, tennis ball throwing (TBT), sit and reach (SR), balance beam walk (BBW), 20-meter crawl (20M-C), and 30-meter sprint (30M-S) from a norm-referenced test. Measures of process evaluation included monitoring of children’s physical activity (activity time and intensity), food preparation records, and fidelity of intervention protocol implementation. Results: Children in the intervention center significantly lowered their body fat percent (−1.2%, p < 0.0001), fat mass (−0.55 kg, p < 0.0001), and body weight (0.36 kg, p < 0.02) and increased muscle mass (0.48 kg, p < 0.0001), compared to children in the control center. They also improved all measures of physical fitness except timed 10-jumps (20M-AR: −0.74 seconds, p < 0.0001; BJ: 8.09 cm, p < 0.0001; TBT: 0
On the convexity of ROC curves estimated from radiological test results
Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.
2010-01-01
Rationale and Objectives: Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods: This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results: We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion: In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155
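The idea of improving a non-convex ROC curve by adding statistical variation to the decision process amounts to replacing the empirical curve with its upper convex hull: any operating point below the hull is matched or beaten by randomly mixing two hull points. A minimal sketch (the example curve is invented):

```python
def convexify_roc(fpr, tpr):
    """Upper convex hull of an empirical ROC curve.

    Points are sorted by false-positive rate; a point is dropped whenever
    it lies on or below the chord joining its neighbours, which is exactly
    the set of operating points dominated by randomized mixing.
    """
    pts = sorted(zip(fpr, tpr))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# A hooked empirical ROC: the operating point (0.2, 0.45) dips below the
# chord between its neighbours and is removed by convexification.
hull = convexify_roc([0.0, 0.1, 0.2, 0.4, 1.0], [0.0, 0.5, 0.45, 0.8, 1.0])
```

The hull's slopes decrease monotonically, so the repaired curve satisfies the convexity property the paper requires of rational decision strategies.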
Sinus floor elevation with a crestal approach using a press-fit bone block: a case series.
Isidori, M; Genty, C; David-Tchouda, S; Fortin, T
2015-09-01
This prospective study aimed to provide detailed clinical information on a sinus augmentation procedure, i.e., transcrestal sinus floor elevation with a bone block using the press-fit technique. A bone block is harvested with a trephine burr to obtain a cylinder. This block is inserted into the antrum via a crestal approach after creation of a circular crestal window. Thirty-three patients were treated with a fixed prosthesis supported by implants placed on 70 cylindrical bone blocks. The mean bone augmentation was 6.08±2.87 mm, ranging from 0 to 12.7 mm. Only one graft failed before implant placement. During surgery and the subsequent observation period, no complications were recorded, one implant was lost, and no infection or inflammation was observed. This proof-of-concept study suggests that the use of a bone block inserted into the sinus cavity via a crestal approach can be an alternative to the sinus lift procedure with the creation of a lateral window. It reduces the duration of surgery, cost of treatment, and overall discomfort.
Kohli, Nidhi; Hughes, John; Wang, Chun; Zopluoglu, Cengiz; Davison, Mark L
2015-06-01
A linear-linear piecewise growth mixture model (PGMM) is appropriate for analyzing segmented (disjointed) change in individual behavior over time, where the data come from a mixture of 2 or more latent classes, and the underlying growth trajectories in the different segments of the developmental process within each latent class are linear. A PGMM allows the knot (change point), the time of transition from 1 phase (segment) to another, to be estimated (when it is not known a priori) along with the other model parameters. To assist researchers in deciding which estimation method is most advantageous for analyzing this kind of mixture data, the current research compares 2 popular approaches to inference for PGMMs: maximum likelihood (ML) via an expectation-maximization (EM) algorithm, and Markov chain Monte Carlo (MCMC) for Bayesian inference. Monte Carlo simulations were carried out to investigate and compare the ability of the 2 approaches to recover the true parameters in linear-linear PGMMs with unknown knots. The results show that MCMC for Bayesian inference outperformed ML via EM in nearly every simulation scenario. Real data examples are also presented, and the corresponding computer codes for model fitting are provided in the Appendix to aid practitioners who wish to apply this class of models.
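The core estimation problem in the linear-linear model above, an unknown knot, can be illustrated by profiling: for each candidate knot, the two-slope spline is an ordinary least-squares fit, and the knot with the smallest error is kept. This is a single-class, fixed-effects sketch; the mixture structure, random effects, and the EM/MCMC estimators compared in the paper are beyond this illustration, and all data values below are invented.

```python
import numpy as np

def fit_linear_linear(t, y, knot_grid):
    """Fit y = b0 + b1*t + b2*max(t - knot, 0) with an unknown knot.

    The knot is profiled out: for each candidate, solve OLS and keep the
    candidate with the smallest sum of squared errors.
    """
    best = None
    for k in knot_grid:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, k, beta)
    _, knot, beta = best
    return knot, beta

# Synthetic trajectory: slope changes from 2.0 to 0.5 at t = 4.
t = np.linspace(0.0, 10.0, 201)
y = 1.0 + 2.0 * t + (0.5 - 2.0) * np.maximum(t - 4.0, 0.0)
knot, beta = fit_linear_linear(t, y, np.linspace(1.0, 9.0, 81))
```

In `beta`, the third coefficient is the change in slope at the knot, so the second-phase slope is `beta[1] + beta[2]`.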
Iozzi, Fabrizio; Trusiano, Francesco; Chinazzi, Matteo; Billari, Francesco C; Zagheni, Emilio; Merler, Stefano; Ajelli, Marco; Del Fava, Emanuele; Manfredi, Piero
2010-12-02
Knowledge of social contact patterns still represents the most critical step for understanding the spread of directly transmitted infections. Data on social contact patterns are, however, expensive to obtain. A major issue is then whether the simulation of synthetic societies might help to reliably reconstruct such data. In this paper, we compute a variety of synthetic age-specific contact matrices through simulation of a simple individual-based model (IBM). The model is informed by Italian Time Use data and routine socio-demographic data (e.g., school and workplace attendance, household structure, etc.). The model is named "Little Italy" because each artificial agent is a clone of a real person; in other words, each agent's daily diary is the one observed for a corresponding real individual sampled in the Italian Time Use Survey. We also generated contact matrices from the socio-demographic model underlying the Italian IBM for pandemic prediction. These synthetic matrices are then validated against recently collected Italian serological data for Varicella (VZV) and ParvoVirus (B19). Their performance in fitting sero-profiles is compared with that of other matrices available for Italy, such as the Polymod matrix. Synthetic matrices show the same qualitative features as the ones estimated from sample surveys: for example, strong assortativeness and the presence of super- and sub-diagonal stripes related to contacts between parents and children. Once validated against serological data, Little Italy matrices fit worse than the Polymod one for VZV, but better than concurrent matrices for B19. This is the first occasion on which synthetic contact matrices have been systematically compared with real ones and validated against epidemiological data. The results suggest that simple, carefully designed synthetic matrices can provide a fruitful complementary approach to questionnaire-based matrices. The paper also supports the idea that, depending on the transmissibility level of the
Morawe, Ch; Guigay, J-P; Mocella, V; Ferrero, C
2008-09-29
Aberration effects are studied in parabolic and elliptic multilayer mirrors for hard x-rays, based on a simple analytical approach. The interpretation of the underlying equations provides insight into fundamental limitations of the focusing properties of curved multilayers. Using realistic values for the multilayer parameters, the potential impact on the broadening of the focal spot is evaluated. Within the limits of this model, systematic contributions to the spot size can be described. The work is complemented by a comparison with experimental results obtained with a W/B(4)C curved multilayer mirror.
NASA Astrophysics Data System (ADS)
Wassmann, A.; Borsdorff, T.; aan de Brugh, J. M. J.; Hasekamp, O. P.; Aben, I.; Landgraf, J.
2015-10-01
We present a sensitivity study of the direct fitting approach to retrieve total ozone columns from the clear sky Global Ozone Monitoring Experiment 2/MetOp-A (GOME-2/MetOp-A) measurements between 325 and 335 nm in the period 2007-2010. The direct fitting of the measurement is based on adjusting the scaling of a reference ozone profile and requires accurate simulation of GOME-2 radiances. In this context, we study the effect of three aspects that introduce forward model errors if not addressed appropriately: (1) the use of a clear sky model atmosphere in the radiative transfer demanding cloud filtering, (2) different approximations of Earth's sphericity to address the influence of the solar zenith angle, and (3) the need of polarization in radiative transfer modeling. We conclude that cloud filtering using the operational GOME-2 FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A band) cloud product, which is part of level 1B data, and the use of pseudo-spherical scalar radiative transfer is fully sufficient for the purpose of this retrieval. A validation with ground-based measurements at 36 stations confirms this showing a global mean bias of -0.1 % with a standard deviation (SD) of 2.7 %. The regularization effect inherent to the profile scaling approach is thoroughly characterized by the total column averaging kernel for each individual retrieval. It characterizes the effect of the particular choice of the ozone profile to be scaled by the inversion and is part of the retrieval product. Two different interpretations of the data product are possible: first, regarding the retrieval product as an estimate of the true column, a direct comparison of the retrieved column with total ozone columns from ground-based measurements can be done. This requires accurate a priori knowledge of the reference ozone profile and the column averaging kernel is not needed. Alternatively, the retrieval product can be interpreted as an effective column defined by the total column
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements from which beam properties are obtained. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithms, differential evolution, and particle swarm optimization. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of the curve fitting. After comparing the methods, we establish that, given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt the data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can be easily applied to similar problems.
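The local-first, global-fallback strategy described above can be sketched with SciPy. Here SciPy's least-squares solver stands in for the NCG step and differential evolution for the nature-inspired escalation, so this mirrors the structure of the approach rather than reproducing the authors' implementation; the test profile is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit, differential_evolution

def gaussian(x, a, mu, sigma, c):
    """Gaussian peak on a constant background."""
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

def chi2(params, x, y):
    """Sum of squared residuals, minimized by the global fallback."""
    return float(np.sum((y - gaussian(x, *params)) ** 2))

def hybrid_fit(x, y):
    """Fast local fit first; escalate to globally convergent
    differential evolution only if the local solver fails."""
    guess = [np.ptp(y), x[np.argmax(y)], np.ptp(x) / 10.0, y.min()]
    try:
        popt, _ = curve_fit(gaussian, x, y, p0=guess, maxfev=2000)
        return popt
    except RuntimeError:  # local solver did not converge: escalate
        bounds = [(0.0, 2.0 * np.ptp(y)), (x.min(), x.max()),
                  (1e-6, np.ptp(x)), (y.min() - np.ptp(y), y.max())]
        return differential_evolution(chi2, bounds, args=(x, y), seed=0).x

# Noiseless synthetic wire-scanner-like profile (parameters invented).
x = np.linspace(-5.0, 5.0, 201)
y = gaussian(x, 2.0, 0.5, 1.2, 0.1)
popt = hybrid_fit(x, y)
```

The data-derived initial guess (peak location from the argmax, amplitude from the data range) is what lets the local solver succeed on most profiles without escalation.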
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration
2013-10-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements from which beam properties are obtained. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithms, differential evolution, and particle swarm optimization. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of the curve fitting. After comparing the methods, we establish that, given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt the data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.
ERIC Educational Resources Information Center
Wimmers, Paul F.; Lee, Ming
2015-01-01
To determine the direction and extent to which medical student scores (as observed by small-group tutors) on four problem-based-learning-related domains change over nine consecutive blocks during a two-year period (Domains: Problem Solving/Use of Information/Group Process/Professionalism). Latent growth curve modeling is used to analyze…
Probing exoplanet clouds with optical phase curves.
Muñoz, Antonio García; Isaak, Kate G
2015-11-03
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve--from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4-0.5.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy to handle the sample size problem is to adjust the sample size in the analysis of fit; an alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and the misfit under-estimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
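The two strategies compared above can be illustrated with a Pearson chi-square on multinomial counts, a stand-in for the measurement-model fit statistic actually studied; the cell proportions and sample sizes below are invented for the example. The adjustment rescales the full-sample statistic as if it had been computed on the smaller sample, while the random sample approach actually redraws data of the target size.

```python
import numpy as np

rng = np.random.default_rng(42)

# Model: equal cell probabilities; "data" proportions with slight misfit.
p_model = np.full(4, 0.25)
p_data = np.array([0.27, 0.24, 0.25, 0.24])

def pearson_chi2(counts, p):
    """Pearson goodness-of-fit statistic against cell probabilities p."""
    expected = counts.sum() * p
    return float(np.sum((counts - expected) ** 2 / expected))

n_full, n_target = 21000, 5000
counts_full = np.round(n_full * p_data)  # idealized full-sample counts
chi2_full = pearson_chi2(counts_full, p_model)

# Strategy 1: adjust the full-sample statistic to the target sample size
# (the misfit component of chi-square scales with n - 1).
chi2_adjusted = chi2_full * (n_target - 1) / (n_full - 1)

# Strategy 2: actually draw random samples of the target size.
chi2_random = float(np.mean([
    pearson_chi2(rng.multinomial(n_target, p_data), p_model)
    for _ in range(500)
]))
print(chi2_adjusted, chi2_random)
```

The gap between the two values illustrates the paper's point: pure rescaling tracks only the misfit component, while a genuine random sample also carries the sampling variability of the smaller n.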
NASA Astrophysics Data System (ADS)
Jiménez-Forteza, Xisco; Keitel, David; Husa, Sascha; Hannam, Mark; Khan, Sebastian; Pürrer, Michael
2017-03-01
Numerical relativity is an essential tool in studying the coalescence of binary black holes (BBHs). It is still computationally prohibitive to cover the BBH parameter space exhaustively, making phenomenological fitting formulas for BBH waveforms and final-state properties important for practical applications. We describe a general hierarchical bottom-up fitting methodology to design and calibrate fits to numerical relativity simulations for the three-dimensional parameter space of quasicircular nonprecessing merging BBHs, spanned by mass ratio and by the individual spin components orthogonal to the orbital plane. Particular attention is paid to incorporating the extreme-mass-ratio limit and to the subdominant unequal-spin effects. As an illustration of the method, we provide two applications, to the final spin and final mass (or equivalently: radiated energy) of the remnant black hole. Fitting to 427 numerical relativity simulations, we obtain results broadly consistent with previously published fits, but improving in overall accuracy and particularly in the approach to extremal limits and for unequal-spin configurations. We also discuss the importance of data quality studies when combining simulations from diverse sources, how detailed error budgets will be necessary for further improvements of these already highly accurate fits, and how this first detailed study of unequal-spin effects helps in choosing the most informative parameters for future numerical relativity runs.
Lien, Laura L.; Steggell, Carmen D.; Iwarsson, Susanne
2015-01-01
Older adults prefer to age in place, necessitating a match between person and environment, or person-environment (P-E) fit. In occupational therapy practice, home modifications can support independence, but more knowledge is needed to optimize interventions targeting the housing situation of older adults. In response, this study aimed to explore the accessibility and usability of the home environment to further understand adaptive environmental behaviors. Mixed methods data were collected using objective and perceived indicators of P-E fit among 12 older adults living in community-dwelling housing. Quantitative data described objective P-E fit in terms of accessibility, while qualitative data explored perceived P-E fit in terms of usability. While accessibility problems were prevalent, participants’ perceptions of usability revealed a range of adaptive environmental behaviors employed to meet functional needs. A closer examination of the P-E interaction suggests that objective accessibility does not always stipulate perceived usability, which appears to be malleable with age, self-perception, and functional competency. Findings stress the importance of evaluating both objective and perceived indicators of P-E fit to provide housing interventions that support independence. Further exploration of adaptive processes in older age may serve to deepen our understanding of both P-E fit frameworks and theoretical models of aging well. PMID:26404352
Lien, Laura L; Steggell, Carmen D; Iwarsson, Susanne
2015-09-23
Older adults prefer to age in place, necessitating a match between person and environment, or person-environment (P-E) fit. In occupational therapy practice, home modifications can support independence, but more knowledge is needed to optimize interventions targeting the housing situation of older adults. In response, this study aimed to explore the accessibility and usability of the home environment to further understand adaptive environmental behaviors. Mixed methods data were collected using objective and perceived indicators of P-E fit among 12 older adults living in community-dwelling housing. Quantitative data described objective P-E fit in terms of accessibility, while qualitative data explored perceived P-E fit in terms of usability. While accessibility problems were prevalent, participants' perceptions of usability revealed a range of adaptive environmental behaviors employed to meet functional needs. A closer examination of the P-E interaction suggests that objective accessibility does not always stipulate perceived usability, which appears to be malleable with age, self-perception, and functional competency. Findings stress the importance of evaluating both objective and perceived indicators of P-E fit to provide housing interventions that support independence. Further exploration of adaptive processes in older age may serve to deepen our understanding of both P-E fit frameworks and theoretical models of aging well.
Jaime-Pérez, José C; Monreal-Robles, Roberto; Rodríguez-Romo, Laura N; Mancías-Guerra, Consuelo; Herrera-Garza, José Luís; Gómez-Almaguer, David
2011-11-01
The objective of the study was to evaluate the current standard practice of using volume and total nucleated cell (TNC) count for the selection of cord blood (CB) units for cryopreservation and subsequent transplantation. Data on 794 CB units whose CD34+ cell content was determined by flow cytometry were analyzed using a receiver operating characteristic (ROC) curve model to validate the performance of volume and TNC count for the selection of CB units for grafting purposes. The TNC count was the best parameter for identifying CB units having 2 × 10⁶ or more CD34+ cells, with an area under the ROC curve of 0.828 (95% confidence interval, 0.800-0.856; P < .01) and an efficiency of 75.4%. Combining parameters (TNC/mononuclear cells [MNCs], efficiency 74.7%; TNC/volume, efficiency 68.9%; and volume/MNCs, efficiency 68.3%) did not improve CB selection. All CB units having a TNC count of 8 × 10⁸ or more had the required CD34+ cell dose for patients weighing 10 kg or less.
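The ROC-based selection described above can be sketched with a hand-rolled AUC. The TNC values and dose labels below are synthetic (not the study's data), and the AUC is computed via the Mann-Whitney pair-counting identity rather than any particular software package.

```python
# Sketch: evaluating a predictor (e.g., TNC count) for identifying
# units above a CD34+ dose threshold via the area under the ROC curve.
# All numbers below are invented for illustration.
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Fraction of (positive, negative) pairs where the positive scores
    # higher, counting ties as half a win.
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Synthetic TNC counts (x1e8) and whether each unit met the CD34+ target
tnc = [4.0, 5.5, 6.0, 7.2, 8.1, 9.0, 9.5, 11.0]
ok = [0, 0, 0, 1, 0, 1, 1, 1]
auc = roc_auc(tnc, ok)
```

An AUC near 1 means the predictor ranks almost every adequate unit above every inadequate one; 0.5 means it is no better than chance.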
Probing exoplanet clouds with optical phase curves
Muñoz, Antonio García; Isaak, Kate G.
2015-01-01
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve—from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4–0.5. PMID:26489652
Fitting Surge Functions to Data
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2006-01-01
The problem of fitting a surge function to a set of data such as that for a drug response curve is considered. A variety of different techniques are applied, including using some fundamental ideas from calculus, the use of a CAS package, and the use of Excel's regression features for fitting a multivariate linear function to a set of transformed…
ERIC Educational Resources Information Center
Phelps, Joshua; Smith, Amanda; Parker, Stephany; Hermann, Janice
2016-01-01
Oklahoma Cooperative Extension Service provided elementary school students with a program that included a noncompetitive physical activity component: circuit training that combined cardiovascular, strength, and flexibility activities without requiring high skill levels. The intent was to improve fitness without focusing on body mass index as an…
A novel approach to fit testing the N95 respirator in real time in a clinical setting.
Or, Peggy; Chung, Joanne; Wong, Thomas
2016-02-01
The instant measurements provided by the Portacount fit-test instrument have been used as the gold standard in predicting the protection of an N95 respirator in a laboratory environment. The conventional Portacount fit-test method, however, cannot deliver real-time measurements of face-seal leakage when the N95 respirator is in use in clinical settings. This research was divided into two stages. Stage 1 involved developing and validating a new quantitative fit-test method called the Personal Respiratory Sampling Test (PRST). In Stage 2, PRST was evaluated in use during nursing activities in clinical settings. Eighty-four participants were divided randomly into four groups and were tested while performing bedside nursing procedures. In Stage 1, a new PRST method was successfully devised and validated. Results of Stage 2 showed that the new PRST method could detect different concentrations and different particle sizes inside the respirator while the wearer performed different nursing activities. This new fit-test method, PRST, can detect face seal leakage of an N95 respirator being worn while the wearer performs clinical activities. Thus, PRST can help ensure that the N95 respirator actually fulfils its function of protecting health-care workers from airborne pathogens.
Choi, Eunhee; Tang, Fengyan; Kim, Sung-Geun; Turk, Phillip
2016-10-01
This study examined the longitudinal relationships between functional health in later years and three types of productive activities: volunteering, full-time work, and part-time work. Using data from five waves (2000-2008) of the Health and Retirement Study, we applied multivariate latent growth curve modeling to examine the longitudinal relationships among individuals aged 50 or over. Functional health was measured by limitations in activities of daily living. Individuals who volunteered or who worked either full time or part time exhibited a slower decline in functional health than nonparticipants. Significant associations were also found between initial functional health and longitudinal changes in productive activity participation. This study provides additional support for the benefits of productive activities later in life; engagement in volunteering and employment is indeed associated with better functional health in middle and old age.
Bauza, María C; Ibañez, Gabriela A; Tauler, Romà; Olivieri, Alejandro C
2012-10-16
A new equation is derived for estimating the sensitivity when the multivariate curve resolution-alternating least-squares (MCR-ALS) method is applied to second-order multivariate calibration data. The validity of the expression is substantiated by extensive Monte Carlo noise addition simulations. The multivariate selectivity can be derived from the new sensitivity expression. Other important figures of merit, such as limit of detection, limit of quantitation, and concentration uncertainty of MCR-ALS quantitative estimations can be easily estimated from the proposed sensitivity expression and the instrumental noise. An experimental example involving the determination of an analyte in the presence of uncalibrated interfering agents is described in detail, involving second-order time-decaying sensitized lanthanide luminescence excitation spectra. The estimated figures of merit are reasonably correlated with the analytical features of the analyzed experimental system.
NASA Astrophysics Data System (ADS)
Dobberschütz, Sören; Böhm, Michael
2010-02-01
The behaviour of a free fluid flow above a porous medium, the two being separated by a curved interface, is investigated. By carrying out a coordinate transformation, we obtain the description of the flow in a domain with a straight interface. Using periodic homogenisation, the effective behaviour of the transformed partial differential equations in the porous part is given by a Darcy law with a non-constant permeability matrix. The fluid behaviour at the porous-liquid interface is then obtained with the help of generalised boundary-layer functions: whereas the velocity in the normal direction is continuous across the interface, a jump appears in the tangential direction. Its magnitude seems to be related to the slope of the interface. The results therefore indicate a generalised law of Beavers and Joseph.
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression through the use of cubic spline approximation is taken and extended by investigating the additional compressibility achievable through the exploitation of curve-to-curve structure. One of the models under investigation is reported on.
NASA Astrophysics Data System (ADS)
Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.
2009-04-01
The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling and few previous studies have addressed the water-balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the General Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.
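The limits-of-acceptability evaluation described here reduces, at its core, to a per-time-step containment check. The sketch below is a minimal illustration with invented discharge values and limits; in the study itself the limits come from a moving-window weighted fuzzy regression of the rating data.

```python
# Sketch of a limits-of-acceptability score: for each time step a
# simulation passes if it falls inside observation-derived limits.
# Discharge series and limits below are invented for illustration.
def fraction_within_limits(sim, lower, upper):
    """Fraction of time steps where the simulated discharge lies inside
    the [lower, upper] acceptability limits derived from rating-curve
    uncertainty."""
    assert len(sim) == len(lower) == len(upper)
    inside = sum(1 for s, lo, hi in zip(sim, lower, upper) if lo <= s <= hi)
    return inside / len(sim)

# Illustrative discharge series (m^3/s) and fuzzy limits
sim = [10.2, 12.5, 9.8, 15.0, 11.1]
lower = [9.0, 11.0, 10.0, 13.5, 10.0]
upper = [11.0, 13.0, 11.5, 16.0, 12.0]
score = fraction_within_limits(sim, lower, upper)  # 4 of 5 steps inside
```

A score of 75-80%, as reported for the best simulations, indicates that residual errors in precipitation data and model structure keep even behavioural models outside the limits part of the time.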
Curis, Emmanuel; Bénazeth, Simone
2005-05-01
An important step in X-ray absorption spectroscopy (XAS) analysis is the fitting of a model to the experimental spectra, with a view to obtaining structural parameters. It is important to estimate the errors on these parameters, and three methods are used for this purpose. This article presents the conditions for applying these methods. It is shown that the usual equation Σ = 2H⁻¹ is not applicable for fitting in R space or on filtered XAS data; a formula is established to treat these cases, and the equivalence between the usual formula and the brute-force method is demonstrated. Lastly, the problem of the nonlinearity of the XAS models and a comparison with Monte Carlo methods are addressed.
Rickard, K A; Gallahue, D L; Gruen, G E; Tridle, M; Bewley, N; Steele, K
1995-10-01
An alternative paradigm for nutrition and fitness education centers on understanding and developing skill in implementing a play approach to learning about healthful eating and promoting active play in the context of the child, the family, and the school. The play approach is defined as a process for learning that is intrinsically motivated, enjoyable, freely chosen, nonliteral, safe, and actively engaged in by young learners. Making choices, assuming responsibility for one's decisions and actions, and having fun are inherent components of the play approach to learning. In this approach, internal cognitive transactions and intrinsic motivation are the primary forces that ultimately determine healthful choices and life habits. Theoretical models of children's learning--the dynamic systems theory and the cognitive-developmental theory of Jean Piaget--provide a theoretical basis for nutrition and fitness education in the 21st century. The ultimate goal is to develop partnerships of children, families, and schools in ways that promote the well-being of children and translate into healthful life habits. The play approach is an ongoing process of learning that is applicable to learners of all ages.
NASA Astrophysics Data System (ADS)
Cockburn, Bernardo; Kao, Chiu-Yen; Reitich, Fernando
2014-02-01
We present an adaptive spectral/discontinuous Galerkin (DG) method on curved elements to simulate high-frequency wavefronts within a reduced phase-space formulation of geometrical optics. Following recent work, the approach is based on the use of level sets defined by functions satisfying the Liouville equations in reduced phase-space and, in particular, it relies on the smoothness of these functions to represent them by rapidly convergent spectral expansions in the phase variables. The resulting (hyperbolic) system of equations for the coefficients in these expansions is then amenable to a high-order accurate treatment via DG approximations. In the present work, we significantly expand on the applicability and efficiency of the approach by incorporating mechanisms that allow for its use in scattering simulations and for a reduced overall computational cost. With regard to the former, we demonstrate that the incorporation of curved elements is necessary to attain any kind of accuracy in calculations that involve scattering off non-flat interfaces. With regard to efficiency, on the other hand, we also show that the level-set formulation allows for a p-adaptive scheme in space that under-resolves the level-set functions away from the wavefront without incurring a loss of accuracy in the approximation of its location. As we show, these improvements enable simulations that are beyond the capabilities of previous implementations of these numerical procedures.
Vanderborght, Jan; Vereecken, Harry
2002-01-01
The local-scale dispersion tensor, D_d, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local-scale dispersion from time series, or breakthrough curves (BTCs), of Br concentrations that were measured at several points in a fluvial aquifer during a natural-gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection-dispersion parameters: the equivalent velocity, v_eq(x), and the expected equivalent dispersivity, ⟨λ_eq(x)⟩. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of the log-transformed conductivity and the local-scale dispersion coefficient. The approximate Lagrangian theory illustrates that ⟨λ_eq(x)⟩ increases with increasing travel distance and is much larger than the local-scale dispersivity, λ_d. A sensitivity analysis indicates that ⟨λ_eq(x)⟩ is predominantly determined by the transverse component of the local-scale dispersion and by the correlation scale of the hydraulic conductivity in the direction transverse to the flow, whereas it is relatively insensitive to the longitudinal component of the local-scale dispersion. By comparing predicted ⟨λ_eq(x)⟩ for a range of D_d values with ⟨λ_eq(x)⟩ obtained from locally measured BTCs, the transverse component of D_d, D_dT, was estimated. The estimated transverse local-scale dispersivity, λ_dT = D_dT/U_1 (U_1 = mean advection velocity), is on the order of 10^1-10^2 mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.
Xu, Guangjian; Zhong, Xiaoxiao; Wang, Yangfan; Warren, Alan; Xu, Henglong
2014-12-01
The functional parameters, i.e., the estimated equilibrium species number (S_eq), the colonization rate constant, and the time taken to reach 90% of S_eq (T_90), of microperiphyton fauna have been widely used to determine the water quality status in aquatic ecosystems. The objective of this investigation was to develop a protocol for determining functional parameters of microperiphyton fauna in colonization surveys for marine bioassessment based on rarefaction and regression analyses. The temporal dynamics in species richness of microperiphyton fauna during the colonization period was analyzed based on a dataset of periphytic ciliates in Chinese coastal waters of the Yellow Sea. The results showed that (1) based on observed species richness and estimated maximum species numbers, a total of 16 glass slides was required in order to achieve coefficients of variation of <5% in the functional parameters; (2) the rarefied average species richness and functional parameters showed weak sensitivity to sampling effort; (3) the temporal variations in average species richness were well fitted by the MacArthur-Wilson model; and (4) a sampling effort of ~8 glass slides was sufficient to achieve coefficients of variation of <5% in the equilibrium average species number (AvS_eq), the colonization rate (AvG), and the time to reach 90% of AvS_eq (AvT_90) based on the average species richness. The findings suggest that the AvS_eq, AvG, and AvT_90 values based on rarefied average species richness of microperiphyton might be used as reliable ecological indicators for the bioassessment of marine water quality in coastal habitats.
Krishna Kumar, P; Araki, Tadashi; Rajan, Jeny; Saba, Luca; Lavra, Francesco; Ikeda, Nobutaka; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Gupta, Ajay; Suri, Jasjit S
2016-12-10
Monitoring of cerebrovascular disease via carotid ultrasound is becoming routine. The measurement of image-based lumen diameter (LD) or inter-adventitial diameter (IAD) is a promising approach for quantifying the degree of stenosis. Manual measurements of LD/IAD are unreliable, subjective, and slow. The curvature of the vessels, along with non-uniformity in plaque growth, poses further challenges. This study uses a novel and generalized approach for automated LD and IAD measurement based on a combination of spatial transformation and scale-space. In this iterative procedure, scale-space is first used to obtain the lumen axis, which is then used with a spatial image-transformation paradigm to obtain a transformed image. Scale-space is then reapplied to retrieve the lumen region and boundary in the transformed framework. Finally, the inverse transformation is applied to display the results in the original image framework. B-mode ultrasound images of the left and right common carotid arteries of 202 patients (404 carotid images) were retrospectively analyzed. The algorithm was validated against two manual expert tracings. The coefficients of correlation for LD against the two manual tracings were 0.98 (p < 0.0001) and 0.99 (p < 0.0001), respectively. The precision of merit between the manual expert tracings and the automated system was 97.7 and 98.7%, respectively. The experimental analysis demonstrated superior performance of the proposed method over conventional approaches. Several statistical tests demonstrated the stability and reliability of the automated system.
Guigay, J-P; Morawe, Ch; Mocella, V; Ferrero, C
2008-08-04
An analytical approach has been developed to derive aberration effects in parabolic and elliptic multilayer optics with weak interaction between photons and matter. The method is based on geometrical ray tracing, including refraction effects up to first order in the refractive index decrement δ. In the parabolic case, the derivation leads to simple parametric equations for the caustic shape. In the elliptic case, the analytical results are more involved but can be well approximated by the parabolic solution. Both geometries are compared with regard to the fundamental impact on their focusing properties.
NASA Technical Reports Server (NTRS)
Neuman, F.
1980-01-01
A method for determining fuel-conservative terminal approaches that include changes in altitude, speed, and heading is described. Three different guidance-system concepts for STOL aircraft were evaluated in flight: (1) a fixed-trajectory system; (2) a system that included a fixed path and a real-time synthesized capture flight path; and (3) a trajectory-synthesizing system. Simulation results for the augmentor-wing jet STOL research aircraft and for the Boeing 727 aircraft are discussed. The results indicate that for minimum fuel consumption, two guidance deceleration segments are required.
Intensity Conserving Spectral Fitting
NASA Astrophysics Data System (ADS)
Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.
2016-01-01
The detailed shapes of spectral-line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. We have developed an iterative procedure that corrects for this effect. It converges rapidly and is very flexible in that it can be used with any fitting function. We present examples of cubic-spline and Gaussian fits and give special attention to measurements of blue-red asymmetries of coronal emission lines.
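A minimal sketch of the kind of iterative bin-average correction described above, assuming a Gaussian fitting function and an additive correction step. The authors' exact update rule and spectrometer details are not given here, so the model, bin width, and convergence settings below are all illustrative.

```python
# Sketch: intensity-conserving fitting. A spectrometer reports the
# *average* intensity over each wavelength bin, assigned to the bin
# center; fitting those points directly biases the fit because the
# profile is curved. Below, the points handed to the fitter are
# iteratively adjusted until the fit's bin averages match the data.
# (Additive correction variant, assumed for illustration.)
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sig):
    return a * np.exp(-0.5 * ((x - mu) / sig) ** 2)

def intensity_conserving_fit(centers, measured, width, n_iter=20):
    """Adjust fit inputs so the fitted Gaussian's bin averages
    reproduce the measured bin-averaged intensities."""
    measured = np.asarray(measured, dtype=float)
    y = measured.copy()
    sub = np.linspace(-0.5, 0.5, 11) * width  # sub-samples across a bin
    popt = None
    for _ in range(n_iter):
        popt, _ = curve_fit(gauss, centers, y,
                            p0=[y.max(), centers[y.argmax()], width])
        # average of the current fit over each wavelength bin
        avg = gauss(centers[:, None] + sub[None, :], *popt).mean(axis=1)
        y = y + (measured - avg)  # nudge inputs toward conservation
    return popt

# Demo: "measure" a known Gaussian as bin averages, then recover it
centers = np.arange(-2.0, 2.01, 0.4)  # bin centers, bin width 0.4
measured = gauss(centers[:, None] + np.linspace(-0.5, 0.5, 11)[None, :] * 0.4,
                 10.0, 0.0, 0.5).mean(axis=1)
popt = intensity_conserving_fit(centers, measured, 0.4)
```

Because bin-averaging slightly broadens a peaked profile, a naive fit to the measured points overestimates the width; the iteration removes that bias while still fitting only bin-averaged data.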
NASA Astrophysics Data System (ADS)
Tarana, Michal; Čurík, Roman
2016-05-01
We introduce a computational method developed for the study of long-range molecular Rydberg states of systems that can be approximated by two electrons in a model potential of the atomic cores. The method is based on a two-electron R-matrix approach inside a sphere centered on one of the atoms. The wave function is then connected to a Coulomb region outside the sphere via a multichannel version of the Coulomb Green's function. This approach is applied to a study of Rydberg states of Rb2 for internuclear separations R from 40 to 320 bohrs and energies corresponding to n from 7 to 30. We report bound states associated with the low-lying 3Po resonance and with the virtual state of the rubidium atom that turn into ion-pair-like bound states in the Coulomb potential of the atomic Rydberg core. The results are compared with previous calculations based on single-electron models employing a zero-range contact potential and a short-range model potential. This work was supported by the Czech Science Foundation (Project No. P208/14-15989P).
NASA Astrophysics Data System (ADS)
Kalnajs, Agris J.
One can obtain a fairly good understanding of the relation between axially symmetric mass distributions and the rotation curves they produce without resorting to calculations. However, it does require a break with tradition. The first step consists of replacing quantities such as surface density, volume density, and circular velocity with the mass in a ring, the mass in a spherical shell, and the square of the circular velocity, or more precisely with 2πG r μ(r), 4πG r² ρ(r), and V_c²(r). These three quantities all have the same dimensions and are related to each other by scale-free linear operators. The second step consists of introducing ln(r) as the coordinate. On the log scale the scale-free operators become the more familiar convolution operations. Convolutions are easily handled by Fourier techniques, and a surface density can be converted into a rotation curve or volume density in a small fraction of a second. A simple plot of 2πG r μ(r) as a function of ln(r) reveals the relative contributions of different radii to V_c²(r). Such a plot also constitutes a sanity test for the fitting of various laws to photometric data. There are numerous examples in the literature of excellent fits to the tails, where data are lacking, that are poor fits around the maximum of 2πG r μ(r). I will discuss some exact relations between the above three quantities as well as some empirical observations, such as the near equality of the maxima of the 2πG r μ(r) and V_c²(r) curves for flat mass distributions.
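The plotting recipe in this abstract can be sketched directly. The exponential-disk profile, scale length, and normalization below are assumed for illustration (they are not from the abstract); for such a disk, r·μ(r) peaks at the scale length, which is exactly the "which radii dominate V_c²" information the plot is meant to convey.

```python
# Sketch: tabulate the ring-mass quantity 2*pi*G*r*mu(r) on a
# logarithmic radius grid, as the abstract suggests, for an assumed
# exponential disk mu(r) = mu0 * exp(-r/h).
import numpy as np

G = 4.30091e-6       # gravitational constant, kpc (km/s)^2 / Msun
mu0, h = 5.0e8, 3.0  # central surface density (Msun/kpc^2), scale length (kpc)

lnr = np.linspace(np.log(0.05), np.log(50.0), 2000)  # ln(r) as coordinate
r = np.exp(lnr)
ring_mass = 2 * np.pi * G * r * mu0 * np.exp(-r / h)  # units of (km/s)^2

# For an exponential disk, r*mu(r) is maximized at r = h, so the peak
# of the curve marks the radii that contribute most to Vc^2.
r_peak = r[np.argmax(ring_mass)]
```

On the ln(r) axis, converting this curve into V_c²(r) or a volume density is a convolution with a fixed kernel, which is why the abstract's Fourier-based pipeline is fast.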
Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun
2015-01-01
There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as “reliable” or “unreliable” based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance (1H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named “cluster-aided MCR-ALS,” will facilitate the attainment of more reliable results in the metabolomics datasets. PMID:26531245
NASA Astrophysics Data System (ADS)
González-Garcia, Javier; Jessell, Mark
2016-09-01
The Ruiz-Tolima Volcanic Massif (RTVM) is an active volcanic complex in the Northern Andes, and understanding its geological structure is critical for hazard mitigation and guiding future geothermal exploration. However, the sparsity of data available to constrain the interpretation of this volcanic system hinders the application of standard 3D modelling techniques. Furthermore, some features related to the volcanic system are not entirely understood, such as the connectivity between the plutons present in its basement (i.e. Manizales Stock, El Bosque Batholith). We have developed a methodology where two independent working hypotheses were formulated and modelled independently (i.e. a case where both plutons constitute distinct bodies, and an alternative case where they form one single batholith). A Monte Carlo approach was used to characterise the geological uncertainty in each case. Bézier curve design was used to represent geological contacts on input cross sections. Systematic variations in the control points of these curves allows us to generate multiple realisations of geological interfaces, resulting in stochastic models that were grouped into suites used to apply quantitative estimators of uncertainty. This process results in a geological representation based on fuzzy logic and in maps of model uncertainty distribution. The results are consistent with expected regions of high uncertainty near under-constrained geological contacts, while the non-unique nature of the conceptual model indicates that the dominant source of uncertainty in the area is the nature of the batholith structure.
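The control-point perturbation idea can be sketched as follows. The de Casteljau evaluation is standard; the control points, noise scale, and ensemble size below are invented, not taken from the study.

```python
# Sketch: represent a geological contact as a cubic Bezier curve and
# jitter its control points to build a Monte Carlo ensemble of contact
# realisations, from which a per-station uncertainty envelope follows.
import numpy as np

def bezier(ctrl, ts):
    """Evaluate a Bezier curve at parameters ts by de Casteljau recursion."""
    pts = []
    for t in ts:
        p = np.array(ctrl, dtype=float)
        while len(p) > 1:                      # repeated linear interpolation
            p = (1 - t) * p[:-1] + t * p[1:]
        pts.append(p[0])
    return np.array(pts)

ctrl = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)  # one contact
ts = np.linspace(0.0, 1.0, 25)
rng = np.random.default_rng(42)

# Ensemble of realisations with Gaussian-perturbed control points
realisations = [bezier(ctrl + rng.normal(0, 0.1, ctrl.shape), ts)
                for _ in range(50)]
# Width of the ensemble envelope at each sampled parameter value
spread = np.ptp([r[:, 1] for r in realisations], axis=0)
```

Wide spots in `spread` correspond to the under-constrained stretches of the contact, mirroring the study's maps of model uncertainty.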
Wang, Liqun; Kranz, Christine; Mizaikoff, Boris
2010-01-01
Single-bounce attenuated total reflection infrared spectroscopy in the 3–20 µm range (MIR) has been combined with scanning electrochemical microscopy (SECM) for in situ spectroscopic detection of electrochemically induced localized surface modifications using an ultramicroelectrode (UME). In this study, a novel current-independent approach for positioning the UME in aqueous electrolyte solution is presented using either changes of IR absorption intensity associated with borosilicate glass (BSG), which is used as shielding material of the UME wire, or by monitoring IR changes of the water spectrum within the penetration depth of the evanescent field due to displacement of water molecules in the volume between the sample surface and the UME within the evanescent field. The experimental results show that the UME penetrates into the exponentially decaying evanescent field in close vicinity (a few µm) to the ATR crystal surface. Hence, the resulting intensity changes of the IR absorption spectra for borosilicate glass (increase) and for water (decrease), can be used to determine the position of the UME relative to the ATR crystal surface independent of the current measured at the UME. PMID:20329757
NASA Astrophysics Data System (ADS)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) on a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass-difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group, a triazine ester, are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention-time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into a mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard-curve creation and analyte quantification in one run (<8% error).
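A single-run multi-point standard curve ultimately reduces to an ordinary linear calibration. The sketch below uses invented channel concentrations and peak areas, not data from the study, to show how the curve is built and inverted.

```python
# Sketch: build a four-point standard curve from four labeled-standard
# channels measured in one run, then invert it to quantify an unknown.
# Concentrations and peak areas are invented for illustration.
import numpy as np

std_conc = np.array([1.0, 5.0, 10.0, 20.0])   # fmol spiked per channel
std_area = np.array([2.1, 10.3, 20.4, 40.2])  # integrated peak areas

# Ordinary least-squares line through the four standards
slope, intercept = np.polyfit(std_conc, std_area, 1)

def quantify(area):
    """Invert the standard curve: peak area -> analyte amount."""
    return (area - intercept) / slope

unknown = quantify(15.0)
```

With five mass-difference channels, four can carry the standards while the fifth carries the sample, which is what allows calibration and quantification in the same LC-MS run.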
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
Qiu, Xiao-han; Zhang, Yu-jun; Yin, Gao-fang; Shi, Chao-yi; Yu, Xiao-ya; Zhao, Nan-jing; Liu, Wen-qing
2015-08-01
The fast chlorophyll fluorescence induction curve contains rich information about photosynthesis. It can reflect various aspects of vegetation status, such as survival, pathological condition, and physiological trends under stress. Through the acquisition of algal fluorescence and induced optical signals, the fast phase of the chlorophyll fluorescence kinetics curve was fitted. Based on the least-squares fitting method, we introduced an adaptive minimum-error approach for fast multivariate nonlinear regression fitting of the chlorophyll fluorescence kinetics curve. We realized inversion of the detailed parameters Fo (fixed fluorescence), Fm (maximum fluorescence yield), and σPSII (PSII functional absorption cross section), as well as inversion of the photosynthetic parameters of Chlorella pyrenoidosa. We also studied the physiological variation of Chlorella pyrenoidosa under Cu²⁺ stress.
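A minimal curve-fitting sketch in the spirit of this abstract, using an assumed single-exponential induction model rather than the authors' multivariate model; Fo, Fm, and a rate constant are recovered from synthetic noisy data by ordinary nonlinear least squares.

```python
# Sketch: nonlinear least-squares fit of a fluorescence induction curve.
# The single-exponential rise from Fo to Fm is an assumed stand-in model;
# the data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def induction(t, fo, fm, k):
    """Simple rise from Fo toward Fm with rate constant k."""
    return fo + (fm - fo) * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 2.0, 50)                  # time, ms
true_params = (0.5, 2.5, 3.0)                  # Fo, Fm, k
rng = np.random.default_rng(0)
data = induction(t, *true_params) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(induction, t, data, p0=[0.4, 2.0, 1.0])
fo_fit, fm_fit, k_fit = popt                   # recovered Fo, Fm, rate
```

In practice the initial guess matters for nonlinear fits of induction curves; the measured initial and plateau fluorescence levels are natural starting values for Fo and Fm.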
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liang, Chunlei
2015-08-01
This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft, wind turbines, and oscillating wing power generators, etc.
Wu, Xin-bo; Fan, Guo-xin; Gu, Xin; Shen, Tu-gang; Guan, Xiao-fei; Hu, An-nan; Zhang, Hai-long; He, Shi-sheng
2016-01-01
Objectives: This study aimed to compare the learning curves of percutaneous endoscopic lumbar discectomy (PELD) in a transforaminal approach at the L4/5 and L5/S1 levels. Methods: We retrospectively reviewed the first 60 cases at the L4/5 level (Group I) and the first 60 cases at the L5/S1 level (Group II) of PELD performed by one spine surgeon. The patients were divided into subgroups A, B, and C (Group I: A cases 1–20, B cases 21–40, C cases 41–60; Group II: A cases 1–20, B cases 21–40, C cases 41–60). Operation time was thoroughly analyzed. Results: Compared with the L4/5 level, the learning curve of transforaminal PELD at the L5/S1 level was flatter. The mean operation times of Groups IA, IB, and IC were (88.75±17.02), (67.75±6.16), and (64.85±7.82) min, respectively. There was a significant difference between Groups A and B (P<0.05), but no significant difference between Groups B and C (P=0.20). The mean operation times of Groups IIA, IIB, and IIC were (117.25±13.62), (109.50±11.20), and (92.15±11.94) min, respectively. There was no significant difference between Groups A and B (P=0.06), but there was a significant difference between Groups B and C (P<0.05). There were 6 cases of postoperative dysesthesia (POD) in Group I and 2 cases in Group IIA (P=0.27). There were 2 cases of residual disc in Group I, and 4 cases in Group II (P=0.67). There were 3 cases of recurrence in Group I, and 2 cases in Group II (P>0.05). Conclusions: Compared with the L5/S1 level, the learning curve of PELD in a transforaminal approach at the L4/5 level was steeper, suggesting that the L4/5 level might be easier to master after short-term professional training. PMID:27381732
Langevin Equation on Fractal Curves
NASA Astrophysics Data System (ADS)
Satin, Seema; Gangal, A. D.
2016-07-01
We analyze the random motion of a particle on a fractal curve using a Langevin approach. This involves defining a new velocity in terms of the mass of the fractal curve, as defined in recent work. The geometry of the fractal curve plays an important role in this analysis. A Langevin equation with a particular model of noise is proposed and solved using techniques of the Fα-calculus.
Christensen, S. W.; Goodyear, C. P.; Kirk, B. L.
1982-03-01
This report addresses the validity of the utilities' use of the Ricker stock-recruitment model to extrapolate the combined entrainment-impingement losses of young fish to reductions in the equilibrium population size of adult fish. In our testimony, a methodology was developed and applied to address a single fundamental question: if the Ricker model really did apply to the Hudson River striped bass population, could the utilities' curve-fitting estimates of the parameter alpha (which controls the impact) be considered reliable? In addition, an analysis is included of the efficacy of an alternative means of estimating alpha, termed the technique of prior estimation of beta (used by the utilities in a report prepared for regulatory hearings on the Cornwall Pumped Storage Project). This validation methodology should also be useful in evaluating inferences drawn in the literature from fits of stock-recruitment models to data obtained from other fish stocks.
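The curve fitting at issue is estimation of the Ricker parameters in R = α·S·e^(-β·S). A minimal sketch of the standard linearized fit, ln(R/S) = ln α - β·S, on synthetic data (the values below are invented, not the Hudson River striped bass data):

```python
import numpy as np

# Synthetic stock-recruitment data from an assumed Ricker curve with
# multiplicative lognormal noise (parameters are illustrative only).
rng = np.random.default_rng(42)
a_true, b_true = 4.0, 0.002
S = np.linspace(100.0, 900.0, 9)                 # spawner index (assumed units)
R = a_true * S * np.exp(-b_true * S) * np.exp(rng.normal(0.0, 0.05, S.size))

# Linearization: ln(R/S) = ln(a) - b*S, so fit a straight line.
coef = np.polyfit(S, np.log(R / S), 1)           # slope = -b, intercept = ln(a)
b_fit, a_fit = -coef[0], np.exp(coef[1])
```

The report's question is precisely whether such fitted values of alpha are reliable enough to support impact extrapolation; the mechanics of the fit itself are this simple.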
Adsorption of nitrophenol onto activated carbon: isotherms and breakthrough curves.
Chern, Jia-Ming; Chien, Yi-Wen
2002-02-01
The adsorption isotherm of p-nitrophenol onto granular activated carbon in aqueous solution at 25°C was experimentally determined by batch tests. Both the Freundlich and the Redlich-Peterson models were found to fit the adsorption isotherm data well. A series of column tests were performed to determine the breakthrough curves at varying bed depths (3-6 cm) and water flow rates (21.6-86.4 cm³/h). Explicit equations for the breakthrough curves of fixed-bed adsorption processes with the Langmuir and Freundlich adsorption isotherms were developed by the constant-pattern wave approach using a constant driving force model in the liquid phase. The results show that the half-breakthrough time increases proportionally with bed depth but decreases in inverse proportion to the water flow rate. The constant-pattern wave approach using the Freundlich isotherm model fits the experimental breakthrough curves quite satisfactorily. A correlation was proposed that successfully predicts the volumetric mass-transfer coefficient in the liquid phase. The effects of solution temperature and pH on the adsorption isotherm were also studied, and the Tóth model was found to fit the isotherm data well at varying solution temperatures and pHs.
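The Freundlich isotherm fit used above, q = K·C^(1/n), is usually obtained by linear regression in log-log space. A sketch with synthetic equilibrium data (invented numbers, not the paper's p-nitrophenol measurements):

```python
import numpy as np

# Assumed Freundlich parameters and equilibrium concentrations for
# illustration; q is the amount adsorbed per gram of carbon.
C = np.array([5.0, 20.0, 60.0, 150.0, 400.0])   # mg/L at equilibrium (assumed)
K_true, n_true = 30.0, 3.0
q = K_true * C ** (1.0 / n_true)                # mg/g

# Linearization: log(q) = log(K) + (1/n)*log(C).
slope, intercept = np.polyfit(np.log(C), np.log(q), 1)
n_fit, K_fit = 1.0 / slope, np.exp(intercept)
```

With the fitted isotherm in hand, the constant-pattern wave equations then give the explicit breakthrough-curve expressions described in the abstract.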
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
Edge detection and mathematic fitting for corneal surface with Matlab software
Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na
2017-01-01
AIM To select the optimal edge detection methods to identify the corneal surface, and to compare three curve-fitting equations with Matlab software. METHODS Fifteen subjects were recruited. Corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identification methods (ginput and getpts) were applied to identify the edge coordinates respectively. The differences among these methods were compared. A binomial curve (y=Ax²+Bx+C), a polynomial curve [p(x)=p₁xⁿ+p₂xⁿ⁻¹+…+pₙx+pₙ₊₁] and a conic section (Ax²+Bxy+Cy²+Dx+Ey+F=0) were used to fit the corneal surface respectively. The relative merits of the three fitted curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. RESULTS All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was strongly affected by tilt angle. The polynomial curve lacked geometric properties and was unstable. The conic section could yield the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the ‘e’ values obtained by corneal topography and by the conic section (t=0.9143, P=0.3760>0.05). CONCLUSION It is feasible to simulate the corneal surface with a mathematical curve in Matlab software. Edge detection has better repeatability and higher efficiency. The manual identification approach is an indispensable complement to detection. Polynomial and conic section are both viable methods for corneal curve fitting; the conic curve is the optimal choice because of its specific geometric properties. PMID:28393021
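The general-conic fit Ax²+Bxy+Cy²+Dx+Ey+F=0 used above can be obtained as the null space of a design matrix via SVD. A sketch in Python (the study used Matlab) with synthetic ellipse points standing in for detected corneal-edge coordinates:

```python
import numpy as np

# Synthetic anterior-surface arc: half an ellipse with assumed semi-axes
# (mm); real input would be the OCT edge coordinates.
theta = np.linspace(0.0, np.pi, 40)
x = 7.8 * np.cos(theta)
y = 6.5 * np.sin(theta)

# Each edge point gives one row [x^2, xy, y^2, x, y, 1]; the conic
# coefficients are the right singular vector of the smallest singular value.
M = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(M)
conic = Vt[-1]                     # (A, B, C, D, E, F), unit norm
residual = M @ conic               # ~0 when every point lies on the conic

# For this axis-aligned ellipse, A/C = b^2/a^2, giving the eccentricity.
ecc = np.sqrt(1.0 - conic[0] / conic[2])
```

This is why the conic form can deliver geometric quantities (eccentricity, symmetry axis, center) that a generic polynomial fit cannot.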
Gilkey, Roderick; Kilts, Clint
2007-11-01
Recent neuroscientific research shows that the health of your brain isn't, as experts once thought, just the product of childhood experiences and genetics; it reflects your adult choices and experiences as well. Professors Gilkey and Kilts of Emory University's medical and business schools explain how you can strengthen your brain's anatomy, neural networks, and cognitive abilities, and prevent functions such as memory from deteriorating as you age. The brain's alertness is the result of what the authors call cognitive fitness -a state of optimized ability to reason, remember, learn, plan, and adapt. Certain attitudes, lifestyle choices, and exercises enhance cognitive fitness. Mental workouts are the key. Brain-imaging studies indicate that acquiring expertise in areas as diverse as playing a cello, juggling, speaking a foreign language, and driving a taxicab expands your neural systems and makes them more communicative. In other words, you can alter the physical makeup of your brain by learning new skills. The more cognitively fit you are, the better equipped you are to make decisions, solve problems, and deal with stress and change. Cognitive fitness will help you be more open to new ideas and alternative perspectives. It will give you the capacity to change your behavior and realize your goals. You can delay senescence for years and even enjoy a second career. Drawing from the rapidly expanding body of neuroscience research as well as from well-established research in psychology and other mental health fields, the authors have identified four steps you can take to become cognitively fit: understand how experience makes the brain grow, work hard at play, search for patterns, and seek novelty and innovation. Together these steps capture some of the key opportunities for maintaining an engaged, creative brain.
Jack, B Kelsey; Leimona, Beria; Ferraro, Paul J
2009-04-01
To supply ecosystem services, private landholders incur costs. Knowledge of these costs is critical for the design of conservation-payment programs. Estimating these costs accurately is difficult because the minimum acceptable payment to a potential supplier is private information. We describe how an auction of payment contracts can be designed to elicit this information during the design phase of a conservation-payment program. With an estimate of the ecosystem-service supply curve from a pilot auction, conservation planners can explore the financial, ecological, and socioeconomic consequences of alternative scaled-up programs. We demonstrate the potential of our approach in Indonesia, where soil erosion on coffee farms generates downstream ecological and economic costs. Bid data from a small-scale, uniform-price auction for soil-conservation contracts allowed estimates of the costs of a scaled-up program, the gain from integrating biophysical and economic data to target contracts, and the trade-offs between poverty alleviation and supply of ecosystem services. Our study illustrates an auction-based approach to revealing private information about the costs of supplying ecosystem services. Such information can improve the design of programs devised to protect and enhance ecosystem services.
Lessons from Darwin: Breeding the Best-fit Binary Star
NASA Astrophysics Data System (ADS)
Metcalfe, T. S.
1998-12-01
I have developed a procedure utilizing a Genetic-Algorithm-based optimization scheme to fit the observed light curves of an eclipsing binary star with a model produced by the Wilson-Devinney code. The principal advantages of this approach are the objectivity and the uniqueness of the final result. Although this method is more efficient than other comparably global search techniques, the computational requirements of the code are still considerable. I have applied this fitting procedure to my observations of the W UMa type eclipsing binary BH Cassiopeiae. An analysis of V-band CCD data obtained in 1994/95 from Steward Observatory and U- and B-band photoelectric data obtained in 1996 from McDonald Observatory provided three complete light curves to constrain the fit. In addition, radial velocity curves obtained in 1997 from McDonald Observatory provided a direct measurement of the system mass ratio to restrict the search. The results of the GA-based fit are in excellent agreement with the final orbital solution obtained with the standard differential corrections procedure in the Wilson-Devinney code.
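The GA machinery can be illustrated on a toy problem. Here a Gaussian dip stands in for an eclipse and the free parameters are its depth and the out-of-eclipse flux; a minimal elitist GA (truncation selection plus Gaussian mutation, no crossover) minimizes chi-square. Everything below is assumed for illustration; the real work couples the GA to Wilson-Devinney models:

```python
import numpy as np

# Synthetic "light curve": out-of-eclipse flux c minus a Gaussian dip of
# depth a at phase 0.5 (shape and parameters are invented).
rng = np.random.default_rng(7)
phase = np.linspace(0.0, 1.0, 50)
a_true, c_true = 0.3, 1.0
flux = c_true - a_true * np.exp(-((phase - 0.5) / 0.05) ** 2)

def chi2(p):
    a, c = p
    model = c - a * np.exp(-((phase - 0.5) / 0.05) ** 2)
    return float(np.sum((model - flux) ** 2))

# Minimal elitist GA: keep the best third, refill with mutated copies.
pop = rng.uniform(0.0, 2.0, size=(60, 2))
for gen in range(200):
    order = np.argsort([chi2(p) for p in pop])
    elite = pop[order[:20]]                       # best third survives intact
    children = elite[rng.integers(0, 20, 40)] + rng.normal(
        0.0, 0.2 * 0.98 ** gen, size=(40, 2))     # mutation scale decays
    pop = np.vstack([elite, children])

best = pop[np.argmin([chi2(p) for p in pop])]
```

Elitism guarantees the best solution is never lost, which is the property that makes GA fits objective and repeatable compared with hand-tuned differential corrections.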
Ho, Shirley S; Lee, Edmund W J; Ng, Kaijie; Leong, Grace S H; Tham, Tiffany H M
2016-09-01
Based on the influence of presumed media influence (IPMI) model as the theoretical framework, this study examines how injunctive norms and personal norms mediate the influence of healthy lifestyle media messages on public intentions to engage in two types of healthy lifestyle behaviors-physical activity and healthy diet. Nationally representative data collected from 1,055 adults in Singapore demonstrate partial support for the key hypotheses that make up the extended IPMI model, highlighting the importance of a norms-based approach in health communication. Our results indicate that perceived media influence on others indirectly shaped public intentions to engage in healthy lifestyle behaviors through personal norms and attitude, providing partial theoretical support for the extended IPMI model. Practical implications for health communicators in designing health campaigns media messages to motivate the public to engage in healthy lifestyle are discussed.
Schulz, Douglas A.
2007-10-08
A biometric system suitable for validating user identity using only mouse movements and no specialized equipment is presented. Mouse curves (mouse movements with little or no pause between them) are individually classified and used to develop classification histograms, which are representative of an individual's typical mouse use. These classification histograms can then be compared to validate identity. This classification approach is suitable for providing continuous identity validation during an entire user session.
Invasion fitness, inclusive fitness, and reproductive numbers in heterogeneous populations.
Lehmann, Laurent; Mullon, Charles; Akçay, Erol; Van Cleve, Jeremy
2016-08-01
How should fitness be measured to determine which phenotype or "strategy" is uninvadable when evolution occurs in a group-structured population subject to local demographic and environmental heterogeneity? Several fitness measures, such as basic reproductive number, lifetime dispersal success of a local lineage, or inclusive fitness have been proposed to address this question, but the relationships between them and their generality remain unclear. Here, we ascertain uninvadability (all mutant strategies always go extinct) in terms of the asymptotic per capita number of mutant copies produced by a mutant lineage arising as a single copy in a resident population ("invasion fitness"). We show that, starting from invasion fitness, uninvadability is equivalently characterized by at least three conceptually distinct fitness measures: (i) lineage fitness, giving the average individual fitness of a randomly sampled mutant lineage member; (ii) inclusive fitness, giving a reproductive value weighted average of the direct fitness costs and relatedness weighted indirect fitness benefits accruing to a randomly sampled mutant lineage member; and (iii) basic reproductive number (and variations thereof) giving lifetime success of a lineage in a single group, and which is an invasion fitness proxy. Our analysis connects approaches that have been deemed different, generalizes the exact version of inclusive fitness to class-structured populations, and provides a biological interpretation of natural selection on a mutant allele under arbitrary strength of selection.
NASA Astrophysics Data System (ADS)
Christiansen, Bo
2015-04-01
Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
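The marginalization step has a simple closed form in the basic errors-in-variables setting: with x = ξ + δ, y = α + βξ + ε and a flat prior on ξ, integrating ξ out gives y - α - βx ~ N(0, σε² + β²σδ²). A hedged sketch on synthetic data, assuming the noise variances are known (which sidesteps the unidentifiability noted above):

```python
import numpy as np

# Synthetic errors-in-variables data: both x and y are noisy views of a
# latent xi (all parameter values are invented for illustration).
rng = np.random.default_rng(3)
sig_d, sig_e = 0.5, 0.3
xi = np.linspace(0.0, 10.0, 200)
x = xi + rng.normal(0.0, sig_d, xi.size)
y = 1.0 + 2.0 * xi + rng.normal(0.0, sig_e, xi.size)

def neg_log_lik(beta):
    """Marginal likelihood of beta after integrating out xi (flat prior);
    the intercept alpha is profiled out in closed form."""
    v = sig_e**2 + beta**2 * sig_d**2
    alpha = np.mean(y - beta * x)
    r = y - alpha - beta * x
    return 0.5 * np.sum(r**2 / v + np.log(v))

betas = np.linspace(1.5, 2.5, 1001)
beta_hat = betas[np.argmin([neg_log_lik(b) for b in betas])]
```

Note the β-dependent variance term: ordinary least squares of y on x would be attenuated toward zero, while the marginal likelihood above corrects for the noise in x.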
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
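The diagnostic contrast above (power-law vs. exponential forgetting) is easy to see numerically: a power function P(t) = a·t^(-b) is exactly linear in log-log coordinates, while exponential decay is not. A sketch with illustrative retention values (not data from any cited experiment):

```python
import numpy as np

# Assumed power-law forgetting curve over doubling retention intervals.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # retention interval (a.u.)
a_true, b_true = 0.9, 0.3
p = a_true * t ** (-b_true)

# Power law: a straight line in log-log space recovers both parameters.
slope, intercept = np.polyfit(np.log(t), np.log(p), 1)
b_fit, a_fit = -slope, np.exp(intercept)

# Exponential decay in the same coordinates is visibly curved: the residuals
# of its best straight-line fit are large, which is how the two are told apart.
p_exp = 0.9 * np.exp(-0.1 * t)
line = np.polyval(np.polyfit(np.log(t), np.log(p_exp), 1), np.log(t))
resid_exp = line - np.log(p_exp)
```

This is the sense in which forgetting data "fit a power function well" while exponential-decay models systematically misfit.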
Serial position curves in free recall.
Laming, Donald
2010-01-01
The scenario for free recall set out in Laming (2009) is developed to provide models for the serial position curves from 5 selected sets of data, for final free recall, and for multitrial free recall. The 5 sets of data reflect the effects of rate of presentation, length of list, delay of recall, and suppression of rehearsal. Each model accommodates the serial position curve for first recalls (where those data are available) as well as that for total recalls. Both curves are fit with the same parameter values, as also (with 1 exception) are all of the conditions compared within each experiment. The distributions of numbers of recalls are also examined and shown to have variances increased above what would be expected if successive recalls were independent. This is taken to signify that, in those experiments in which rehearsals were not recorded, the retrieval of words for possible recall follows the same pattern that is observed following overt rehearsal, namely, that retrieval consists of runs of consecutive elements from memory. Finally, 2 sets of data are examined that the present approach cannot accommodate. It is argued that the problem with these data derives from an interaction between the patterns of (covert) rehearsal and the parameters of list presentation.
Joeres, J; Seuser, A; Kurme, A; Trunz-Carlisi, E; Ochs, S; Böhm, P
2012-01-01
Inclusive paedagogic thinking and acting is a modern and increasingly important topic in school sports. It affects teachers as well as parents and students. The new international guidelines and national curricula enable new ways of inclusion, especially for students with chronic illnesses such as haemophilia. Special help from sport teachers is of vital importance. In our project "fit for life", in which we advise children and young adults with haemophilia on finding an appropriate sport, we developed a new approach for optimised inclusion of children with haemophilia in sport lessons. The whole project is run in cooperation with the German Sport Teachers Association/Hessen. We analysed and rated the current curricula of the different school years and looked at the specific needs, risks and necessary abilities of persons with haemophilia. By this means we gathered about 600 typical movements and/or exercises for school sports and developed individual advice and adapted exercise solutions for sport lessons.
Kim, Youngdeok; Lee, Jung-Min; Kim, Jungyoon; Dhurandhar, Emily; Soliman, Ghada; Wehbi, Nizar K.; Canedy, James
2017-01-01
Background Physical activity (PA) and healthy dietary behaviors (HDB) are two well-documented lifestyle factors influencing body mass index (BMI). This study examined 7-year longitudinal associations between changes in PA, HDB, and BMI among adults using parallel latent growth curve modeling (LGCM). Methods We used prospective cohort data collected by a private company (SimplyWell LLC, Omaha, NE, USA) implementing a workplace health screening program. Data from a total of 2,579 adults who provided valid BMI, PA, and HDB information for at least 5 out of 7 follow-up years from the time they entered the program were analyzed. PA and HDB were subjectively measured during an annual online health survey. Height and weight measured during an annual onsite health screening were used to calculate BMI (kg·m⁻²). The parallel LGCMs stratified by gender and baseline weight status (normal: BMI<25; overweight: BMI 25–29.9; obese: BMI≥30) were fitted to examine the longitudinal associations of changes in PA and HDB with change in BMI over the years. Results On average, BMI gradually increased over the years, at rates ranging from 0.06 to 0.20 kg·m⁻²·year⁻¹, with larger increases observed among those of normal baseline weight status across genders. The increases in PA and HDB were independently associated with a smaller increase in BMI for obese males (b = -1.70 and -1.98, respectively), and overweight females (b = -1.85 and -2.46, respectively) and obese females (b = -2.78 and -3.08, respectively). However, no significant associations of baseline PA and HDB with changes in BMI were observed. Conclusions Our study suggests that gradual increases in PA and HDB are independently associated with smaller increases in BMI in overweight and obese adults, but not in normal weight individuals. Further study is warranted to address factors that check increases in BMI in normal weight adults. PMID:28296945
Demetrius, Lloyd; Ziehe, Martin
2007-11-01
The term Darwinian fitness refers to the capacity of a variant type to invade and displace the resident population in competition for available resources. Classical models of this dynamical process claim that competitive outcome is a deterministic event which is regulated by the population growth rate, called the Malthusian parameter. Recent analytic studies of the dynamics of competition in terms of diffusion processes show that growth rate predicts invasion success only in populations of infinite size. In populations of finite size, competitive outcome is a stochastic process--contingent on resource constraints--which is determined by the rate at which a population returns to its steady state condition after a random perturbation in the individual birth and death rates. This return rate, a measure of robustness or population stability, is analytically characterized by the demographic parameter, evolutionary entropy, a measure of the uncertainty in the age of the mother of a randomly chosen newborn. This article appeals to computational and numerical methods to contrast the predictive power of the Malthusian and the entropic principles. The computational analysis rejects the Malthusian model and is consistent with the entropic principle. These studies thus provide support for the general claim that entropy is the appropriate measure of Darwinian fitness and constitutes an evolutionary parameter with broad predictive and explanatory powers.
Inclusive fitness in agriculture
Kiers, E. Toby; Denison, R. Ford
2014-01-01
Trade-offs between individual fitness and the collective performance of crop and below-ground symbiont communities are common in agriculture. Plant competitiveness for light and soil resources is key to individual fitness, but higher investments in stems and roots by a plant community to compete for those resources ultimately reduce crop yields. Similarly, rhizobia and mycorrhizal fungi may increase their individual fitness by diverting resources to their own reproduction, even if they could have benefited collectively by providing their shared crop host with more nitrogen and phosphorus, respectively. Past selection for inclusive fitness (benefits to others, weighted by their relatedness) is unlikely to have favoured community performance over individual fitness. The limited evidence for kin recognition in plants and microbes changes this conclusion only slightly. We therefore argue that there is still ample opportunity for human-imposed selection to improve cooperation among crop plants and their symbionts so that they use limited resources more efficiently. This evolutionarily informed approach will require a better understanding of how interactions among crops, and interactions with their symbionts, affected their inclusive fitness in the past and what that implies for current interactions. PMID:24686938
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT) image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure of an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, or the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least square support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high dimensional training data set.
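The first stage above, removing beam hardening by subtracting a best-fit quadratic surface, can be sketched with plain least squares on a synthetic slice (the paper's routine is in Matlab; the "cupping" artefact and texture below are invented):

```python
import numpy as np

# Synthetic reconstructed slice: flat background + quadratic beam-hardening
# cupping + oscillatory "mineral" texture (all values are illustrative).
n = 64
yy, xx = np.mgrid[0:n, 0:n] / (n - 1.0)
bh = 0.4 * (xx - 0.5) ** 2 + 0.4 * (yy - 0.5) ** 2      # cupping artefact
texture = 0.05 * np.sin(20 * xx) * np.cos(17 * yy)      # signal to preserve
img = 1.0 + bh + texture

# Best-fit quadratic surface z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2,
# solved by least squares over all pixels.
x, y, z = xx.ravel(), yy.ravel(), img.ravel()
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

# The BH-corrected image is the residual between data and fitted surface.
corrected = (z - A @ coef).reshape(n, n)
```

Because the cupping is (approximately) quadratic while the mineral texture is not, the residual image retains the texture and sheds the artefact, which is exactly what the segmentation stage needs as input.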
Deming's General Least Square Fitting
Rinard, Phillip
1992-02-18
DEM4-26 is a generalized least square fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. If the plot is to be copied to a printer, GRAPHICS should be called from the operating system disk before the BASIC interpreter is loaded.
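The linear special case of Deming's method has a closed form worth recording: with delta the assumed ratio of y-error to x-error variances (delta = 1 gives orthogonal regression), the slope follows from the sums of squares and cross-products. A sketch (this is the textbook formula, not the DEM4-26 source):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Closed-form Deming regression: errors in both x and y.
    delta is the assumed ratio of y-error to x-error variances."""
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx + np.sqrt(
        (syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ym - slope * xm
    return slope, intercept

# Noise-free sanity check: points on y = 0.5 + 2x must be recovered exactly.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
slope, intercept = deming(x, 0.5 + 2.0 * x)
```

Unlike ordinary least squares, this fit treats both coordinates as uncertain, which is the distinguishing feature of Deming's generalized approach.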
Error Modeling and Confidence Interval Estimation for Inductively Coupled Plasma Calibration Curves.
1987-02-01
…confidence interval estimation for multiple use of the calibration curve is… …calculate weights for the calibration curve fit. Multiple and single-use confidence interval estimates are obtained, and results along the calibration curve are…
2014-01-01
Background Recommendations for secondary hyperparathyroidism (SHPT) consider that a “one-size-fits-all” target enables efficacy of care. In routine clinical practice, SHPT continues to pose diagnostic and treatment challenges. One hypothesis that could explain these difficulties is that the dialysis population with SHPT is not homogeneous. Methods EPHEYL is a prospective, multicenter, pharmacoepidemiological study including chronic dialysis patients (≥3 months) with a newly established SHPT diagnosis, i.e. parathyroid hormone (PTH) ≥500 ng/L for the first time, initiation of cinacalcet, or parathyroidectomy. Multiple correspondence analysis and ascendant hierarchical clustering on clinico-biological variables (symptoms, PTH, plasma phosphorus and alkaline phosphatase) and SHPT treatment (cinacalcet, vitamin D, calcium, or calcium-free phosphate binder) were performed to identify distinct phenotypes. Results 305 patients (261 with incident PTH ≥ 500 ng/L; 44 with cinacalcet initiation) were included. Their mean age was 67 ± 15 years, and 60% were men, 92% on hemodialysis and 8% on peritoneal dialysis. Four subgroups of SHPT patients were identified: 1/ “intermediate” phenotype with hyperphosphatemia without hypocalcemia (n = 113); 2/ younger patients with severe comorbidities, hyperphosphatemia and hypocalcemia despite multiple SHPT medical treatments, suggesting poor adherence (n = 73); 3/ elderly patients with few cardiovascular comorbidities, controlled phospho-calcium balance, higher PTH, and few treatments (n = 75); 4/ patients who initiated cinacalcet (n = 43). The quality criterion of the model had a cut-off of 14 (>2), suggesting a relevant classification. Conclusion In real life, dialysis patients with newly diagnosed SHPT constitute a very heterogeneous population. A “one-size-fits-all” target approach is probably not appropriate. Therapeutic management needs to be adjusted to the 4 different phenotypes. PMID:25123022
Healthy Lifestyle: Fitness. Ready to start a fitness program? Measure your fitness level with a few simple tests. (Mayo Clinic; original article: http://www.mayoclinic.org/healthy-lifestyle/fitness/in-depth/fitness/art-20046433)
Variability among polysulphone calibration curves
NASA Astrophysics Data System (ADS)
Casale, G. R.; Borra, M.; Colosimo, A.; Colucci, M.; Militello, A.; Siani, A. M.; Sisto, R.
2006-09-01
Within an epidemiological study regarding the correlation between skin pathologies and personal ultraviolet (UV) exposure due to solar radiation, 14 field campaigns using polysulphone (PS) dosemeters were carried out at three different Italian sites (urban, semi-rural and rural) in every season of the year. A polysulphone calibration curve for each field experiment was obtained by measuring the ambient UV dose under almost clear sky conditions and the corresponding change in the PS film absorbance, prior and post exposure. Ambient UV doses were measured by well-calibrated broad-band radiometers and by electronic dosemeters. The dose-response relation was represented by the typical best fit to a third-degree polynomial and it was parameterized by a coefficient multiplying a cubic polynomial function. It was observed that the fit curves differed from each other in the coefficient only. It was assessed that the multiplying coefficient was affected by the solar UV spectrum at the Earth's surface whilst the polynomial factor depended on the photoinduced reaction of the polysulphone film. The mismatch between the polysulphone spectral curve and the CIE erythemal action spectrum was responsible for the variability among polysulphone calibration curves. The variability of the coefficient was related to the total ozone amount and the solar zenith angle. A mathematical explanation of such a parameterization was also discussed.
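The parameterization described above, a site- and season-specific coefficient multiplying a fixed cubic shape in the absorbance change, can be sketched directly. The shape coefficients and campaign multipliers below are invented for illustration, not the study's calibration values:

```python
import numpy as np

# Assumed shared cubic shape of the dose-response in the PS absorbance
# change dA; each campaign's curve is k * shape(dA) for some coefficient k.
dA = np.linspace(0.05, 0.35, 8)                   # PS absorbance change
shape = 10.0 * dA + 40.0 * dA**2 + 300.0 * dA**3  # illustrative cubic shape

dose_site1 = 1.00 * shape                         # reference campaign
dose_site2 = 1.30 * shape                         # different ozone / SZA conditions

# Recover each campaign's multiplying coefficient by projecting the measured
# doses onto the shared shape (one-parameter least squares).
k1 = np.sum(dose_site1 * shape) / np.sum(shape**2)
k2 = np.sum(dose_site2 * shape) / np.sum(shape**2)
```

In the study, the recovered multiplier varies with total ozone and solar zenith angle, while the cubic polynomial factor reflects the photoinduced reaction of the polysulphone film itself.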
Craniofacial Reconstruction Using Rational Cubic Ball Curves
Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan
2015-01-01
This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. Ball curves were chosen for their computational efficiency compared with Bézier curves. The main steps are conversion of Digital Imaging and Communications in Medicine (DICOM) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to DICOM format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates its applicability. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
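For reference, the rational cubic Ball curve named above can be evaluated directly from its basis functions. This is a minimal sketch with arbitrary control points and weights; it demonstrates endpoint interpolation but does not reproduce the paper's genetic-algorithm fitting step.

```python
import numpy as np

def ball_basis(t):
    # Cubic Ball basis functions; they form a partition of unity on [0, 1].
    t = np.asarray(t, dtype=float)
    return np.stack([(1 - t)**2,
                     2 * t * (1 - t)**2,
                     2 * t**2 * (1 - t),
                     t**2])

def rational_ball(t, ctrl, w):
    # Rational cubic Ball curve: ctrl is a (4, 2) array of control
    # points, w a (4,) array of positive weights.
    B = ball_basis(t)                          # shape (4, n)
    wB = w[:, None] * B                        # weighted basis
    return (wB.T @ ctrl) / wB.sum(axis=0)[:, None]

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
w = np.array([1.0, 2.0, 2.0, 1.0])
pts = rational_ball(np.linspace(0, 1, 5), ctrl, w)
print(pts)  # the curve interpolates the first and last control points
```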
Carr, Steven A.; Abbatiello, Susan E.; Ackermann, Bradley L.; Borchers, Christoph; Domon, Bruno; Deutsch, Eric W.; Grant, Russell P.; Hoofnagle, Andrew N.; Hüttenhain, Ruth; Koomen, John M.; Liebler, Daniel C.; Liu, Tao; MacLean, Brendan; Mani, DR; Mansfield, Elizabeth; Neubert, Hendrik; Paulovich, Amanda G.; Reiter, Lukas; Vitek, Olga; Aebersold, Ruedi; Anderson, Leigh; Bethem, Robert; Blonder, Josip; Boja, Emily; Botelho, Julianne; Boyne, Michael; Bradshaw, Ralph A.; Burlingame, Alma L.; Chan, Daniel; Keshishian, Hasmik; Kuhn, Eric; Kinsinger, Christopher; Lee, Jerry S.H.; Lee, Sang-Won; Moritz, Robert; Oses-Prieto, Juan; Rifai, Nader; Ritchie, James; Rodriguez, Henry; Srinivas, Pothur R.; Townsend, R. Reid; Van Eyk, Jennifer; Whiteley, Gordon; Wiita, Arun; Weintraub, Susan
2014-01-01
Adoption of targeted mass spectrometry (MS) approaches such as multiple reaction monitoring (MRM) to study biological and biomedical questions is well underway in the proteomics community. Successful application depends on the ability to generate reliable assays that uniquely and confidently identify target peptides in a sample. Unfortunately, there is a wide range of criteria being applied to say that an assay has been successfully developed. There is no consensus on what criteria are acceptable and little understanding of the impact of variable criteria on the quality of the results generated. Publications describing targeted MS assays for peptides frequently do not contain sufficient information for readers to establish confidence that the tests work as intended or to be able to apply the tests described in their own labs. Guidance must be developed so that targeted MS assays with established performance can be widely distributed and applied by many labs worldwide. To begin to address the problems and their solutions, a workshop was held at the National Institutes of Health with representatives from the multiple communities developing and employing targeted MS assays. Participants discussed the analytical goals of their experiments and the experimental evidence needed to establish that the assays they develop work as intended and are achieving the required levels of performance. Using this “fit-for-purpose” approach, the group defined three tiers of assays distinguished by their performance and extent of analytical characterization. Computational and statistical tools useful for the analysis of targeted MS results were described. Participants also detailed the information that authors need to provide in their manuscripts to enable reviewers and readers to clearly understand what procedures were performed and to evaluate the reliability of the peptide or protein quantification measurements reported. This paper presents a summary of the meeting and
Magnetism in curved geometries
NASA Astrophysics Data System (ADS)
Streubel, Robert; Fischer, Peter; Kronast, Florian; Kravchuk, Volodymyr P.; Sheka, Denis D.; Gaididei, Yuri; Schmidt, Oliver G.; Makarov, Denys
2016-09-01
Extending planar two-dimensional structures into the three-dimensional space has become a general trend in multiple disciplines, including electronics, photonics, plasmonics and magnetics. This approach provides means to modify conventional or to launch novel functionalities by tailoring the geometry of an object, e.g. its local curvature. In a generic electronic system, curvature results in the appearance of scalar and vector geometric potentials inducing anisotropic and chiral effects. In the specific case of magnetism, even in the simplest case of a curved anisotropic Heisenberg magnet, the curvilinear geometry manifests two exchange-driven interactions, namely effective anisotropy and antisymmetric exchange, i.e. Dzyaloshinskii-Moriya-like interaction. As a consequence, a family of novel curvature-driven effects emerges, which includes magnetochiral effects and topologically induced magnetization patterning, resulting in theoretically predicted unlimited domain wall velocities, chirality symmetry breaking and Cherenkov-like effects for magnons. The broad range of altered physical properties makes these curved architectures appealing in view of fundamental research on e.g. skyrmionic systems, magnonic crystals or exotic spin configurations. In addition to these rich physics, the application potential of three-dimensionally shaped objects is currently being explored as magnetic field sensorics for magnetofluidic applications, spin-wave filters, advanced magneto-encephalography devices for diagnosis of epilepsy or for energy-efficient racetrack memory devices. These recent developments ranging from theoretical predictions over fabrication of three-dimensionally curved magnetic thin films, hollow cylinders or wires, to their characterization using integral means as well as the development of advanced tomography approaches are in the focus of this review.
Alternative Forms of Fit in Contingency Theory.
ERIC Educational Resources Information Center
Drazin, Robert; Van de Ven, Andrew H.
1985-01-01
This paper examines the selection, interaction, and systems approaches to fit in structural contingency theory. The concepts of fit evaluated may be applied not only to structural contingency theory but to contingency theories in general. (MD)
fits2hdf: FITS to HDFITS conversion
NASA Astrophysics Data System (ADS)
Price, D. C.; Barsdell, B. R.; Greenhill, L. J.
2015-05-01
fits2hdf ports FITS files to Hierarchical Data Format (HDF5) files in the HDFITS format. HDFITS allows faster reading of data, higher compression ratios, and higher throughput. HDFITS formatted data can be presented transparently as an in-memory FITS equivalent by changing the import lines in Python-based FITS utilities. fits2hdf includes a utility to port MeasurementSets (MS) to HDF5 files.
UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE
Sanders, N. E.; Soderberg, A. M.; Betancourt, M.
2015-02-10
Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.
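The population-level gain described above — fitting all objects jointly so that noisy individual estimates borrow strength from the sample — can be illustrated far more simply than the paper's Hamiltonian Monte Carlo model with an empirical-Bayes partial-pooling sketch. All quantities here are synthetic and invented for illustration; for simplicity the known population spread is reused in the shrinkage weights.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic population: each "transient" has a true light curve
# parameter (e.g. a decline rate) drawn from a population
# distribution, observed with per-object noise.
mu_pop, tau = 0.05, 0.01                     # population mean and spread (assumed)
n_obj = 50
true_rates = rng.normal(mu_pop, tau, n_obj)
sigma_obs = rng.uniform(0.005, 0.03, n_obj)  # heteroscedastic measurement noise
obs = true_rates + rng.normal(0, sigma_obs)

# Empirical-Bayes partial pooling: shrink each noisy per-object
# estimate toward the (precision-weighted) population mean.
mu_hat = np.average(obs, weights=1 / (tau**2 + sigma_obs**2))
shrink = tau**2 / (tau**2 + sigma_obs**2)
pooled = shrink * obs + (1 - shrink) * mu_hat

# Pooled estimates beat the raw per-object estimates on average,
# with the largest gains for the most poorly observed objects.
err_raw = np.mean((obs - true_rates)**2)
err_pooled = np.mean((pooled - true_rates)**2)
print(err_raw, err_pooled)
```

A full hierarchical model additionally infers the population parameters and their uncertainty jointly with the per-object fits, which is what the paper's HMC implementation provides.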
Limitations in biexponential fitting of NMR inversion-recovery curves
NASA Astrophysics Data System (ADS)
Shazeeb, Mohammed Salman; Sotak, Christopher H.
2017-03-01
NMR relaxation agents have long been employed as contrast agents in MRI. In many cases, the contrast agent is confined to either (i) the vascular and/or extracellular compartment (EC), as is the case with gadolinium(III)-based agents, or (ii) the intracellular compartment (IC), as is the case with manganese(II) ions. The compartmentalization of contrast agents often results in tissue-water 1H relaxation profiles that are well modeled as biexponential. It has long been recognized that water exchange between compartments modifies the biexponential relaxation parameters (amplitudes and rate constants) from those that would be found in the absence of exchange. Nevertheless, interpretation in terms of an “apparent” two-compartment biophysical model, apparent EC vs. apparent IC, can provide insight into tissue structure and function, and changes therein, in the face of physiologic challenge. The accuracy of modeling biexponential data is highly dependent upon the amplitudes, rate constants, and signal-to-noise characterizing the data. Herein, simulated (in silico) inversion-recovery relaxation data are modeled by standard, nonlinear-least-squares analysis and the error in parameter values assessed for a range of amplitudes and rate constants characteristic of in vivo systems following administration of contrast agent. The findings provide guidance for laboratories seeking to exploit contrast-agent-driven, biexponential relaxation to differentiate MRI-based compartmental properties, including the apparent diffusion coefficient.
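The simulation-then-fit procedure the abstract describes can be sketched as follows: generate biexponential inversion-recovery data with known amplitudes and rates, add noise, and recover the parameters by nonlinear least squares. The amplitudes, rates and noise level are illustrative, not the paper's values; note the inherent pool-labeling degeneracy (swapping the two components leaves the model unchanged), which is why the recovered rates are sorted before comparison.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_biexp(ti, a1, r1, a2, r2):
    # Inversion-recovery signal for two water pools (no exchange),
    # each recovering mono-exponentially with rate R1 = 1/T1.
    return (a1 * (1 - 2 * np.exp(-ti * r1))
            + a2 * (1 - 2 * np.exp(-ti * r2)))

rng = np.random.default_rng(1)
ti = np.linspace(0.05, 5.0, 40)       # inversion times, s
truth = (0.6, 2.0, 0.4, 0.5)          # amplitudes and rates (illustrative)
y = ir_biexp(ti, *truth) + rng.normal(0, 0.01, ti.size)

popt, pcov = curve_fit(ir_biexp, ti, y, p0=[0.5, 1.5, 0.5, 0.8])
print(popt)
```

Repeating this over a grid of amplitude ratios, rate separations and noise levels — and inspecting the spread of the recovered parameters — is exactly the kind of error assessment the paper performs.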
Curve Fitting Solar Cell Degradation Due to Hard Particle Radiation
NASA Technical Reports Server (NTRS)
Gaddy, Edward M.; Cikoski, Rebecca; Mekadenaumporn, Danchai
2003-01-01
This paper investigates the suitability of the equation for accurately defining solar cell parameter degradation as a function of hard particle radiation. The paper also provides methods for determining the constants in the equation and compares results from this equation with those obtained by more traditional methods.
Frequency-Domain Identification With Composite Curve Fitting
NASA Technical Reports Server (NTRS)
Bayard, David S.
1994-01-01
Improved method of parameter identification based on decomposing single wide-band model into two or more component systems in parallel. Each component model predominates in specific frequency range. Wide-band mathematical model of system identified as two narrow-band models: one containing most of information on high-frequency components of dynamics, and one containing most of information on low-frequency components. Applicable to diverse systems, including vibrating structures, electronic circuits, and control systems.
Curve Fitting and First Quadrant Plotting Program (CURVPLOT).
The program is designed to be used on the Wang 720B Electronic Programmable Calculator and 702 Plotting Output Writer combination. The regular plotter typing element is replaced with an IBM Dual Gothic typing element. (Author)
Probabilistic Multi-Factor Interaction Model for Evaluating Continuous Smooth Curves
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2009-01-01
The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from launch external tanks. The MFIM has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions through its product form. Each factor has an exponent that satisfies only two points: the initial and final points. These are selected so that the MFIM will generate a common curve fitting the individual point data. The results show that the approach used generates the desired continuous smooth curve.
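The two-point exponent determination described above can be made concrete for a single factor. In a hedged MFIM-style sketch, the property ratio is a power of the normalized distance between the current and final factor values; fixing the initial point (ratio 1) and one measured point then determines the exponent in closed form. All numbers below are illustrative, not the tank divot data.

```python
import numpy as np

def exponent_from_points(x0, xf, x1, r1):
    # Solve r1 = ((xf - x1) / (xf - x0))**e for the exponent e,
    # given the initial value x0 (where the ratio is 1), the final
    # value xf, and one measured point (x1, r1).
    return np.log(r1) / np.log((xf - x1) / (xf - x0))

x0, xf = 0.0, 1.0
x1, r1 = 0.5, 0.7   # one measured interior point (illustrative)
e = exponent_from_points(x0, xf, x1, r1)

# With the exponent fixed, the term yields a continuous smooth curve;
# the full MFIM is a product of such terms, one per factor.
x = np.linspace(x0, 0.99, 100)
ratio = ((xf - x) / (xf - x0))**e
print(e)
```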
Hamiltonian inclusive fitness: a fitter fitness concept
Costa, James T.
2013-01-01
In 1963–1964 W. D. Hamilton introduced the concept of inclusive fitness, the only significant elaboration of Darwinian fitness since the nineteenth century. I discuss the origin of the modern fitness concept, providing context for Hamilton's discovery of inclusive fitness in relation to the puzzle of altruism. While fitness conceptually originates with Darwin, the term itself stems from Spencer and crystallized quantitatively in the early twentieth century. Hamiltonian inclusive fitness, with Price's reformulation, provided the solution to Darwin's ‘special difficulty’—the evolution of caste polymorphism and sterility in social insects. Hamilton further explored the roles of inclusive fitness and reciprocation to tackle Darwin's other difficulty, the evolution of human altruism. The heuristically powerful inclusive fitness concept ramified over the past 50 years: the number and diversity of ‘offspring ideas’ that it has engendered render it a fitter fitness concept, one that Darwin would have appreciated. PMID:24132089
Properties of Rasch residual fit statistics.
Wu, Margaret; Adams, Richard J
2013-01-01
This paper examines the residual-based fit statistics commonly used in Rasch measurement. In particular, the paper analytically examines some of the theoretical properties of the residual-based fit statistics with a view to establishing the inferences that can be made using these fit statistics. More specifically, the relationships between the distributional properties of the fit statistics and sample size are discussed; some research that erroneously concludes that residual-based fit statistics are unstable is reviewed; and finally, it is analytically illustrated that, for dichotomous items, residual-based fit statistics provide a measure of the relative slope of empirical item characteristic curves. With a clear understanding of the theoretical properties of the fit statistics, the use and limitations of these statistics can be placed in the right light.
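The residual-based statistics the paper analyzes can be computed directly for the dichotomous Rasch model. This sketch uses the standard mean-square forms — outfit as the unweighted mean of squared standardized residuals, infit as the information-weighted version — on synthetic data that fit the model, so both statistics should land near 1; abilities, difficulties and sample size are invented for illustration.

```python
import numpy as np

def rasch_prob(theta, b):
    # Dichotomous Rasch model: P(correct) for abilities theta (persons)
    # and difficulties b (items); returns a (persons x items) matrix.
    return 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))

def outfit_infit(x, p):
    # x: 0/1 response matrix (persons x items); p: model probabilities.
    var = p * (1 - p)
    z2 = (x - p)**2 / var
    outfit = z2.mean(axis=0)                             # unweighted mean square
    infit = ((x - p)**2).sum(axis=0) / var.sum(axis=0)   # information-weighted
    return outfit, infit

rng = np.random.default_rng(7)
theta = rng.normal(0, 1, 500)
b = np.array([-1.0, 0.0, 1.0])
p = rasch_prob(theta, b)
x = (rng.random(p.shape) < p).astype(float)   # data simulated from the model

outfit, infit = outfit_infit(x, p)
print(outfit, infit)   # both near 1 when the data fit the model
```

The paper's point about sample-size dependence can be explored by rerunning this with different numbers of persons and watching the spread of the statistics around 1 change.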
Cook, Daniel W; Rutan, Sarah C; Stoll, Dwight R; Carr, Peter W
2015-02-15
Comprehensive two-dimensional liquid chromatography (LC×LC) is rapidly evolving as the preferred method for the analysis of complex biological samples owing to its much greater resolving power compared to conventional one-dimensional liquid chromatography (1D-LC). While its enhanced resolving power makes this method appealing, it has been shown that the precision of quantitation in LC×LC is generally not as good as that obtained with 1D-LC. The poorer quantitative performance of LC×LC is due to several factors including but not limited to the undersampling of the first dimension, the dilution of analytes during transit from the first dimension ((1)D) column to the second dimension ((2)D) column, and the larger relative background signals. A new strategy, 2D assisted liquid chromatography (2DALC), is presented here. 2DALC makes use of a diode array detector placed at the end of each column, producing both multivariate (1)D and two-dimensional (2D) chromatograms. The increased resolution of the analytes provided by the addition of a second dimension of separation enables the determination of analyte absorbance spectra from the (2)D detector signal that are relatively pure and can be used to initiate the treatment of data from the first dimension detector using multivariate curve resolution-alternating least squares (MCR-ALS). In this way, the approach leverages the strengths of both separation methods in a single analysis: the (2)D detector data are used to provide relatively pure analyte spectra to the MCR-ALS algorithm, and the final quantitative results are obtained from the resolved (1)D chromatograms, which have a much higher sampling rate and lower background signal than those obtained in conventional single-detector LC×LC, yielding accurate and precise quantitative results. It is shown that 2DALC is superior to both single detector selective or comprehensive LC×LC and 1D-LC for quantitation of compounds that appear as severely overlapped peaks in the (1)D chromatogram - this is
Baldassarre, Maurizio; Li, Chenge; Eremina, Nadejda; Goormaghtigh, Erik; Barth, Andreas
2015-07-10
Infrared spectroscopy is a powerful tool in protein science due to its sensitivity to changes in secondary structure or conformation. In order to take advantage of the full power of infrared spectroscopy in structural studies of proteins, complex band contours, such as the amide I band, have to be decomposed into their main component bands, a process referred to as curve fitting. In this paper, we report on an improved curve fitting approach in which absorption spectra and second derivative spectra are fitted simultaneously. Our approach, which we name co-fitting, leads to a more reliable modelling of the experimental data because it uses more spectral information than the standard approach of fitting only the absorption spectrum. It also prevents the fitting routine from becoming trapped in local minima. We have tested the proposed approach using infrared absorption spectra of three mixed α/β proteins with different degrees of spectral overlap in the amide I region: ribonuclease A, pyruvate kinase, and aconitase.
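The co-fitting idea — one set of band parameters constrained by both the absorption spectrum and its second derivative — can be sketched by stacking the two residual vectors in a single least-squares problem. Here Gaussian band shapes are assumed (the paper's band shapes may differ), the second derivative of each band is analytic, and the two-band synthetic "spectrum", weight, and starting values are all invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def gauss(x, a, c, s):
    # Gaussian band: amplitude a, center c, width s.
    return a * np.exp(-(x - c)**2 / (2 * s**2))

def gauss_d2(x, a, c, s):
    # Analytic second derivative of the Gaussian band.
    return gauss(x, a, c, s) * ((x - c)**2 / s**4 - 1 / s**2)

def residuals(p, x, y, y2, w):
    # p packs (a, c, s) for two bands; the returned vector stacks the
    # absorption residuals and the (weighted) second-derivative residuals,
    # so both spectra constrain the same parameters simultaneously.
    a1, c1, s1, a2, c2, s2 = p
    model = gauss(x, a1, c1, s1) + gauss(x, a2, c2, s2)
    model2 = gauss_d2(x, a1, c1, s1) + gauss_d2(x, a2, c2, s2)
    return np.concatenate([model - y, w * (model2 - y2)])

# Synthetic amide-I-like contour: two overlapping bands (illustrative).
x = np.linspace(1600, 1700, 400)
truth = (1.0, 1635.0, 8.0, 0.6, 1655.0, 7.0)
y = gauss(x, *truth[:3]) + gauss(x, *truth[3:])
y2 = np.gradient(np.gradient(y, x), x)   # "measured" second derivative

fit = least_squares(residuals, x0=[0.8, 1630, 10, 0.8, 1660, 10],
                    args=(x, y, y2, 10.0))
print(fit.x)
```

The relative weight on the second-derivative block is a tuning choice; the derivative spectrum emphasizes band positions and widths, which is what stabilizes the decomposition of heavily overlapped contours.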
Gray, William G.; Miller, Cass T.
2010-01-01
This work is the eighth in a series that develops the fundamental aspects of the thermodynamically constrained averaging theory (TCAT) that allows for a systematic increase in the scale at which multiphase transport phenomena are modeled in porous medium systems. In these systems, the explicit locations of interfaces between phases and common curves, where three or more interfaces meet, are not considered at scales above the microscale. Rather, the densities of these quantities arise as areas per volume or length per volume. Modeling of the dynamics of these measures is an important challenge for robust models of flow and transport phenomena in porous medium systems, as the extent of these regions can have important implications for mass, momentum, and energy transport between and among phases, and formulation of a capillary pressure relation with minimal hysteresis. These densities do not exist at the microscale, where the interfaces and common curves correspond to particular locations. Therefore, it is necessary for a well-developed macroscale theory to provide evolution equations that describe the dynamics of interface and common curve densities. Here we point out the challenges and pitfalls in producing such evolution equations, develop a set of such equations based on averaging theorems, and identify the terms that require particular attention in experimental and computational efforts to parameterize the equations. We use the evolution equations developed to specify a closed two-fluid-phase flow model. PMID:21197134
Analysis of Exoplanet Light Curves
NASA Astrophysics Data System (ADS)
Erdem, A.; Budding, E.; Rhodes, M. D.; Püsküllü, Ç.; Soydugan, F.; Soydugan, E.; Tüysüz, M.; Demircan, O.
2015-07-01
We have applied the close binary system analysis package WINFITTER to a variety of exoplanet transiting light curves taken both from the NASA Exoplanet Archive and our own ground-based observations. WINFITTER has parameter options for a realistic physical model, including gravity brightening and structural parameters derived from Kopal's applications of the relevant Radau equation, and it includes appropriate tests for determinacy and adequacy of its best-fitting parameter sets. We discuss a number of issues related to empirical checking of models for stellar limb darkening, surface maculation, Doppler beaming, microvariability, and transit time variation (TTV) effects. The Radau coefficients used in the light curve modeling, in principle, allow structural models of the component stars to be tested.
2011-01-01
Background Disability weights (DWs) are important for estimating burden of disease in terms of disability-adjusted life years. The previous practice of eliciting DWs by expert opinion has been challenged. More recent approaches employed quality of life (QoL) questionnaires to establish patient-based DWs, but results are ambiguous. Methods In early 2010, we administered a questionnaire pertaining to physical fitness to 200 schoolchildren in Côte d'Ivoire. Helminth and Plasmodium spp. infections were determined and schoolchildren's physical fitness objectively measured in a maximal multistage 20 m shuttle run test. Associations between objectively measured and self-reported physical fitness and between self-reported physical fitness and infection status were determined. Spearman rank correlation coefficient, uni- and multivariable linear regression models adjusting for children's age and sex, ambient air temperature and humidity, Fisher's test, χ² and t-test statistics were used for statistical analysis. Results The prevalence of Schistosoma haematobium, Plasmodium spp., Schistosoma mansoni, hookworm and Ascaris lumbricoides in 167 children with complete parasitological results was 84.4%, 74.9%, 54.5%, 14.4% and 1.2%, respectively. High infection intensities and multiple species parasite infections were common. In the 137 children with complete data also from the shuttle run test, we found statistically significant correlations between objectively measured and self-reported physical fitness. However, no statistically significant correlation between the children's parasitic infection status and self-reported physical fitness was identified. An attrition analysis revealed considerably lower self-reported physical fitness scores of parasitized children who were excluded from shuttle run testing due to medical concerns in comparison to parasitized children who were able to successfully complete the shuttle run test. Conclusions Our QoL questionnaire proved valid to
Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
Background The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely
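The spline-based alternative to parametric growth models that the abstract favors can be sketched in a few lines: fit a smoothing spline to a noisy respiration curve and read off growth-curve-style parameters (maximum level, maximum slope, area under the curve). The logistic test curve, its parameters and the noise level are invented for illustration; the smoothing factor `s` is a tuning choice.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

# Synthetic OmniLog-style respiration curve: logistic shape plus noise
# (Zwietering-type parameterization: asymptote A, max slope mu, lag lam).
t = np.linspace(0, 48, 97)                    # hours
A, mu, lam = 200.0, 12.0, 8.0
y = A / (1 + np.exp(4 * mu * (lam - t) / A + 2)) + rng.normal(0, 3, t.size)

# Smoothing-spline fit; s targets the residual sum of squares and is
# set here to n * sigma**2 for an assumed noise level sigma = 3.
spl = UnivariateSpline(t, y, s=len(t) * 3.0**2)

# Curve parameters in analogy to conventional growth curves:
A_hat = spl(t).max()              # maximum respiration level
mu_hat = spl.derivative()(t).max()  # maximum slope
auc = spl.integral(t[0], t[-1])   # area under the curve
print(A_hat, mu_hat, auc)
```

Because the spline makes no assumption about curve shape, it tolerates the irregular (non-sigmoidal) kinetics that trip up parametric growth models, which is the abstract's main argument for it.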
Barbour, April M; Schmidt, Stephan; Zhuang, Luning; Rand, Kenneth; Derendorf, Hartmut
2014-01-01
The purpose of this report was to compare two different methods for dose optimisation of antimicrobials: calculating the probability of target attainment (PTA) by Monte Carlo simulation against the PK/PD target of fT>MIC, or from modelling and simulation of time-kill curve data. The activity of ceftobiprole, the paradigm compound, was determined against two MRSA strains: ATCC 33591 (MIC = 2 mg/L) and a clinical isolate (MIC = 1 mg/L). A two-subpopulation model accounting for drug degradation during the experiment adequately fit the time-kill curve data (concentration range 0.25-16× MIC). The PTA was calculated for plasma, skeletal muscle and subcutaneous adipose tissue based on data from a microdialysis study in healthy volunteers. A two-compartment model with distribution factors to account for differences between free serum and tissue interstitial space fluid concentrations appropriately fit the pharmacokinetic data. Pharmacodynamic endpoints of fT>MIC of 30% or 40% and 1- or 2-log kill were used. The PTA was >90% in all tissues based on the PK/PD endpoint of fT>MIC >40%. The PTAs based on a 1- or 2-log kill from the time-kill experiments were lower than those calculated based on fT>MIC. The PTA of a 1-log kill was >90% for both MRSA isolates in plasma and skeletal muscle but was slightly below 90% for subcutaneous adipose tissue (ca. 88% for both isolates). The results support a dosing regimen of 500 mg three times daily as a 2-h intravenous infusion. This dose should be confirmed as additional pharmacokinetic data from various patient populations become available.
ERIC Educational Resources Information Center
Steen, Thomas B.; And Others
1990-01-01
Describes decline in youth fitness, emphasizing role of camping programs in youth fitness education. Describes Michigan camp's fitness program, consisting of daily workouts, fitness education, and record keeping. Describes fitness consultants' role in program. Discusses program's highlights and problems, suggesting changes for future use. Shows…
Line Fitting Using Only High School Algebra.
ERIC Educational Resources Information Center
Staib, John
1982-01-01
An approach to using the method of least squares, a scheme for computing the best-fitting line directly from a set of points, is detailed. The material first looks at fitting a numerical value to a set of numbers. This provides tools for solving the line-fitting problem. (MP)
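The closed-form solution described in this abstract really does need only elementary algebra. A minimal sketch, using hypothetical data points that lie exactly on a line:

```python
# Least-squares line y = m*x + b from the closed-form sums:
#   m = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  b = (Sy - m*Sx) / n
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

m, b = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])  # recovers y = 2x + 1
```

The "fitting a numerical value to a set of numbers" step the abstract mentions is the `b`-only special case: with `m` fixed at zero the same sums reduce to the arithmetic mean.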
A causal dispositional account of fitness.
Triviño, Vanessa; Nuño de la Rosa, Laura
2016-09-01
The notion of fitness is usually equated with reproductive success. However, this actualist approach presents some difficulties, chiefly the explanatory circularity problem, which have led philosophers of biology to offer alternative definitions in which fitness and reproductive success are distinguished. In this paper, we argue that none of these alternatives is satisfactory and, inspired by Mumford and Anjum's dispositional theory of causation, we offer a definition of fitness as a causal dispositional property. We argue that, under this framework, the distinctiveness that biologists usually attribute to fitness (namely, that fitness is something different from both the physical traits of an organism and the number of offspring it leaves) can be explained, and the main problems associated with the concept of fitness can be solved. First, we introduce Mumford and Anjum's dispositional theory of causation and present our definition of fitness as a causal disposition. We explain in detail each of the elements involved in our definition, namely: the relationship between fitness and the functional dispositions that compose it, the emergent character of fitness, and the context-sensitivity of fitness. Finally, we explain how fitness and realized fitness, as well as expected and realized fitness, are distinguished in our approach to fitness as a causal disposition.
Fit for purpose: Australia's National Fitness Campaign.
Collins, Julie A; Lekkas, Peter
2011-12-19
During a time of war, the federal government passed the National Fitness Act 1941 to improve the fitness of the youth of Australia and better prepare them for roles in the armed services and industry. Implementation of the National Fitness Act made federal funds available at a local level through state-based national fitness councils, which coordinated promotional campaigns, programs, education and infrastructure for physical fitness, with volunteers undertaking most of the work. Specifically focused on children and youth, national fitness councils supported the provision of children's playgrounds, youth clubs and school camping programs, as well as the development of physical education in schools and its teaching and research in universities. By the time the Act was repealed in 1994, fitness had become associated with leisure and recreation rather than being seen as equipping people for everyday life and work. The emergence of the Australian National Preventive Health Agency Act 2010 offers the opportunity to reflect on synergies with its historic precedent.
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (the peak region and the wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth.
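The blended model at issue can be sketched in a few lines, assuming the standard point-source point-lens (Paczyński) magnification; all parameter names and values here are illustrative, not taken from the paper:

```python
import math

def magnification(u):
    """Point-source point-lens (Paczynski) magnification at impact parameter u."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def blended_flux(t, t0, tE, u0, f_blend, f_base=1.0):
    """Observed flux when only a fraction f_blend of the baseline is lensed."""
    u = math.sqrt(u0 * u0 + ((t - t0) / tE) ** 2)
    return f_base * (f_blend * magnification(u) + (1.0 - f_blend))

# Far from the peak (|t - t0| >> tE) the flux tends to f_base no matter what
# f_blend is, which is why the blend fraction trades off against u0 and tE.
```

Evaluating the model at the baseline versus the peak makes the degeneracy the abstract describes concrete: baseline data constrain `f_base` alone, while the peak region and wings are where `f_blend`, `u0`, and `tE` separate.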
A strategy to model nonmonotonic dose-response curve and estimate IC50.
Zhang, Hui; Holden-Wiltse, Jeanne; Wang, Jiong; Liang, Hua
2013-01-01
The half-maximal inhibitory concentration IC50 is an important pharmacodynamic index of drug effectiveness. To estimate this value, the dose-response relationship needs to be established, which is generally achieved by fitting monotonic sigmoidal models. However, recent studies on Human Immunodeficiency Virus (HIV) mutants developing resistance to antiviral drugs show that the dose-response curve may not be monotonic. Traditional models can fail for nonmonotonic data and ignore observations that may be of biologic significance. Therefore, we propose a nonparametric model to describe the dose-response relationship and fit the curve using local polynomial regression. The nonparametric approach is shown to be promising especially for estimating the IC50 of some HIV inhibitory drugs, for which there is a dose-dependent stimulation of response in mutant strains. This modeling strategy may be applicable to general pharmacologic, toxicologic, or other biomedical data that exhibit a nonmonotonic dose-response relationship for which traditional parametric models fail.
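A local polynomial regression IC50 estimate can be sketched as follows. This is not the authors' implementation: the Gaussian kernel, the bandwidth, and the non-monotonic dose-response data are all made up for illustration:

```python
import math

def local_linear(x0, xs, ys, h=0.8):
    """Local degree-1 polynomial regression estimate at x0, Gaussian kernel."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    intercept = (swy - slope * swx) / sw
    return intercept + slope * x0

# Hypothetical data: % response vs log10(dose); the upturn at high dose
# mimics the mutant-strain stimulation described in the abstract.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [98.0, 92.0, 78.0, 62.0, 48.0, 38.0, 35.0, 42.0, 55.0]

# IC50 = first dose at which the fitted curve drops to 50% response.
grid = [i * 0.01 for i in range(401)]
ic50_log = next(x for x in grid if local_linear(x, xs, ys) <= 50.0)
```

Because the fit is nonparametric, the high-dose upturn is retained rather than forced into a sigmoid, and the IC50 is read off the descending branch of the smoothed curve.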
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or another bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to reliably determine the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons, and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus
ProFit: Bayesian galaxy fitting tool
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Taranu, D.; Tobar, R.
2016-12-01
ProFit is a Bayesian galaxy fitting tool that uses the fast C++ image generation library libprofit (ascl:1612.003) and a flexible R interface to a large number of likelihood samplers. It offers a fully featured Bayesian interface to galaxy model fitting (also called profiling), using mostly the same standard inputs as other popular codes (e.g. GALFIT ascl:1104.010), but it is also able to use complex priors and a number of likelihoods.
Initial Status in Growth Curve Modeling for Randomized Trials
Chou, Chih-Ping; Chi, Felicia; Weisner, Constance; Pentz, MaryAnn; Hser, Yih-Ing
2010-01-01
The growth curve modeling (GCM) technique has been widely adopted in longitudinal studies to investigate progression over time. The simplest growth profile involves two growth factors, initial status (intercept) and growth trajectory (slope). Conventionally, all repeated measures of outcome are included as components of the growth profile, and the first measure is used to reflect the initial status. Selection of the initial status, however, can greatly influence study findings, especially for randomized trials. In this article, we propose an alternative GCM approach involving only post-intervention measures in the growth profile and treating the first wave after intervention as the initial status. We discuss and empirically illustrate how choices of initial status may influence study conclusions in addressing research questions in randomized trials using two longitudinal studies. Data from two randomized trials are used to illustrate that the alternative GCM approach proposed in this article offers better model fitting and more meaningful results. PMID:21572585
Mixture Modeling of Individual Learning Curves
ERIC Educational Resources Information Center
Streeter, Matthew
2015-01-01
We show that student learning can be accurately modeled using a mixture of learning curves, each of which specifies error probability as a function of time. This approach generalizes Knowledge Tracing [7], which can be viewed as a mixture model in which the learning curves are step functions. We show that this generality yields order-of-magnitude…
A method for evaluating models that use galaxy rotation curves to derive the density profiles
NASA Astrophysics Data System (ADS)
de Almeida, Álefe O. F.; Piattella, Oliver F.; Rodrigues, Davi C.
2016-11-01
There are some approaches, based either on General Relativity (GR) or on modified gravity, that use galaxy rotation curves to derive the matter density of the corresponding galaxy; this procedure would indicate either a partial or a complete elimination of dark matter in galaxies. Here we review these approaches, clarify the difficulties of this inverse procedure, present a method for evaluating them, and use it to test two specific approaches that are based on GR: the Cooperstock-Tieu (CT) and the Balasin-Grumiller (BG) approaches. Using this new method, we find that neither of the tested approaches can satisfactorily fit the observational data without dark matter. The CT approach's results can be significantly improved if some dark matter is considered, while for the BG approach no usual dark matter halo can improve its results.
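For context, the conventional Newtonian inference that these GR-based approaches try to do without can be sketched in two lines; the circular-speed and radius values below are illustrative, not from the paper:

```python
G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def enclosed_mass(r_kpc, v_kms):
    """Newtonian mass implied by circular speed v at radius r: M = v^2 r / G."""
    return v_kms ** 2 * r_kpc / G

# A flat rotation curve (constant v) implies M(<r) growing linearly with r,
# i.e. a density falling roughly as 1/r^2: the standard dark-matter halo
# inference that the reviewed inverse procedures attempt to eliminate.
m10 = enclosed_mass(10.0, 200.0)
m20 = enclosed_mass(20.0, 200.0)
```

The difficulty the review addresses is the inverse direction: going from an observed v(r) back to a unique density profile, which in GR is far less direct than this Newtonian relation suggests.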
Goble, Daniel J; Hearn, Mason C; Baweja, Harsimran S
2017-01-01
Atypically high postural sway measured by a force plate is a known risk factor for falls in older adults. Further, it has been shown that small, but significant, reductions in postural sway are possible with various balance exercise interventions. In the present study, a new low-cost force-plate technology called the Balance Tracking System (BTrackS) was utilized to assess postural sway of older adults before and after 90 days of a well-established exercise program called Geri-Fit. Results showed an overall reduction in postural sway across all participants from pre- to post-intervention. However, the magnitude of effects was significantly influenced by the amount of postural sway demonstrated by individuals prior to Geri-Fit training. Specifically, more participants with atypically high postural sway pre-intervention experienced an overall postural sway reduction. These reductions experienced were typically greater than the minimum detectable change statistic for the BTrackS Balance Test. Taken together, these findings suggest that BTrackS is an effective means of identifying older adults with elevated postural sway, who are likely to benefit from Geri-Fit training to mitigate fall risk. PMID:28228655
NASA Astrophysics Data System (ADS)
Putro, Sapto P.; Muhammad, Fuad; Aininnur, Amalia; Widowati; Suhartana
2017-02-01
Floating net cages are one of the aquaculture practices operated in Indonesian coastal areas, and their use has grown rapidly over the last two decades. This study aimed to assess the role of macrobenthic mollusks as bioindicators of environmental disturbance caused by fish farming activities, and to compare samples across locations using graphical methods. The research was done at the floating net cage fish farming area in Awerange Gulf, South Sulawesi, Indonesia, at coordinates between 79°0500‧-79°1500‧ S and 953°1500‧-953°2000‧ E, at the polyculture area and at a reference area located 1 km away from the farming area. Sampling was conducted between October 2014 and June 2015. Sediment samples were taken from the two locations at two sampling times, with three replicates, using a Van Veen grab for biotic and abiotic assessment. Mollusks, the biotic parameter, were sieved through a 1 mm mesh, fixed in 4% formalin solution, and preserved in 70% ethanol solution. Fifteen species of macrobenthic mollusks were found, belonging to 14 families and 2 classes (gastropods and bivalves). Based on the cumulative k-dominance analysis projected for each station, the curves for stations K3T1 (reference area; first sampling time) and KJAB P3T2 (polyculture area; second sampling time) lie below the others, indicating the highest evenness and diversity, whereas stations K2T1 (reference area; first sampling time) and K3T2 (polyculture area; second sampling time) lie on top, indicating the lowest evenness and diversity. Based on the bubble plots of the NMDS ordination, the four dominant taxa/species did not clearly show involvement in driving/shifting the ordinate position of stations on the graph, except T. agilis. However, the two species showed involvement in driving/shifting the ordinate position of two stations of the reference areas from the first sampling time by Rynoclavis sordidula
NASA Astrophysics Data System (ADS)
Dalah, E.; Bradley, D.; Nisbet, A.
2010-07-01
One unique feature of positron emission tomography (PET) is that it allows measurements of regional tracer concentration in hypoxic tumour-bearing tissue, supporting the need for accurate radiotherapy treatment planning. Generally the data are taken over multiple time frames, in the form of tissue activity curves (TACs), providing an indication of the presence of hypoxia, the degree of oxygen perfusion, vascular geometry and hypoxia fraction. In order to understand such a complicated phenomenon a number of theoretical studies have attempted to describe tracer uptake in tissue cells. More recently, a novel computerized reaction diffusion equation method developed by Kelly and Brady has allowed simulation of the realistic TACs of 18F-FMISO, with representation of physiological oxygen heterogeneity and tracer kinetics. We present a refinement to the work of Kelly and Brady, with a particular interest in simulating TACs of the most promising hypoxia selective tracer, 64Cu-ATSM, demonstrating its potential role in tumour sub-volume delineation for radiotherapy treatment planning. Simulation results have demonstrated the high contrast of imaging using ATSM, with a tumour to blood ratio ranging 2.24-4.1. Similarly, results of tumour sub-volumes generated using three different thresholding methods were all well correlated.
Sharma, Kusum; Modi, Manish; Kaur, Harsimran; Sharma, Aman; Ray, Pallab; Varma, Subhash
2015-10-01
Timely and rapid diagnosis of multidrug resistance in tuberculous meningitis (TBM) is a challenge for both the microbiologist and the neurologist. The present study was conducted to evaluate the role of real-time polymerase chain reaction (PCR) using rpoB, IS6110, and MPB64 as targets in the diagnosis of TBM in 110 patients, with subsequent high-resolution melt (HRM) curve analysis of rpoB gene amplicons to screen for drug resistance. The sensitivity of smear, culture, and real-time PCR was 1.8%, 10.9%, and 83.63%, respectively. All 120 control patients showed negative results. With rpoB HRM analysis, rifampicin resistance was detected in 3 of the 110 TBM cases. The HRM results were subsequently confirmed by rpoB gene sequencing, with mutations observed at codons 516 (2 patients) and 531 (1 patient). rpoB HRM analysis can be a promising tool for rapid diagnosis and screening of drug resistance in TBM patients in 90 minutes.
De Luca, Michele; Ioele, Giuseppina; Mas, Sílvia; Tauler, Romà; Ragno, Gaetano
2012-11-21
Amiloride photostability at different pH values was studied in depth by applying Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) to the UV spectrophotometric data from drug solutions exposed to stressing irradiation. Resolution of all degradation photoproducts was possible by simultaneous spectrophotometric analysis of kinetic photodegradation and acid-base titration experiments. Amiloride photodegradation was shown to be strongly dependent on pH. Two hard-modelling constraints were used sequentially in MCR-ALS for the unambiguous resolution of all the species involved in the photodegradation process. An amiloride acid-base system was defined by using the equilibrium constraint, and the photodegradation pathway was modelled taking into account the kinetic constraint. The simultaneous analysis of photodegradation and titration experiments revealed the presence of eight different species, which were differently distributed according to pH and time. Concentration profiles of all the species as well as their pure spectra were resolved, and kinetic rate constants were estimated. The values of the rate constants changed with pH, and under alkaline conditions the degradation pathway and photoproducts also changed. These results were compared with those obtained by LC-MS analysis of drug photodegradation experiments. MS analysis allowed the identification of up to five species and showed the simultaneous presence of more than one acid-base equilibrium.
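The alternating least squares core of MCR can be illustrated with a toy two-component sketch: the data matrix D (spectra over time) is alternately resolved into concentration profiles C and pure spectra S, with non-negativity imposed by clipping. The kinetics and spectra below are synthetic, and this is a sketch of the ALS idea, not the constrained MCR-ALS software used in the study:

```python
import math

def solve2(a11, a12, a22, b1, b2):
    """Solve the symmetric 2x2 system [[a11, a12], [a12, a22]] x = b."""
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

def mcr_als(D, C, n_iter=100):
    """Two-component ALS for D ~ C S, clipping negatives after each step."""
    for _ in range(n_iter):
        # S-step: for each wavelength channel j, solve (C^T C) s_j = C^T d_j
        a11 = sum(c[0] * c[0] for c in C)
        a12 = sum(c[0] * c[1] for c in C)
        a22 = sum(c[1] * c[1] for c in C)
        S = []
        for j in range(len(D[0])):
            b1 = sum(C[i][0] * D[i][j] for i in range(len(D)))
            b2 = sum(C[i][1] * D[i][j] for i in range(len(D)))
            s1, s2 = solve2(a11, a12, a22, b1, b2)
            S.append([max(s1, 0.0), max(s2, 0.0)])
        # C-step: for each time point i, solve (S^T S) c_i = S^T d_i
        a11 = sum(s[0] * s[0] for s in S)
        a12 = sum(s[0] * s[1] for s in S)
        a22 = sum(s[1] * s[1] for s in S)
        for i in range(len(D)):
            b1 = sum(S[j][0] * D[i][j] for j in range(len(S)))
            b2 = sum(S[j][1] * D[i][j] for j in range(len(S)))
            c1, c2 = solve2(a11, a12, a22, b1, b2)
            C[i] = [max(c1, 0.0), max(c2, 0.0)]
    return C, S

# Synthetic A -> B photokinetics: c_A = exp(-t), c_B = 1 - exp(-t),
# with made-up pure spectra over three wavelength channels.
times = [0.0, 0.5, 1.0, 2.0, 4.0]
spec_a, spec_b = [1.0, 0.6, 0.1], [0.1, 0.5, 1.0]
D = [[math.exp(-t) * sa + (1.0 - math.exp(-t)) * sb
      for sa, sb in zip(spec_a, spec_b)] for t in times]
C0 = [[1.0, 0.0], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7], [0.0, 1.0]]
C, S = mcr_als(D, C0)
residual = max(abs(D[i][j] - sum(C[i][k] * S[j][k] for k in range(2)))
               for i in range(len(D)) for j in range(3))
```

The equilibrium and kinetic hard-modelling constraints the abstract describes would replace these unconstrained least-squares steps with fits to a titration or rate-law model, which is what removes the rotational ambiguity inherent in plain ALS.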
Wullschleger, Stan D; Gu, Lianhong; Pallardy, Stephen G.; Tu, Kevin; Law, Beverly E.
2010-01-01
The Farquhar-von Caemmerer-Berry (FvCB) model of photosynthesis is a change-point model and structurally overparameterized for interpreting the response of leaf net assimilation (A) to intercellular CO2 concentration (Ci). The use of conventional fitting methods may lead not only to incorrect parameters but also to several previously unrecognized consequences. For example, the relationships between key parameters may be fixed computationally, and certain fits may be produced in which the estimated parameters result in contradictory identification of the limitation states of the data. Here we describe a new approach that is better suited to the FvCB model's characteristics. It consists of four main steps: (1) enumeration of all possible distributions of limitation states; (2) fitting the FvCB model to each limitation-state distribution by minimizing a distribution-wise cost function that has desirable properties for parameter estimation; (3) identification and correction of inadmissible fits; and (4) selection of the best fit from all possible limitation-state distributions. The new approach implemented theoretical parameter resolvability with numerical procedures that maximally use the information content of the data. It was tested with model simulations, sampled A/Ci curves, and chlorophyll fluorescence measurements of different tree species. The new approach is accessible through the automated website leafweb.ornl.gov.
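The change-point character of the FvCB model comes from taking the minimum of the limitation-state rates. A minimal sketch of the forward model, with hypothetical parameter values (this is the model being fitted, not the authors' fitting procedure):

```python
def fvcb_A(Ci, Vcmax=60.0, J=120.0, Rd=1.0, gamma_star=40.0, Kco=700.0):
    """Net assimilation A (umol m-2 s-1) as the minimum of the Rubisco-limited
    and RuBP-regeneration-limited rates; Ci in umol mol-1, values hypothetical."""
    Wc = Vcmax * (Ci - gamma_star) / (Ci + Kco)                  # Rubisco-limited
    Wj = J * (Ci - gamma_star) / (4.0 * Ci + 8.0 * gamma_star)   # RuBP-limited
    return min(Wc, Wj) - Rd

# The change point: each Ci belongs to whichever limitation state gives the
# smaller rate. Which observed points fall in which state is unknown a priori,
# which is why the approach above enumerates all possible state distributions.
```

With these illustrative parameters the model is Rubisco-limited at low Ci and RuBP-regeneration-limited at high Ci; a conventional fit that assigns points to the wrong state is exactly the "contradictory identification" failure mode the abstract describes.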
Accelerating Around an Unbanked Curve
NASA Astrophysics Data System (ADS)
Mungan, Carl E.
2006-02-01
The December 2004 issue of TPT presented a problem concerning how a car should accelerate around an unbanked curve of constant radius r starting from rest if it is to avoid skidding. Interestingly enough, two solutions were proffered by readers.2 The purpose of this note is to compare and contrast the two approaches. Further experimental investigation of various turning strategies using a remote-controlled car and overhead video analysis could make for an interesting student project.
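Both solutions to the TPT problem rest on the same constraint: friction must supply the vector sum of the tangential and centripetal accelerations, so the car skids when that sum exceeds μg. A small sketch with an assumed friction coefficient:

```python
import math

def max_tangential_accel(v, r, mu=0.8, g=9.81):
    """Largest forward acceleration before skidding at speed v on an unbanked
    curve of radius r: sqrt(a_t^2 + (v^2/r)^2) must stay below mu*g."""
    limit = mu * g
    centripetal = v * v / r
    if centripetal >= limit:
        return 0.0  # the centripetal demand alone already saturates friction
    return math.sqrt(limit * limit - centripetal ** 2)

def skid_speed(r, mu=0.8, g=9.81):
    """Speed at which no tangential acceleration is possible: v = sqrt(mu*g*r)."""
    return math.sqrt(mu * g * r)
```

Starting from rest, all of μg is available for speeding up; as v grows, the centripetal share v²/r eats into the budget until, at `skid_speed(r)`, nothing is left for acceleration.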
Disentangling planetary and stellar activity features in the CoRoT-2 light curve
NASA Astrophysics Data System (ADS)
Bruno, G.; Deleuil, M.; Almenara, J.-M.; Barros, S. C. C.; Lanza, A. F.; Montalto, M.; Boisse, I.; Santerne, A.; Lagrange, A.-M.; Meunier, N.
2016-11-01
Aims: Stellar activity is an important source of systematic errors and uncertainties in the characterization of exoplanets. Most of the techniques used to correct for this activity focus on ad hoc data reduction. Methods: We have developed software for the combined fit of transits and stellar activity features in high-precision long-duration photometry. Our aim is to take advantage of the modelling to derive correct stellar and planetary parameters, even in the case of strong stellar activity. Results: We use an analytic approach to model the light curve. The code KSint, modified by adding the evolution of active regions, is implemented into our Bayesian modelling package PASTIS. The code is then applied to the light curve of CoRoT-2. The light curve is divided into segments to reduce the number of free parameters needed by the fit. We perform a Markov chain Monte Carlo analysis in two ways. In the first, we perform a global and independent modelling of each segment of the light curve: transits are not normalized and are fitted together with the activity features, and occulted features are taken into account during the transit fit. In the second, we normalize the transits with a model of the non-occulted activity features, and then we apply a standard transit fit, which does not take the occulted features into account. Conclusions: Our model recovers the activity features' coverage of the stellar surface and different rotation periods for different features. We find variations in the transit parameters of different segments and show that they are likely due to the division applied to the light curve. Neglecting stellar activity, or even only bright spots, while normalizing the transits yields a 1.2σ larger and 2.3σ smaller transit depth, respectively. The stellar density also presents up to 2.5σ differences depending on the normalization technique. Our analysis confirms the inflated radius of the planet (1.475 ± 0.031RJ) found by other authors. We show that
Regime Switching in the Latent Growth Curve Mixture Model
ERIC Educational Resources Information Center
Dolan, Conor V.; Schmittmann, Verena D.; Lubke, Gitta H.; Neale, Michael C.
2005-01-01
A linear latent growth curve mixture model is presented which includes switching between growth curves. Switching is accommodated by means of a Markov transition model. The model is formulated with switching as a highly constrained multivariate mixture model and is fitted using the freely available Mx program. The model is illustrated by analyzing…
ERIC Educational Resources Information Center
Nordmark, Arne; Essen, Hanno
2007-01-01
The equilibrium of a flexible inextensible string, or chain, in the centrifugal force field of a rotating reference frame is investigated. It is assumed that the end points are fixed on the rotation axis. The shape of the curve, the skipping rope curve or "troposkien", is given by the Jacobi elliptic function sn. (Contains 3 figures.)
Anodic Polarization Curves Revisited
ERIC Educational Resources Information Center
Liu, Yue; Drew, Michael G. B.; Liu, Ying; Liu, Lin
2013-01-01
An experiment published in this "Journal" has been revisited and it is found that the curve pattern of the anodic polarization curve for iron repeats itself successively when the potential scan is repeated. It is surprising that this observation has not been reported previously in the literature because it immediately brings into…
Searcy, James Kincheon
1959-01-01
The flow-duration curve is a cumulative frequency curve that shows the percent of time specified discharges were equaled or exceeded during a given period. It combines in one curve the flow characteristics of a stream throughout the range of discharge, without regard to the sequence of occurrence. If the period upon which the curve is based represents the long-term flow of a stream, the curve may be used to predict the distribution of future flows for water- power, water-supply, and pollution studies. This report shows that differences in geology affect the low-flow ends of flow-duration curves of streams in adjacent basins. Thus, duration curves are useful in appraising the geologic characteristics of drainage basins. A method for adjusting flow-duration curves of short periods to represent long-term conditions is presented. The adjustment is made by correlating the records of a short-term station with those of a long-term station.
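Each point on a flow-duration curve is just an exceedance count over the period of record. A minimal sketch, using a hypothetical set of daily discharges:

```python
def flow_duration(discharges, q):
    """Percent of time the discharge equaled or exceeded q."""
    return 100.0 * sum(1 for d in discharges if d >= q) / len(discharges)

# Hypothetical daily mean flows (cfs); in practice this would be the full
# period of record, sorted implicitly by the >= test above.
daily_flows = [12, 30, 7, 55, 18, 9, 41, 25, 14, 60]

pct = flow_duration(daily_flows, 25)  # 25 cfs was equaled or exceeded 50% of days
```

Evaluating `flow_duration` over a grid of discharges traces out the full curve; the low-flow (high-percent) end is the part the report uses to distinguish the geology of adjacent basins.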
Simulating Supernova Light Curves
Even, Wesley Paul; Dolence, Joshua C.
2016-05-05
This report discusses supernova light-curve simulations. A brief review of supernovae, the basics of supernova light curves, the simulation tools used at LANL, and supernova results are included. Many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the Earth's atmosphere.
ERIC Educational Resources Information Center
Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio
2017-01-01
In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.
A new methodology for free wake analysis using curved vortex elements
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Teske, Milton E.; Quackenbush, Todd R.
1987-01-01
A method using curved vortex elements was developed for helicopter rotor free wake calculations. The Basic Curve Vortex Element (BCVE) is derived from the approximate Biot-Savart integration for a parabolic arc filament. When used in conjunction with a scheme to fit the elements along a vortex filament contour, this method has a significant advantage in overall accuracy and efficiency when compared to the traditional straight-line element approach. A theoretical and numerical analysis shows that free wake flows involving close interactions between filaments should utilize curved vortex elements in order to guarantee a consistent level of accuracy. The curved element method was implemented into a forward flight free wake analysis, featuring an adaptive far wake model that utilizes free wake information to extend the vortex filaments beyond the free wake regions. The curved vortex element free wake, coupled with this far wake model, exhibited rapid convergence, even in regions where the free wake and far wake turns are interlaced. Sample calculations are presented for tip vortex motion at various advance ratios for single and multiple blade rotors. Cross-flow plots reveal that the overall downstream wake flow resembles a trailing vortex pair. A preliminary assessment shows that the rotor downwash field is insensitive to element size, even for relatively large curved elements.
NASA Technical Reports Server (NTRS)
Dellacorte, Christopher; Howard, S. Adam
2015-01-01
Ball bearings require proper fit and installation into machinery structures (onto shafts and into bearing housings) to ensure optimal performance. For some applications, both the inner and outer race must be mounted with an interference fit, and care must be taken during assembly and disassembly to avoid placing heavy static loads between the balls and races; otherwise, Brinell (dent-type) damage can occur. In this paper, a highly dent-resistant superelastic alloy, 60NiTi, is considered for rolling-element bearing applications that encounter excessive static axial loading during assembly or disassembly. A small (R8) ball bearing is designed for an application in which access to the bearing races to apply disassembly tools is precluded. First-principles analyses show that by careful selection of materials, raceway curvature and land geometry, a bearing can be designed that allows blind assembly and disassembly without incurring raceway damage due to ball denting. Though such blind-assembly applications are uncommon, the availability of bearings with unusually high static load capability may enable more such applications, with additional benefits especially for miniature bearings.
Johnson, James H.; McKenna, James E.; Dropkin, David S.; Andrews, William D.
2005-01-01
We examined the growth characteristics of 303 Atlantic sturgeon, Acipenser oxyrinchus, caught in the commercial fishery off the New Jersey coast from 1992 to 1994 (fork length range: 93–219 cm). Sections taken from the leading pectoral fin ray were used to age each sturgeon. Ages ranged from 5 to 26 years. Von Bertalanffy growth models for males and females fit well, and test statistics (t-test, maximum likelihood) detected no significant difference in growth between the sexes. Consequently, all data were pooled, and the combined data gave L∞ and K estimates of 174.2 cm and 0.144, respectively. Our growth data do not fit the pattern of slower growth and increased size at more northerly latitudes observed for Atlantic sturgeon in other work. The lack of uniformity in our growth data may be due to (1) the sturgeon fishery harvesting multiple stocks having different growth rates, and (2) size limits for the commercial fishery having created a bias in estimating growth parameters.
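The pooled von Bertalanffy model can be sketched directly from the estimates quoted above. The t0 parameter (theoretical age at zero length) is not reported in the abstract, so zero is used here purely as a placeholder assumption.

```python
import math

def von_bertalanffy(age, l_inf=174.2, k=0.144, t0=0.0):
    """Von Bertalanffy length-at-age, L(t) = L_inf * (1 - exp(-K (t - t0))).
    l_inf and k are the pooled estimates quoted in the abstract; t0 is a
    placeholder assumption, not a reported value."""
    return l_inf * (1.0 - math.exp(-k * (age - t0)))

# predicted fork length (cm) across the observed age range of 5-26 years
predicted = {age: round(von_bertalanffy(age), 1) for age in (5, 15, 26)}
```

Length rises steeply at young ages and flattens toward the asymptote L∞ = 174.2 cm, which is the behavior the fitted curves describe.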
HfS, Hyperfine Structure Fitting Tool
NASA Astrophysics Data System (ADS)
Estalella, Robert
2017-02-01
Hyperfine Structure Fitting (HfS) is a tool to fit the hyperfine structure of spectral lines with multiple velocity components. The HfS_nh3 procedures included in HfS simultaneously fit the hyperfine structure of the NH3 (J, K) = (1, 1) and (2, 2) transitions, and perform a standard analysis to derive T_ex, NH3 column density, T_rot, and T_k. HfS uses a Monte Carlo approach for fitting the line parameters. Special attention is paid to the derivation of the parameter uncertainties. HfS includes procedures that make use of parallel computing for fitting spectra from a data cube.
CURVES: curve evolution for vessel segmentation.
Lorigo, L M; Faugeras, O D; Grimson, W E; Keriven, R; Kikinis, R; Nabavi, A; Westin, C F
2001-09-01
The vasculature is of utmost importance in neurosurgery. Direct visualization of images acquired with current imaging modalities, however, cannot provide a spatial representation of small vessels. These vessels, and their branches, which show considerable variation, are most important in planning and performing neurosurgical procedures. In planning, they provide information on where the lesion draws its blood supply and where it drains. During surgery, the vessels serve as landmarks and guidelines to the lesion. The more minute the information, the more precise the navigation and localization of computer-guided procedures. Beyond neurosurgery and neurological study, vascular information is also crucial in cardiovascular surgery, diagnosis, and research. This paper addresses the problem of automatic segmentation of complicated curvilinear structures in three-dimensional imagery, with the primary application of segmenting vasculature in magnetic resonance angiography (MRA) images. The method presented is based on recent curve and surface evolution work in the computer vision community, which models the object boundary as a manifold that evolves iteratively to minimize an energy criterion. This energy criterion is based both on intensity values in the image and on local smoothness properties of the object boundary, which is the vessel wall in this application. In particular, the method handles curves evolving in 3D, in contrast with previous work that has dealt with curves in 2D and surfaces in 3D. Results are presented on cerebral and aortic MRA data as well as lung computed tomography (CT) data.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
ERIC Educational Resources Information Center
Grosse, Susan J.
2009-01-01
This article discusses how families can increase family togetherness and improve physical fitness. The author provides easy ways to implement family friendly activities for improving and maintaining physical health. These activities include: walking, backyard games, and fitness challenges.
... page: //medlineplus.gov/ency/patientinstructions/000891.htm Outdoor fitness routine ... you and is right for your level of fitness. Here are some ideas: Warm up first. Get ...
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1988-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger, it permits the development of a comprehensive program having libraries of driving force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
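The instability calculation the paper describes can be sketched numerically: at a given load, the crack arrests if the R-curve rises above the driving force at some growth increment, and the configuration is unstable if it never does; the critical load is then found by bisection. The crack geometry, the power-law R-curve model, and every constant below are illustrative assumptions for the sketch, not the paper's fitted equations.

```python
import math

def k_applied(a, load):
    """Elastic driving force K = load * sqrt(pi * a); a through-crack form
    with the geometry factor omitted (illustrative assumption)."""
    return load * math.sqrt(math.pi * a)

def k_resistance(da, k0=30.0, c=25.0, n=0.4):
    """Power-law R-curve model K_R = K0 + C * da**n; the constants are
    illustrative, not values fitted in the paper."""
    return k0 + c * da ** n

def is_unstable(load, a0=0.01, da_max=0.05, steps=500):
    """The crack grows while K_applied exceeds K_R; if it never arrests
    within da_max, the configuration is unstable at this load."""
    for i in range(1, steps + 1):
        da = da_max * i / steps
        if k_applied(a0 + da, load) < k_resistance(da):
            return False          # growth arrests: stable
    return True

def critical_load(lo=1.0, hi=2000.0, tol=1e-3, a0=0.01):
    """Bisect for the smallest load at which crack growth never arrests."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_unstable(mid, a0):
            hi = mid
        else:
            lo = mid
    return hi
```

At the critical load the driving-force curve is tangent to the R-curve, which is the instability condition such calculations solve for.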
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1990-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger, it permits the development of a comprehensive program having libraries of driving force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
NASA Astrophysics Data System (ADS)
Dias, Marcelo A.; Santangelo, Christian D.
2011-03-01
Despite an almost two thousand year history, origami, the art of folding paper, remains a challenge both artistically and scientifically. Traditionally, origami is practiced by folding along straight creases. A whole new set of shapes can be explored, however, if, instead of straight creases, one folds along arbitrary curves. We present a mechanical model for curved fold origami in which the energy of a plastically-deformed crease is balanced by the bending energy of developable regions on either side of the crease. Though geometry requires that a sheet buckle when folded along a closed curve, its shape depends on the elasticity of the sheet. NSF DMR-0846582.
Piecewise quartic polynomial curves with a local shape parameter
NASA Astrophysics Data System (ADS)
Han, Xuli
2006-10-01
Piecewise quartic polynomial curves with a local shape parameter are presented in this paper. The given blending function is an extension of the cubic uniform B-splines. Changing a local shape parameter alters only two curve segments. As the value of the shape parameter increases, the curve approaches the corresponding control point. The given curves possess satisfactory shape-preserving properties. The given curve can also be used to interpolate the control points locally with GC2 continuity. Thus, the given curves unify the representation of curves for interpolating and approximating the control polygon. As an application, the piecewise polynomial curves can intersect an ellipse at different knot values by choosing the value of the shape parameter. The given curve can approximate an ellipse from both sides and can thus yield a tight envelope for an ellipse. Some computing examples for curve design are given.
Quasispecies on Fitness Landscapes.
Schuster, Peter
2016-01-01
Selection-mutation dynamics is studied as adaptation and neutral drift on abstract fitness landscapes. Various models of fitness landscapes are introduced and analyzed with respect to the stationary mutant distributions adopted by populations upon them. The concept of quasispecies is introduced, and the error threshold phenomenon is analyzed. Complex fitness landscapes with large scatter of fitness values are shown to sustain error thresholds. The phenomenological theory of the quasispecies introduced in 1971 by Eigen is compared to approximation-free numerical computations. The concept of strong quasispecies, understood as mutant distributions that are especially stable against changes in mutation rates, is presented. The role of fitness-neutral genotypes in quasispecies is discussed.
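The quasispecies itself is the stationary distribution of the Eigen selection-mutation equations, i.e. the dominant eigenvector of the value matrix W = QF. A minimal sketch on a single-peak landscape over short binary sequences, found by power iteration, is shown below; the landscapes and analyses in the chapter are far richer, and all parameter values here are illustrative.

```python
import itertools

def quasispecies_distribution(length=4, mu=0.05, peak_fitness=10.0, iters=500):
    """Stationary mutant distribution of the Eigen model on a single-peak
    fitness landscape over binary sequences, by power iteration on W = Q F.
    Each site is copied incorrectly with probability mu."""
    seqs = list(itertools.product((0, 1), repeat=length))
    master = seqs[0]
    fitness = [peak_fitness if s == master else 1.0 for s in seqs]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # Q[i][j]: probability that replicating sequence j yields sequence i
    Q = [[mu ** hamming(si, sj) * (1.0 - mu) ** (length - hamming(si, sj))
          for sj in seqs] for si in seqs]
    x = [1.0 / len(seqs)] * len(seqs)
    for _ in range(iters):
        y = [sum(Q[i][j] * fitness[j] * x[j] for j in range(len(seqs)))
             for i in range(len(seqs))]
        total = sum(y)
        x = [v / total for v in y]
    return dict(zip(seqs, x))
```

Raising the mutation rate drains probability away from the master sequence, which is the qualitative signature of the error threshold discussed above.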
Highly curved microchannel plates
NASA Technical Reports Server (NTRS)
Siegmund, O. H. W.; Cully, S.; Warren, J.; Gaines, G. A.; Priedhorsky, W.; Bloch, J.
1990-01-01
Several spherically curved microchannel plate (MCP) stack configurations were studied as part of an ongoing astrophysical detector development program, and as part of the development of the ALEXIS satellite payload. MCP pairs with surface radii of curvature as small as 7 cm and diameters up to 46 mm have been evaluated. The experiments show that the gain (greater than 1.5 x 10^7) and background characteristics (about 0.5 events/sq cm per sec) of highly curved MCP stacks are in general equivalent to the performance achieved with flat MCP stacks of similar configuration. However, gain variations across the curved MCPs due to variations in the channel length-to-diameter ratio are observed. The overall pulse height distribution of a highly curved surface MCP stack (greater than 50 percent FWHM) is thus broader than that of its flat counterpart (less than 30 percent). Preconditioning of curved MCP stacks gives comparable results to flat MCP stacks, but it also decreases the overall gain variations. Flat fields of curved MCP stacks have the same general characteristics as flat MCP stacks.
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1995-01-01
In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe-scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm that yields shock-fitted accuracy at convergence.
ERIC Educational Resources Information Center
Marini, Isabella
2005-01-01
Human salivary [alpha]-amylase is used in this experimental approach to introduce biology high school students to the concept of enzyme activity in a dynamic way. Through a series of five easy, rapid, and inexpensive laboratory experiments students learn what the activity of an enzyme consists of: first in a qualitative then in a semi-quantitative…
NASA Astrophysics Data System (ADS)
Metcalfe, Travis S.
1999-05-01
I have developed a procedure utilizing a genetic-algorithm (GA) based optimization scheme to fit the observed light curves of an eclipsing binary star with a model produced by the Wilson-Devinney (W-D) code. The principal advantages of this approach are the global search capability and the objectivity of the final result. Although this method can be more efficient than some other comparably global search techniques, the computational requirements of the code are still considerable. I have applied this fitting procedure to my observations of the W UMa type eclipsing binary BH Cassiopeiae. An analysis of V-band CCD data obtained in 1994-1995 from Steward Observatory and U- and B-band photoelectric data obtained in 1996 from McDonald Observatory provided three complete light curves to constrain the fit. In addition, radial velocity curves obtained in 1997 from McDonald Observatory provided a direct measurement of the system mass ratio to restrict the search. The results of the GA-based fit are in excellent agreement with the final orbital solution obtained with the standard differential corrections procedure in the W-D code.
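A GA-based light-curve fit of this kind can be sketched with a toy model in place of the Wilson-Devinney code, which is far too large to reproduce here. The cosine model, the parameter bounds, and the GA settings below are all illustrative assumptions; only the overall scheme (a population evolved to minimize chi-square) mirrors the approach described above.

```python
import math
import random

def model(t, amp, phase):
    """Toy light-curve model (a cosine) standing in for the W-D code."""
    return amp * math.cos(2.0 * math.pi * (t - phase))

def chi_square(params, data):
    """Figure of merit minimized by the GA."""
    return sum((f - model(t, *params)) ** 2 for t, f in data)

def ga_fit(data, bounds, pop_size=40, generations=80, seed=1):
    """Minimal real-coded GA: keep the better half of the population, refill
    by blend crossover of surviving parents plus occasional Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: chi_square(p, data))
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1.0 - w) * y for x, y in zip(a, b)]
            if rng.random() < 0.3:                       # mutate one gene
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: chi_square(p, data))

# synthetic light curve with amp = 0.8, phase = 0.25; fit over broad bounds
data = [(k / 20.0, 0.8 * math.cos(2.0 * math.pi * (k / 20.0 - 0.25))) for k in range(20)]
best = ga_fit(data, [(0.0, 2.0), (0.0, 1.0)])
```

The global search capability cited above comes from the random initial population and mutation; the objectivity comes from the fit being driven entirely by the figure of merit rather than by hand-tuned starting values.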
The Very Essentials of Fitness for Trial Assessment in Canada
ERIC Educational Resources Information Center
Newby, Diana; Faltin, Robert
2008-01-01
Fitness for trial constitutes the most frequent referral to forensic assessment services. Several approaches to this evaluation exist in Canada, including the Fitness Interview Test and Basic Fitness for Trial Test. The following article presents a review of the issues and a method for basic fitness for trial evaluation.
A new curriculum for fitness education.
Boone, J L
1983-01-01
Regular exercise is important in a preventive approach to health care because it exerts a beneficial effect on many risk factors in the development of coronary heart disease. However, many Americans lack the skills required to devise and carry out a safe and effective exercise program appropriate for a lifetime of fitness. This inability is partly due to the lack of fitness education during their school years. School programs in physical education tend to neglect training in the health-related aspects of fitness. Therefore, a new curriculum for fitness education is proposed that would provide seventh, eighth, and ninth grade students with (a) a basic knowledge of their physiological response to exercise, (b) the means to develop their own safe and effective physical fitness program, and (c) the motivation to incorporate regular exercise into their lifestyle. This special 4-week segment of primarily academic study is designed to be inserted into the physical education curriculum. Daily lessons cover health-related fitness, cardiovascular fitness, body fitness, and care of the back. A final written examination covering major areas of information is given to emphasize this academic approach to exercise. Competition in athletic ability is deemphasized, and motivational awards are given based on health-related achievements. The public's present lack of knowledge about physical fitness, coupled with the numerous anatomical and physiological benefits derived from regular, vigorous exercise, mandates an intensified curriculum of fitness education for school children. PMID:6414039
Comparison of Two Algebraic Methods for Curve/curve Intersection
NASA Technical Reports Server (NTRS)
Demontaudouin, Y.; Tiller, W.
1985-01-01
Most geometric modeling systems use either polynomial or rational functions to represent geometry. In such systems most computational problems can be formulated as systems of polynomials in one or more variables. Classical elimination theory can be used to solve such systems. Here Cayley's method of elimination is summarized, and it is shown how it can best be used to solve the curve/curve intersection problem. Cayley's method was found to be a more straightforward approach. Furthermore, it is computationally simpler, since the elements of the Cayley matrix are one-variable rather than two-variable polynomials. Researchers implemented and tested both methods and found Cayley's to be more efficient. Six pairs of curves, representing mixtures of lines, circles, and cubic arcs, were used. Several examples had multiple intersection points. For all six cases Cayley's method required less CPU time than the other method. The average time ratio of method 1 to method 2 was 3.13:1; the least difference was 2.33:1, and the most dramatic was 6.25:1. Both of the above methods can be extended to solve the surface/surface intersection problem.
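Elimination by resultants can be illustrated with the Sylvester matrix, the easiest resultant construction to state; Cayley's method, compared above, builds a smaller matrix but follows the same idea (the resultant vanishes exactly when the two polynomials share a root). This is a generic textbook sketch, not the paper's implementation.

```python
from fractions import Fraction

def sylvester_resultant(f, g):
    """Resultant of two univariate polynomials via the Sylvester matrix.
    Coefficients are listed from highest to lowest degree."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):                       # n shifted copies of f
        rows.append([0] * i + list(f) + [0] * (size - m - 1 - i))
    for i in range(m):                       # m shifted copies of g
        rows.append([0] * i + list(g) + [0] * (size - n - 1 - i))
    # exact determinant by Gaussian elimination over the rationals
    a = [[Fraction(x) for x in row] for row in rows]
    det = Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if a[r][col] != 0), None)
        if pivot is None:
            return 0                          # singular matrix: resultant is 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det *= a[col][col]
        for r in range(col + 1, size):
            factor = a[r][col] / a[col][col]
            for c in range(col, size):
                a[r][c] -= factor * a[col][c]
    return det

# a common root makes the resultant vanish: x^2 - 1 and x - 1 share x = 1
```

For curve/curve intersection, one forms the resultant of the two implicit curve equations with respect to one variable, leaving a univariate polynomial whose roots give the intersection coordinates.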
Mathematics analysis of polymerase chain reaction kinetic curves.
Sochivko, D G; Fedorov, A A; Varlamov, D A; Kurochkin, V E; Petrov, R V
2016-01-01
The paper reviews different approaches to the mathematical analysis of polymerase chain reaction (PCR) kinetic curves. The basic principles of PCR mathematical analysis are presented. Approximation of PCR kinetic curves and PCR efficiency curves by various functions is described. Several PCR models based on chemical kinetics equations are suggested. Decision criteria for an optimal function to describe PCR efficiency are proposed.
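One kinetics-inspired model of the kind the review surveys treats the per-cycle amplification efficiency as falling while product accumulates, which reproduces the familiar sigmoid kinetic curve. The linear efficiency form and the parameter values below are illustrative assumptions, not equations taken from the review.

```python
def pcr_curve(cycles=40, f0=1e-7, e0=0.95, fmax=1.0):
    """Simulate a PCR kinetic curve with an efficiency that falls linearly
    as product accumulates:
        E_n = e0 * (1 - F_n / fmax),   F_{n+1} = F_n * (1 + E_n).
    All parameter values are illustrative."""
    fluorescence, efficiency = [f0], []
    f = f0
    for _ in range(cycles):
        e = e0 * (1.0 - f / fmax)
        f *= 1.0 + e
        fluorescence.append(f)
        efficiency.append(e)
    return fluorescence, efficiency

# exponential early cycles, then a plateau as the efficiency collapses
curve, eff = pcr_curve()
```

Fitting such a model (or a sigmoid approximation of it) to observed fluorescence is exactly the approximation problem the basic principles above address.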
Fitting C2 Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures
Bayer, Jason D.
2014-01-01
We present a technique to fit C2 continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C2 continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C2 continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner to ensure a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C2 continuous. PMID:24782911
2015-09-01
requirements and in managing personnel by tracking sailors who have acquired these skills. NEC Fit is one of two primary metrics that Navy leadership...with a rating. They are used in defining manpower requirements and in personnel management to track sailors who have acquired these skills. NEC Fit...of the new NEC requirements in the Total Force Manpower Management System (TFMMS) for Fit levels to reach their steady state. The primary reason for
Pickett, Patrick T.
1981-01-01
A hollow fitting for use in gas spectrometry leak testing of conduit joints is divided into two generally symmetrical halves along the axis of the conduit. A clip may quickly and easily fasten and unfasten the halves around the conduit joint under test. Each end of the fitting is sealable with a yieldable material, such as a piece of foam rubber. An orifice is provided in a wall of the fitting for the insertion or detection of helium during testing. One half of the fitting also may be employed to test joints mounted against a surface.
Leslie, Mark; Holloway, Charles A
2006-01-01
When a company launches a new product into a new market, the temptation is to immediately ramp up sales force capacity to gain customers as quickly as possible. But hiring a full sales force too early just causes the firm to burn through cash and fail to meet revenue expectations. Before it can sell an innovative product efficiently, the entire organization needs to learn how customers will acquire and use it, a process the authors call the sales learning curve. The concept of a learning curve is well understood in manufacturing. Employees transfer knowledge and experience back and forth between the production line and purchasing, manufacturing, engineering, planning, and operations. The sales learning curve unfolds similarly through the give-and-take between the company--marketing, sales, product support, and product development--and its customers. As customers adopt the product, the firm modifies both the offering and the processes associated with making and selling it. Progress along the manufacturing curve is measured by tracking cost per unit: The more a firm learns about the manufacturing process, the more efficient it becomes, and the lower the unit cost goes. Progress along the sales learning curve is measured in an analogous way: The more a company learns about the sales process, the more efficient it becomes at selling, and the higher the sales yield. As the sales yield increases, the sales learning process unfolds in three distinct phases--initiation, transition, and execution. Each phase requires a different size--and kind--of sales force and represents a different stage in a company's production, marketing, and sales strategies. Adjusting those strategies as the firm progresses along the sales learning curve allows managers to plan resource allocation more accurately, set appropriate expectations, avoid disastrous cash shortfalls, and reduce both the time and money required to turn a profit.
Escudero, Carlos
2009-08-15
Stochastic growth phenomena on curved interfaces are studied by means of stochastic partial differential equations. These are derived as counterparts of linear planar equations on a curved geometry after a reparametrization invariance principle has been applied. We examine differences and similarities with the classical planar equations. Some characteristic features are the loss of correlation through time and a particular behavior of the average fluctuations. Dependence on the metric is also explored. The diffusive model that propagates correlations ballistically in the planar situation is particularly interesting, as this propagation becomes nonuniversal in the new regime.
Biomedical model fitting and error analysis.
Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri
2011-09-20
This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
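Steps (ii) and (iii) of the six-step process can be sketched in a few lines. The lecture's materials are MATLAB-based, so the Python translation below is only schematic; it also substitutes a brute-force grid scan for MATLAB's nonlinear optimizers, and the exponential-decay model and data are invented for illustration.

```python
import math

def model(t, n0, k):
    """Exponential cell-death model N(t) = N0 * exp(-k t), a simple stand-in
    for the lecture's B-lymphocyte turnover models."""
    return n0 * math.exp(-k * t)

def sse(params, data):
    """Step (ii): a least-squares figure-of-merit function."""
    n0, k = params
    return sum((y - model(t, n0, k)) ** 2 for t, y in data)

def grid_fit(data, n0_range, k_range, steps=100):
    """Step (iii) by brute force: scan a parameter grid for the best fit
    (a real analysis would use a proper nonlinear optimizer)."""
    best, best_err = None, float("inf")
    for i in range(steps + 1):
        n0 = n0_range[0] + (n0_range[1] - n0_range[0]) * i / steps
        for j in range(steps + 1):
            k = k_range[0] + (k_range[1] - k_range[0]) * j / steps
            err = sse((n0, k), data)
            if err < best_err:
                best, best_err = (n0, k), err
    return best, best_err

# noiseless synthetic data generated with n0 = 100, k = 0.5
data = [(t, 100.0 * math.exp(-0.5 * t)) for t in range(8)]
params, err = grid_fit(data, (50.0, 150.0), (0.1, 1.0))
```

Note that the nonlinear model is fitted directly, without the linearization (log-transform) shortcut the lecture warns against, since that shortcut distorts the error structure of the data.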
Do the Kepler AGN light curves need reprocessing?
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Vogeley, Michael S.; Richards, Gordon T.; Williams, Joshua; Carini, Michael T.
2015-10-01
We gauge the impact of spacecraft-induced effects on the inferred variability properties of the light curve of the Seyfert 1 AGN Zw 229-15 observed by Kepler. We compare the light curve of Zw 229-15 obtained from the Kepler MAST data base with a reprocessed light curve constructed from raw pixel data. We use the first-order structure function, SF(δt), to fit both light curves to the damped power-law PSD (power spectral density) of Kasliwal et al. On short time-scales, we find a steeper log PSD slope (γ = 2.90 to within 10 per cent) for the reprocessed light curve as compared to the light curve found on MAST (γ = 2.65 to within 10 per cent), both inconsistent with a damped random walk (DRW), which requires γ = 2. The log PSD slope inferred for the reprocessed light curve is consistent with previous results that study the same reprocessed light curve. The turnover time-scale is almost identical for both light curves (27.1 and 27.5 d for the reprocessed and MAST data base light curves, respectively). Based on the obvious visual difference between the two versions of the light curve and on the PSD model fits, we conclude that there remain significant levels of spacecraft-induced effects in the standard pipeline reduction of the Kepler data. Reprocessing the light curves will change the model inferred from the data but is unlikely to change the overall scientific conclusions reached by Kasliwal et al.: not all AGN light curves are consistent with the DRW.
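The first-order structure function used above is straightforward to compute from a sampled light curve. The sketch below matches epoch pairs at an exact lag for simplicity; real analyses bin pairs into lag ranges and then read the PSD slope off the logarithmic slope of SF versus lag.

```python
def structure_function(times, fluxes, lag, tol=1e-9):
    """First-order structure function SF(dt): the mean of
    [f(t + dt) - f(t)]^2 over all epoch pairs separated by the given lag
    (to within tol). A minimal sketch; practical codes bin pairs by lag."""
    diffs = [
        (fluxes[j] - fluxes[i]) ** 2
        for i in range(len(times))
        for j in range(i + 1, len(times))
        if abs((times[j] - times[i]) - lag) <= tol
    ]
    if not diffs:
        raise ValueError("no epoch pairs at this lag")
    return sum(diffs) / len(diffs)

# sanity check on a linear ramp f(t) = 2t, where SF(dt) = (2 dt)^2
times = [0.1 * k for k in range(100)]
fluxes = [2.0 * t for t in times]
```

On the ramp, SF(0.5) evaluates to (2 x 0.5)^2 = 1, independent of which pairs are used, which makes it a convenient check before applying the estimator to real data.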
Nonlinear bulging factor based on R-curve data
NASA Technical Reports Server (NTRS)
Jeong, David Y.; Tong, Pin
1994-01-01
In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.
Nonlinear bulging factor based on R-curve data
NASA Astrophysics Data System (ADS)
Jeong, David Y.; Tong, Pin
1994-09-01
In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.
Molecular dynamics simulations of the melting curve of NiAl alloy under pressure
Zhang, Wenjin; Peng, Yufeng; Liu, Zhongli
2014-05-15
The melting curve of B2-NiAl alloy under pressure has been investigated using the molecular dynamics technique and the embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first simulates homogeneous melting, while the second involves heterogeneous melting of materials. Both approaches reduce superheating effectively, and their results are close to each other at the applied pressures. By fitting the well-known Simon equation to our melting data, we obtained the melting curves for NiAl: 1783(1 + P/9.801)^0.298 (one-phase approach) and 1850(1 + P/12.806)^0.357 (two-phase approach). The good agreement of the resulting equations of state and the zero-pressure melting point (calc. 1850 ± 25 K, exp. 1911 K) with experiment supports the correctness of these results. These melting data fill the absence of experimental high-pressure melting data for NiAl. To check the transferability of this EAM potential, we have also predicted the melting curves of pure nickel and pure aluminum. Results show the calculated melting point of nickel agrees well with experiment at zero pressure, while the melting point of aluminum is slightly higher than experiment.
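The two fitted Simon equations quoted above can be evaluated directly. The sketch below assumes pressure in GPa and temperature in K, consistent with the quoted zero-pressure melting points.

```python
def melting_temperature(p, t0, a, c):
    """Simon equation T_m(P) = T0 * (1 + P/a)**c."""
    return t0 * (1.0 + p / a) ** c

def nial_melting(p, method="one-phase"):
    """Fitted Simon parameters for B2-NiAl quoted in the abstract above."""
    if method == "one-phase":
        return melting_temperature(p, 1783.0, 9.801, 0.298)
    return melting_temperature(p, 1850.0, 12.806, 0.357)
```

At zero pressure the two fits return 1783 K and 1850 K respectively, and both curves rise monotonically with pressure, as a melting curve must.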
ERIC Educational Resources Information Center
Lawes, Jonathan F.
2013-01-01
Graphing polar curves typically involves a combination of three traditional techniques, all of which can be time-consuming and tedious. However, an alternative method--graphing the polar function on a rectangular plane--simplifies graphing, increases student understanding of the polar coordinate system, and reinforces graphing techniques learned…
Textbook Factor Demand Curves.
ERIC Educational Resources Information Center
Davis, Joe C.
1994-01-01
Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)
ERIC Educational Resources Information Center
Paulton, Richard J. L.
1991-01-01
A procedure that allows students to view an entire bacterial growth curve during a two- to three-hour student laboratory period is described. Observations of the lag phase, logarithmic phase, maximum stationary phase, and phase of decline are possible. A nonpathogenic, marine bacterium is used in the investigation. (KR)
ERIC Educational Resources Information Center
Nordholm, Catherine R.
This document makes a number of observations about physical fitness in America. Among them are: (1) the symptoms of aging (fat accumulation, lowered basal metabolic rate, loss of muscular strength, reduction in motor fitness, reduction in work capacity, etc.) are not the result of disease but disuse; (2) society conditions the individual to…
Physical Fitness and Counseling.
ERIC Educational Resources Information Center
Helmkamp, Jill M.
Human beings are a delicate balance of mind, body, and spirit, so an imbalance in one domain affects all others. The purpose of this paper is to examine the effects that physical fitness may have on such human characteristics as personality and behavior. A review of the literature reveals that physical fitness is related to, and can affect,…
Fitness in Special Populations.
ERIC Educational Resources Information Center
Shephard, Roy J.
This book examines fitness research among special populations, including research on fitness assessment, programming, and performance for persons with various forms of physical disabilities. The book covers such topics as diseases that complicate life in a wheelchair, disability classifications, physiological responses to training, positive…
ERIC Educational Resources Information Center
McNamara, Jeanne
This lesson plan introduces students to the concept of supply and demand by appealing to bodily/kinesthetic intelligences. Students participate in a fitness class and then analyze the economic motives behind making an individual feel better after a fitness activity; i.e., analyzing how much an individual would pay for a drink and snack after a…
Outfitting Campus Fitness Centers.
ERIC Educational Resources Information Center
Fickes, Michael
1999-01-01
Explains how universities and colleges, both private and public, are including fitness centers as ways of increasing their student enrollment levels. Comments are provided on school experiences in fitness-center design, equipment purchasing, and maintenance and operating-costs issues. (GR)
ERIC Educational Resources Information Center
Farrell, Anne; Faigenbaum, Avery; Radler, Tracy
2010-01-01
The urgency to improve fitness levels and decrease the rate of childhood obesity has been at the forefront of physical education philosophy and praxis. Few would dispute that school-age youth need to participate regularly in physical activities that enhance and maintain both skill- and health-related physical fitness. Regular physical activity…
ERIC Educational Resources Information Center
Klahr, Gary Peter
1992-01-01
Although the 1980's fitness craze is wearing off and adults are again becoming "couch potatoes," this trend does not justify expansion of high school compulsory physical education requirements. To encourage commitment to lifetime physical fitness, the Phoenix (Arizona) Union High School District offers students private showers, relaxed…
NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or by elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve fitting was used, with ellipses and hyperbolas parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating the parameters of straight-line and quadratic-curve fits were determined and compared. CPU time for the data search was also determined for the case of straight-line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as straight-line fitting, and curves relating CPU time and fitting error were determined for straight-line fitting. Results are also derived on the early sequential determination of whether or not the underlying curve is a straight line.
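The straight-line case of such a least-squares fit can be sketched in a few lines of Python (a self-contained batch version for illustration, not the paper's recursive CDC implementation; the function name and data are invented):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = m*x + c to (x, y) pairs
    (illustrative batch version; the paper's routine is recursive)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Noisy samples of y = 2x + 1 are fit back to roughly those coefficients.
m, c = fit_line([(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)])
```

The quadratic (conic) case replaces these two normal equations with a larger linear system in the coefficients of x², xy, y², x, and y, which is why it costs several times as much CPU time.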
Trend analyses with river sediment rating curves
Warrick, Jonathan A.
2015-01-01
Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and that a new parameter, â, as determined by the discharge-normalized power function [C = â(Q/Q_GM)^b], where Q_GM is the geometric mean of the sampled Q values, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can also occur without any change in the parameters â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
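The discharge-normalized fit described above reduces to ordinary linear regression in log-log space. A minimal sketch (the function name and synthetic data are ours, not the paper's code):

```python
import math

def fit_rating_curve(Q, C):
    """Fit C = a_hat * (Q / Q_gm)**b in log-log space, where Q_gm is the
    geometric mean of the sampled discharges (a sketch of the normalization
    described above, not the paper's implementation)."""
    n = len(Q)
    q_gm = math.exp(sum(math.log(q) for q in Q) / n)
    x = [math.log(q / q_gm) for q in Q]   # centered: sums to zero
    y = [math.log(c) for c in C]
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a_hat = math.exp((sy - b * sx) / n)
    return a_hat, b, q_gm

# Synthetic data generated exactly from C = 3 * (Q / Q_GM)**1.5.
Q = [1.0, 2.0, 4.0, 8.0, 16.0]  # geometric mean is 4
C = [3.0 * (q / 4.0) ** 1.5 for q in Q]
a_hat, b, q_gm = fit_rating_curve(Q, C)
```

Because the log-discharges are centered on the geometric mean, â is simply the fitted concentration at Q = Q_GM, which is what makes it a stable measure of the rating curve's vertical offset.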
Investigation of learning and experience curves
Krawiec, F.; Thornton, J.; Edesess, M.
1980-04-01
The applicability of learning and experience curves for predicting future costs of solar technologies is assessed, and the major test case is the production economics of heliostats. Alternative methods for estimating cost reductions in systems manufacture are discussed, and procedures for using learning and experience curves to predict costs are outlined. Because adequate production data often do not exist, production histories of analogous products/processes are analyzed and learning and aggregated cost curves for these surrogates estimated. If the surrogate learning curves apply, they can be used to estimate solar technology costs. The steps involved in generating these cost estimates are given. Second-generation glass-steel and inflated-bubble heliostat design concepts, developed by MDAC and GE, respectively, are described; a costing scenario for 25,000 units/yr is detailed; surrogates for cost analysis are chosen; learning and aggregate cost curves are estimated; and aggregate cost curves for the GE and MDAC designs are estimated. However, an approach that combines a neoclassical production function with a learning-by-doing hypothesis is needed to yield a cost relation compatible with the historical learning curve and the traditional cost function of economic theory.
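The log-linear learning curve at the heart of such analyses is compact enough to state directly (a generic textbook form with made-up numbers, not the report's heliostat estimates):

```python
import math

def unit_cost(first_unit_cost, n, progress_ratio):
    """Classic log-linear learning curve: each doubling of cumulative
    output n multiplies unit cost by the progress ratio (e.g. 0.85).
    Illustrative only; the report compares several such formulations."""
    b = math.log(progress_ratio, 2)
    return first_unit_cost * n ** b

# With an 80% progress ratio, unit 2 costs 80% of unit 1 and unit 4 costs 64%.
```

The surrogate analysis in the report amounts to estimating the progress ratio from an analogous product's production history and applying it to the new technology.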
Bäuerle, Felix; Zotter, Agnes; Schreiber, Gideon
2016-10-15
With computer-based data-fitting methods becoming a standard tool in biochemistry, progress curve analysis of enzyme kinetics is a feasible, yet seldom used, tool. Here we present a versatile Matlab-based tool (PCAT) to analyze catalysis progress curves with three complementary model approaches. The first two models are based on the known closed-form solution for this problem: the first describes the required Lambert W function with an analytical approximation, and the second provides a numerical solution of the Lambert W function. The third model is a direct simulation of the enzyme kinetics. Depending on the chosen model, each tool excels in speed, accuracy, or modest initial-value requirements. Using simulated and experimental data, we show the strengths and pitfalls of the different fitting models. Direct simulation proves to have the highest level of accuracy, but it also requires reasonable initial values to converge. Finally, we propose a standard procedure to obtain optimized enzyme kinetic parameters from single progress curves.
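The Lambert W function at the core of the closed-form solution can be evaluated with a short Newton iteration (a generic numerical sketch; PCAT's analytical-approximation and simulation models are not reproduced here):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W(x) for x >= 0, solving w * exp(w) = x by
    Newton iteration (sketch; PCAT's own solvers are more elaborate)."""
    w = math.log1p(x)  # cheap starting guess on the principal branch
    for _ in range(50):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w(1.0)
```

With such a routine, the standard closed-form Michaelis-Menten progress curve S(t) = K_M * W((S0/K_M) * exp((S0 - Vmax*t)/K_M)) can be evaluated directly (form quoted from the general literature, not from the paper).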
Prediction and extension of curves of distillation of vacuum residue using probability functions
NASA Astrophysics Data System (ADS)
León, A. Y.; Riaño, P. A.; Laverde, D.
2016-02-01
The use of probability functions for the prediction of crude distillation curves has been implemented in different characterization studies for refining processes. In this work, four probability functions (Weibull extreme, Weibull, Kumaraswamy and Riazi) were analyzed for fitting the distillation curves of vacuum residues. After analyzing the experimental data, the Weibull extreme function was selected as the best prediction function; its fitting capability was validated using the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the correlation coefficient R2 as estimation criteria. To cover a wide range of compositions, fifty-five (55) vacuum residues derived from different hydrocarbon mixtures were selected. The parameters of the Weibull extreme probability function were adjusted from simply measured properties such as Conradson Carbon Residue (CCR) and compositional SARA analysis (saturates, aromatics, resins and asphaltenes). The proposed method is an appropriate tool to describe the tendency of distillation curves and offers a practical approach for the classification of vacuum residues.
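For illustration, a distillation curve (cumulative distilled fraction versus temperature) can be represented by a three-parameter Weibull CDF; the parameter values below are hypothetical, and the paper's preferred "Weibull extreme" variant is not reproduced:

```python
import math

def weibull_cdf(t, t0, scale, shape):
    """Cumulative distilled fraction at temperature t modeled by a
    three-parameter Weibull CDF (illustrative; the paper's 'Weibull
    extreme' form and its fitted parameters are not reproduced here)."""
    if t <= t0:
        return 0.0
    return 1.0 - math.exp(-((t - t0) / scale) ** shape)
```

Fitting then means choosing t0, scale and shape so the CDF passes through the measured boiling-point/fraction pairs; the cited criteria (AIC, BIC, R2) rank competing functional forms.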
Gottschlich, Carsten
2012-04-01
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
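For reference, the value of a standard (straight) Gabor filter at a point is the product of a Gaussian envelope and an oriented cosine carrier; the curved GFs of the paper additionally bend the sampling grid along the local ridge flow (parameter values below are arbitrary):

```python
import math

def gabor(x, y, theta, wavelength, sigma):
    """Value of a standard 2-D Gabor filter at (x, y): a plane wave along
    orientation theta under an isotropic Gaussian envelope. The paper's
    curved GFs additionally bend the sampling grid to follow ridge flow."""
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr * xr + yr * yr) / (2.0 * sigma * sigma))
    carrier = math.cos(2.0 * math.pi * xr / wavelength)
    return envelope * carrier

# At the origin the envelope and carrier are both 1.
```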
AN Fitting Reconditioning Tool
NASA Technical Reports Server (NTRS)
Lopez, Jason
2011-01-01
A tool was developed to repair or replace AN fittings on the shuttle external tank (ET). (The AN thread is a type of fitting used to connect flexible hoses and rigid metal tubing that carry fluid. It is a U.S. military-derived specification agreed upon by the Army and Navy, hence AN.) The tool is used on a drill and is guided by a pilot shaft that follows the inside bore. The cutting edge of the tool is a standard-size replaceable insert. In the typical post-launch maintenance/repair process for the AN fittings, the six fittings are removed from the ET's GUCP (ground umbilical carrier plate) for reconditioning. The fittings are inspected for damage to the sealing surface per standard operations maintenance instructions. When damage is found on the sealing surface, the condition is documented. A new AN reconditioning tool is set up to cut and remove the surface damage. The fitting is then inspected to verify that it still meets drawing requirements. The tool features a cone-shaped interior at 36.5°, and may be adjusted to a precise angle with go/no-go gauges to ensure that the cutting edge can be adjusted as it wears down. One tool, one setting block, and one go/no-go gauge were fabricated. At the time of this reporting, the tool has reconditioned/returned to spec 36 AN fittings with 100-percent success and no leakage. This tool provides a quick solution to repair a leaky AN fitting, and it could easily be modified with different-sized pilot shafts to fit different-sized fittings.
Global Expression for Representing Diatomic Potential-Energy Curves
NASA Technical Reports Server (NTRS)
Ferrante, John; Schlosser, Herbert; Smith, John R.
1991-01-01
A three-parameter expression is presented that gives an accurate fit to diatomic potential curves over the entire range of separation for charge transfers between 0 and 1. It is based on a generalization of the universal binding-energy relation of Smith et al. (1989), with a modification that describes the crossover from a partially ionic state to the neutral state at large separations. The expression is tested by comparison with first-principles calculations of potential curves ranging from covalently bonded to ionically bonded. The expression is also used to calculate spectroscopic constants from a curve fit to the first-principles curves, and a comparison is made with experimental values of the spectroscopic constants.
NASA Astrophysics Data System (ADS)
Brandenburg, J. P.
2013-08-01
Fault-propagation folds form an important trapping element in both onshore and offshore fold-thrust belts, and as such benefit from reliable interpretation. Building an accurate geologic interpretation of such structures requires palinspastic restorations, which are made more challenging by the interplay between folding and faulting. Trishear (Erslev, 1991; Allmendinger, 1998) is a useful tool to unravel this relationship kinematically, but is limited by a restriction to planar fault geometries, or at least planar fault segments. Here, new methods are presented for trishear along continuously curved reverse faults defining a flat-ramp transition. In these methods, rotation of the hanging wall above a curved fault is coupled to translation along a horizontal detachment. Including hanging wall rotation allows for investigation of structures with progressive backlimb rotation. Application of the new algorithms is shown for two fault-propagation fold structures: the Turner Valley Anticline in southwestern Alberta and the Alpha Structure in the Niger Delta.
Boyer, H.E.
1986-01-01
This Atlas was developed to serve engineers who are looking for fatigue data on a particular metal or alloy. Having these curves compiled in a single book will also facilitate the computerization of the involved data. It is pointed out that plans are under way to make the data in this book available in ASCII files for analysis by computer programs. S-N curves which typify effects of major variables are considered along with low-carbon steels, medium-carbon steels, alloy steels, HSLA steels, high-strength alloy steels, heat-resisting steels, stainless steels, maraging steels, cast irons, and heat-resisting alloys. Attention is also given to aluminum alloys, copper alloys, magnesium alloys, molybdenum, tin alloys, titanium and titanium alloys, zirconium, steel castings, closed-die forgings, powder metallurgy parts, composites, effects of surface treatments, and test results for component parts.
Light Curves of Type IA Supernovae
NASA Astrophysics Data System (ADS)
Ford, C. H.; Herbst, W.; Balonek, T. J.; Benson, P. J.; Chromey, F. R.; Ratcliff, S. J.
1992-05-01
VRI light curves of five Type Ia supernovae (1991B, 1991N, 1991T, 1991bg, and 1992G) have been obtained with CCDs attached to small telescopes at northeastern sites. The data have been carefully transformed to the standard system using images obtained with the 0.9m telescope at KPNO. The first three supernovae have faded sufficiently that we can carefully correct for the galactic background and, in particular, its effect on the determination of fade rates at late times. SN 1991bg clearly demonstrates that there can be gross differences among Type Ia's in the shape (and maximum brightness) of their light curves (Filippenko et al., preprint). We investigate whether a single "template" can be devised which fits the R and I light curve shapes of the other four supernovae in our sample, and the degree to which each fits the V template of Leibundgut (1988, Ph.D. thesis, U. of Basel). The distinctive secondary maximum seen in I (about 18 days after primary maximum; Balonek et al., preprint) should be useful for distinguishing peculiar Type Ia's like SN 1991bg, and for establishing the time of maximum brightness for supernovae that were discovered up to three weeks afterwards. We thank the W. M. Keck Foundation for their support of the Keck Northeast Astronomy Consortium. This project is an outgrowth of that support.
NASA Astrophysics Data System (ADS)
Frønsdal, Christian; Kontsevich, Maxim
2007-02-01
Deformation quantization on varieties with singularities offers perspectives that are not found on manifolds. The Harrison component of Hochschild cohomology, vanishing on smooth manifolds, reflects information about singularities. The Harrison 2-cochains are symmetric and are interpreted in terms of abelian *-products. This paper begins a study of abelian quantization on plane curves over ℂ, that is, algebraic varieties of the form ℂ²/R, where R is a polynomial in two variables; in other words, abelian deformations of the coordinate algebra ℂ[x,y]/(R). To understand the connection between the singularities of a variety and cohomology, we determine the algebraic Hochschild (co)homology and its Barr-Gerstenhaber-Schack decomposition. Homology is the same for all plane curves ℂ[x,y]/(R), but the cohomology depends on the local algebra of the singularity of R at the origin. The Appendix, by Maxim Kontsevich, explains in modern mathematical language a way to calculate Hochschild and Harrison cohomology groups for algebras of functions on singular planar curves, etc., based on Koszul resolutions.
NASA Astrophysics Data System (ADS)
Chamidah, Nur; Rifada, Marisa
2016-03-01
There is a significant correlation between the weight and height of children. Therefore, simultaneous model estimation is better than a partial single-response approach. In this study we investigate the pattern of sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected in a longitudinal representative sample of healthy Surabaya children and consist of two response variables, i.e. weight (kg) and height (cm), with age (months) as the predictor variable. Based on the generalized cross-validation criterion, modeling the boy and girl growth curves with a biresponse model using a local linear estimator gives optimal bandwidths of 1.41 and 1.56 and determination coefficients (R2) of 99.99% and 99.98%, respectively. Both the boy and girl curves satisfy the goodness-of-fit criterion, i.e. the determination coefficient tends to one. There is also a difference in growth-curve pattern between boys and girls: the boys' median growth curve is higher than the girls'.
Integrating the Levels of Person-Environment Fit: The Roles of Vocational Fit and Group Fit
ERIC Educational Resources Information Center
Vogel, Ryan M.; Feldman, Daniel C.
2009-01-01
Previous research on fit has largely focused on person-organization (P-O) fit and person-job (P-J) fit. However, little research has examined the interplay of person-vocation (P-V) fit and person-group (P-G) fit with P-O fit and P-J fit in the same study. This article advances the fit literature by examining these relationships with data collected…
Exponentially fitted symplectic integrator
NASA Astrophysics Data System (ADS)
Simos, T. E.; Vigo-Aguiar, Jesus
2003-01-01
In this paper a procedure for constructing efficient symplectic integrators for Hamiltonian problems is introduced. This procedure is based on the combination of the exponential fitting technique and symplecticness conditions. Based on this procedure, a simple modified Runge-Kutta-Nyström second-order algebraic exponentially fitted method is developed. We give explicitly the symplecticness conditions for the modified Runge-Kutta-Nyström method. We also give the exponential fitting and trigonometric fitting conditions. Numerical results indicate that the present method is much more efficient than the “classical” symplectic Runge-Kutta-Nyström second-order algebraic method introduced by M.P. Calvo and J.M. Sanz-Serna [J. Sci. Comput. (USA) 14, 1237 (1993)]. We note that the present procedure is appropriate for all near-unimodal systems.
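As a baseline for what symplecticness buys, a first-order symplectic (semi-implicit) Euler scheme on the harmonic oscillator keeps the energy bounded instead of drifting; the paper's exponentially fitted Runge-Kutta-Nyström method refines this idea by tuning coefficients to the oscillation frequency (this sketch is generic, not the paper's method):

```python
def symplectic_euler(q, p, omega, dt, steps):
    """Semi-implicit Euler for the oscillator H = p**2/2 + omega**2 * q**2/2:
    a first-order symplectic baseline. The exponentially fitted RKN method
    in the paper instead tailors its coefficients to oscillatory solutions."""
    for _ in range(steps):
        p -= omega * omega * q * dt  # kick
        q += p * dt                  # drift
    return q, p

# Integrate 10000 steps; the energy stays near its initial value 0.5
# rather than drifting, as a non-symplectic explicit Euler would.
q, p = symplectic_euler(1.0, 0.0, 1.0, 0.01, 10000)
```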
NASA Technical Reports Server (NTRS)
1993-01-01
NASA Langley recognizes the importance of healthy employees by committing itself to offering a complete fitness program. The scope of the program focuses on promoting overall health and wellness in an effort to reduce the risks of illness and disease and to increase productivity. This is accomplished through a comprehensive Health and Fitness Program offered to all NASA employees. Various aspects of the program are discussed.
Walpola, Ramesh L; Fois, Romano A; McLachlan, Andrew J; Chen, Timothy F
2015-01-01
Objective Despite the recognition that educating healthcare students in patient safety is essential, changing already full curricula can be challenging. Furthermore, institutions may lack the capacity and capability to deliver patient safety education, particularly from the start of professional practice studies. Using senior students as peer educators to deliver practice-based education can potentially overcome some of the contextual barriers in training junior students. Therefore, this study aimed to evaluate the effectiveness of a peer-led patient safety education programme for junior pharmacy students. Design A repeat cross-sectional design utilising a previously validated patient safety attitudinal survey was used to evaluate attitudes prior to, immediately after and 1 month after the delivery of a patient safety education programme. Latent growth curve (LGC) modelling was used to evaluate the change in attitudes of first-year students using second-year students as a comparator group. Setting Undergraduate university students in Sydney, Australia. Participants 175 first-year and 140 second-year students enrolled in the Bachelor of Pharmacy programme at the University of Sydney. Intervention An introductory patient safety programme was implemented into the first-year Bachelor of Pharmacy curriculum at the University of Sydney. The programme covered introductory patient safety topics including teamwork, communication skills, systems thinking and open disclosure. The programme consisted of two lectures, delivered by a senior academic, and a workshop delivered by trained final-year pharmacy students. Results A full LGC model was constructed including the intervention as a non-time-dependent predictor of change (χ2(51)=164.070, root mean square error of approximation=0.084, comparative fit index=0.913, standardised root mean square=0.056). First-year students' attitudes significantly improved as a result of the intervention, particularly in relation to…
Curved PVDF airborne transducer.
Wang, H; Toda, M
1999-01-01
In the application of airborne ultrasonic ranging measurement, a partially cylindrical (curved) PVDF transducer can effectively couple ultrasound into the air and generate strong sound pressure. Because of its geometrical features, the ultrasound beam angles of a curved PVDF transducer can be unsymmetrical (i.e., broad horizontally and narrow vertically). This feature is desired in some applications. In this work, a curved PVDF air transducer is investigated both theoretically and experimentally. Two resonances were observed in this transducer. They are length extensional mode and flexural bending mode. Surface vibration profiles of these two modes were measured by a laser vibrometer. It was found from the experiment that the surface vibration was not uniform along the curvature direction for both vibration modes. Theoretical calculations based on a model developed in this work confirmed the experimental results. Two displacement peaks were found in the piezoelectric active direction of PVDF film for the length extensional mode; three peaks were found for the flexural bending mode. The observed peak positions were in good agreement with the calculation results. Transient surface displacement measurements revealed that vibration peaks were in phase for the length extensional mode and out of phase for the flexural bending mode. Therefore, the length extensional mode can generate a stronger ultrasound wave than the flexural bending mode. The resonance frequencies and vibration amplitudes of the two modes strongly depend on the structure parameters as well as the material properties. For the transducer design, the theoretical model developed in this work can be used to optimize the ultrasound performance.
Light extraction block with curved surface
Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.
2016-03-22
Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k.sub.1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.
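The radius of curvature invoked above is the reciprocal of the curvature; for a parametric curve it can be computed numerically (a generic geometry helper with an invented name, not part of the patent):

```python
import math

def radius_of_curvature(x, y, h=1e-4):
    """Radius of curvature 1/kappa of a parametric curve (x(t), y(t))
    at t = 0, via central-difference derivatives (illustrative only)."""
    def d1(f):
        return (f(h) - f(-h)) / (2.0 * h)
    def d2(f):
        return (f(h) - 2.0 * f(0.0) + f(-h)) / (h * h)
    xp, yp, xpp, ypp = d1(x), d1(y), d2(x), d2(y)
    return (xp * xp + yp * yp) ** 1.5 / abs(xp * ypp - yp * xpp)

# A circle of radius 2 has radius of curvature 2 at every point.
R = radius_of_curvature(lambda t: 2.0 * math.cos(t), lambda t: 2.0 * math.sin(t))
```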
Complementary Curves of Descent
2012-11-16
NASA Technical Reports Server (NTRS)
Davis, R. C.; Bales, T. T.; Royster, D. M.; Jackson, L. R. (Inventor)
1984-01-01
The report describes a structure for a strong, lightweight corrugated sheet. The sheet is planar or curved and includes a plurality of corrugation segments, each segment being comprised of a generally U-shaped corrugation with a part-cylindrical crown and cap strip, straight side walls, and secondary corrugations oriented at right angles to said side walls. The cap strip is bonded to the crown, and the longitudinal edge of said cap strip extends beyond the edge at the intersection between said crown and said side walls. The high strength relative to weight of the structure makes it desirable for use in aircraft or spacecraft.
[Bayesian statistics: an approach suited to the clinic].
Meyer, N; Vinzio, S; Goichot, B
2009-03-01
Bayesian statistics has enjoyed growing, though still limited, success. This is surprising, since Bayes' theorem, on which the paradigm relies, is frequently used by clinicians: there is a direct link between the routine diagnostic test and Bayesian statistics, namely Bayes' theorem, which allows one to compute the positive and negative predictive values of a test. The principle of this theorem is extended to simple statistical situations as an introduction to Bayesian statistics. The conceptual simplicity of Bayesian statistics should make for greater acceptance in the biomedical world.
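The clinical computation the abstract alludes to is short enough to state exactly: Bayes' theorem turns a test's sensitivity and specificity plus the disease prevalence into predictive values (the numbers below are illustrative):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values of a diagnostic test
    via Bayes' theorem (the routine clinical computation referred to
    above; inputs are illustrative)."""
    tp = sensitivity * prevalence              # true positives
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positives
    fn = (1.0 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1.0 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# A 90%-sensitive, 95%-specific test at 10% prevalence: note how the
# low prevalence pulls the positive predictive value well below 95%.
ppv, npv = predictive_values(0.90, 0.95, 0.10)
```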
Mathematics Difficulties: Does One Approach Fit All?
ERIC Educational Resources Information Center
Gifford, Sue; Rockliffe, Freda
2012-01-01
This article reviews the nature of learning difficulties in mathematics and, in particular, the nature and prevalence of dyscalculia, a condition that affects the acquisition of arithmetical skills. The evidence reviewed suggests that younger children (under the age of 10) often display a combination of problems, including minor physical…
Geometric Observers for Dynamically Evolving Curves
Niethammer, Marc; Vela, Patricio A.; Tannenbaum, Allen
2009-01-01
This paper proposes a deterministic observer design for visual tracking based on nonparametric implicit (level-set) curve descriptions. The observer is continuous discrete with continuous-time system dynamics and discrete-time measurements. Its state-space consists of an estimated curve position augmented by additional states (e.g., velocities) associated with every point on the estimated curve. Multiple simulation models are proposed for state prediction. Measurements are performed through standard static segmentation algorithms and optical-flow computations. Special emphasis is given to the geometric formulation of the overall dynamical system. The discrete-time measurements lead to the problem of geometric curve interpolation and the discrete-time filtering of quantities propagated along with the estimated curve. Interpolation and filtering are intimately linked to the correspondence problem between curves. Correspondences are established by a Laplace-equation approach. The proposed scheme is implemented completely implicitly (by Eulerian numerical solutions of transport equations) and thus naturally allows for topological changes and subpixel accuracy on the computational grid. PMID:18421113
Transforming Curves into Curves with the Same Shape.
ERIC Educational Resources Information Center
Levine, Michael V.
Curves are considered to have the same shape when they are related by a similarity transformation of a certain kind. This paper extends earlier work on parallel curves to curves with the same shape. Some examples are given more or less explicitly. A generalization is used to show that the theory is ordinal and to show how the theory may be applied…
NASA Technical Reports Server (NTRS)
Pratt, Randy
1993-01-01
The Ames Fitness Program services 5,000 civil servants and contractors working at Ames Research Center. A 3,000 square foot fitness center, equipped with cardiovascular machines, weight training machines, and free weight equipment is on site. Thirty exercise classes are held each week at the Center. A weight loss program is offered, including individual exercise prescriptions, fitness testing, and organized monthly runs. The Fitness Center is staffed by one full-time program coordinator and 15 hours per week of part-time help. Membership is available to all employees at Ames at no charge, and there are no fees for participation in any of the program activities. Prior to using the Center, employees must obtain a physical examination and complete a membership package. Funding for the Ames Fitness Program was in jeopardy in December 1992; however, the employees circulated a petition in support of the program and collected more than 1500 signatures in only three days. Funding has been approved through October 1993.
Growth curve prediction from optical density data.
Mytilinaios, I; Salih, M; Schofield, H K; Lambert, R J W
2012-03-15
A fundamental aspect of predictive microbiology is the shape of the microbial growth curve, and many models are used to fit microbial count data, the modified Gompertz and Baranyi equations being two of the most widely used. Rapid, automated methods such as turbidimetry have been widely used to obtain growth parameters but do not directly give the microbial growth curve. Optical density (OD) data can be used to obtain the specific growth rate; if used in conjunction with the known initial inoculum, the maximum population density (MPD) and knowledge of the microbial number at a predefined OD at a known time, then all the information required for the reconstruction of a standard growth curve can be obtained. Using multiple initial inocula, the times to detection (TTD) at a given standard OD were obtained, from which the specific growth rate was calculated. The modified logistic, modified Gompertz, 3-phase linear, Baranyi and classical logistic model (with or without lag) were fitted to the TTD data. In all cases the modified logistic and modified Gompertz failed to reproduce the observed linear plots of the log initial inocula against TTD using the known parameters (initial inoculum, MPD and growth rate). The 3-phase linear model (3PLM), Baranyi and classical logistic models fitted the observed data and were able to reproduce elements of the OD incubation-time curves. Using a calibration curve relating OD and microbial numbers, the Baranyi equation was able to reproduce OD data obtained for Listeria monocytogenes at 37 and 30°C, as well as data on the effect of pH (range 7.05 to 3.46) at 30°C. The Baranyi model was found to be the most capable primary model of those examined (in the absence of lag it defaults to the classic logistic model). The results suggested that the modified logistic and modified Gompertz models should not be used as primary models for TTD data, as they cannot reproduce the observed data.
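The linear relation between log inoculum and time to detection that underpins these TTD fits follows directly from lag-free exponential growth (a minimal sketch; the primary models above add lag and stationary phases):

```python
import math

def time_to_detection(n0, n_det, mu):
    """Time for an exponentially growing culture to reach the detection
    level n_det from inoculum n0 (lag-free sketch of the TTD relation):
    TTD = ln(n_det / n0) / mu."""
    return math.log(n_det / n0) / mu

# A plot of ln(n0) against TTD is a straight line of slope -1/mu,
# which is how the specific growth rate is recovered from a dilution series.
```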
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
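The gridding step described above can be illustrated with a far simpler interpolator than the paper's tensor-guided blended neighbor method; the sketch below uses plain inverse-distance weighting on hypothetical slab-depth samples, purely to show how scattered data become a function of position.

```python
def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from scattered samples.
    A generic stand-in for more sophisticated gridding; returns the sample
    value exactly when the query coincides with a data point."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Hypothetical slab-depth samples (km) at scattered (lon, lat) locations
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
depths = [20.0, 40.0, 60.0, 80.0]
mid = idw(pts, depths, (0.5, 0.5))   # equidistant query: the mean, 50 km
```

Unlike the paper's method, this sketch has no model of spatial correlation; the strike-guided interpolation is precisely what replaces the fixed distance weighting here.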
Rotation curve for the Milky Way galaxy in conformal gravity
NASA Astrophysics Data System (ADS)
O'Brien, James G.; Moss, Robert J.
2015-05-01
Galactic rotation curves have proven to be the testing ground for dark-matter bounds in galaxies, and our own Milky Way is one of many large spiral galaxies that must follow the same models. Over the last decade, the rotation of the Milky Way galaxy has been studied and extended by many authors. Since conformal gravity has now successfully fit the rotation curves of almost 140 galaxies, we present here the fit to our own Milky Way. The Milky Way, however, is not just an ordinary galaxy to append to our list; it provides a robust test of a fundamental difference between conformal gravity rotation curves and standard cold dark matter models. Mannheim and O'Brien showed that in conformal gravity, the presence of a quadratic potential causes the rotation curve to eventually fall off after its flat portion. This effect can currently be seen in only a select few galaxies whose rotation curves are studied well beyond a few multiples of the optical galactic scale length. Owing to the recent work of Sofue et al. and Kundu et al., the rotation curve of the Milky Way has now been studied to a degree where we can test the predicted fall-off in the conformal gravity rotation curve. We find that, like the other galaxies already studied in conformal gravity, the fit shows excellent agreement with the rotational data, and the prediction includes the eventual fall-off at large distances from the galactic center.
What Current Research Tells Us About Physical Fitness for Children.
ERIC Educational Resources Information Center
Cundiff, David E.
The author distinguishes between the terms "physical fitness" and "motor performance," summarizes the health and physical status of adults, surveys the physical fitness status of children, and proposes a lifestyle approach to the development and lifetime maintenance of health and physical fitness. The distinctions between…
Aerobic Fitness for the Severely and Profoundly Mentally Retarded.
ERIC Educational Resources Information Center
Bauer, Dan
1981-01-01
The booklet discusses the aerobic fitness capacities of severely/profoundly retarded students and discusses approaches for improving their fitness. An initial section describes a method for determining the student's present fitness level on the basis of computations of height, weight, blood pressure, resting pulse, and Barach Index and Crampton…
A Bayesian beta distribution model for estimating rainfall IDF curves in a changing climate
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; Kwon, Hyun-Han; Kim, Jin-Young
2016-09-01
The estimation of intensity-duration-frequency (IDF) curves for rainfall data is a classical task in hydrology, supporting a variety of water resources projects, including urban drainage and the design of flood control structures. In a changing climate, however, traditional approaches based on historical rainfall records and on the stationarity assumption can be inadequate and lead to poor estimates of rainfall intensity quantiles. Climate change scenarios built on General Circulation Models offer a way to assess and estimate future changes in spatial and temporal rainfall patterns, but at the daily scale at best, which is coarser than the temporal resolution (e.g. hours) required to estimate IDF curves directly. In this paper we propose a novel methodology based on a four-parameter beta distribution to estimate IDF curves conditioned on the observed (or simulated) daily rainfall, which becomes the time-varying upper bound of the updated nonstationary beta distribution. The inference is conducted in a Bayesian framework that better accounts for the uncertainty in the model parameters when building the IDF curves. The proposed model is tested using rainfall data from four stations located in South Korea and projected climate change Representative Concentration Pathway (RCP) scenarios 6 and 8.5 from the Met Office Hadley Centre HadGEM3-RA model. The results show that the developed model fits the historical data as well as the traditional Generalized Extreme Value (GEV) distribution but is able to produce future IDF curves that differ significantly from the historically based ones. For the stations and RCP scenarios analysed in this work, the proposed model predicts an increase in the intensity of short-duration extreme rainfall with long return periods.
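The classical baseline mentioned in the abstract (not the authors' Bayesian beta model) can be sketched with a Gumbel fit of annual rainfall maxima: the method-of-moments estimators and the return-level quantile formula below are standard, and all rainfall numbers are synthetic.

```python
import math
import random
import statistics

def gumbel_fit_moments(sample):
    """Method-of-moments Gumbel fit: beta = s*sqrt(6)/pi, mu = mean - gamma*beta."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    beta = statistics.stdev(sample) * math.sqrt(6.0) / math.pi
    mu = statistics.mean(sample) - gamma * beta
    return mu, beta

def gumbel_return_level(mu, beta, T):
    """Rainfall quantile with return period T years, i.e. F = 1 - 1/T."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

random.seed(42)
# Synthetic annual maxima drawn from Gumbel(mu=50 mm, beta=12 mm) via inverse CDF
mu_t, beta_t = 50.0, 12.0
maxima = [mu_t - beta_t * math.log(-math.log(random.random())) for _ in range(20000)]

mu_hat, beta_hat = gumbel_fit_moments(maxima)
x100 = gumbel_return_level(mu_hat, beta_hat, 100.0)   # 100-year return level
```

Under stationarity the return level grows monotonically with T; the paper's point is precisely that this historical baseline can misstate future quantiles.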
NASA Astrophysics Data System (ADS)
Levay, Z. G.
2004-12-01
A new, freely-available accessory for Adobe's widely-used Photoshop image editing software makes it much more convenient to produce presentable images directly from FITS data. It merges a fully-functional FITS reader with an intuitive user interface and includes fully interactive flexibility in scaling data. Techniques for producing attractive images from astronomy data using the FITS plugin will be presented, including the assembly of full-color images. These techniques have been successfully applied to producing colorful images for public outreach with data from the Hubble Space Telescope and other major observatories. Now it is much less cumbersome for students or anyone not experienced with specialized astronomical analysis software, but reasonably familiar with digital photography, to produce useful and attractive images.
The Characteristic Curves of Water
NASA Astrophysics Data System (ADS)
Neumaier, Arnold; Deiters, Ulrich K.
2016-09-01
In 1960, E. H. Brown defined a set of characteristic curves (also known as ideal curves) of pure fluids, along which some thermodynamic properties match those of an ideal gas. These curves are used for testing the extrapolation behaviour of equations of state. This work is revisited, and an elegant representation of the first-order characteristic curves as level curves of a master function is proposed. It is shown that Brown's postulate—that these curves are unique and dome-shaped in a double-logarithmic p, T representation—may fail for fluids exhibiting a density anomaly. A careful study of the Amagat curve (Joule inversion curve) generated from the IAPWS-95 reference equation of state for water reveals the existence of an additional branch.
Strength Training: For Overall Fitness
Strength training is an important part of an overall fitness program. Here's what strength training can do for ... is a key component of overall health and fitness for everyone. Lean muscle mass naturally diminishes with ...
ERIC Educational Resources Information Center
Khonsari, Michael M.; Horn, Douglas
1990-01-01
An algorithm is described for generating smooth curves of first-order continuity. The algorithm is composed of several cubic Bezier curves joined together at the user defined control points. Introduced is a tension control parameter which can be set thus providing additional flexibility in the design of free-form curves. (KR)
Titration Curves: Fact and Fiction.
ERIC Educational Resources Information Center
Chamberlain, John
1997-01-01
Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…
Linking the Fits, Fitting the Links: Connecting Different Types of PO Fit to Attitudinal Outcomes
ERIC Educational Resources Information Center
Leung, Aegean; Chaturvedi, Sankalp
2011-01-01
In this paper we explore the linkages among various types of person-organization (PO) fit and their effects on employee attitudinal outcomes. We propose and test a conceptual model which links various types of fits--objective fit, perceived fit and subjective fit--in a hierarchical order of cognitive information processing and relate them to…
NASA Astrophysics Data System (ADS)
De Paola, Francesco; Giugni, Maurizio; Topa, Maria Elena; Coly, Adrien; Yeshitela, Kumelachew; Kombe, Wilbard; Tonye, Emmanuel; Touré, Hamidou
2013-04-01
The intensity-duration-frequency (IDF) curves are used in hydrology to express, in a synthetic way, the link between the maximum rainfall height h and a generic duration d of a rainfall event, for a given return period T. Generally, IDF curves can be characterized by a two-parameter power law, h(d,T) = a(T)d^n, where a(T) and n are parameters that have to be estimated through a probabilistic approach. An intensity-duration-frequency analysis starts by gathering time series records of different durations and extracting annual extremes for each duration. The annual extreme data are then fitted by a probability distribution. The present study, carried out within the FP7-ENV-2010 CLUVA project (CLimate change and Urban Vulnerability in Africa), regards the evaluation of the IDF curves for five case studies: Addis Ababa (Ethiopia), Dar es Salaam (Tanzania), Douala (Cameroon), Ouagadougou (Burkina Faso) and Saint-Louis (Senegal). The probability distribution chosen to fit the annual extreme data is the classic Gumbel distribution. However, for these case studies only the maximum annual daily rainfall heights are available. Therefore, to define the IDF curves and the extreme values in smaller time windows (10', 30', 1 h, 3 h, 6 h, 12 h), it is necessary to apply disaggregation techniques to the collected data, in order to generate a synthetic sequence of rainfall with statistical properties equal to those of the recorded data. The daily rainfalls were disaggregated using two models: a short-time intensity disaggregation model (10', 30', 1 h) and a cascade-based disaggregation model (3 h, 6 h, 12 h). On the basis of the disaggregation models and the Gumbel distribution, the parameters of the IDF curves for the five test cities were evaluated. In order to estimate the contingent influence of climate change on the IDF curves, the illustrated procedure was applied to the climate (rainfall) simulations over the period 2010-2050 provided by the CMCC (Centro Euro-Mediterraneo sui Cambiamenti Climatici).
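Once the rainfall height h for each duration is known at a fixed return period T, the power-law parameters a(T) and n can be estimated by ordinary least squares in log-log space. A minimal sketch with noise-free synthetic values (a = 35 mm and n = 0.3 are illustrative, not from the study):

```python
import math

def fit_idf_power_law(durations, heights):
    """Fit h = a * d**n by linear regression of log h on log d; returns (a, n)."""
    lx = [math.log(d) for d in durations]
    ly = [math.log(h) for h in heights]
    k = len(lx)
    mx, my = sum(lx) / k, sum(ly) / k
    n = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - n * mx)
    return a, n

# Synthetic quantiles for one return period; durations in hours (10' .. 24 h)
durs = [1 / 6, 0.5, 1, 3, 6, 12, 24]
hts = [35.0 * d ** 0.3 for d in durs]

a_hat, n_hat = fit_idf_power_law(durs, hts)   # recovers a = 35, n = 0.3
```

In practice h for each duration would come from the Gumbel quantiles of the (disaggregated) annual maxima, repeated for every return period of interest.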
Quantum relative Lorenz curves
NASA Astrophysics Data System (ADS)
Buscemi, Francesco; Gour, Gilad
2017-01-01
The theory of majorization and its variants, including thermomajorization, have been found to play a central role in the formulation of many physical resource theories, ranging from entanglement theory to quantum thermodynamics. Here we formulate the framework of quantum relative Lorenz curves, and show how it is able to unify majorization, thermomajorization, and their noncommutative analogs. In doing so, we define the family of Hilbert α divergences and show how it relates to other divergences used in quantum information theory. We then apply these tools to the problem of deciding the existence of a suitable transformation from an initial pair of quantum states to a final one, focusing in particular on applications to the resource theory of athermality, a precursor of quantum thermodynamics.
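The classical notion that quantum relative Lorenz curves generalize can be sketched directly: a probability vector x majorizes y exactly when, after sorting both in decreasing order, every prefix sum of x dominates the corresponding prefix sum of y (with equal totals). A minimal illustration:

```python
def majorizes(x, y, tol=1e-12):
    """Classical majorization test: x majorizes y iff every prefix sum of
    x sorted descending dominates that of y, with equal totals."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:
        return False
    cx = cy = 0.0
    for a, b in zip(xs, ys):
        cx += a
        cy += b
        if cx < cy - tol:
            return False
    return True

# A sharply peaked distribution majorizes a flatter one, never the reverse
flat = (1 / 3, 1 / 3, 1 / 3)
peaked = (0.5, 0.3, 0.2)
```

The Lorenz-curve picture is the same statement geometrically: x majorizes y when the cumulative curve of x lies nowhere below that of y.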
Multipulse phase resetting curves
NASA Astrophysics Data System (ADS)
Krishnan, Giri P.; Bazhenov, Maxim; Pikovsky, Arkady
2013-10-01
In this paper, we introduce and study systematically, in terms of phase response curves, the effect of dual-pulse excitation on the dynamics of an autonomous oscillator. Specifically, we test the deviations from linear summation of phase advances resulting from two small perturbations. We analytically derive a correction term, which generally appears for oscillators whose intrinsic dimensionality is >1. The nonlinear correction term is found to be proportional to the square of the perturbation. We demonstrate this effect in the Stuart-Landau model and in various higher dimensional neuronal models. This deviation from the superposition principle needs to be taken into account in studies of networks of pulse-coupled oscillators. Further, this deviation could be used in the verification of oscillator models via a dual-pulse excitation.
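The deviation from linear summation can be illustrated on a toy oscillator reduced to the unit circle (an illustrative simplification, not the paper's Stuart-Landau computation): kicking the state (cos φ, sin φ) by ε along x shifts the phase nonlinearly, and the failure of superposition scales as ε², so halving the kick cuts the deviation roughly fourfold.

```python
import math

def phase_advance(phi, eps):
    """Phase shift when the state (cos phi, sin phi) of a unit-circle
    oscillator is kicked by eps along x (toy stand-in for a PRC)."""
    return math.atan2(math.sin(phi), math.cos(phi) + eps) - phi

phi0 = math.pi / 4   # a phase where the quadratic term is nonzero

def deviation(eps):
    """Failure of linear summation: one kick of 2*eps compared with the
    sum of two kicks of eps. Taylor expansion gives deviation ~ eps**2."""
    return phase_advance(phi0, 2 * eps) - 2 * phase_advance(phi0, eps)

ratio = deviation(0.02) / deviation(0.01)   # ~4 for a quadratic deviation
```

The quadratic scaling is the signature of the second-order correction term derived in the paper; for genuinely one-dimensional phase dynamics the deviation would vanish.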
ERIC Educational Resources Information Center
Casey, Stephanie A.
2016-01-01
Statistical association between two variables is one of the fundamental statistical ideas in school curricula. Reasoning about statistical association has been deemed one of the most important cognitive activities that humans perform. Students are typically introduced to statistical association through the study of the line of best fit because it…
ERIC Educational Resources Information Center
Grossman, Pam; Davis, Emily
2012-01-01
Beginning teachers enter the classroom with diverse backgrounds, training, expectations, and needs. Yet too often, write the authors, induction programs resemble a one-size-fits-all poncho rather than a well-tailored coat. Reviewing the research, the authors write that high-quality mentors, a focus on improving instruction, and allocated time are…
ERIC Educational Resources Information Center
Dixon-Watmough, Rebecca; Keogh, Brenda; Naylor, Stuart
2012-01-01
For some time the Association for Science Education (ASE) has been aware that it would be useful to have some resources available to get children talking and thinking about issues related to health, sport and fitness. Some of the questions about pulse, breathing rate and so on are pretty obvious to everyone, and there is a risk of these being…
Teaching Aerobic Fitness Concepts.
ERIC Educational Resources Information Center
Sander, Allan N.; Ratliffe, Tom
2002-01-01
Discusses how to teach aerobic fitness concepts to elementary students. Some of the K-2 activities include location, size, and purpose of the heart and lungs; the exercise pulse; respiration rate; and activities to measure aerobic endurance. Some of the 3-6 activities include: definition of aerobic endurance; heart disease risk factors;…
NASA Technical Reports Server (NTRS)
Coleman, A. E.
1981-01-01
Training manual used for preflight conditioning of NASA astronauts is written for an audience with diverse backgrounds and interests. It suggests programs for various levels of fitness, including sample starter programs, safe progression schedules, and stretching exercises. Related information on equipment needs, environmental considerations, and precautions can help readers design safe and effective running programs.
ERIC Educational Resources Information Center
Maione, Mary Jane
A description is given of a program that provides preventive measures to check obesity in children and young people. The 24-week program is divided into two parts--a nutrition component and an exercise component. At the start and end of the program, tests are given to assess the participants' height, weight, body composition, fitness level, and…
ERIC Educational Resources Information Center
Donovan, Edward P.
The major objective of this module is to help students understand how water from a source such as a lake is treated to make it fit to drink. The module, consisting of five major activities and a test, is patterned after Individualized Science Instructional System (ISIS) modules. The first activity (Planning) consists of a brief introduction and a…
Directory of Fitness Certifications.
ERIC Educational Resources Information Center
Parks, Janet B.
1990-01-01
This article discusses the need for certification of fitness instructors in the aerobic dance/dance-exercise industry and presents results of a survey of 18 agencies that certify instructors. Survey data has been compiled and published. An excerpt is included which lists organizations, training, certification and renewal procedures, publications,…
ERIC Educational Resources Information Center
Vail, Kathleen
1999-01-01
Children who hate gym grow into adults who associate physical activity with ridicule and humiliation. Physical education is reinventing itself, stressing enjoyable activities that continue into adulthood: aerobic dance, weight training, fitness walking, mountain biking, hiking, inline skating, karate, rock-climbing, and canoeing. Cooperative,…
Manitoba Schools Fitness 1989.
ERIC Educational Resources Information Center
Manitoba Dept. of Education, Winnipeg.
This manual outlines physical fitness tests that may be used in the schools. The tests are based on criterion standards which indicate the levels of achievement at which health risk factors may be reduced. Test theory, protocols, and criterion charts are presented for: (1) muscle strength and endurance, (2) body composition, (3) flexibility, and…
Ramsay Curve IRT for Likert-Type Data
ERIC Educational Resources Information Center
Woods, Carol M.
2007-01-01
Ramsay curve item response theory (RC-IRT) was recently developed to detect and correct for nonnormal latent variables when unidimensional IRT models are fitted to data using maximum marginal likelihood estimation. The purpose of this research is to evaluate the performance of RC-IRT for Likert-type item responses with varying test lengths, sample…
Searching the clinical fitness landscape.
Eppstein, Margaret J; Horbar, Jeffrey D; Buzas, Jeffrey S; Kauffman, Stuart A
2012-01-01
Widespread unexplained variations in clinical practices and patient outcomes suggest major opportunities for improving the quality and safety of medical care. However, there is little consensus regarding how to best identify and disseminate healthcare improvements and a dearth of theory to guide the debate. Many consider multicenter randomized controlled trials to be the gold standard of evidence-based medicine, although results are often inconclusive or may not be generally applicable due to differences in the contexts within which care is provided. Increasingly, others advocate the use of "quality improvement collaboratives", in which multi-institutional teams share information to identify potentially better practices that are subsequently evaluated in the local contexts of specific institutions, but there is concern that such collaborative learning approaches lack the statistical rigor of randomized trials. Using an agent-based model, we show how and why a collaborative learning approach almost invariably leads to greater improvements in expected patient outcomes than more traditional approaches in searching simulated clinical fitness landscapes. This is due to a combination of greater statistical power and more context-dependent evaluation of treatments, especially in complex terrains where some combinations of practices may interact in affecting outcomes. The results of our simulations are consistent with observed limitations of randomized controlled trials and provide important insights into probable reasons for effectiveness of quality improvement collaboratives in the complex socio-technical environments of healthcare institutions. Our approach illustrates how modeling the evolution of medical practice as search on a clinical fitness landscape can aid in identifying and understanding strategies for improving the quality and safety of medical care.
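What "searching a fitness landscape" means can be sketched with a toy NK-style landscape and greedy local search; this is a much-simplified stand-in for the authors' agent-based model, and every parameter below is illustrative.

```python
import random

def nk_landscape(n, k, seed=0):
    """Random NK-style fitness landscape: each locus contributes a value
    drawn per joint configuration of itself and its k right-hand neighbours,
    so practices interact in affecting the outcome."""
    rng = random.Random(seed)
    tables = [{} for _ in range(n)]
    def fitness(bits):
        total = 0.0
        for i in range(n):
            key = tuple(bits[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n
    return fitness

def hill_climb(fitness, n, steps, seed=1):
    """Greedy one-bit-flip search; returns (starting fitness, best found)."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    f0 = best = fitness(state)
    for _ in range(steps):
        i = rng.randrange(n)
        state[i] ^= 1
        trial = fitness(state)
        if trial > best:
            best = trial
        else:
            state[i] ^= 1   # revert: keep only improving flips
    return f0, best

f = nk_landscape(n=12, k=2)        # 12 binary practices, pairwise interactions
f0, found = hill_climb(f, n=12, steps=500)
```

Larger k makes the terrain more rugged, which is exactly the regime where the paper argues context-dependent collaborative search outperforms one-shot trials.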
ERIC Educational Resources Information Center
Meredith, Sydney
This book presents a holistic approach to fitness, opens channels of information, and identifies useful resources and references. After an introduction to the subject of health related fitness, the book presents 12 chapters answering the following questions: (1) Why Not Be Fit? (2) What Is Physical Fitness? (3) Why Participate in Fitness? (4) What…
Automated reasoning about cubic curves.
Padmanabhan, R.; McCune, W.; Mathematics and Computer Science; Univ. of Manitoba
1995-01-01
It is well known that the n-ary morphisms defined on projective algebraic curves satisfy some strong local-to-global equational rules of derivation not satisfied in general by universal algebras. For example, every rationally defined group law on a cubic curve must be commutative. Here we extract from the geometry of curves a first order property (gL) satisfied by all morphisms defined on these curves such that the equational consequences known for projective curves can be derived automatically from a set of six rules (stated within the first-order logic with equality). First, the rule (gL) is implemented in the theorem-proving program Otter. Then we use Otter to automatically prove some incidence theorems on projective curves without any further reference to the underlying geometry or topology of the curves.
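The group law on a cubic whose commutativity is cited above can be sketched with exact rational arithmetic: chord addition of two distinct points on a Weierstrass cubic y² = x³ + ax + b is manifestly symmetric in the two points. The curve and points below are chosen purely for illustration (identity and tangent cases are omitted for brevity).

```python
from fractions import Fraction as Fr

# Illustrative Weierstrass cubic y^2 = x^3 + A*x + B with two rational points
A, B = Fr(-4), Fr(4)

def on_curve(p):
    x, y = p
    return y * y == x ** 3 + A * x + B

def add(p, q):
    """Chord addition for distinct points with different x-coordinates."""
    (x1, y1), (x2, y2) = p, q
    lam = (y2 - y1) / (x2 - x1)          # slope of the chord through p and q
    x3 = lam * lam - x1 - x2             # third intersection, then reflect
    return x3, lam * (x1 - x3) - y1

P, Q = (Fr(0), Fr(2)), (Fr(1), Fr(1))
R = add(P, Q)   # (0, -2), again a rational point on the curve
```

The formulas depend only on the chord through the two points, so add(P, Q) == add(Q, P) holds identically; the paper's contribution is deriving such facts automatically from first-order rules rather than from the geometry.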
New NIR light-curve templates for classical Cepheids
NASA Astrophysics Data System (ADS)
Inno, L.; Matsunaga, N.; Romaniello, M.; Bono, G.; Monson, A.; Ferraro, I.; Iannicola, G.; Persson, E.; Buonanno, R.; Freedman, W.; Gieren, W.; Groenewegen, M. A. T.; Ita, Y.; Laney, C. D.; Lemasle, B.; Madore, B. F.; Nagayama, T.; Nakada, Y.; Nonino, M.; Pietrzyński, G.; Primas, F.; Scowcroft, V.; Soszyński, I.; Tanabé, T.; Udalski, A.
2015-04-01
Aims: We present new near-infrared (NIR) light-curve templates for fundamental (FU, J, H, KS) and first overtone (FO, J) classical Cepheids. The new templates together with period-luminosity and period-Wesenheit (PW) relations provide Cepheid distances from single-epoch observations with a precision only limited by the intrinsic accuracy of the method adopted. Methods: The templates rely on a very large set of Galactic and Magellanic Cloud Cepheids (FU, ~600; FO, ~200) with well-sampled NIR (IRSF data set) and optical (V, I; OGLE data set) light-curves. To properly trace the change in the shape of the light-curve as a function of pulsation period, we split the sample of calibrating Cepheids into ten different period bins. The templates for the first time cover FO Cepheids and the short-period range of FU Cepheids (P ≤ 5 days). Moreover, the phase zero-point is anchored to the phase of the mean magnitude along the rising branch. The new approach has several advantages in sampling the light-curve of bump Cepheids when compared with the canonical phase of maximum light. We also provide new empirical estimates of the NIR-to-optical amplitude ratios for FU and FO Cepheids. We perform detailed analytical fits using seventh-order Fourier series and multi-Gaussian periodic functions. The latter are characterized by fewer free parameters (nine vs. fifteen). Results: The mean NIR magnitudes based on the new templates are up to 80% more accurate than single-epoch NIR measurements and up to 50% more accurate than the mean magnitudes based on previous NIR templates, with typical associated uncertainties ranging from 0.015 mag (J band) to 0.019 mag (KS band). Moreover, we find that errors on individual distance estimates for Small Magellanic Cloud Cepheids derived from NIR PW relations are essentially reduced to the intrinsic scatter of the adopted relations. Conclusions: Thus, the new templates are the ultimate tool for estimating precise Cepheid distances from NIR single
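A low-order version of the Fourier-series fitting mentioned above can be sketched via discrete projection, which is exact for uniform phase sampling over one period; the light-curve values below are synthetic (the paper fits seventh-order series to real Cepheid photometry).

```python
import math

def fourier_fit(phases, mags, order):
    """Fit m(phi) = a0 + sum_k [ak*cos(2*pi*k*phi) + bk*sin(2*pi*k*phi)]
    by discrete projection; exact for uniform sampling over one period."""
    n = len(phases)
    a0 = sum(mags) / n
    coeffs = []
    for k in range(1, order + 1):
        ak = 2.0 / n * sum(m * math.cos(2 * math.pi * k * p) for p, m in zip(phases, mags))
        bk = 2.0 / n * sum(m * math.sin(2 * math.pi * k * p) for p, m in zip(phases, mags))
        coeffs.append((ak, bk))
    return a0, coeffs

def evaluate(a0, coeffs, phi):
    """Reconstruct the template magnitude at pulsation phase phi."""
    return a0 + sum(a * math.cos(2 * math.pi * (k + 1) * phi) +
                    b * math.sin(2 * math.pi * (k + 1) * phi)
                    for k, (a, b) in enumerate(coeffs))

# Synthetic light curve: mean 12.0 mag plus two harmonics, 200 uniform phases
ph = [i / 200 for i in range(200)]
lc = [12.0 + 0.3 * math.cos(2 * math.pi * p) + 0.1 * math.sin(4 * math.pi * p) for p in ph]

a0, cs = fourier_fit(ph, lc, order=2)
```

Anchoring such a template at a single-epoch observation is what lets a lone NIR measurement stand in for the full mean magnitude.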
Observational evidence of dust evolution in galactic extinction curves
Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo; Zonca, Alberto
2014-04-10
Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves are compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.
Birational maps that send biquadratic curves to biquadratic curves
NASA Astrophysics Data System (ADS)
Roberts, John A. G.; Jogia, Danesh
2015-02-01
Recently, many papers have begun to consider so-called non-Quispel-Roberts-Thompson (QRT) birational maps of the plane. Compared to the QRT family of maps which preserve each biquadratic curve in a fibration of the plane, non-QRT maps send a biquadratic curve to another biquadratic curve belonging to the same fibration or to a biquadratic curve from a different fibration of the plane. In this communication, we give the general form of a birational map derived from a difference equation that sends a biquadratic curve to another. The necessary and sufficient condition for such a map to exist is that the discriminants of the two biquadratic curves are the same (and hence so are the j-invariants). The result allows existing examples in the literature to be better understood and allows some statements to be made concerning their generality.
Tracing personalized health curves during infections.
Schneider, David S
2011-09-01
It is difficult to describe host-microbe interactions in a manner that deals well with both pathogens and mutualists. Perhaps a way can be found using an ecological definition of tolerance, where tolerance is defined as the dose response curve of health versus parasite load. To plot tolerance, individual infections are summarized by reporting the maximum parasite load and the minimum health for a population of infected individuals and the slope of the resulting curve defines the tolerance of the population. We can borrow this method of plotting health versus microbe load in a population and make it apply to individuals; instead of plotting just one point that summarizes an infection in an individual, we can plot the values at many time points over the course of an infection for one individual. This produces curves that trace the course of an infection through phase space rather than over a more typical timeline. These curves highlight relationships like recovery and point out bifurcations that are difficult to visualize with standard plotting techniques. Only nine archetypical curves are needed to describe most pathogenic and mutualistic host-microbe interactions. The technique holds promise as both a qualitative and quantitative approach to dissect host-microbe interactions of all kinds.
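The summarization step described above, collapsing one infection to a single (maximum load, minimum health) point and defining population tolerance as the slope across individuals, can be sketched directly; the trajectories below are hypothetical.

```python
def infection_summary(loads, healths):
    """Collapse one individual's infection time course to the single point
    (maximum parasite load, minimum health)."""
    return max(loads), min(healths)

def tolerance_slope(summaries):
    """Population tolerance: least-squares slope of minimum health
    against maximum parasite load across individuals."""
    xs = [s[0] for s in summaries]
    ys = [s[1] for s in summaries]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Three hypothetical individuals: (load series, health series) over time;
# higher peak loads go with deeper health troughs
pop = [infection_summary(*traj) for traj in [
    ([1, 3, 2], [9, 8, 9]),
    ([2, 5, 4], [9, 6, 8]),
    ([3, 8, 6], [8, 3, 6]),
]]
slope = tolerance_slope(pop)   # negative: health lost per unit load gained
```

The paper's proposal is to stop collapsing: plotting every (load, health) pair along one individual's time course traces the disection curve through phase space instead of reducing it to a point.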