ERIC Educational Resources Information Center
Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong
2013-01-01
Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e., four years) in young adolescents aged 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…
ERIC Educational Resources Information Center
Harper, Suzanne R.; Driskell, Shannon
2005-01-01
Graphic tips for using the Geometer's Sketchpad (GSP) are described. The article demonstrates how to import an image into GSP, define a coordinate system, plot points, and fit a curve to the function using a graphing calculator; these graphic features of GSP allow teachers to expand the use of the application beyond the classroom.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
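The quadratic chi-square expansion described above leads to linearized normal equations that are solved repeatedly until the parameter updates become negligible. A minimal Python sketch of that Gauss-Newton-style iteration follows (NLINEAR itself is Fortran 77; the model, data, and starting values here are illustrative, not from the program):

```python
import numpy as np

def fit_nonlinear(model, jac, p0, t, y, sigma, n_iter=50, tol=1e-10):
    """Weighted nonlinear least squares via the quadratic expansion of
    chi-square: each step solves the linearized normal equations
    J^T W J dp = J^T W r for the parameter update dp."""
    p = np.asarray(p0, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    for _ in range(n_iter):
        r = y - model(t, p)                  # residuals
        J = jac(t, p)                        # m x n Jacobian of the model
        A = J.T @ (w[:, None] * J)           # curvature matrix
        b = J.T @ (w * r)
        dp = np.linalg.solve(A, b)
        p = p + dp
        if np.max(np.abs(dp)) < tol:         # converged
            break
    chi2 = float(np.sum(w * (y - model(t, p)) ** 2))
    return p, chi2

# Illustrative fitting function y = a * exp(b * t) with meaningful
# initial estimates, as the algorithm requires
model = lambda t, p: p[0] * np.exp(p[1] * t)
jac = lambda t, p: np.column_stack([np.exp(p[1] * t),
                                    p[0] * t * np.exp(p[1] * t)])
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)                   # noiseless synthetic data
p, chi2 = fit_nonlinear(model, jac, [1.5, -1.2], t, y,
                        sigma=np.ones_like(t))
```

On this zero-residual problem the iteration recovers a = 2, b = -1.5 to machine precision.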
NASA Astrophysics Data System (ADS)
Tasel, Serdar F.; Hassanpour, Reza; Mumcuoglu, Erkan U.; Perkins, Guy C.; Martone, Maryann
2014-03-01
Mitochondria are sub-cellular components which are mainly responsible for synthesis of adenosine tri-phosphate (ATP) and involved in the regulation of several cellular activities such as apoptosis. The relation between some common diseases of aging and morphological structure of mitochondria is gaining strength by an increasing number of studies. Electron microscope tomography (EMT) provides high-resolution images of the 3D structure and internal arrangement of mitochondria. Studies that aim to reveal the correlation between mitochondrial structure and its function require the aid of special software tools for manual segmentation of mitochondria from EMT images. Automated detection and segmentation of mitochondria is a challenging problem due to the variety of mitochondrial structures, the presence of noise, artifacts and other sub-cellular structures. Segmentation methods reported in the literature require human interaction to initialize the algorithms. In our previous study, we focused on 2D detection and segmentation of mitochondria using an ellipse detection method. In this study, we propose a new approach for automatic detection of mitochondria from EMT images. First, a preprocessing step was applied in order to reduce the effect of nonmitochondrial sub-cellular structures. Then, a curve fitting approach was presented using a Hessian-based ridge detector to extract membrane-like structures and a curve-growing scheme. Finally, an automatic algorithm was employed to detect mitochondria which are represented by a subset of the detected curves. The results show that the proposed method is more robust in detection of mitochondria in consecutive EMT slices as compared with our previous automatic method.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100-degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
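AKLSQF itself is Quick Basic; its degree-escalation loop can be sketched in Python as below (using `np.polyfit` in place of the orthogonal factorial polynomials, so the internals differ from the original):

```python
import numpy as np

def least_squares_poly(x, y, tol=None, degree=None, max_degree=100):
    """Least-squares polynomial fit. Either a fixed degree or an error
    tolerance may be given; with a tolerance, the degree is raised from
    1 until the RMS fit error meets it (mirroring AKLSQF's loop)."""
    if degree is not None:
        c = np.polyfit(x, y, degree)
        err = float(np.sqrt(np.mean((np.polyval(c, x) - y) ** 2)))
        return c, err
    for d in range(1, max_degree + 1):
        c = np.polyfit(x, y, d)
        err = float(np.sqrt(np.mean((np.polyval(c, x) - y) ** 2)))
        if err <= tol:
            break
    return c, err

x = np.linspace(0.0, 1.0, 21)        # uniformly spaced data
y = 3 * x**2 - 2 * x + 1             # exactly quadratic, for illustration
coeffs, err = least_squares_poly(x, y, tol=1e-8)
```

The loop stops at degree 2, where the fit error drops to rounding level.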
NASA Astrophysics Data System (ADS)
Martin, Y. L.
The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R
2012-02-01
The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean +/- SE; age: 22.1 +/- 0.34 years; VO2(MAX): 46.1 +/- 0.82 mL/kg/min) volunteered to participate in this study. A VO2(MAX) test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2(MAX) prediction equation (Coefficient of determination: 0.805; Standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to a time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
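The core computation behind CV testing is a straight-line fit of total distance against time-to-exhaustion: the slope is the critical velocity and the intercept the anaerobic running capacity. A Python sketch with hypothetical run data (the study's actual measurements are not reproduced here):

```python
import numpy as np

def critical_velocity(times, distances):
    """Linear model d = ARC + CV * t relating total distance covered to
    time-to-exhaustion; the slope is the critical velocity (CV) and the
    intercept the anaerobic running capacity (ARC)."""
    cv, arc = np.polyfit(times, distances, 1)
    return cv, arc

# Hypothetical exhaustive runs (seconds, meters)
t = np.array([150.0, 300.0, 600.0, 900.0])
d = 200.0 + 4.0 * t      # synthetic: ARC = 200 m, CV = 4 m/s
cv, arc = critical_velocity(t, d)
```

Plotting individual (ARC, CV) pairs against isoperformance curves for the two-mile standard then classifies each subject as passing or not.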
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
Least-squares fitting Gompertz curve
NASA Astrophysics Data System (ADS)
Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf
2004-08-01
In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (p_i, t_i, y_i), i = 1, ..., m, m ≥ 3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
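A minimal Python sketch of a Gompertz LS fit, using SciPy's Levenberg-Marquardt routine rather than the authors' method; the data and the initial approximation (e.g. the asymptote guessed from max(y)) are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    # Standard three-parameter Gompertz growth curve
    return a * np.exp(-b * np.exp(-c * t))

t = np.linspace(0.0, 10.0, 15)
y = gompertz(t, 5.0, 3.0, 0.6)          # synthetic, noiseless data

# A good initial approximation matters: as the paper stresses,
# existence of the LS estimate is not automatic.
p, _ = curve_fit(gompertz, t, y, p0=[y.max(), 2.0, 0.5])
```

With a sensible starting point the solver recovers (a, b, c) = (5, 3, 0.6).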
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Modeling and Fitting Exoplanet Transit Light Curves
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, G. T.
2013-01-01
We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model is unique in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the
Multivariate curve-fitting in GAUSS
Bunck, C.M.; Pendleton, G.W.
1988-01-01
Multivariate curve-fitting techniques for repeated measures have been developed and an interactive program has been written in GAUSS. The program implements not only the one-factor design described in Morrison (1967) but also includes pairwise comparisons of curves and rates, a two-factor design, and other options. Strategies for selecting the appropriate degree for the polynomial are provided. The methods and program are illustrated with data from studies of the effects of environmental contaminants on ducklings, nesting kestrels and quail.
Fitting Polynomial Equations to Curves and Surfaces
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.; Tiffany, S. H.
1986-01-01
FIT is computer program for interactively determining least-squares polynomial equations that fit user-supplied data. Finds least-squares fits for functions of two independent variables. Interactive graphical and editing capabilities in FIT enable user to control polynomial equations to be fitted to data arising from most practical applications. FIT written in FORTRAN and COMPASS.
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 10^3 amagats.
Simplified curve fits for the transport properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.
1987-01-01
New, improved curve fits for the transport properties of equilibrium air have been developed. The curve fits are for viscosity and Prandtl number as functions of temperature and density, and for viscosity and thermal conductivity as functions of internal energy and density. The curve fits were constructed using Grabau-type transition functions to model the transport properties of Peng and Pindroh. The resulting curve fits are sufficiently accurate and self-contained that they can be readily incorporated into new or existing computational fluid dynamics codes. The ranges of validity of the new curve fits are temperatures up to 15,000 K and densities from 10^-5 to 10 amagats (rho/rho sub o).
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (rho/rho sub 0).
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = A e^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A x^B + C and the general geometric growth equation y = A k^(Bt) + C.
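A rough Python sketch of a non-iterative fit in this spirit, built on the identity dy/dt = B(y - C) that follows from the exponential model; this reconstruction uses finite differences and two straight-line fits and is not the program's actual algorithm:

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C. Since dy/dt = B*(y - C),
    a straight-line fit of the finite-difference derivative against y
    yields B and C; with those fixed, y - C is linear in exp(B*t),
    giving A from a second straight-line fit."""
    dydt = np.gradient(y, t)                 # discrete derivative
    B, intercept = np.polyfit(y, dydt, 1)    # dy/dt = B*y - B*C
    C = -intercept / B
    A = np.polyfit(np.exp(B * t), y - C, 1)[0]
    return A, B, C

# Synthetic, densely sampled data (the finite differences need it)
t = np.linspace(0.0, 2.0, 201)
y = 3.0 * np.exp(-1.2 * t) + 0.5
A, B, C = fit_exponential(t, y)
```

No iteration or starting guess is needed; accuracy is limited mainly by the finite-difference derivative at the interval ends.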
Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm.
Jang, Daeho; Chae, Geunhyoung; Shin, Sehyun
2015-01-01
The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present curve-fitting sigmoid-asymmetric equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer. PMID:26437414
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Curve fitting for RHB Islamic Bank annual net profit
NASA Astrophysics Data System (ADS)
Nadarajan, Dineswary; Noor, Noor Fadiya Mohd
2015-05-01
The RHB Islamic Bank net profit data are obtained from 2004 to 2012. Curve fitting is carried out treating the data either as exact or as experimental values subject to smoothing. Higher-order Lagrange polynomials and a cubic spline are constructed with a curve fitting procedure using Maple software. A normality test is performed to check data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level by ANOVA. Residual errors and absolute relative true errors are calculated and compared. The optimal model, based on the minimum average error, is proposed.
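For illustration, interpolating-polynomial and cubic-spline fits of the kind described can be built with SciPy; the yearly figures below are hypothetical stand-ins, not the bank's actual data:

```python
import numpy as np
from scipy.interpolate import CubicSpline, lagrange

# Hypothetical yearly net-profit figures for 2004-2012
years = np.arange(2004, 2013)
profit = np.array([35.0, 42.0, 40.0, 55.0, 61.0, 58.0, 72.0, 80.0, 95.0])

# High-order Lagrange interpolating polynomial (x shifted to 0..8 for
# numerical conditioning) and a piecewise cubic spline
poly = lagrange(years - 2004, profit)
spline = CubicSpline(years, profit)

# Both reproduce the data exactly at the nodes; between nodes the
# degree-8 polynomial can oscillate (Runge phenomenon), which the
# spline avoids.
```

Comparing residual errors between such candidate models is exactly the selection step the abstract describes.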
Mössbauer spectral curve fitting combining fundamentally different techniques
NASA Astrophysics Data System (ADS)
Susanto, Ferry; de Souza, Paulo
2016-10-01
We propose the use of fundamentally distinctive techniques to solve the problem of curve fitting a Mössbauer spectrum. The techniques we investigated are: evolutionary algorithm, basin hopping, and hill climbing. These techniques were applied in isolation and combined to fit different shapes of Mössbauer spectra. The results indicate that complex Mössbauer spectra can be automatically curve fitted with minimal user input, and that a combination of these techniques achieved the best performance (lowest statistical error). The software and sample Mössbauer spectra are available through a link in the reference.
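As a toy illustration of one of the techniques named above, here is a basin-hopping fit of a single-line (Lorentzian) absorption spectrum with SciPy; the line parameters are synthetic, and this stands in for, rather than reproduces, the paper's combined approach:

```python
import numpy as np
from scipy.optimize import basinhopping

def lorentzian(v, depth, v0, gamma):
    # Absorption line: baseline 1 minus a Lorentzian dip of given
    # depth, center v0, and half-width gamma
    return 1.0 - depth * gamma**2 / ((v - v0) ** 2 + gamma**2)

v = np.linspace(-4.0, 4.0, 200)              # velocity axis (mm/s)
spectrum = lorentzian(v, 0.3, 0.5, 0.4)      # synthetic spectrum

def sse(p):
    # Statistical error to minimize: sum of squared residuals
    return float(np.sum((lorentzian(v, *p) - spectrum) ** 2))

# Basin hopping pairs a local minimizer with randomized restarts,
# which is what lets it cope with the many local minima of
# multi-line spectra.
result = basinhopping(sse, x0=[0.1, 0.0, 1.0], niter=25,
                      minimizer_kwargs={"method": "Nelder-Mead"})
```

For genuinely complex spectra one would fit a sum of such lines, where the global-search component earns its keep.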
Viscosity Coefficient Curve Fits for Ionized Gas Species
NASA Technical Reports Server (NTRS)
Palmer, Grant; Arnold, James O. (Technical Monitor)
2001-01-01
Viscosity coefficient curve fits for neutral gas species are available from many sources. Many do a good job of reproducing experimental and computational chemistry data. The curve fits are usually expressed as a function of temperature only. This is consistent with the governing equations used to derive an expression for the neutral species viscosity coefficient. Ionized species pose a more complicated problem. They are subject to electrostatic as well as intermolecular forces. The electrostatic forces are affected by a shielding phenomenon where electrons shield the electrostatic forces of positively charged ions beyond a certain distance. The viscosity coefficient for an ionized gas species is a function of both temperature and local electron number density. Currently available curve fits for ionized gas species, such as those presented by Gupta/Yos, are a function of temperature only; they were developed by assuming a fixed electron number density, and the assumed value was unrealistically high. The purpose of this paper is two-fold. First, the proper expression for determining the viscosity coefficient of an ionized species as a function of both temperature and electron number density will be presented. Then curve fit coefficients will be developed using the more realistic assumption of an equilibrium electron number density. The results will be compared against previous curve fits and against highly accurate computational chemistry data.
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
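The localized straight-line fit for Isc can be sketched as an ordinary least-squares regression whose intercept at V = 0 is Isc, with the intercept's standard uncertainty from the regression; this is the standard frequentist treatment, not the paper's objective Bayesian method, and the I-V points are hypothetical:

```python
import numpy as np

def isc_from_linear_fit(v, i):
    """Straight-line fit I = a + b*V through I-V points nearest short
    circuit; Isc is the intercept a, with its standard uncertainty
    taken from the ordinary least-squares covariance."""
    A = np.column_stack([np.ones_like(v), v])
    coef, res, *_ = np.linalg.lstsq(A, i, rcond=None)
    dof = len(v) - 2
    s2 = float(res[0]) / dof if res.size else 0.0   # residual variance
    cov = s2 * np.linalg.inv(A.T @ A)               # parameter covariance
    return coef[0], float(np.sqrt(cov[0, 0]))

# Hypothetical I-V points near V = 0 for a PV device, with small noise
v = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
i = 5.0 - 0.8 * v + np.array([0.001, -0.002, 0.000, 0.002, -0.001])
isc, u_isc = isc_from_linear_fit(v, i)
```

As the abstract cautions, widening the data window shrinks this regression uncertainty arbitrarily while ignoring model discrepancy, which is what motivates the evidence-based window selection.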
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
BGFit: management and automated fitting of biological growth curves
2013-01-01
Background Existing tools for modeling cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. With the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. Results BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing users to efficiently manage the results in a structured and hierarchical way. The data management system allows users to organize projects, experiments, and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic, and Live Cell Fraction models, and the user can easily add new models, thus expanding the current set. Conclusions BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is applicable to any type of two-dimensional data, e.g., physical chemistry and macroeconomic time series, and is fully scalable to a high number of projects, data, and model complexity. PMID:24067087
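As an illustration of the kind of model BGFit fits, here is a minimal logistic growth fit in Python with synthetic readings (BGFit itself is a web platform; this is not its code):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # Classic logistic growth: carrying capacity K, growth rate r,
    # and midpoint (inflection time) t0
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(0.0, 24.0, 25)            # hours
y = logistic(t, 1.8, 0.5, 10.0)           # synthetic OD-style readings

# Rough starting values (asymptote from the data maximum)
params, _ = curve_fit(logistic, t, y, p0=[y.max(), 0.3, 12.0])
```

Batch-fitting such models across many experiments, and keeping the estimated K, r, t0 organized per project, is the workflow the platform automates.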
Appropriate calibration curve fitting in ligand binding assays.
Findlay, John W A; Dillard, Robert F
2007-06-29
Calibration curves for ligand binding assays are generally characterized by a nonlinear relationship between the mean response and the analyte concentration. Typically, the response exhibits a sigmoidal relationship with concentration. The currently accepted reference model for these calibration curves is the 4-parameter logistic (4-PL) model, which optimizes accuracy and precision over the maximum usable calibration range. Incorporation of weighting into the model requires additional effort but generally results in improved calibration curve performance. For calibration curves with some asymmetry, introduction of a fifth parameter (5-PL) may further improve the goodness of fit of the experimental data to the algorithm. Alternative models should be used with caution and with knowledge of the accuracy and precision performance of the model across the entire calibration range, but particularly at upper and lower analyte concentration areas, where the 4- and 5-PL algorithms generally outperform alternative models. Several assay design parameters, such as placement of calibrator concentrations across the selected range and assay layout on multiwell plates, should be considered, to enable optimal application of the 4- or 5-PL model. The fit of the experimental data to the model should be evaluated by assessment of agreement of nominal and model-predicted data for calibrators.
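A minimal sketch of a 4-PL calibration fit and the back-calculation of concentration from response, with hypothetical calibrator data; the fit here is unweighted, although the text notes that weighting generally improves calibration performance:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """4-parameter logistic: a = response at zero dose, d = response at
    infinite dose, c = inflection concentration (EC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, d, c, b):
    # Back-calculate analyte concentration from a measured response
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical calibrator concentrations spanning the usable range
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = four_pl(conc, 0.05, 2.0, 5.0, 1.2)   # synthetic mean responses

p, _ = curve_fit(four_pl, conc, resp, p0=[0.1, 1.8, 4.0, 1.0])
```

Goodness of fit should still be checked against nominal calibrator values, especially near the upper and lower ends of the range, as the abstract advises.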
Double-mass curves; with a section fitting curves to cyclic data
Searcy, James K.; Hardison, Clayton H.; Langbein, Walter B.
1960-01-01
The double-mass curve is used to check the consistency of many kinds of hydrologic data by comparing data for a single station with those of a pattern composed of the data from several other stations in the area. The double-mass curve can be used to adjust inconsistent precipitation data. The graph of the cumulative data of one variable versus the cumulative data of a related variable is a straight line so long as the relation between the variables is a fixed ratio. Breaks in the double-mass curve of such variables are caused by changes in the relation between the variables. These changes may be due to changes in the method of data collection or to physical changes that affect the relation. Applications of the double-mass curve to precipitation, streamflow, and sediment data, and to precipitation-runoff relations are described. A statistical test for the significance of an apparent break in the slope of the double-mass curve is illustrated by an example. Poor correlation between the variables can prevent detection of inconsistencies in a record, but an increase in the length of record tends to offset the effect of poor correlation. The residual-mass curve, a modification of the double-mass curve, magnifies imperceptible breaks in the double-mass curve for detailed study. Of the several methods of fitting a smooth curve to cyclic or periodic data, the moving-arc method and the double-integration method deserve greater use in hydrology. Both methods are described in this manual. The moving-arc method has general applicability, and the double-integration method is useful in fitting a curve to cycles of sinusoidal form.
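The double-mass construction itself is simple cumulative arithmetic. The sketch below, on synthetic precipitation records (all numbers hypothetical), builds the curve, estimates the slopes of its two limbs around a known break, and adjusts the later record by the ratio of slopes, as the manual describes for inconsistent precipitation data.

```python
import numpy as np

# Hypothetical annual precipitation (mm): station under test vs. the mean of
# surrounding "pattern" stations. After year 10 the test gauge is moved,
# changing its catch by a factor of 0.8 (a simulated inconsistency).
rng = np.random.default_rng(1)
pattern = 1000 + 50 * rng.standard_normal(20)
station = np.where(np.arange(20) < 10, pattern, 0.8 * pattern)

cum_pattern = np.cumsum(pattern)   # abscissa of the double-mass curve
cum_station = np.cumsum(station)   # ordinate

# Slopes of the two limbs of the double-mass curve, either side of the break
slope_before = (cum_station[9] - cum_station[0]) / (cum_pattern[9] - cum_pattern[0])
slope_after = (cum_station[-1] - cum_station[9]) / (cum_pattern[-1] - cum_pattern[9])

# Adjust the later record to the earlier regime by the ratio of slopes
adjusted = np.where(np.arange(20) < 10, station,
                    station * slope_before / slope_after)
```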
Caloric curves fitted by polytropic distributions in the HMF model
NASA Astrophysics Data System (ADS)
Campa, Alessandro; Chavanis, Pierre-Henri
2013-04-01
We perform direct numerical simulations of the Hamiltonian mean field (HMF) model starting from non-magnetized initial conditions with a velocity distribution that is (i) Gaussian; (ii) semi-elliptical; and (iii) waterbag. Below a critical energy E_c, depending on the initial condition, this distribution is Vlasov dynamically unstable. The system undergoes a process of violent relaxation and quickly reaches a quasi-stationary state (QSS). We find that the distribution function of this QSS can be conveniently fitted by a polytrope with index (i) n = 2; (ii) n = 1; and (iii) n = 1/2. Using the values of these indices, we are able to determine the physical caloric curve T_kin(E) and explain the negative kinetic specific heat region C_kin = dE/dT_kin < 0 observed in the numerical simulations. At low energies, we find that the system has a "core-halo" structure. The core corresponds to the pure polytrope discussed above, but it is now surrounded by a halo of particles. In case (iii), we recover the "uniform" core-halo structure previously found by Pakter and Levin [Phys. Rev. Lett. 106, 200603 (2011)]. We also consider unsteady initial conditions with magnetization M_0 = 1 and isotropic waterbag velocity distribution and report the complex dynamics of the system creating phase space holes and dense filaments. We show that the kinetic caloric curve is approximately constant, corresponding to a polytrope with index n_0 ≃ 3.56 (we also mention the presence of an unexpected hump). Finally, we consider the collisional evolution of an initially Vlasov stable distribution and show that the time-evolving distribution function f(θ, v, t) can be fitted by a sequence of polytropic distributions with a time-dependent index n(t), both in the non-magnetized and magnetized regimes. These numerical results show that polytropic distributions (also called Tsallis distributions) provide in many cases a good fit of the QSSs. They may even be the rule rather than the exception.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression, which extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing-spline, hybrid adaptive spline and wavelet-shrinkage techniques on two generated data sets.
FIT-MART: Quantum Magnetism with a Gentle Learning Curve
NASA Astrophysics Data System (ADS)
Engelhardt, Larry; Garland, Scott C.; Rainey, Cameron; Freeman, Ray A.
We present a new open-source software package, FIT-MART, that allows non-experts to quickly get started simulating quantum magnetism. FIT-MART can be downloaded as a platform-independent executable Java (JAR) file. It allows the user to define (Heisenberg) Hamiltonians by electronically drawing pictures that represent quantum spins and operators. Sliders are automatically generated to control the values of the parameters in the model, and when the values change, several plots are updated in real time to display both the resulting energy spectra and the equilibrium magnetic properties. Several experimental data sets for real magnetic molecules are included in FIT-MART to allow easy comparison between simulated and experimental data, and FIT-MART users can also import their own data for analysis and compare the goodness of fit for different models.
Code System for Data Plotting and Curve Fitting.
2001-07-23
Version 00. PLOTnFIT is used for plotting and analyzing data by fitting nth-degree polynomials of basis functions to the data interactively and printing graphs of the data and the polynomial functions. It can be used to generate linear, semilog, and log-log graphs and can automatically scale the coordinate axes to suit the data. Multiple data sets may be plotted on a single graph. An auxiliary program, READ1ST, is included, which produces an on-line summary of the information contained in the PLOTnFIT reference report.
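The core of such a workflow, fitting an nth-degree polynomial and choosing axis scaling, can be sketched in a few lines of modern Python (PLOTnFIT itself predates this; the data here are hypothetical):

```python
import numpy as np

# Hypothetical data following y = 2 * x**3: a straight line on log-log axes,
# so a degree-1 polynomial fit in log space recovers exponent and coefficient
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.0 * x ** 3

# polyfit returns highest-degree coefficient first: [slope, intercept]
slope, intercept = np.polyfit(np.log10(x), np.log10(y), deg=1)
```

Here `slope` recovers the exponent (3) and `10**intercept` the coefficient (2), illustrating why log-log scaling is offered alongside linear and semilog axes.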
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
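The resulting equations are indeed simple enough to code directly. Below is a minimal sketch of the standard uniform Catmull-Rom segment evaluation (the constant-coefficient form the article refers to); the test points are illustrative.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the uniform Catmull-Rom segment between p1 and p2 at t in [0, 1].
    Standard constant-coefficient cubic form; passes through p1 at t=0 and
    p2 at t=1, with tangents set by the neighboring points p0 and p3."""
    t = np.asarray(t, dtype=float)
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

For equally spaced collinear control points the segment reproduces the straight line exactly, which is a quick sanity check on the coefficients.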
Lee, Hyun-Wook; Park, Hyoung-Jun; Lee, June-Ho; Song, Minho
2007-04-20
To improve the measurement accuracy of spectrally distorted fiber Bragg grating temperature sensors, reflection profiles were curve-fitted to Gaussian shapes, whose center positions were transformed into temperature information. By applying the Gaussian curve-fitting algorithm in a tunable bandpass filter demodulation scheme, ~0.3 °C temperature resolution was obtained with a severely distorted grating sensor, much better than that obtained using the highest-peak-search algorithm. A binary search was also used to retrieve the optimal fitting curves with the least amount of processing time.
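A minimal sketch of this Gaussian curve-fitting idea on a synthetic distorted spectrum (all wavelengths and amplitudes hypothetical): the fitted center tracks the true peak even when a spurious side lobe distorts the profile.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    # Gaussian model of the reflection profile; `center` maps to temperature
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic distorted FBG spectrum: main peak plus a spurious side lobe + noise
wl = np.linspace(1549.0, 1551.0, 201)   # wavelength grid (nm), illustrative
true_center = 1550.1
rng = np.random.default_rng(2)
spectrum = (gaussian(wl, 1.0, true_center, 0.15)
            + 0.3 * gaussian(wl, 1.0, 1550.45, 0.05)
            + 0.01 * rng.standard_normal(wl.size))

popt, _ = curve_fit(gaussian, wl, spectrum, p0=[1.0, 1550.0, 0.2])
fitted_center = popt[1]
```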
Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python
NASA Astrophysics Data System (ADS)
Newville, Matthew; Stensitzki, Till; Allen, Daniel B.; Rawlik, Michal; Ingargiola, Antonino; Nelson, Andrew
2016-06-01
Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
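Lmfit's `Model` class wraps calls like the following; to stay dependency-light, this sketch uses the scipy Levenberg-Marquardt layer that lmfit builds on (the model and data are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, x, y):
    # Residuals of an illustrative exponential-decay model y = a * exp(-x/tau)
    a, tau = params
    return a * np.exp(-x / tau) - y

x = np.linspace(0.0, 5.0, 50)
y = 3.0 * np.exp(-x / 1.5)          # synthetic, noise-free data

# method="lm" selects the Levenberg-Marquardt algorithm highlighted above
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y), method="lm")
a_fit, tau_fit = fit.x
```

Lmfit adds, on top of this, named bounded parameters, interchangeable solvers, and confidence-interval estimation.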
New approach to curved projective superspace
NASA Astrophysics Data System (ADS)
Butter, Daniel
2015-10-01
We present a new formulation of curved projective superspace. The 4D N=2 supermanifold M^{4|8} (four bosonic and eight Grassmann coordinates) is extended by an auxiliary SU(2) manifold, which involves introducing a vielbein and related connections on the full M^{7|8} = M^{4|8} × SU(2). Constraints are chosen so that it is always possible to return to the central basis, where the auxiliary SU(2) manifold largely decouples from the curved manifold M^{4|8} describing 4D N=2 conformal supergravity. We introduce the relevant projective superspace action principle in the analytic subspace of M^{7|8} and construct its component reduction in terms of a five-form J living on M^4 × C, with C a contour in SU(2). This approach is inspired by and generalizes the original approach, which can be identified with a complexified version of the central gauge of the formulation presented here.
"asymptotic Parabola" FITS for Smoothing Generally Asymmetric Light Curves
NASA Astrophysics Data System (ADS)
Andrych, K. D.; Andronov, I. L.; Chinarova, L. L.; Marsakova, V. I.
A computer program is introduced which allows one to determine a statistically optimal approximation using the "asymptotic parabola" fit, in other words, a spline consisting of polynomials of order 1, 2, 1: two lines ("asymptotes") connected by a parabola. The function and its derivative are continuous. There are five parameters: the two points where a line switches to the parabola and vice versa, the slopes of the lines and the curvature of the parabola. Extreme cases are either a parabola without lines (i.e. a parabola spanning the whole interval), lines without a parabola (zero width of the parabola), or "line + parabola" without a second line. Such an approximation is especially effective for pulsating variables, for which the slopes of the ascending and descending branches are generally different, so the maxima and minima have asymmetric shapes. The method was initially introduced by Marsakova and Andronov (1996OAP.....9...127M) and realized as a computer program written in QBasic under DOS. It was used for dozens of variable stars, particularly for the catalogs of the individual characteristics of pulsations of the Mira (1998OAP....11...79M) and semi-regular (2000OAP....13..116C) pulsating variables. For eclipsing variables with nearly symmetric minima, we use a "symmetric" version of the "asymptotic parabola". Here we introduce a Windows-based program which does not have the DOS limitations on memory (number of observations) and screen resolution. The program has a user-friendly interface and is illustrated by an application to a test signal and to the pulsating variable AC Her.
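A minimal sketch of the piecewise "line-parabola-line" shape with a continuous first derivative. The parameterization below (a parabola on an interval, continued by its tangent lines) is chosen for brevity and is an assumption; the program's actual parameters are the switch points, slopes and curvature.

```python
import numpy as np

def asymptotic_parabola(x, x1, x2, a, b, c):
    """Sketch of an 'asymptotic parabola': p(x) = a + b*x + c*x**2 on [x1, x2],
    continued outside the interval by its tangent lines at x1 and x2, so the
    function and its first derivative are continuous everywhere."""
    x = np.asarray(x, dtype=float)
    p = lambda t: a + b * t + c * t ** 2     # the parabola
    dp = lambda t: b + 2 * c * t             # its derivative (line slopes)
    return np.where(x < x1, p(x1) + dp(x1) * (x - x1),
           np.where(x > x2, p(x2) + dp(x2) * (x - x2), p(x)))
```

Fitting the five free quantities to an extremum of a light curve then reduces to ordinary nonlinear least squares over this function.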
Note: curve fit models for atomic force microscopy cantilever calibration in water.
Kennedy, Scott J; Cole, Daniel G; Clark, Robert L
2011-11-01
Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can vary by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%.
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
Investigating bias in the application of curve fitting programs to atmospheric time series
NASA Astrophysics Data System (ADS)
Pickers, P. A.; Manning, A. C.
2014-07-01
The decomposition of an atmospheric time series into its constituent parts is an essential tool for identifying and isolating variations of interest from a data set, and is widely used to obtain information about sources, sinks and trends in climatically important gases. Such procedures involve fitting appropriate mathematical functions to the data, however, it has been demonstrated that the application of such curve fitting procedures can introduce bias, and thus influence the scientific interpretation of the data sets. We investigate the potential for bias associated with the application of three curve fitting programs, known as HPspline, CCGCRV and STL, using CO2, CH4 and O3 data from three atmospheric monitoring field stations. These three curve fitting programs are widely used within the greenhouse gas measurement community to analyse atmospheric time series, but have not previously been compared extensively. The programs were rigorously tested for their ability to accurately represent the salient features of atmospheric time series, their ability to cope with outliers and gaps in the data, and for sensitivity to the values used for the input parameters needed for each program. We find that the programs can produce significantly different curve fits, and these curve fits can be dependent on the input parameters selected. There are notable differences between the results produced by the three programs for many of the decomposed components of the time series, such as the representation of seasonal cycle characteristics and the long-term growth rate. The programs also vary significantly in their response to gaps and outliers in the time series. Overall, we found that none of the three programs were superior, and that each program had its strengths and weaknesses. Thus, we provide a list of recommendations on the appropriate use of these three curve fitting programs for certain types of data sets, and for certain types of analyses and applications. In addition, we
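A toy version of the decomposition such programs perform, a polynomial trend plus seasonal harmonics fitted by linear least squares, is sketched below on a synthetic CO2-like record (values hypothetical; programs like CCGCRV additionally smooth the residuals, which is omitted here):

```python
import numpy as np

# Synthetic 6-year monthly record: linear trend + annual seasonal cycle
t = np.arange(0, 6, 1.0 / 12.0)                        # time in years
co2 = 360.0 + 2.0 * t + 3.0 * np.sin(2 * np.pi * t)    # illustrative values

# Design matrix: [1, t, sin(2*pi*t), cos(2*pi*t)] -> linear least squares
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, co2, rcond=None)

trend_slope = beta[1]                      # long-term growth rate (per year)
seasonal_amp = np.hypot(beta[2], beta[3])  # seasonal-cycle amplitude
```

The biases the study investigates arise when choices like polynomial degree, number of harmonics, or residual smoothing differ between programs.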
[A fitting power curve equation for the accumulative inhaling volume in Chinese under 19 years old].
Shang, Q; Zhou, H
2000-03-30
A fitting power curve equation based on breath frequency and body weight for the accumulative inhaled volume in Chinese subjects under 19 years old was established. The equation is Y = 754.37 + 258.34 X^1.9038, and the fit is good (R^2 = 0.9974). It is useful for estimating the degree of exposure to air pollutants of people at a young age. PMID:12725097
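A sketch of fitting that functional form, Y = a + b·X^c, with nonlinear least squares; the synthetic data below are generated from the published coefficients purely for illustration, then recovered by the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_curve(x, a, b, c):
    # Same functional form as the published equation Y = a + b * X^c
    return a + b * x ** c

age = np.linspace(1.0, 19.0, 19)
volume = power_curve(age, 754.37, 258.34, 1.9038)  # synthetic, illustrative

popt, _ = curve_fit(power_curve, age, volume, p0=[700.0, 200.0, 2.0])
```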
An Algorithm for Obtaining Reliable Priors for Constrained-Curve Fits
Terrence Draper; Shao-Jing Dong; Ivan Horvath; Frank Lee; Nilmani Mathur; Jianbo Zhang
2004-03-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-chi-square fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass, where a priori values are not otherwise known. We illustrate the efficacy of the method with data from overlap fermions on a quenched 16^3 x 28 lattice with spatial size La = 3.2 fm and pion mass as low as ~180 MeV.
Approaches to interventional fluoroscopic dose curves.
Wunderle, Kevin A; Rakowski, Joseph T; Dong, Frank F
2016-01-01
Modern fluoroscopes used for image-based guidance in interventional procedures are complex X-ray machines, with advanced image acquisition and processing systems capable of automatically controlling numerous parameters based on defined protocol settings. This study evaluated and compared approaches to technique factor modulation and air kerma rates in response to simulated patient thickness variations for four state-of-the-art and one previous-generation interventional fluoroscopes. A polymethyl methacrylate (PMMA) phantom was used as a tissue surrogate for the purposes of determining fluoroscopic reference plane air kerma rates, kVp, mA, and variable copper filter thickness over a wide range of simulated tissue thicknesses. Data were acquired for each fluoroscopic and acquisition dose curve within each vendor's default abdomen or body imaging protocol. The data obtained indicated vendor- and model-specific variations in the approach to technique factor modulation and reference plane air kerma rates across a range of tissue thicknesses. However, in the imaging protocol evaluated, all of the state-of-the-art systems had relatively low air kerma rates in the fluoroscopic low-dose imaging mode as compared to the previous-generation unit. Each of the newest-generation systems also employ Cu filtration within the selected protocol in the acquisition mode of imaging; this is a substantial benefit, reducing the skin entrance dose to the patient in the highest dose-rate mode of fluoroscope operation. Some vendors have also enhanced the radiation output capabilities of their fluoroscopes which, under specific conditions, may be beneficial; however, these increased output capabilities also have the potential to lead to unnecessarily high dose rates. Understanding how fluoroscopic technique factors are modulated provides insight into the vendor-specific image acquisition approach and may provide opportunities to optimize the imaging protocols for clinical practice. PMID
Baushke, Samuel W; Stedtfeld, Robert D; Tourlousse, Dieter M; Ahmad, Farhan; Wick, Lukas M; Gulari, Erdogan; Tiedje, James M; Hashsham, Syed A
2012-01-01
Non-equilibrium dissociation curves (NEDCs) have the potential to identify non-specific hybridizations on high throughput, diagnostic microarrays. We report a simple method for identification of non-specific signals by using a new parameter that does not rely on comparison of perfect match and mismatch dissociations. The parameter is the ratio of specific dissociation temperature (Td-w) to theoretical melting temperature (Tm) and can be obtained by automated fitting of a four-parameter, sigmoid, empirical equation to the thousands of curves generated in a typical experiment. The curves fit perfect match NEDCs from an initial experiment with an R2 of 0.998±0.006 and root mean square of 108±91 fluorescent units. Receiver operating characteristic curve analysis showed low temperature hybridization signals (20–48 °C) to be as effective as area under the curve as primary data filters. Evaluation of three datasets that target 16S rRNA and functional genes with varying degrees of target sequence similarity showed that filtering out hybridizations with Td-w/Tm < 0.78 greatly reduced false positive results. In conclusion, Td-w/Tm successfully screened many non-specific hybridizations that could not be identified using single temperature signal intensities alone, while the empirical modeling allowed a simplified approach to the high throughput analysis of thousands of NEDCs. PMID:22537822
Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis
ERIC Educational Resources Information Center
Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.
2010-01-01
The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo, Urbano
2000-01-01
Describes a program for computing different person-fit measures under different parametric item response models for binary items. The indexes can be computed for the Rasch model and the two- and three-parameter logistic models. The program can plot person response curves to allow the researchers to investigate the nonfitting response behavior of…
On Fitting Nonlinear Latent Curve Models to Multiple Variables Measured Longitudinally
ERIC Educational Resources Information Center
Blozis, Shelley A.
2007-01-01
This article shows how nonlinear latent curve models may be fitted for simultaneous analysis of multiple variables measured longitudinally using Mx statistical software. Longitudinal studies often involve observation of several variables across time with interest in the associations between change characteristics of different variables measured…
Evaluation of fatigue-crack growth rates by polynomial curve fitting. [Ti alloy plate
NASA Technical Reports Server (NTRS)
Davies, K. B.; Feddersen, C. E.
1973-01-01
Fundamental characterization of the constant-amplitude fatigue crack propagation is achieved by an analysis of the rate of change of crack length with change in number of applied loading cycles, defining the rate values such that they are consistent with the basic assumption of smoothness and continuity in the fatigue crack growth process. The technique used to satisfy the analytical conditions and minimize the effects of local material anomalies and experimental errors is that of fitting a smooth curve to the entire set of basic data by least square regression. This yields a well-behaved function relating the number of cycles to the crack length. By taking the first derivative of the function, the crack growth rate is obtained for each point. The class of curve fitting functions used in the analysis is the polynomial of degree n.
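The procedure, least-squares polynomial regression of crack length against cycles followed by differentiation for the rate, can be sketched as follows (synthetic data; units and coefficients are illustrative):

```python
import numpy as np

# Synthetic crack-length data: a (mm) versus cycle count N (kilocycles)
N = np.linspace(0.0, 100.0, 21)
a_mm = 5.0 + 2.0e-2 * N + 1.0e-3 * N ** 2

# Least-squares regression of a smooth degree-n polynomial to all the data...
coeffs = np.polyfit(N, a_mm, deg=3)
# ...then the first derivative gives the crack growth rate da/dN at each point
rates = np.polyval(np.polyder(coeffs), N)
```

Fitting one smooth curve to the whole record, rather than differencing neighboring points, suppresses local material anomalies and experimental scatter in the rate values, which is the point of the method.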
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined, and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
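The core linear-algebra step the report derives, solving the simultaneous equations that follow from the quadratic expansion of chi-square, can be sketched as an iterated Gauss-Newton update (the model and data here are illustrative; the report's program adds the interactive and graphical layers on top of this step):

```python
import numpy as np

def model(x, p):
    # Illustrative nonlinear fitting function y = p0 * exp(p1 * x)
    return p[0] * np.exp(p[1] * x)

def jacobian(x, p):
    # Partial derivatives of the model with respect to p0 and p1
    return np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * x)          # synthetic, noise-free data
w = np.ones_like(x)                # unit statistical weights for this sketch

p = np.array([1.5, 0.3])           # initial parameter estimates
for _ in range(20):
    r = y - model(x, p)            # residuals at the current estimates
    J = jacobian(x, p)
    # Linearized chi-square minimum: solve (J^T W J) dp = J^T W r
    dp = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
    p = p + dp
```

As the report notes, meaningful initial estimates are required; a poor starting point can make the iterated linear solve diverge.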
A novel graph computation technique for multi-dimensional curve fitting
NASA Astrophysics Data System (ADS)
Motlagh, O.; Tang, S. H.; Maslan, M. N.; Azni Jafar, Fairul; Aziz, Maslita A.
2013-06-01
Curve-fitting problems are widely solved using numerical and soft-computing techniques. In particular, artificial neural networks (ANN) are used to approximate arbitrary input-output relationships in the form of tuned edge weights. Moreover, using semantic networks such as the fuzzy cognitive map (FCM), single graph nodes can be directly associated with their actual grey scales rather than binary values as in ANN. This article examines a novel methodology for the automatic construction of FCMs for function approximation. The main contribution is the introduction of a nested-FCM structure for multi-variable curve fitting. Step-by-step example cases, along with the obtained results, serve as a guide to the new methods being introduced. It is shown that nested FCM derives relationship models of multiple variables using any conventional weight-training technique with minimal computational effort. Issues of computational cost and accuracy are also discussed, along with future directions of the research.
Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers
NASA Astrophysics Data System (ADS)
Tananaev, N. I.
2015-03-01
Published suspended sediment data for Arctic rivers are scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Because of the biased sampling strategy, the raw datasets do not exhibit a log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which showed generally close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as the optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.
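The log-transformed linear fit whose applicability the abstract questions is simple enough to state exactly. Below is a minimal pure-Python sketch (illustrative, not the study's code; function name and synthetic data are assumptions) of fitting the rating curve s = a·q^b by ordinary least squares in log-log space, the baseline against which the nonlinear Levenberg-Marquardt fit is compared.

```python
import math

def rating_curve_loglog(q, s):
    """Least-squares fit of s = a * q**b after log transformation.
    ln(s) = ln(a) + b*ln(q) is fitted as a straight line; this is
    the simple approach that biased, non-lognormal samples break."""
    lx = [math.log(v) for v in q]
    ly = [math.log(v) for v in s]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b
```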
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
Based on the idea of quadratic curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved using Matlab. The validity of the preventing-increase model is confirmed through numerical experiment; the experimental results show that the model's precision is good.
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing of large structures in an external constraint-free environment to measure the effects of higher-order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. These characteristics can be seen in the test residual function as well, but owing to present limitations in the evaluation of modal parameters (natural frequencies and mode shapes) from test data, the residual function has regions of ragged data. A second-order polynomial curve fit is required to obtain the residual flexibility term. A weighting function for the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and to a space shuttle payload pallet simulator.
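A weighted second-order polynomial fit of the kind described above reduces to a 3×3 weighted normal-equation system. The sketch below is a hedged pure-Python illustration, not the report's code; the weight vector w stands in for the statistically generated weighting function, which the report derives from neighboring-point variances.

```python
def weighted_quadratic_fit(x, y, w):
    """Weighted least-squares fit of y = c0 + c1*x + c2*x**2,
    solving the 3x3 weighted normal equations directly."""
    # weighted normal-equation matrix A and right-hand side b
    A = [[sum(wi * xi ** (i + j) for wi, xi in zip(w, x)) for j in range(3)]
         for i in range(3)]
    b = [sum(wi * yi * xi ** i for wi, xi, yi in zip(w, x, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    c = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        c[k] = (b[k] - sum(A[k][j] * c[j] for j in range(k + 1, 3))) / A[k][k]
    return c  # [c0, c1, c2]
```

Down-weighting the ragged regions of the residual function (small w there) is what keeps the fitted residual flexibility term stable.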
Methodology for fast curve fitting to modulated Voigt dispersion lineshape functions
NASA Astrophysics Data System (ADS)
Westberg, Jonas; Wang, Junyang; Axner, Ove
2014-01-01
Faraday rotation spectroscopy (FAMOS), as well as other modulated techniques that rely on dispersion, produces lock-in signals that are proportional to various Fourier coefficients of modulated dispersion lineshape functions of the targeted molecular transition. To enable real-time curve fitting to such signals, a fast methodology for calculating the Fourier coefficients of modulated lineshape functions is needed. Although there exists an analytical expression for the Fourier coefficients of modulated Lorentzian absorption and dispersion lineshape functions, there is no corresponding expression for a modulated Voigt dispersion function. The conventional computational route to such Fourier coefficients has therefore so far consisted of either using various approximations to the modulated Voigt lineshape function or solving time-consuming integrals, which has precluded accurate real-time curve fitting. Here we present a new methodology for calculating Fourier coefficients of modulated Voigt dispersion lineshape functions that is significantly faster (by several orders of magnitude) and more accurate than previous approximate calculation procedures, allowing real-time curve fitting to FAMOS signals also in the Voigt regime.
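To make the "time-consuming integrals" concrete: the n-th Fourier coefficient of a modulated lineshape is an integral over one modulation period. The pure-Python sketch below evaluates it by direct quadrature for a Lorentzian dispersion shape (the case for which an analytical answer exists); the function names and normalization are assumptions, and this brute-force route is exactly what the paper's fast methodology replaces for the Voigt case.

```python
import math

def lorentz_dispersion(x):
    # Lorentzian dispersion lineshape, detuning x in HWHM units
    return x / (1.0 + x * x)

def fourier_coeff(n, xc, m, steps=2000):
    """n-th Fourier coefficient of the lineshape under sinusoidal
    modulation x(t) = xc + m*cos(theta), by midpoint quadrature:
    c_n = (1/pi) * integral_0^{2pi} D(xc + m*cos t) cos(n t) dt."""
    acc = 0.0
    for k in range(steps):
        th = 2.0 * math.pi * (k + 0.5) / steps
        acc += lorentz_dispersion(xc + m * math.cos(th)) * math.cos(n * th)
    return (2.0 / steps) * acc
```

By symmetry the even coefficients vanish at line centre (xc = 0) for a dispersion shape, which gives a quick sanity check on the quadrature.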
An approach to fast fits of the unintegrated gluon density
Knutsson, Albert; Bacchetta, Alessandro; Kutak, Krzyzstof; Jung, Hannes
2009-01-01
An approach to fast fits of the unintegrated gluon density has been developed and used to determine the unintegrated gluon density by fits to deep inelastic scattering di-jet data from HERA. The fitting method is based on determining the parameter dependence with the help of interpolation between grid points in the parameter-observable space before the actual fit is performed.
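The grid-interpolation idea can be sketched independently of the QCD context: tabulate the expensive observable once on a parameter grid, then let the fit evaluate only a cheap interpolated surrogate. A minimal pure-Python illustration under assumed names (a 1-D parameter and a quadratic stand-in for the expensive theory calculation):

```python
def make_surrogate(param_grid, observable):
    """Tabulate observable(p) on a grid once; return a linear
    interpolator so the fit never calls the expensive function."""
    table = [(p, observable(p)) for p in param_grid]
    def interp(p):
        for (p0, o0), (p1, o1) in zip(table, table[1:]):
            if p0 <= p <= p1:
                t = (p - p0) / (p1 - p0)
                return o0 + t * (o1 - o0)
        raise ValueError("parameter outside grid")
    return interp

def fit_parameter(surrogate, measured, p_lo, p_hi, steps=10000):
    """Scan the surrogate for the parameter best matching the
    measured observable (1-D chi-square with unit errors)."""
    best = None
    for k in range(steps + 1):
        p = p_lo + (p_hi - p_lo) * k / steps
        chi2 = (surrogate(p) - measured) ** 2
        if best is None or chi2 < best[0]:
            best = (chi2, p)
    return best[1]
```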
Estimating equilibrium formation temperature by a curve fitting method and its problems
Kenso Takai; Masami Hyodo; Shinji Takasugi
1994-01-20
Determination of the true formation temperature from measured bottom-hole temperatures is important for geothermal reservoir evaluation after completion of well drilling. For estimation of the equilibrium formation temperature, we studied a non-linear least-squares fitting method adopting the Middleton model (Chiba et al., 1988). This method was found to be a simple and relatively reliable way to estimate the equilibrium formation temperature after drilling. As a next step, we are studying the estimation of the equilibrium formation temperature from bottom-hole temperature data measured by MWD (measurement-while-drilling) systems. In this study, we have evaluated the suitability of the non-linear least-squares curve-fitting method and of the numerical simulator GEOTEMP2 for estimating the equilibrium formation temperature while drilling.
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-01
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as-if' preferences (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
Ying Chen; Shao-Jing Dong; Terrence Draper; Ivan Horvath; Keh-Fei Liu; Nilmani Mathur; Sonali Tamhankar; Cidambi Srinivasan; Frank X. Lee; Jianbo Zhang
2004-05-01
We introduce the 'Sequential Empirical Bayes Method', an adaptive constrained-curve-fitting procedure for extracting reliable priors. These are then used in standard augmented-χ² fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass, where a priori values are not otherwise known. Lessons learned (including caveats limiting the scope of the method) from studying artificial data are presented. As an illustration, from local-local two-point correlation functions we obtain masses and spectral weights for the ground and first-excited states of the pion, give preliminary fits for the a_0, where ghost states (a quenched artifact) must be dealt with, and elaborate on the details of fits of the Roper resonance and S_11(N^(1/2-)) previously presented elsewhere. The data are from overlap fermions on a quenched 16^3 x 28 lattice with spatial size La = 3.2 fm and pion mass as low as ~180 MeV.
A Healthy Approach to Fitness Center Security.
ERIC Educational Resources Information Center
Sturgeon, Julie
2000-01-01
Examines techniques for keeping college fitness centers secure while maintaining an inviting atmosphere. Building access control, preventing locker room theft, and suppressing causes for physical violence are discussed. (GR)
Measurement of focused ultrasonic fields based on colour edge detection and curve fitting
NASA Astrophysics Data System (ADS)
Zhu, H.; Chang, S.; Yang, P.; He, L.
2016-03-01
This paper first establishes a measurement system consisting of a scanning device and an optical fiber hydrophone, and then proposes parameter measurement of a focused transducer based on edge detection of visualized acoustic data and curve fitting. The measurement system consists of a water tank with a wedge absorber, stepper-motor drivers, a system controller, a focused transducer, an optical fiber hydrophone and data-processing software. On the basis of visualized processing of the original scanned data, the -3 dB beam width of the focused transducer is calculated using edge detection of the acoustic visualized image and a circle-fitting method that minimizes the algebraic distance. Experiments on the visualized ultrasound data are implemented to verify the feasibility of the proposed method. The data obtained from the scanning device are used to reconstruct acoustic fields, and it is found that the -3 dB beam width of the focused transducer can be predicted accurately.
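Circle fitting by minimizing algebraic distance, as used above for the -3 dB contour, has a closed-form linear-algebra solution. A hedged pure-Python sketch of the Kasa formulation (names and test geometry are assumptions, not the paper's code): writing the circle as x² + y² = A·x + B·y + C turns the fit into linear least squares.

```python
import math

def fit_circle_kasa(pts):
    """Algebraic (Kasa) circle fit: minimize the algebraic distance
    sum (x^2 + y^2 - A*x - B*y - C)^2 via the 3x3 normal equations;
    centre = (A/2, B/2), r = sqrt(C + cx^2 + cy^2)."""
    M = [[sum(x * x for x, _ in pts), sum(x * y for x, y in pts), sum(x for x, _ in pts)],
         [sum(x * y for x, y in pts), sum(y * y for _, y in pts), sum(y for _, y in pts)],
         [sum(x for x, _ in pts),     sum(y for _, y in pts),     float(len(pts))]]
    z = [x * x + y * y for x, y in pts]
    rhs = [sum(zi * x for zi, (x, _) in zip(z, pts)),
           sum(zi * y for zi, (_, y) in zip(z, pts)),
           sum(z)]
    # solve the 3x3 system by Gaussian elimination with pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        rhs[k], rhs[p] = rhs[p], rhs[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for c in range(k, 3):
                M[r][c] -= f * M[k][c]
            rhs[r] -= f * rhs[k]
    v = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        v[k] = (rhs[k] - sum(M[k][j] * v[j] for j in range(k + 1, 3))) / M[k][k]
    cx, cy = v[0] / 2.0, v[1] / 2.0
    return cx, cy, math.sqrt(v[2] + cx * cx + cy * cy)
```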
Distribution temperature calculations by fitting the Planck radiation curve to a measured spectrum.
Andreić, Z
1992-01-01
A method of calculating distribution temperatures by numerically fitting Planck radiation curves to measured spectra is discussed. Numerically generated spectra were used to test the method and to determine the sensitivity to noise and the effects of linear emissivity changes. A comparison with the multiple-pair method of calculating color temperature as described in a previous paper [Appl. Opt. 27, 4073 (1988)] is presented. It was found that the method described here is ~ 2 times less sensitive to noise than the previously described method. Nonconstant emissivity (the linear model) produces the same effect on calculated distribution temperatures regardless of the calculating method.
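Fitting a Planck radiation curve to a measured spectrum reduces, for each trial temperature, to a one-parameter problem once the emissivity scale is solved in closed form. A hedged pure-Python sketch, using a coarse grid search rather than the paper's fitting procedure; the wavelength range (micrometres), constants and synthetic spectrum are assumptions for the example.

```python
import math

C2 = 14388.0  # second radiation constant in um*K

def planck(lam_um, T):
    # spectral shape of the Planck curve (arbitrary overall units)
    return lam_um ** -5 / (math.exp(C2 / (lam_um * T)) - 1.0)

def distribution_temperature(lams, spectrum, t_lo=500.0, t_hi=3000.0):
    """Grid-search fit of a scaled Planck curve to a spectrum.
    For each trial T the best (constant-emissivity) scale is the
    closed-form least-squares projection, so only T is searched."""
    best = None
    T = t_lo
    while T <= t_hi:
        model = [planck(l, T) for l in lams]
        scale = (sum(m * s for m, s in zip(model, spectrum))
                 / sum(m * m for m in model))
        err = sum((s - scale * m) ** 2 for m, s in zip(model, spectrum))
        if best is None or err < best[0]:
            best = (err, T)
        T += 1.0
    return best[1]
```

A wavelength-dependent (e.g. linear) emissivity would bias the recovered temperature, which is the effect the paper quantifies.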
Beynon, R J
1985-01-01
Software for non-linear curve fitting has been written in BASIC to execute on the British Broadcasting Corporation Microcomputer. The program uses the direct-search algorithm Pattern-search, a robust algorithm that has the additional advantage of requiring only a specification of the function, without its partial derivatives. Although less efficient than gradient methods, the program can readily be configured to solve the low-dimensional optimization problems normally encountered in the life sciences. In writing the software, emphasis was placed on the user interface and on making the most efficient use of the facilities provided by the minimal configuration of the system.
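The appeal of a direct-search method is that only function values are needed. The following is a minimal pure-Python sketch of a coordinate pattern search in the spirit of the algorithm described (not a transcription of the BASIC program; names and defaults are assumptions): probe each coordinate in both directions, keep any improvement, and shrink the step when nothing helps.

```python
def pattern_search(f, x, step=0.5, shrink=0.5, tol=1e-6):
    """Derivative-free minimization: exploratory moves of +/- step
    along each coordinate, accepting improvements; the step is
    halved whenever a full pass yields no improvement."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink
    return x, fx
```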
Calculating the parameters of full lightning impulses using model-based curve fitting
McComb, T.R.; Lagnese, J.E.
1991-10-01
In this paper a brief review is presented of the techniques used for the evaluation of the parameters of high voltage impulses and the problems encountered. The determination of the best smooth curve through oscillations on a high voltage impulse is the major problem limiting the automatic processing of digital records of impulses. Non-linear regression, based on simple models, is applied to the analysis of simulated and experimental data of full lightning impulses. Results of model fitting to four different groups of impulses are presented and compared with some other methods. Plans for the extension of this work are outlined.
[Aging Process of Puer Black Tea Studied by FTIR Spectroscopy Combined with Curve-Fitting Analysis].
Li, Dong-yu; Shi, You-ming; Yi, Shi Lai
2015-07-01
For better determination of the chemical components in Puer black tea, Fourier transform infrared (FTIR) spectroscopy was used to obtain vibrational spectra of Puer black tea at different aging times. The FTIR spectra indicated that the chemical components of Puer black tea changed with aging time. The leaf of Puer black tea is a complex system, and its FTIR spectrum is a total overlap of the absorption spectra of its various components; each band represents an overall overlap of characteristic absorption peaks of functional groups in the tea. To explore how these characteristic absorption peaks change with aging time, the predicted positions and number of second peaks in the range of 1900-900 cm(-1) were first determined by Fourier self-deconvolution, and curve-fitting analysis was then performed on this overlap band. At different aging times, the wavenumbers of the second peaks of amide II, tea polyphenols, pectin and polysaccharides in the overlap band were assigned by curve-fitting analysis. The second peak at 1520 cm(-1) is the characteristic absorption band of amide II, and the second peaks of tea polyphenols and pectin appear at 1278 and 1103 cm(-1), respectively. Two second peaks at 1063 and 1037 cm(-1) correspond mainly to glucomannan and arabinan. The relative areas of these second peaks indicate the contents of protein, tea polyphenols, pectin and polysaccharides in the tea. The curve-fitting results showed that the relative area of amide II first increased and then decreased, indicating a change of protein in Puer black tea. At the same time, the contents of tea polyphenols and pectin decreased with increasing aging time, whereas glucomannan and arabinan increased. This explains why the bitter taste weakens and a sweet taste appears in the tea as it ages.
Curve fitting toxicity test data: Which comes first, the dose response or the model?
Gully, J.; Baird, R.; Bottomley, J.
1995-12-31
The probit model frequently does not fit the concentration-response curve of NPDES toxicity test data, and non-parametric models must be used instead. The non-parametric models (trimmed Spearman-Karber, ICp, and linear interpolation) all require a monotonic concentration-response; any deviation from a monotonic response is smoothed to obtain the desired concentration-response characteristics. Inaccurate point estimates may result from such procedures and can contribute to imprecision in replicate tests. The following study analyzed reference toxicant and effluent data from giant kelp (Macrocystis pyrifera), purple sea urchin (Strongylocentrotus purpuratus), red abalone (Haliotis rufescens), and fathead minnow (Pimephales promelas) bioassays using commercially available curve-fitting software. The purpose was to search for alternative parametric models that would reduce the use of non-parametric models for point-estimate analysis of toxicity data. Two non-linear models, power and logistic dose-response, were selected as possible alternatives to the probit model based on their toxicological plausibility and their ability to model most of the data sets examined. Unlike non-parametric procedures, these and all parametric models can be statistically evaluated for fit and significance. The use of the power or logistic dose-response models increased the percentage of parametric model fits for each protocol and toxicant combination examined. The precision of the selected non-linear models was also compared with the EPA-recommended point-estimation models at several effect levels. In general, precision of the alternative models was equal to or better than that of the traditional methods. Finally, use of the alternative models usually produced more plausible point estimates in data sets where the effects of smoothing and non-parametric modeling made the point-estimate results suspect.
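A two-parameter logistic dose-response model of the kind considered here can be fitted without specialized software. Below is a deliberately simple pure-Python sketch: a brute-force grid search rather than the commercial package's optimizer, with a Hill-type parameterization; all names, grids and data are assumptions for illustration.

```python
def logistic_response(c, ec50, h):
    # surviving/unaffected fraction at concentration c; 0.5 at c = ec50
    return 1.0 / (1.0 + (c / ec50) ** h)

def fit_logistic(concs, resp):
    """Least-squares fit of (EC50, slope) by brute-force grid search.
    Unlike non-parametric smoothing, the fitted parametric model can
    be evaluated for goodness of fit afterwards."""
    best = None
    for i in range(1, 401):            # EC50 grid: 0.05 .. 20
        ec50 = 0.05 * i
        for j in range(1, 101):        # slope grid: 0.1 .. 10
            h = 0.1 * j
            err = sum((r - logistic_response(c, ec50, h)) ** 2
                      for c, r in zip(concs, resp))
            if best is None or err < best[0]:
                best = (err, ec50, h)
    return best[1], best[2]
```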
Calculations and curve fits of thermodynamic and transport properties for equilibrium air to 30000 K
NASA Technical Reports Server (NTRS)
Gupta, Roop N.; Lee, Kam-Pui; Thompson, Richard A.; Yos, Jerrold M.
1991-01-01
A self-consistent set of equilibrium air values was computed for enthalpy, total specific heat at constant pressure, compressibility factor, viscosity, total thermal conductivity, and total Prandtl number from 500 to 30,000 K over a range of 10(exp -4) atm to 10(exp 2) atm. The mixture values are calculated from the transport and thermodynamic properties of the individual species provided in a recent study by the authors. The concentrations of the individual species, required in the mixture relations, are obtained from a free-energy minimization calculation procedure. Present calculations are based on an 11-species air model. For pressures less than 10(exp -2) atm and temperatures of about 15,000 K and greater, the concentrations of N(++) and O(++) become important, and consequently they are included in the calculations determining the various properties. The computed properties are curve fitted as a function of temperature at constant values of pressure. These curve fits reproduce the computed values within 5 percent for the entire temperature range considered here at specific pressures and provide an efficient means for computing the flowfield properties of equilibrium air, provided the elemental composition remains constant at 0.24 for oxygen and 0.76 for nitrogen by mass.
NASA Astrophysics Data System (ADS)
Fu, W.; Gu, L.; Hoffman, F. M.
2013-12-01
The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely curves of net CO2 assimilation against intercellular CO2 concentration (A-Ci curves), made at saturating light. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (it is also overparameterized in the TPU-limited state). To estimate photosynthetic parameters reliably, there must be a sufficient number of points in the RuBP-regeneration-limited state, which has no structural overparameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately; subsaturating light intensities allow more RuBP-regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve-fitting methods. Some fitted parameters, such as the photocompensation point and day respiration, impose a significant limitation on modeling leaf CO2 exchange. Fitting multiple A-Ci curves can also improve on the so-called Laisk (1977) method, which recent publications have shown to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions to constrain measured A-Ci points so as to maximize the occurrence of RuBP-regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of the resistances of the chloroplast and the cell wall-plasmalemma and to explore the effect of variable mesophyll conductance…
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows the intrinsic and extrinsic parameters to be strictly separated, so the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids that cannot be characterized by simple geometric primitives.
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRFs) based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRFs are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. The data acquisition and FFT processing, curve fitting, and modal tuning phases are described, and examples are given to illustrate and evaluate them.
Mujtaba, I.M.; Macchietto, S.
1997-06-01
A computationally efficient framework is presented for dynamic optimization of batch distillation in which chemical reaction and separation take place simultaneously. A dynamic optimization problem with the objective of maximizing the conversion of the limiting reactant (the maximum conversion problem) is formulated for a representative system, and parametric solutions of the problem are obtained. Polynomial curve-fitting techniques are then applied to the results of the dynamic optimization problem, and the resulting polynomials are used to formulate a nonlinear algebraic maximum-profit problem that can be solved extremely efficiently using a nonlinear optimization solver. This provides an efficient framework for on-line optimization of batch distillation within scheduling programs for batch processes. The method can also be extended easily to nonreactive batch distillation and to nonconventional batch distillation columns.
Goovaerts, H G; Faes, T J; de Valk-de Roo, G W; ten Bolscher, M; Netelenbosch, J C; van der Vijgh, W J; Heethaar, R M
1998-11-01
In order to determine body fluid shifts between the intra- and extra-cellular spaces, multifrequency impedance measurements are performed. According to the Cole-Cole extrapolation, lumped values of intra- and extra-cellular conduction can be estimated, commonly expressed as the resistances Ri and Re respectively. For this purpose the magnitude and phase of the impedance under study are determined at a number of frequencies in the range between 5 kHz and 1 MHz. An approach to determining intra- and extra-cellular conduction on the basis of Bode analysis is presented in this article. On this basis, the ratio between intra- and extra-cellular conduction could be estimated by phase measurement only, midrange in the bandwidth of interest. An important feature of the proposed approach is that the relation between intra- and extra-cellular conduction can be continuously monitored by phase measurement, with no curve fitting whatsoever required. Based on a two-frequency measurement determining Re at 4 kHz and phi(max) at 64 kHz, it proved possible to estimate extra-cellular volume (ECV) in 26 patients more accurately than with the estimation based on extrapolation according to the Cole-Cole model. Reference values of ECV were determined by sodium bromide. The results show a correlation of 0.90 with the reference method. The average error of the ECV estimation was -3.6% (SD 8.4), whereas the Cole-Cole extrapolation showed an error of 13.2% (SD 9.5).
Shadle, S E; Allen, D F; Guo, H; Pogozelski, W K; Bashkin, J S; Tullius, T D
1997-02-15
A computer program, GelExplorer, which uses a new methodology for obtaining quantitative information about electrophoresis, has been developed. It provides a straightforward, easy-to-use graphical interface and includes a number of features that offer significant advantages over existing methods for quantitative gel analysis. The method uses curve fitting with a nonlinear least-squares optimization to deconvolute overlapping bands. Unlike most curve-fitting approaches, the data are treated in two dimensions, fitting all the data across the entire width of the lane. This allows accurate determination of the intensities of individual, overlapping bands, and in particular allows imperfectly shaped bands to be modeled accurately. Experiments described in this paper demonstrate empirically that the Lorentzian lineshape reproduces the contours of an individual gel band and provides a better model than the Gaussian function for curve fitting of electrophoresis bands. Results from several fitting applications are presented, along with a discussion of the sources and magnitudes of the uncertainties in the results. Finally, the method is applied to the quantitative analysis of a hydroxyl radical footprint titration experiment to obtain the free energy of binding of the lambda repressor protein to the OR1 operator DNA sequence.
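The choice of a Lorentzian over a Gaussian band model is easy to exercise in miniature. Below is a hedged pure-Python sketch, 1-D rather than the program's full 2-D lane fit, with names and synthetic profile as assumptions: it fits one Lorentzian band to an intensity profile, solving the amplitude in closed form on a small grid of centres and widths.

```python
def lorentzian(x, x0, w):
    # unit-amplitude Lorentzian lineshape, centre x0, half-width w
    return 1.0 / (1.0 + ((x - x0) / w) ** 2)

def fit_band(xs, profile, centers, widths):
    """Least-squares fit of a single Lorentzian band to an intensity
    profile. For each (centre, width) on the grid, the amplitude is
    the closed-form projection; the best triple is returned."""
    best = None
    for x0 in centers:
        for w in widths:
            shape = [lorentzian(x, x0, w) for x in xs]
            amp = (sum(s * p for s, p in zip(shape, profile))
                   / sum(s * s for s in shape))
            err = sum((p - amp * s) ** 2 for s, p in zip(shape, profile))
            if best is None or err < best[0]:
                best = (err, amp, x0, w)
    return best[1:]  # (amplitude, centre, width)
```

Deconvoluting several overlapping bands, as GelExplorer does, amounts to fitting a sum of such terms instead of a single one.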
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Dehon, G; Catoire, L; Duez, P; Bogaerts, P; Dubois, J
2008-02-29
In recent years, the single-cell gel electrophoresis (comet) assay has become a reference technique for the assessment of DNA fragmentation both in vitro and in vivo at the cellular level. In order to improve the throughput of genotoxicity screening, development of fully automated systems is clearly a must: this would reduce processing time and avoid the subjectivity introduced by the frequent manual settings required by 'classical' analysis systems. To validate a fully automatic system developed in our laboratory, different experiments were conducted in vitro on murine P388D1 cells with increasing doses of ethyl methanesulfonate (up to 5 mM), thus covering a large range of DNA damage (up to 80% of DNA in the tail). The present study (1) validates our 'in house' fully automatic system against a widely used semi-automatic commercial system for the image-analysis step, and against the human eye for the image-acquisition step; (2) shows that computing tail DNA a posteriori on the basis of a curve-fitting concept that combines intensity profiles [G. Dehon, P. Bogaerts, P. Duez, L. Catoire, J. Dubois, Curve fitting of combined comet intensity profiles: a new global concept to quantify DNA damage by the comet assay, Chemom. Intell. Lab. Syst. 73 (2004) 235-243] gives results not significantly different from the 'classical' approach but is much more accurate and easier to undertake; and (3) demonstrates that, with these increased performances, the number of comets to be scored can be reduced to a minimum of 20 comets per slide without sacrificing statistical reliability. PMID:18160335
Chatzopoulos, E.; Wheeler, J. Craig; Vinko, J.; Horvath, Z. L.; Nagy, A.
2013-08-10
We present fits of generalized semi-analytic supernova (SN) light curve (LC) models for a variety of power inputs, including ^56Ni and ^56Co radioactive decay, magnetar spin-down, and forward and reverse shock heating due to interaction between supernova ejecta and circumstellar matter (CSM). We apply our models to the observed LCs of the H-rich superluminous supernovae (SLSN-II) SN 2006gy, SN 2006tf, SN 2008am, SN 2008es, and CSS100217; the H-poor SLSN-I SN 2005ap, SCP06F6, SN 2007bi, SN 2010gx, and SN 2010kd; as well as the interacting SN 2008iy and PTF 09uj. Our goal is to determine the dominant mechanism that powers the LCs of these extraordinary events and the physical conditions involved in each case. We also present a comparison of our semi-analytical results with recent results from numerical radiation hydrodynamics calculations in the particular case of SN 2006gy, in order to explore the strengths and weaknesses of our models. We find that circumstellar shock heating produced by ejecta-CSM interaction provides a better fit to the LCs of most of the events we examine. We discuss the possibility that collision of supernova ejecta with hydrogen-deficient CSM accounts for some of the hydrogen-deficient SLSNe (SLSN-I) and may be a plausible explanation for the explosion mechanism of SN 2007bi, the pair-instability supernova candidate. We characterize and discuss issues of parameter degeneracy.
New Horizons approach photometry of Pluto and Charon: light curves and Solar phase curves
NASA Astrophysics Data System (ADS)
Zangari, A. M.; Buie, M. W.; Buratti, B. J.; Verbiscer, A.; Howett, C.; Weaver, H. A., Jr.; Olkin, C.; Ennico Smith, K.; Young, L. A.; Stern, S. A.
2015-12-01
While the most captivating images of Pluto and Charon were shot by NASA's New Horizons probe on July 14, 2015, the spacecraft also imaged Pluto with its LOng Range Reconnaissance Imager ("LORRI") during its Annual Checkouts and Approach Phases, with campaigns in July 2013, July 2014, January 2015, March 2015, April 2015, May 2015 and June 2015. All but the first campaign provided full coverage of Pluto's 6.4 day rotation. Even though many of these images were taken when surface features on Pluto and Charon were unresolved, these data provide a unique opportunity to study Pluto over a timescale of several months. Earth-based data from an entire apparition must be combined to create a single light curve, as Pluto is never otherwise continuously available for observing due to daylight, weather and scheduling. From the spacecraft, Pluto's sub-observer latitude remained constant to within 0.05 degrees of 43.15 degrees, comparable to a week's worth of change as seen from Earth near opposition. During the July 2013 to June 2015 period, Pluto's solar phase curve increased from 11 degrees to 15 degrees, a small range, but large compared to Earth's 2 degree limit. The slope of the solar phase curve hints at properties such as surface roughness. Using PSF photometry that takes into account the ever-increasing sizes of Pluto and Charon as seen from New Horizons, as well as surface features discovered at closest approach, we present rotational light curves and solar phase curves of Pluto and Charon. We will connect these observations to previous measurements of the system from Earth.
Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping
2003-05-01
In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. As a result, the semi-quantitative analysis yielded a mean gray-to-white matter CBF ratio of 2.10 +/- 0.34. The ratio evaluated for brain tissues in perfusion MRI is comparable to that obtained with PET, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
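The first-moment hemodynamic estimates described above can be sketched with a smoothing-spline fit of a noisy concentration-time curve. The gamma-variate bolus shape, noise level, and smoothing factor below are illustrative assumptions, not the paper's actual pipeline (which also includes sub-band denoising):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.integrate import trapezoid

# Synthetic gamma-variate bolus passage (arbitrary units) with noise
t = np.linspace(0, 60, 121)                      # seconds
truth = (t / 10) ** 3 * np.exp(-t / 10)
rng = np.random.default_rng(7)
noisy = truth + rng.normal(0, 0.05, t.size)

# Smoothing spline: s is matched to the expected residual sum of squares
spline = UnivariateSpline(t, noisy, s=t.size * 0.05 ** 2)
c = np.clip(spline(t), 0.0, None)                # concentration must be >= 0

cbv = trapezoid(c, t)                            # proportional to CBV
mtt = trapezoid(c * t, t) / cbv                  # first-moment mean transit time
```

Fitting before integrating keeps the moment estimates from being dominated by pixel noise, which is the motivation the abstract gives for the spline step.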
Open Versus Closed Hearing-Aid Fittings: A Literature Review of Both Fitting Approaches
Latzel, Matthias; Holube, Inga
2016-01-01
One of the main issues in hearing-aid fittings is the abnormal perception of the user’s own voice as too loud, “boomy,” or “hollow.” This phenomenon, known as the occlusion effect, can be reduced by large vents in the earmolds or by open-fit hearing aids. This review provides an overview of publications related to open and closed hearing-aid fittings. First, the occlusion effect and its consequences for perception while using hearing aids are described. Then, the advantages and disadvantages of open compared with closed fittings and their impact on the fitting process are addressed. The advantages include less occlusion, improved own-voice perception and sound quality, and increased localization performance. The disadvantages associated with open-fit hearing aids include reduced benefits of directional microphones and noise reduction, as well as less compression and less available gain before feedback. The final part of this review addresses the need for new approaches to combine the advantages of open and closed hearing-aid fittings. PMID:26879562
Open Versus Closed Hearing-Aid Fittings: A Literature Review of Both Fitting Approaches.
Winkler, Alexandra; Latzel, Matthias; Holube, Inga
2016-02-15
One of the main issues in hearing-aid fittings is the abnormal perception of the user's own voice as too loud, "boomy," or "hollow." This phenomenon, known as the occlusion effect, can be reduced by large vents in the earmolds or by open-fit hearing aids. This review provides an overview of publications related to open and closed hearing-aid fittings. First, the occlusion effect and its consequences for perception while using hearing aids are described. Then, the advantages and disadvantages of open compared with closed fittings and their impact on the fitting process are addressed. The advantages include less occlusion, improved own-voice perception and sound quality, and increased localization performance. The disadvantages associated with open-fit hearing aids include reduced benefits of directional microphones and noise reduction, as well as less compression and less available gain before feedback. The final part of this review addresses the need for new approaches to combine the advantages of open and closed hearing-aid fittings.
Messori, A
1997-03-01
The analysis of published survival curves can be the basis for incremental cost-effectiveness evaluations in which two treatments are compared with each other in terms of cost per life-year saved. The typical case is when a new treatment becomes available which is more effective and more expensive than the corresponding standard treatment. When effectiveness is expressed using the end-point of mortality, cost-effectiveness analysis can compare the (incremental) cost associated with the new treatment with the (incremental) clinical benefit measured in terms of number of life-years gained. The (incremental) cost-effectiveness ratio is therefore quantified as cost per life-year gained. This pharmacoeconomic methodology requires that the total patient-years for the treatment and the control groups are estimated from their respective survival curves. We describe herein a survival-curve fitting method which carries out this estimation and a computer program implementing the entire procedure. Our method is based on a non-linear least-squares analysis in which the experimental points of the survival curve are fitted to the Gompertz function. A commercial program (PCNONLIN) is needed to carry out the matrix handling calculations. Our procedure performs the estimation of the best-fit parameters from the survival curve data and then integrates the Gompertz survival function from zero-time to infinity. This integration yields the value of the area under the survival curve (AUC), which is an estimate of the total patient-years in the population examined. If this AUC estimation is performed separately for the survival curves of the two treatments being compared, the difference between the two AUCs gives the incremental number of patient-years gained by using the more effective of the two treatments as opposed to the other. The cost-effectiveness analysis can consequently be carried out. An example of application of this methodology is
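The fit-and-integrate procedure lends itself to a short sketch. The Gompertz parameterization and the digitized survival points below are assumptions for illustration; the original work used PCNONLIN rather than SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

# One common Gompertz survival parameterization (an assumption; the paper
# does not state its exact form): S(t) = exp(-(b/c) * (exp(c*t) - 1))
def gompertz(t, b, c):
    return np.exp(-(b / c) * np.expm1(c * t))

# Hypothetical points digitized from a published survival curve
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])     # years
s = np.array([0.90, 0.78, 0.55, 0.38, 0.25, 0.16])

params, _ = curve_fit(gompertz, t, s, p0=[0.2, 0.3])  # nonlinear least squares

# AUC of S(t) from 0 to infinity = mean survival time per patient;
# multiplied by cohort size it gives the total patient-years
auc, _ = quad(lambda x: gompertz(x, *params), 0, np.inf)
```

Repeating the fit for each treatment arm and differencing the two AUCs yields the incremental patient-years in the cost-effectiveness ratio's denominator.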
Learning curve for the anterior approach total hip arthroplasty.
Goytia, Robin N; Jones, Lynne C; Hungerford, Marc W
2012-01-01
The anterior approach to total hip arthroplasty has the advantages of using intermuscular and internervous planes, but it is technically demanding. We evaluated the learning curve for this approach with regard to operative parameters and immediate outcomes. From November 2005 through May 2007, 73 patients underwent 81 consecutive primary anterior-approach total hip arthroplasties. We grouped the hips into three consecutive groups of 20 and one of 21, and surgical and fluoroscopy times, estimated blood loss, intraoperative and postoperative complications, patient comorbidities, component position, and leg-length discrepancy were compared (statistical significance, p < 0.05). Comparing Groups 1 and 4, there were only two significant differences: operative time, 124 to 98 minutes, respectively, and estimated blood loss, 596 to 347 mL, respectively. Proficiency improved after Group 2 (40 cases) and was more marked after Group 3 (60 cases), with no major complications. Surgeons considering this approach should expect a substantial learning period.
Shaw, Jessica; Janulis, Patrick
2016-10-01
Recently, there has been a call for more advanced analytic techniques in violence against women research, particularly in community interventions that use longitudinal designs. The current study re-evaluates experimental evaluation data from a sexual violence bystander intervention program. Using an exploratory latent growth curve approach, we were able to model the longitudinal growth trajectories of individual participants over the course of the entire study. Although the results largely confirm the original evaluation findings, the latent growth curve approach better fits the demands of "messy" data (e.g., missing data, varying number of time points per participant, and unequal time spacing within and between participants) that are frequently obtained during a community-based intervention. The benefits of modern statistical techniques to practitioners and researchers in the field of sexual violence prevention, and violence against women more generally, are further discussed.
Fitness for duty: a no-nonsense approach
Dew, S.M.; Hill, A.O.
1987-01-01
In formulating the fitness-for-duty program at Houston Lighting and Power (HL and P), the project and plant staffs followed program guidelines developed by the Edison Electric Institute and considered the performance criteria for the fitness-for-duty programs developed by the Institute of Nuclear Power Operations. The staff visited utilities involved in fitness-for-duty implementation to review the problems and successes experienced by those utilities. On November 1, 1985, the nuclear group vice-president instituted the South Texas Project Fitness-for-Duty Policy to become effective on January 1, 1986. It was important to implement the program at that time, as the project moved to the final stages of construction and preparation for plant operations. The South Texas Project has made a firm commitment to the industry with our fitness-for-duty program. The no-nonsense approach to illegal drug and alcohol use makes it possible to assure a high level of employee health, productivity, and safety in a drug- and alcohol-free environment. The cost of the fitness-for-duty program is minimal when compared to the increase in productivity and the heightened confidence in workers by the US Nuclear Regulatory Commission since implementation of this program.
A Model-Fitting Approach to Characterizing Polymer Decomposition Kinetics
Burnham, A K; Weese, R K
2004-07-20
The use of isoconversional, sometimes called model-free, kinetic analysis methods has recently gained favor in the thermal analysis community. Although these methods are very useful and instructive, the conclusion that model fitting is a poor approach is largely due to improper use of the model fitting approach, such as fitting each heating rate separately. The current paper shows the ability of model fitting to correlate reaction data over very wide time-temperature regimes, including simultaneous fitting of isothermal and constant heating rate data. Recently published data on cellulose pyrolysis by Capart et al. (TCA, 2004), fitted with a combination of an autocatalytic primary reaction and an nth-order char pyrolysis reaction, are given as one example. Fits for the thermal decomposition of Estane, Viton-A, and Kel-F over very wide ranges of heating rates are also presented. The Kel-F required two parallel reactions--one describing a small, early decomposition process, and a second autocatalytic reaction describing the bulk of pyrolysis. Viton-A and Estane also required two parallel reactions for primary pyrolysis, with the first Viton-A reaction also being a minor, early process. In addition, the yield of residue from these two polymers depends on the heating rate. This is an example of a competitive reaction between volatilization and char formation, which violates the basic tenet of the isoconversional approach and is an example of why it has limitations. Although more complicated models have been used in the literature for this type of process, we described our data well with a simple addition to the standard model in which the char yield is a function of the logarithm of the heating rate.
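As a minimal illustration of model fitting to rate data (far simpler than the multi-reaction schemes described above), an Arrhenius line can be fitted to isothermal rate constants; the kinetic parameters below are synthetic, not the Estane, Viton-A, or Kel-F values:

```python
import numpy as np

# Synthetic isothermal rate constants obeying k = A * exp(-E/(R*T));
# A_true and E_true are invented for illustration
R = 8.314                                   # J/(mol K)
A_true, E_true = 1e13, 180e3                # 1/s, J/mol
T = np.array([500.0, 520.0, 540.0, 560.0])  # K
k = A_true * np.exp(-E_true / (R * T))

# Arrhenius plot: ln k is linear in 1/T, so the model fit is a line fit
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
E_fit = -slope * R                          # recovers 180 kJ/mol
A_fit = np.exp(intercept)                   # recovers 1e13 1/s
```

In the paper's setting the same idea is applied globally, with one reaction model constrained to reproduce all heating rates and isothermal runs simultaneously rather than each dataset separately.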
A comparison of approaches in fitting continuum SEDs
NASA Astrophysics Data System (ADS)
Liu, Yao; Madlener, David; Wolf, Sebastian; Wang, Hong-Chi
2013-04-01
We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date employed SA due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demand on computational resources and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting to provide a guide to the practitioner to find a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable. While the database provides an approximate fit and overall probability distributions for all parameters deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
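The database-versus-annealing comparison can be sketched with SciPy's `dual_annealing` on a toy two-parameter model; the cut-off power law "SED" below stands in for a real radiative-transfer calculation and is purely illustrative:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy "SED": a cut-off power law standing in for a radiative-transfer model;
# 78 log-spaced wavelengths mirror the setup in the abstract
wl = np.logspace(-0.5, 3.1, 78)                 # arbitrary units

def model(p):
    amp, slope = p
    return amp * wl ** slope * np.exp(-wl / 500.0)

observed = model([2.0, -1.2])                   # noiseless reference "data"

def chi2(p):
    return float(np.sum((model(p) - observed) ** 2))

# Simulated annealing searches the continuous parameter space directly,
# whereas a database is limited to its pre-computed grid resolution
res = dual_annealing(chi2, bounds=[(0.1, 10.0), (-3.0, 0.0)], seed=1)
```

The fitted `res.x` should land on the true parameters to a precision no fixed grid spacing can match, which is the practical advantage the study reports for SA.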
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outermagnetospheric emission models assuming the retarded vacuum dipole magnetic field using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS), developed as a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc., was described. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved using ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.
Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sager, D.
1973-01-01
Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve flying performance; and that there is no performance difference between 60 deg. and 90 deg. turns. A tradeoff of curve path parameters and a paper analysis of wind compensation were also made.
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Mugge, P. H.
1974-01-01
Simplified curve fits for the thermodynamic properties of equilibrium air were devised for use in either the time-dependent or shock-capturing computational methods. For the time-dependent method, curve fits were developed for p = p(e, rho), a = a(e, rho), and T = T(e, rho). For the shock-capturing method, curve fits were developed for h = h(p, rho) and T = T(p, rho). The ranges of validity for these curve fits were temperatures up to 25,000 K and densities from 10^-7 to 10^3 amagats. These approximate curve fits are considered particularly useful when employed on advanced computers such as the Burroughs ILLIAC 4 or the CDC STAR.
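A toy stand-in for such table-driven property fits: a low-order bivariate polynomial fitted by linear least squares to synthetic "table" values. The coefficients and variable ranges below are invented for illustration, not the actual equilibrium-air fits:

```python
import numpy as np

# Bivariate polynomial surrogate T ~ poly(log e, log rho); the synthetic
# "table" below plays the role of tabulated equilibrium-air properties
rng = np.random.default_rng(3)
log_e = rng.uniform(1.0, 3.0, 300)              # log10 internal energy
log_rho = rng.uniform(-7.0, 3.0, 300)           # log10 density, amagats
T = 1.2 + 0.8 * log_e + 0.05 * log_rho + 0.1 * log_e * log_rho

# Least-squares fit of T = c0 + c1*e + c2*rho + c3*e*rho
M = np.column_stack([np.ones_like(log_e), log_e, log_rho, log_e * log_rho])
coef, *_ = np.linalg.lstsq(M, T, rcond=None)
```

Once the coefficients are in hand, evaluating the fit is a handful of multiply-adds per grid point, which is what made such fits attractive on vector machines like the ILLIAC 4 and CDC STAR.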
A measure of curve fitting error for noise filtering diffusion tensor MRI data
NASA Astrophysics Data System (ADS)
Papadakis, Nikos G.; Martin, Kay M.; Wilkinson, Iain D.; Huang, Chris L.-H.
2003-09-01
A parameter, χ2p, based on the fitting error was introduced as a measure of reliability of DT-MRI data, and its properties were investigated in simulations and human brain data. Its comparison with the classic χ2 revealed its sensitivity to both the goodness of fit and the pixel signal-to-noise-ratio (SNR), unlike the classic χ2, which is sensitive only to the goodness of fit. The new parameter was thus able to separate effectively pixels with coherent signals (having small fitting error and/or high SNR) from those with random signals (having inconsistent fitting and/or low SNR). A practical advantage of χ2p over the classic χ2 was that χ2p is quantified directly from the data of each pixel, without requiring accurate estimation of data-dependent parameters (such as noise variance), which often makes application of the classic χ2 problematic. Analytical approximations of χ2p enabled an objective (data-independent) and automated calculation of a threshold value, used for internal scaling of the χ2p map. Apart from assessing data reliability on a pixel-by-pixel basis, χ2p was used to develop an objective and generic methodology for the exclusion of pixels with unreliable DT information by discarding pixels with χ2p values exceeding the threshold. Pixels corresponding to very low SNR, and poorly fitted cerebrospinal fluid and surrounding brain tissue, had increased χ2p values and were successfully excluded, providing DT anisotropy maps free from artifactual anisotropic appearance.
Reliability of temperature determination from curve-fitting in multi-wavelength pyrometry
Ni, P. A.; More, R. M.; Bieniosek, F. M.
2013-08-04
This paper examines the reliability of a widely used method for temperature determination by multi-wavelength pyrometry. In recent WDM experiments with ion-beam heated metal foils, we found that the statistical quality of the fit to the measured data is not necessarily a measure of the accuracy of the inferred temperature. We found a specific example where a second-best fit leads to a more realistic temperature value. The physics issue is the wavelength-dependent emissivity of the hot surface. We discuss improvements of the multi-frequency pyrometry technique, which will give a more reliable determination of the temperature from emission data.
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
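A hedged sketch of the spline-fitting step in the B-spline basis: SciPy's `LSQUnivariateSpline` fits a least-squares cubic spline for a given interior-knot set, and a backward-elimination scheme like the paper's would then refit after deleting each candidate knot, keeping deletions that do not significantly worsen the residual sum of squares. The data and knot grid are synthetic:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic noisy data and a candidate interior-knot grid
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + rng.normal(0, 0.1, x.size)
knots = np.linspace(1.0, 9.0, 7)                # interior knots only

# Least-squares cubic spline in the B-spline basis for this knot set;
# knot elimination would compare the RSS of reduced knot sets against this
spline = LSQUnivariateSpline(x, y, knots, k=3)
rss = float(np.sum((spline(x) - y) ** 2))
```

The tensor-product two-variable case mentioned in the abstract follows the same pattern, with products of B-splines in each variable forming the basis.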
Ferreira, Abílio G T; Henrique, Douglas S; Vieira, Ricardo A M; Maeda, Emilyn M; Valotto, Altair A
2015-03-01
The objective of this study was to evaluate four mathematical models with regard to their fit to lactation curves of Holstein cows from herds raised in the southwestern region of the state of Parana, Brazil. Initially, 42,281 milk production records from 2005 to 2011 were obtained from "Associação Paranaense de Criadores de Bovinos da Raça Holandesa (APCBRH)". Data lacking dates of drying and total milk production at 305 days of lactation were excluded, resulting in a remaining 15,142 records corresponding to 2,441 Holstein cows. Data were sorted according to the parity order (ranging from one to six), and within each parity order the animals were divided into quartiles (Q25%, Q50%, Q75% and Q100%) corresponding to 305-day lactation yield. Within each parity order, for each quartile, four mathematical models were fitted, two of which were predominantly empirical (Brody and Wood) whereas the other two presented more mechanistic characteristics (models Dijkstra and Pollott). The quality of fit was evaluated by the corrected Akaike information criterion. The Wood model showed the best fit in almost all evaluated situations and, therefore, may be considered as the most suitable model to describe, at least empirically, the lactation curves of Holstein cows raised in Southwestern Parana.
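Wood's model, the best-fitting curve in this study, is the incomplete-gamma form y(t) = a·t^b·exp(-c·t). A minimal fitting sketch follows, with synthetic test-day yields standing in for the APCBRH records:

```python
import numpy as np
from scipy.optimize import curve_fit

# Wood's incomplete-gamma lactation model: yield = a * t**b * exp(-c*t)
def wood(t, a, b, c):
    return a * t ** b * np.exp(-c * t)

# Synthetic fortnightly test-day yields (kg/day) over a 305-day lactation;
# the parameter values are plausible inventions, not the study's estimates
t = np.arange(7.0, 306.0, 14.0)
y = wood(t, 18.0, 0.25, 0.004)

params, _ = curve_fit(wood, t, y, p0=[15.0, 0.2, 0.003])
peak_day = params[1] / params[2]                 # Wood curve peaks at t = b/c
```

Derived quantities such as the day of peak yield (t = b/c) and persistency follow directly from the fitted parameters, which is part of the model's empirical appeal.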
Learning Curves: Making Quality Online Health Information Available at a Fitness Center.
Dobbins, Montie T; Tarver, Talicia; Adams, Mararia; Jones, Dixie A
2012-01-01
Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center - Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges.
Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M
2012-03-01
With increasing popularity, growth curve modeling is more and more often considered as the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
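The AIC/BIC-based selection among candidate residual-covariance structures can be sketched in a few lines; the log-likelihood values and parameter counts below are hypothetical stand-ins for fitted mixed models, not results from the article:

```python
import numpy as np

# Information criteria trade fit (maximized log-likelihood) against
# complexity; lower is better for both
def aic(llf, k):
    return -2.0 * llf + 2.0 * k

def bic(llf, k, n):
    return -2.0 * llf + k * np.log(n)

n = 200                                          # observations (hypothetical)
candidates = {                                   # model: (llf, k), hypothetical
    "repeated-measures ANOVA": (-512.4, 6),
    "covariance pattern": (-498.7, 9),
    "growth curve": (-503.2, 8),
}
best = min(candidates, key=lambda m: aic(*candidates[m]))
```

Comparing the criteria across all three model families, rather than defaulting to a growth curve model, is exactly the practice the article argues for.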
Fast impedance measurements at very low frequencies using curve fitting algorithms
NASA Astrophysics Data System (ADS)
Piasecki, Tomasz
2015-06-01
The method for reducing the time of impedance measurements at very low frequencies was proposed and implemented. The reduction was achieved by using impedance estimation algorithms that do not require the acquisition of the momentary voltage and current values for at least one whole period of the excitation signal. The algorithms were based on direct least squares ellipse and sine fitting to recorded waveforms. The performance of the algorithms was evaluated based on the sampling time, signal-to-noise (S/N) ratio and sampling frequency using a series of Monte Carlo experiments. An improved algorithm for the detection of the ellipse direction was implemented and compared to a voting algorithm. The sine fitting algorithm provided significantly better results. It was less sensitive to the sampling start point and measured impedance argument and did not exhibit any systematic error of impedance estimation. It allowed a significant reduction of the measurement time. A 1% standard deviation of impedance estimation was achieved using a sine fitting algorithm with a measurement time reduced to 11% of the excitation signal period.
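Because the excitation frequency is known, the sine fit reduces to linear least squares, which is what lets it work on a fraction of a period. The sketch below, with hypothetical voltage and current waveforms, recovers |Z| and arg(Z) from 11% of a 100 s period:

```python
import numpy as np

def sine_fit(t, x, omega):
    # Known-frequency sine fit is LINEAR least squares in (a, b, c) for
    # x(t) ~ a*sin(omega*t) + b*cos(omega*t) + c
    M = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    a, b, c = np.linalg.lstsq(M, x, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)      # amplitude, phase

f = 0.01                                         # 10 mHz excitation
omega = 2.0 * np.pi * f
t = np.linspace(0.0, 11.0, 500)                  # ~11% of the 100 s period

v = 1.0 * np.sin(omega * t + 0.3)                # voltage, V
i = 0.002 * np.sin(omega * t + 0.3 - 0.7)        # current lags by 0.7 rad

v_amp, v_ph = sine_fit(t, v, omega)
i_amp, i_ph = sine_fit(t, i, omega)
z_mag, z_arg = v_amp / i_amp, v_ph - i_ph        # 500 ohm, 0.7 rad
```

With measurement noise the achievable accuracy degrades as the acquired fraction of the period shrinks, which is the trade-off the Monte Carlo experiments in the paper quantify.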
NASA Astrophysics Data System (ADS)
Martí-Vidal, I.; Marcaide, J. M.; Alberdi, A.; Guirado, J. C.; Pérez-Torres, M. A.; Ros, E.
2011-02-01
We report on a simultaneous modelling of the expansion and radio light curves of the supernova SN 1993J. We developed a simulation code capable of generating synthetic expansion and radio light curves of supernovae by taking into consideration the evolution of the expanding shock, magnetic fields, and relativistic electrons, as well as the finite sensitivity of the interferometric arrays used in the observations. Our software successfully fits all the available radio data of SN 1993J with a standard emission model for supernovae, which is extended with some physical considerations, such as an evolution in the opacity of the ejecta material, a radial decline in the magnetic fields within the radiating region, and a changing radial density profile for the circumstellar medium starting from day 3100 after the explosion.
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discussed the fit of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and initial concentration of HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.
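A reduced two-group version of such a titration-curve fit can be sketched with SciPy's `curve_fit`, which uses the Levenberg-Marquardt algorithm for unconstrained problems; the group contents and pKa values below are invented, not the paper's five-group result:

```python
import numpy as np
from scipy.optimize import curve_fit

# Henderson-Hasselbalch dissociation model for a mixture of monoprotic
# acid groups; a two-group toy version of the five-group fit
def titration(ph, n1, pk1, n2, pk2):
    return n1 / (1 + 10 ** (pk1 - ph)) + n2 / (1 + 10 ** (pk2 - ph))

ph = np.linspace(2.0, 11.0, 60)
observed = titration(ph, 2.9, 4.5, 2.1, 8.8)     # mmol/g, synthetic

# With no bounds given, curve_fit defaults to Levenberg-Marquardt
p, _ = curve_fit(titration, ph, observed, p0=[2.0, 4.0, 2.0, 9.0])
total_groups = p[0] + p[2]                       # ~5 mmol/g, cf. the abstract
```

Each fitted n_i is the content of one acid group and each pk_i its ionization constant, mirroring how the paper partitions the 5 mmol g-1 total among group types.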
Lou, Z; Yang, W J; Stein, P D
1993-01-01
An analysis was performed to determine the error that results from the estimation of the wall shear rates based on linear and quadratic curve-fittings of the measured velocity profiles. For steady, fully developed flow in a straight vessel, the error for the linear method is linearly related to the distance between the probe and the wall, dr1, and the error for the quadratic method is zero. With pulsatile flow, especially a physiological pulsatile flow in a large artery, the thickness of the velocity boundary layer, delta is small, and the error in the estimation of wall shear based on curve fitting is much higher than that with steady flow. In addition, there is a phase lag between the actual shear rate and the measured one. In oscillatory flow, the error increases with the distance ratio dr1/delta and, for a quadratic method, also with the distance ratio dr2/dr1, where dr2 is the distance of the second probe from the wall. The quadratic method has a distinct advantage in accuracy over the linear method when dr1/delta < 1, i.e. when the first velocity point is well within the boundary layer. The use of this analysis in arterial flow involves many simplifications, including Newtonian fluid, rigid walls, and the linear summation of the harmonic components, and can provide more qualitative than quantitative guidance. PMID:8478343
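The linear-versus-quadratic comparison is easy to reproduce for steady flow, where the conclusions above (linear-method error proportional to dr1, quadratic-method error zero) hold exactly; the Poiseuille-like profile below is a stand-in for a measured velocity profile:

```python
import numpy as np

# Steady profile u(y) = umax*(2*y/R - (y/R)**2), with y the distance from
# the wall; the true wall shear rate is du/dy at y=0, i.e. 2*umax/R
R, umax = 1.0, 1.0

def u(y):
    return umax * (2.0 * y / R - (y / R) ** 2)

true_shear = 2.0 * umax / R

def linear_estimate(y1):
    return u(y1) / y1                  # straight line through wall and probe 1

def quadratic_estimate(y1, y2):
    # Parabola through (0,0), (y1,u1), (y2,u2); wall shear is its slope at 0
    A = np.array([[y1, y1 ** 2], [y2, y2 ** 2]])
    b, a = np.linalg.solve(A, np.array([u(y1), u(y2)]))
    return b

err_lin = abs(linear_estimate(0.1) - true_shear)           # = umax*y1/R**2
err_quad = abs(quadratic_estimate(0.1, 0.2) - true_shear)  # ~ 0 (exact here)
```

For pulsatile flow the picture changes as the analysis describes: the thin oscillating boundary layer makes both estimates degrade once the probes sit outside it.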
Computer user's manual for a generalized curve fit and plotting program
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.
1973-01-01
A FORTRAN coded program has been developed for generating plotted output graphs on 8-1/2 by 11-inch paper. The program is designed to be used by engineers, scientists, and non-programming personnel on any IBM 1130 system that includes a 1627 plotter. The program has been written to provide a fast and efficient method of displaying plotted data without having to generate any additions. Various output options are available to the program user for displaying data in four different types of formatted plots. These options include discrete linear, continuous, and histogram graphical outputs. The manual contains information about the use and operation of this program. A mathematical description of the least squares goodness of fit test is presented. A program listing is also included.
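A least-squares polynomial fit with a goodness-of-fit summary, in the spirit of the program's curve-fairing option (the data are synthetic, and NumPy stands in for the original FORTRAN routines):

```python
import numpy as np

# Synthetic plotted data: a quadratic trend plus measurement noise
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)
y = 3.0 * x ** 2 - x + 0.5 + rng.normal(0, 0.02, x.size)

coeffs = np.polyfit(x, y, deg=2)        # least-squares polynomial fit
resid = y - np.polyval(coeffs, x)
r2 = 1.0 - resid.var() / y.var()        # coefficient of determination
```

The residual variance underlying r2 is the same quantity a least-squares goodness-of-fit test examines before a faired curve is accepted for plotting.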
A covariance fitting approach for correlated acoustic source mapping.
Yardibi, Tarik; Li, Jian; Stoica, Petre; Zawodny, Nikolas S; Cattafesta, Louis N
2010-05-01
Microphone arrays are commonly used for noise source localization and power estimation in aeroacoustic measurements. The delay-and-sum (DAS) beamformer, which is the most widely used beamforming algorithm in practice, suffers from low resolution and high sidelobe level problems. Therefore, deconvolution approaches, such as the deconvolution approach for the mapping of acoustic sources (DAMAS), are often used for extracting the actual source powers from the contaminated DAS results. However, most deconvolution approaches assume that the sources are uncorrelated. Although deconvolution algorithms that can deal with correlated sources, such as DAMAS for correlated sources, do exist, these algorithms are computationally impractical even for small scanning grid sizes. This paper presents a covariance fitting approach for the mapping of acoustic correlated sources (MACS), which can work with uncorrelated, partially correlated or even coherent sources with a reasonably low computational complexity. MACS minimizes a quadratic cost function in a cyclic manner by making use of convex optimization and sparsity, and is guaranteed to converge at least locally. Simulations and experimental data acquired at the University of Florida Aeroacoustic Flow Facility with a 63-element logarithmic spiral microphone array in the absence of flow are used to demonstrate the performance of MACS. PMID:21117743
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation, so conventional approaches to identifying neuron structure in EM images are unsuccessful. We therefore present a multi-scale fused structure boundary detection algorithm. The algorithm first generates a Gaussian pyramid of the EM image; at each level of the pyramid, it applies a Laplacian of Gaussian (LoG) filter to obtain structure boundaries; finally, it assembles the detected boundaries with a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, so as to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements in structure segmentation, were reduced to very low values. Comparison with the ISBI 2012 benchmark method and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging
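The pyramid-plus-LoG stage described above can be sketched as follows. This is not the authors' implementation: the fusion rule (per-pixel maximum of the upsampled |LoG| responses), the sigma values, and the pyramid depth are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def multiscale_log_boundaries(image, n_levels=3, sigma=1.5):
    """Sketch of the multi-scale boundary detection described above:
    build a Gaussian pyramid, take the Laplacian-of-Gaussian (LoG)
    response at each level, upsample, and fuse. The fusion rule used
    here (per-pixel maximum of |LoG|) and the parameters are assumptions."""
    fused = np.zeros(image.shape, dtype=float)
    level = image.astype(float)
    for _ in range(n_levels):
        response = np.abs(ndimage.gaussian_laplace(level, sigma=sigma))
        zoom = [full / cur for full, cur in zip(image.shape, response.shape)]
        up = ndimage.zoom(response, zoom, order=1)[: image.shape[0], : image.shape[1]]
        fused = np.maximum(fused, up)
        # Smooth before decimating to form the next (coarser) pyramid level.
        level = ndimage.gaussian_filter(level, sigma=1.0)[::2, ::2]
    return fused
```

On a synthetic step-edge image the fused response peaks at the edge, which is the seed the SARSA(λ) gap-closing stage would then refine.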
NASA Astrophysics Data System (ADS)
Thomas, Christian L.
2006-06-01
Analysis and results (Chapters 2-5) of the full 7-year Macho Project dataset toward the Galactic bulge are presented. A total of 450 high-quality, relatively large signal-to-noise ratio events are found, including several events exhibiting exotic effects, and lensing events on possible Sagittarius dwarf galaxy stars. We examine the problem of blending in our sample and conclude that the subset of red clump giants is minimally blended. Using 42 red clump giant events near the Galactic center, we calculate the optical depth toward the Galactic bulge to be t = [Special characters omitted.] × 10^-6 at (l, b) = ([Special characters omitted.]) with a gradient of (1.06 ± 0.71) × 10^-6 deg^-1 in latitude, and (0.29 ± 0.43) × 10^-6 deg^-1 in longitude, bringing measurements into consistency with the models for the first time. In Chapter 6 we reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g. Wozniak & Paczynski) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth. In Chapter 7 we present work in progress on the possibility of correcting standard candle luminosities for the magnification due to weak lensing. We consider the importance of lenses in different mass ranges and look at the contribution
NASA Astrophysics Data System (ADS)
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, e.g. in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
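The Beta-versus-kernel contrast can be sketched on a hypothetical bimodal recovery-rate sample (the mixture components below are invented, not Moody's data): a single moment-matched Beta density is unimodal here and misses the two modes, while a Gaussian kernel density estimate captures both.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical bimodal recovery-rate sample on [0, 1]; the two
# mixture components are invented for illustration.
sample = np.concatenate([rng.beta(10, 30, 500), rng.beta(30, 10, 500)])

# Method-of-moments Beta fit: forced to a single density here.
m, v = sample.mean(), sample.var()
k = m * (1.0 - m) / v - 1.0
a, b = m * k, (1.0 - m) * k

# Gaussian kernel density estimate: adapts to both modes.
kde = stats.gaussian_kde(sample)

x = np.linspace(0.01, 0.99, 99)
beta_pdf = stats.beta.pdf(x, a, b)
kde_pdf = kde(x)

def n_interior_modes(y):
    # Count strict local maxima on the evaluation grid.
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))
```

Counting modes on the evaluated densities shows one interior mode for the fitted Beta and at least two for the KDE, which is the defect the abstract describes.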
A Comprehensive Approach for Assessing Person Fit with Test-Retest Data
ERIC Educational Resources Information Center
Ferrando, Pere J.
2014-01-01
Item response theory (IRT) models allow model-data fit to be assessed at the individual level by using person-fit indices. This assessment is also feasible when IRT is used to model test-retest data. However, person-fit developments for this type of modeling are virtually nonexistent. This article proposes a general person-fit approach for…
Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.
ERIC Educational Resources Information Center
Buchanan, Trey
Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…
NASA Astrophysics Data System (ADS)
Hanafiah, Hazlenah; Jemain, Abdul Aziz
2013-11-01
In recent years, the study of fertility has been receiving much attention among researchers abroad, following fears that fertility will deteriorate under rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied, as it is an established and widely used model for analysing demographic aspects. A singular value decomposition approach is combined with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to measure the goodness of fit of the model. A fertility index forecast using a random walk with drift is then used to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable fit to the data. In addition, there is an apparent and continuous decline in age-specific fertility curves over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of related facilities and resources.
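The SVD step of the Lee-Carter decomposition and the random-walk-with-drift index forecast can be sketched as follows (synthetic rates; the normalization sum(b) = 1, sum(k) = 0 is the standard convention, and all numbers are invented):

```python
import numpy as np

def lee_carter(log_rates):
    """Minimal Lee-Carter decomposition: log f(x,t) = a_x + b_x * k_t.
    a_x is the mean age profile; b_x and k_t come from the leading
    SVD component of the centered matrix, normalized so sum(b) = 1."""
    a = log_rates.mean(axis=1)
    centered = log_rates - a[:, None]
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]
    scale = b.sum()
    return a, b / scale, k * scale

def forecast_k(k, horizon):
    # Random walk with drift, as used for the fertility index above.
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, horizon + 1)

# Synthetic age-specific log fertility rates (7 ages, 20 years).
ages, years = 7, 20
a_true = np.linspace(-3.0, -1.0, ages)
b_true = np.linspace(0.05, 0.25, ages)
b_true = b_true / b_true.sum()
k_true = np.linspace(10.0, -10.0, years)   # zero-mean, declining index
log_rates = a_true[:, None] + np.outer(b_true, k_true)
a_hat, b_hat, k_hat = lee_carter(log_rates)
```

In the study an ARIMA model plays the role of the index forecast; the drift-only random walk here is the simplest special case.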
Predicting Future Trends in Adult Fitness Using the Delphi Approach.
ERIC Educational Resources Information Center
Murray, William F.; Jarman, Boyd O.
1987-01-01
This study examines the future of adult fitness from the perspective of experts. The Delphi Technique was used as a measurement tool. Findings revealed that the experts most relied on increased awareness of health and fitness among the elderly as a significant predictor variable. (Author/CB)
Sorrell, Steve; Speirs, Jamie
2014-01-13
There is growing concern about the depletion of hydrocarbon resources and the risk of near-term peaks in production. These concerns hinge upon contested estimates of the recoverable resources of different regions and the associated forecasts of regional production. Beginning with Hubbert, an influential group of analysts has used growth curves both to estimate recoverable resources and to forecast future production. Despite widespread use, these 'curve-fitting' techniques remain a focus of misunderstanding and dispute. The aim of this paper is to classify and explain these techniques and to identify both their relative suitability in different circumstances and the expected level of confidence in their results. The paper develops a mathematical framework that maps curve-fitting techniques onto the available data for conventional oil and highlights the critical importance of so-called 'reserve growth'. It then summarizes the historical origins, contemporary application, and strengths and weaknesses of each group of curve-fitting techniques, and uses illustrative data from a number of oil-producing regions to explore the extent to which these techniques provide consistent estimates of recoverable resources. The paper argues that the applicability of curve-fitting techniques is more limited than adherents claim, that the confidence bounds on the results are wider than commonly assumed, and that the techniques have a tendency to underestimate recoverable resources.
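A minimal sketch of Hubbert-style growth-curve fitting on synthetic data (the region, numbers, and noise level are all invented): cumulative production is fit with a logistic, whose asymptote is the estimate of ultimately recoverable resources (URR).

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_cumulative(t, urr, k, tm):
    # Hubbert-style cumulative production: the asymptote `urr` is the
    # ultimately recoverable resource; tm is the inflection (peak) year.
    return urr / (1.0 + np.exp(-k * (t - tm)))

# Synthetic producing region (all numbers invented): URR = 200 units,
# production peaking in year 50, observed through year 59 with mild noise.
t = np.arange(60.0)
rng = np.random.default_rng(1)
q_obs = logistic_cumulative(t, 200.0, 0.15, 50.0) + rng.normal(0.0, 1.0, t.size)

popt, pcov = curve_fit(logistic_cumulative, t, q_obs, p0=[150.0, 0.1, 45.0])
urr_est, urr_sigma = popt[0], float(np.sqrt(pcov[0, 0]))
```

Echoing the paper's caution: when the data are truncated well before the peak, the URR estimate and its covariance-based uncertainty widen sharply, so the fitted asymptote should not be read as a tight bound.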
Chen, W; Lin, C C; Peng, C T; Li, C I; Wu, H C; Chiang, J; Wu, J Y; Huang, P C
2002-08-01
Current body mass index (BMI) norms for children and adolescents are developed from a reference population that includes obese and slim subjects. The validity of these norms is influenced by the observed secular increase in body weight and BMI. We hypothesized that the performance of children in health-related physical fitness tests would be negatively related to increased BMIs, and therefore fitness tests might be used as criteria for developing a more appropriate set of BMI norms. We evaluated the existing data from a nation-wide fitness survey for students in Taiwan (444,652 boys and 433,555 girls) to examine the relationship between BMI and fitness tests. The fitness tests used included: an 800/1600-m run/walk; a standing long jump; bent-leg curl-ups; and a sit-and-reach test. The BMI percentiles developed from the subgroup whose test scores were better than the 'poor' quartile in all four tests were compared with those of the whole population and linked to the adult criteria for overweight and obesity. The BMIs were significantly related to the results of fitness testing. A total of 43% of students had scores better than the poorest quartile in all of their tests. The upper BMI percentile curves of this fitter subgroup were lower than those of the total population. The 85th and 95th BMI percentile values of the fitter 18-year-old students (23.7 and 25.5 kg m^-2 for boys; 22.6 and 24.6 kg m^-2 for girls) linked well with the adult cut-off points of 23 and 25 kg m^-2, which have been recommended as the Asian criteria for adult overweight and obesity. Hence, BMI norms for children and adolescents could be created from selected subgroups that have better physical fitness. We expect that new norms based on this approach will be used not only to assess the current status of obesity or overweight, but also to encourage activity and exercise.
Fitting of m*/m with Divergence Curve for He3 Fluid Monolayer using Hole-driven Mott Transition
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tak
2012-02-01
The electron-electron interaction in strongly correlated systems plays an important role in the formation of an energy gap in solids. The breakdown of the energy gap is called the Mott metal-insulator transition (MIT), which differs from the Peierls MIT induced by breakdown of the electron-phonon interaction generated by a change of the periodic lattice. It is known that correlated systems are inhomogeneous. In particular, the He3 fluid monolayer [1] and La1-xSrxTiO3 [2] are representative strongly correlated systems. The doping dependence of their effective carrier mass in the metal, m*/m, which indicates the magnitude of the correlation (Coulomb interaction) between electrons, shows divergent behavior. However, this divergence has not yet been fitted by a Mott-transition theory. In the case of He3, regarded as a Fermi system with one positive charge (2 electrons + 3 protons), the interaction between He3 atoms is regarded as the correlation in a strongly correlated system. In this presentation, we introduce a Hole-driven MIT with a divergence near the Mott transition [3] and fit the m*/m curve in the He3 [1] and La1-xSrxTiO3 systems with the Hole-driven MIT using m*/m = 1/(1-ρ^4), where ρ is the band filling. Moreover, it is shown that the physical meaning of the effective mass with the divergence is percolation, in which m*/m increases with increasing doping concentration, and that the magnitude of m*/m is constant. References: [1] Phys. Rev. Lett. 90, 115301 (2003). [2] Phys. Rev. Lett. 70, 2126 (1993). [3] Physica C 341-348, 259 (2000); Physica C 460-462, 1076 (2007).
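The quoted fitting form can be evaluated directly to show the divergence as the band filling approaches one (the sample ρ values below are arbitrary illustration points, not data from the cited systems):

```python
def effective_mass_ratio(rho):
    """Hole-driven MIT fitting form quoted above: m*/m = 1/(1 - rho**4),
    where rho is the band filling; the ratio diverges as rho -> 1."""
    return 1.0 / (1.0 - rho**4)

# Illustrative band fillings: the ratio grows slowly at first,
# then diverges steeply near rho = 1.
samples = {rho: effective_mass_ratio(rho) for rho in (0.0, 0.5, 0.9, 0.99)}
```
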
A new approach to the analysis of Mira light curves
NASA Technical Reports Server (NTRS)
Mennessier, M. O.; Barthes, D.; Mattei, J. A.
1990-01-01
Two different but complementary methods for predicting Mira luminosities are presented. One method is derived from Fourier analysis; it requires performing a deconvolution, and its results are uncertain due to the inherent instability of deconvolution problems. The other is a learning method utilizing artificial-intelligence techniques, in which a light curve is represented as an ordered sequence of pseudocycles and rules are learned by linking the characteristics of several consecutive pseudocycles to a characteristic of the future cycle. Agreement between the two methods is obtainable when it is possible to eliminate false frequencies from the preliminary power spectrum and to improve the degree of confidence in the rules.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Goetz, F. H.
1990-01-01
Techniques are developed for quantitative retrieval of high-spatial-resolution column atmospheric water vapor, which is largely contained in the lower troposphere. One method consists of curve fitting observed spectra with simulated spectra in the 1.14 micron or the 0.94 micron water vapor absorption band. The other method is a simple band-ratioing technique, which requires less computer time than the curve-fitting method. The advantage of these techniques over humidity sounding by IR emission measurements is that the retrieved column water vapor amounts over land surfaces have significantly higher precision.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2002-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
Weston, R J; Weston, M E; Bollinger, R O
1984-01-01
A non-linear curve-fitting program using a modified Hoerl's function on the Hewlett-Packard HP-97 and Texas Instruments TI-59 programmable calculators for the determination of Phadezym IgE PRIST (IgE) results is described. Excellent correlation between the reference serum concentrations and the curve-fit concentration results was obtained. The equation used in the curve fit is ln y = A + B ln x + Cx^D, where A, B, C and an accuracy-of-fit term R are calculated by the program. The value of D must be specified by the user before the curve fit is performed.
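With D fixed by the user, the modified Hoerl equation above is linear in (A, B, C) and can be fit by ordinary least squares, which is a plausible way a calculator program could solve it (the synthetic standards below are invented values, not assay data):

```python
import numpy as np

def fit_hoerl(x, y, D):
    """Fit ln y = A + B*ln x + C*x**D for a fixed, user-supplied D
    (as in the program described, D must be chosen before the fit).
    With D fixed the model is linear in (A, B, C); returns the
    coefficients and an R^2-style accuracy-of-fit term."""
    X = np.column_stack([np.ones_like(x), np.log(x), x**D])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    resid = np.log(y) - X @ coef
    r2 = 1.0 - (resid**2).sum() / ((np.log(y) - np.log(y).mean())**2).sum()
    return coef, r2

# Synthetic calibration points generated from A=0.3, B=0.8, C=-0.1, D=2
# (invented numbers for illustration).
x = np.linspace(0.5, 5.0, 40)
y = np.exp(0.3 + 0.8 * np.log(x) - 0.1 * x**2)
coef, r2 = fit_hoerl(x, y, D=2.0)
```
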
Effectiveness of a teleaudiology approach to hearing aid fitting.
Blamey, Peter J; Blamey, Jeremy K; Saunders, Elaine
2015-12-01
This research was conducted to evaluate the efficacy of an online speech perception test (SPT) for the measurement of hearing and hearing aid fitting in comparison with conventional methods. Phase 1 was performed with 88 people to evaluate the SPT for the detection of significant hearing loss. The SPT had high sensitivity (94%) and high selectivity (98%). In Phase 2, phonetic stimulus-response matrices derived from the SPT results for 408 people were used to calculate "Infograms™." At every frequency, there was a highly significant correlation (p < 0.001) between hearing thresholds derived from the Infogram and conventional audiograms. In Phase 3, initial hearing aid fittings were derived from conventional audiograms and Infograms for two groups of hearing impaired people. Unaided and aided SPTs were used to measure the perceptual benefit of the aids for the two groups. The mean increases between unaided and aided SPT scores were 19.6%, and 22.2% (n = 517, 484; t = 2.2; p < 0.05) for hearing aids fitted using conventional audiograms and Infograms respectively. The research provided evidence that the SPT is a highly effective tool for the detection and measurement of hearing loss and hearing aid fitting. Use of the SPT reduces the costs and increases the effectiveness of hearing aid fitting, thereby enabling a sustainable teleaudiology business model. PMID:26556060
A computational approach to the twin paradox in curved spacetime
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng
2016-09-01
Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis-of-variance and regression-analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. A least-squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect from the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
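The order-selection step described above, i.e. raising the polynomial order until the quadratic effect leaves the residuals, can be sketched on synthetic data (the coefficients and noise level are invented, not wind tunnel measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic wind-tunnel-style data: quadratic trend plus noise
# (all numbers invented for illustration).
alpha = np.linspace(-10.0, 10.0, 41)            # e.g. angle of attack, deg
cm = 0.02 + 0.01 * alpha + 0.003 * alpha**2 + rng.normal(0.0, 0.01, alpha.size)

def residual_ss(order):
    # Residual sum of squares of a least-squares polynomial fit.
    coef = np.polyfit(alpha, cm, order)
    return float(((cm - np.polyval(coef, alpha)) ** 2).sum())

rss = {order: residual_ss(order) for order in (1, 2, 3)}
# Raising the order from 1 to 2 removes the quadratic effect from the
# residuals (a large drop in RSS); order 3 adds almost nothing, so an
# F-test in the analysis of variance would retain the quadratic fit.
```
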
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2016-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this article we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from two popular econometric approaches:…
An interactive user-friendly approach to surface-fitting three-dimensional geometries
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1988-01-01
A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed, and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
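The least-squares logarithmic fitting of power-law formulas can be sketched as follows; the radius values span the report's 30-450 cm range, but the coefficient and exponent are invented placeholders, not the report's constants.

```python
import numpy as np

def fit_power_law(x, y):
    # Least-squares fit of y = c * x**m in log space: ln y = ln c + m*ln x.
    m, ln_c = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(ln_c), m

# Placeholder data: an exact power law sampled over the report's
# nose-radius range (c = 2.5 and m = -0.5 are invented).
radius = np.array([30.0, 60.0, 120.0, 240.0, 450.0])
q = 2.5 * radius**-0.5

c_fit, m_fit = fit_power_law(radius, q)
y_fit = c_fit * radius**m_fit
avg_dev = 100.0 * float(np.mean(np.abs(y_fit - q) / q))  # percent deviations,
max_dev = 100.0 * float(np.max(np.abs(y_fit - q) / q))   # as quoted in the abstract
```

With real flow-field data the deviations would be nonzero; the 3-percent average and 10-percent maximum quoted above are the report's measure of how well such power laws describe the computed heating rates.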
Neitzel, Anne-Christin; Stamer, Eckhard; Junge, Wolfgang; Thaller, Georg
2015-05-01
Laboratory somatic cell count (LSCC) records are usually recorded monthly and provide an important information source for breeding and herd management. Daily milk viscosity detection in composite milking (expressed as drain time) with an automated on-line California Mastitis Test (CMT) could serve immediately as an early predictor of udder diseases and might be used as a selection criterion to improve udder health. The aim of the present study was to clarify the relationship between the well-established LSCS and the new trait, 'drain time', and to estimate their correlations with important production traits. Data were recorded on the dairy research farm Karkendamm in Germany. Viscosity sensors were installed on every fourth milking stall in the rotary parlour to measure daily drain time records. Weekly LSCC and milk composition data were available. Two data sets were created, containing records of 187,692 milkings from 320 cows (D1) and 25,887 drain time records from 311 cows (D2). Different fixed-effect models describing the log-transformed drain time (logDT) were fitted to achieve applicable models for further analysis. Lactation curves were modelled with standard parametric functions (Ali and Schaeffer, Legendre polynomials of second and third degree) of days in milk (DIM). Random regression models were then applied to estimate the correlations of cow effects between logDT, LSCS and further important production traits. LogDT and LSCS were most strongly correlated in mid-lactation (r = 0.78). Correlations between logDT and production traits were low to medium. The highest correlations were reached in late lactation between logDT and milk yield (r = -0.31), between logDT and protein content (r = 0.30), and in early as well as late lactation between logDT and lactose content (r = -0.28). The results of the present study show that drain time could be used as a new trait for daily mastitis control. PMID:25731191
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
A Global Fitting Approach For Doppler Broadening Thermometry
NASA Astrophysics Data System (ADS)
Amodio, Pasquale; Moretti, Luigi; De Vizia, Maria Domenica; Gianfrani, Livio
2014-06-01
Very recently, a spectroscopic determination of the Boltzmann constant, k_B, has been performed at the Second University of Naples by means of a rather sophisticated implementation of Doppler Broadening Thermometry (DBT) [1]. Performed on an 18O-enriched water sample, at a wavelength of 1.39 µm, the experiment provided a value for k_B with a combined uncertainty of 24 parts in 10^6, which is the best result obtained so far by an optical method. In the spectral analysis procedure, the partially correlated speed-dependent hard-collision (pC-SDHC) model was adopted. The uncertainty budget clearly revealed that the major contributions come from the statistical uncertainty (type A) and from the uncertainty associated with the line-shape model (type B) [2]. In the present work, we present the first results of a theoretical and numerical effort aimed at reducing these uncertainty components. It is well known that molecular line shapes exhibit clear deviations from the time-honoured Voigt profile. Even in the case of a well-isolated spectral line, under the influence of binary collisions in the Doppler regime, the shape can be quite complicated by the joint occurrence of velocity-changing collisions and speed-dependent effects. The partially correlated speed-dependent Keilson-Storer profile (pC-SDKS) has recently been proposed as a very realistic model, capable of reproducing very accurately the absorption spectra of self-colliding water molecules in the near infrared [3]. Unfortunately, the model is so complex that it cannot be implemented directly in a fitting routine for the analysis of experimental spectra. Therefore, we have developed a MATLAB code to simulate a variety of H2^18O spectra in thermodynamic conditions identical to those of our DBT experiment, using the pC-SDKS model. The numerical calculations required to determine such a profile have a very large computational cost, resulting from a very sophisticated iterative procedure. Hence, the numerically simulated spectra
Birchler, W.D.; Schilling, S.A.
2001-02-01
The purpose of this report is to demonstrate that modern computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems can be used in the Department of Energy (DOE) Nuclear Weapons Complex (NWC) to design new and remodel old products, fabricate old and new parts, and reproduce legacy data within the inspection uncertainty limits. In this study, two two-dimensional splines are compared with several modern CAD curve-fitting modeling algorithms. The first curve-fitting algorithm is the Wilson-Fowler Spline (WFS), and the second is a parametric cubic spline (PCS). Modern CAD systems usually utilize parametric cubic splines and/or B-splines.
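A parametric cubic spline of the kind compared above interpolates each coordinate as a cubic polynomial between knots with continuous second derivatives. Below is a minimal sketch of a natural cubic spline in one coordinate (a parametric curve fits x(t) and y(t) separately against a chord-length parameter); the knot data here are arbitrary, not the report's legacy geometry.

```python
import numpy as np

def natural_cubic_spline(x, y):
    # Solve the tridiagonal system for the second derivatives M
    # (natural boundary conditions: M[0] = M[-1] = 0).
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def evaluate(t):
        # Locate the interval and evaluate the local cubic.
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 2))
        dx = t - x[i]
        a = (M[i + 1] - M[i]) / (6 * h[i])
        b = M[i] / 2
        c = (y[i + 1] - y[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
        return y[i] + c * dx + b * dx**2 + a * dx**3
    return evaluate

x_knots = np.array([0.0, 1.0, 2.0, 3.0])
y_knots = np.array([0.0, 1.0, 0.0, 1.0])
spline = natural_cubic_spline(x_knots, y_knots)
```

The spline reproduces the knot values exactly and is C2-continuous between them, which is the property that distinguishes the PCS from simpler piecewise fits.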
Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G
2004-02-01
Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm^-1 was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The method was compared with an established spectrophotometric method and was found equivalent in accuracy and repeatability (t-test, F-test). It is applicable to the analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive to the samples.
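When band positions are known in advance (as with the 1745, 1715, and 1600 cm^-1 bands above), peak deconvolution reduces to a linear least-squares problem for the amplitudes. This sketch assumes Gaussian band shapes with a common, hypothetical width; the actual band shapes and widths used in the DRIFTS analysis are not specified here.

```python
import numpy as np

# Assumed band centers (cm^-1) and a hypothetical common Gaussian width.
centers = np.array([1745.0, 1715.0, 1600.0])
width = 15.0

nu = np.linspace(1550, 1800, 500)
# One Gaussian basis column per band; amplitudes are then linear unknowns.
basis = np.exp(-((nu[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Synthetic overlapping-band spectrum with a little noise.
true_amps = np.array([0.8, 0.5, 1.2])
spectrum = basis @ true_amps + 0.01 * np.random.default_rng(1).standard_normal(nu.size)

amps, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
areas = amps * width * np.sqrt(2 * np.pi)  # analytic area of each Gaussian band
```

The recovered areas can then be summed and regressed against reference uronic acid contents, mirroring the calibration step described in the abstract.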
Wavelet transform approach for fitting financial time series data
NASA Astrophysics Data System (ADS)
Ahmed, Amel Abdoullah; Ismail, Mohd Tahir
2015-10-01
This study investigates a newly developed technique, combining wavelet filtering and a VEC model, to study the dynamic relationship among financial time series. A wavelet filter has been used to remove noise from the daily data sets of the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the series generated by the wavelet filter and of the original series are then analyzed by a cointegration test and a VEC model. The results of the cointegration test affirm the existence of cointegration between the studied series, and there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
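The wavelet filtering step can be sketched with a one-level Haar transform and soft thresholding of the detail coefficients. This is a generic denoising sketch on synthetic data; the study's actual wavelet family, decomposition depth, and threshold rule are not specified here.

```python
import numpy as np

def haar_dwt(x):
    # One-level Haar transform: approximation and detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # Exact inverse of the one-level Haar transform.
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, threshold):
    # Soft-threshold the detail coefficients, keep the approximation.
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_idwt(a, d)

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
smoothed = denoise(noisy, threshold=0.5)
```

The denoised returns would then feed the cointegration test and the VEC model in place of the raw series.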
A new approach for magnetic curves in 3D Riemannian manifolds
Bozkurt, Zehra Gök, Ismail Yaylı, Yusuf Ekmekci, F. Nejat
2014-05-15
A magnetic field is defined by the property that its divergence is zero in a three-dimensional oriented Riemannian manifold. Each magnetic field generates a magnetic flow whose trajectories are curves, called magnetic curves. In this paper, we give a new variational approach to study the magnetic flow associated with the Killing magnetic field in a three-dimensional oriented Riemannian manifold, (M^3, g). We then investigate the trajectories of the magnetic fields, the so-called N-magnetic and B-magnetic curves.
Estimating the yield curve with the extended Svensson model using an L-BFGS-B method approach
NASA Astrophysics Data System (ADS)
Muslim, Rosadi, Dedi; Gunardi, Abdurakhman
2015-02-01
A yield curve describes the magnitude of the yield as a function of maturity. To describe this curve, we use the Svensson model. One extension of this model is the Rezende-Ferreira model; its weakness is that several parameters can take the same value, in which case the model reduces to the Nelson-Siegel model. In this paper, we propose an expansion of the Svensson model. These models are non-linear and therefore more difficult to estimate. To overcome this problem, we propose nonlinear least squares estimation using the L-BFGS-B method.
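The Svensson-family models are nonlinear only in their decay parameters; once a decay is fixed, the beta coefficients enter linearly. Below is a minimal Nelson-Siegel sketch with an assumed decay value (the Svensson model adds a second curvature term with its own decay, and in practice the decays would be optimized, e.g. by L-BFGS-B); the maturities and coefficients are illustrative, not data from the paper.

```python
import numpy as np

def nelson_siegel_basis(t, lam):
    # Nelson-Siegel factor loadings: level, slope, curvature.
    x = t / lam
    slope = (1 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return np.column_stack([np.ones_like(t), slope, curvature])

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30.0])
lam = 2.0                                  # assumed decay parameter
true_beta = np.array([0.05, -0.02, 0.01])  # level, slope, curvature coefficients
yields = nelson_siegel_basis(maturities, lam) @ true_beta

# With lam fixed, the betas follow from ordinary least squares.
beta, *_ = np.linalg.lstsq(nelson_siegel_basis(maturities, lam), yields, rcond=None)
```

An outer bounded optimizer over the decay parameter(s), wrapped around this inner linear solve, reproduces the structure of the estimation problem the paper addresses.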
ERIC Educational Resources Information Center
Chernyshenko, Oleksandr S.; Stark, Stephen; Williams, Alex
2009-01-01
The purpose of this article is to offer a new approach to measuring person-organization (P-O) fit, referred to here as "Latent fit." Respondents were administered unidimensional forced choice items and were asked to choose the statement in each pair that better reflected the correspondence between their values and those of the organization;…
ERIC Educational Resources Information Center
Rousseau, Ronald
1994-01-01
Discussion of informetric distributions shows that generalized Leimkuhler functions give proper fits to a large variety of Bradford curves, including those exhibiting a Groos droop or a rising tail. The Kolmogorov-Smirnov test is used to test goodness of fit, and least-squares fits are compared with Egghe's method. (Contains 53 references.) (LRW)
Palumbo, Letizia; Ruta, Nicole; Bertamini, Marco
2015-01-01
Most people prefer smoothly curved shapes over more angular ones. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task, we tested approach and avoidance reactions to curved and angular polygons with a stick figure (i.e., the manikin). We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast, participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference. PMID:26460610
An optimization approach for fitting canonical tensor decompositions.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2009-02-01
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
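The ALS baseline discussed above cycles through the modes, solving a linear least-squares problem for one factor matrix at a time via Khatri-Rao products. This is a minimal rank-1 sketch on a synthetic tensor, not the gradient-based method the paper proposes; sizes and seeds are arbitrary.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization; remaining axes keep their order (row-major).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product.
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [f for m, f in enumerate(factors) if m != mode]
            kr = khatri_rao(others[0], others[1])
            # Solve T_(mode) = F @ kr.T for the current factor F.
            factors[mode] = np.linalg.lstsq(kr, unfold(T, mode).T, rcond=None)[0].T
    return factors

# Exact rank-1 test tensor.
rng = np.random.default_rng(1)
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = np.einsum("i,j,k->ijk", a, b, c)

A, B, C = cp_als(T, rank=1)
T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

A gradient-based method of the kind advocated in the paper would instead optimize all factors jointly, using derivatives that cost about as much as one of these ALS sweeps.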
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-10-01
A hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of the CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to the fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of the CTA calibration curve is of great practical value. In this paper, a novel approach based on a conventional neural network and the self-organizing map (SOM) method is proposed to extrapolate the CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach to extrapolate the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, extrapolation of the CTA calibration curve below its lower limit produces a standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation over the whole measurement range (0.7-30 m/s) is about 1.5%.
Mougabure-Cueto, G; Sfara, V
2016-04-25
Dose-response relations can be obtained from systems at any structural level of biological matter, from the molecular to the organismic level. There are two types of approaches for analyzing dose-response curves: a deterministic approach, based on the law of mass action, and a statistical approach, based on the assumed probability distribution of phenotypic characters. Models based on the law of mass action have been proposed to analyze dose-response relations across the entire range of biological systems. The purpose of this paper is to discuss the principles that determine dose-response relations. Dose-response curves of simple systems are the result of chemical interactions between reacting molecules, and therefore are supported by the law of mass action. In consequence, the shape of these curves is fully grounded in physicochemical features. However, dose-response curves of bioassays with quantal response are not explained by the simple collision of molecules but by phenotypic variation among individuals, and can be interpreted in terms of individual tolerances. The expression of tolerance is the result of many genetic and environmental factors and can thus be considered a random variable. In consequence, the shape of the associated dose-response curve has no physicochemical bearing; instead, it originates from random biological variation. Given the randomness of tolerance, there is no reason to use deterministic equations for its analysis; on the contrary, statistical models are the appropriate tools for analyzing these dose-response relations. PMID:26952004
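The statistical view described above can be made concrete: if each individual's log-tolerance is a normally distributed random variable, the quantal dose-response curve is simply its cumulative distribution function (a probit model). The numbers below are hypothetical, purely for illustration.

```python
import math
from statistics import NormalDist

# Hypothetical tolerance distribution: median lethal dose of 10 units,
# log-normal individual variation with sigma = 0.5.
log_tolerance = NormalDist(mu=math.log(10.0), sigma=0.5)

def expected_mortality(dose):
    # Fraction of individuals whose tolerance falls below the applied dose.
    return log_tolerance.cdf(math.log(dose))

# At the median tolerance the quantal curve passes through 50% response.
print(expected_mortality(10.0))  # 0.5
```

Nothing in this curve's shape reflects mass-action kinetics; it is entirely determined by the assumed tolerance distribution, which is the paper's central point.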
An interactive approach to surface-fitting complex geometries for flowfield applications
NASA Technical Reports Server (NTRS)
Dejarnette, Fred R.; Hamilton, H. Harris, II; Cheatwood, F. Mcneil
1987-01-01
Numerical flowfield methods require a geometry subprogram which can calculate body coordinates, slopes, and radii of curvature for typical aircraft and spacecraft configurations. The objective of this paper is to develop a new surface-fitting technique which addresses two major problems with existing geometry packages: computer storage requirements and the time required of the user for the initial set-up of the geometry model. In the present method, coordinates of cross sections are fit in a least-squares sense using segments of general conic sections. After fitting each cross section, the next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. For the initial setup of the geometry model, an interactive, completely menu-driven computer code has been developed to allow the user to make modifications to the initial fit for a given cross section or meridional cut. Graphic displays are provided to assist the user in the visualization of the effect of each modification. The completed model may be viewed from any angle using the code's three-dimensional graphics package. Geometry results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment (AFE) geometry are presented, in addition to calculated heat-transfer rates based on these models.
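The cross-section fitting step above rests on least-squares fitting of general conic segments. A standard way to do this is to minimize the algebraic residual of the implicit conic equation via the smallest singular vector of a design matrix; the circular-arc test data below are illustrative, not the Shuttle or AFE geometry.

```python
import numpy as np

def conic_design(x, y):
    # Rows [x^2, xy, y^2, x, y, 1] of the implicit conic equation
    # a x^2 + b xy + c y^2 + d x + e y + f = 0.
    return np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])

def fit_conic(x, y):
    # Least-squares conic: smallest right singular vector of the design
    # matrix (coefficient vector normalized to unit length).
    _, _, Vt = np.linalg.svd(conic_design(x, y))
    return Vt[-1]

# Points on a circular-arc cross section: circle of radius 2 centred at (1, 0),
# i.e. x^2 + y^2 - 2x - 3 = 0.
theta = np.linspace(0, np.pi, 30)
x = 1 + 2 * np.cos(theta)
y = 2 * np.sin(theta)

coef = fit_conic(x, y)
residual = np.abs(conic_design(x, y) @ coef)  # algebraic fit residual
```

Blending such conic segments across cross sections, and again along meridional cuts, gives the two-stage surface fit the abstract describes.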
NASA Astrophysics Data System (ADS)
Bhattacharya, Kolahal; Banerjee, Sudeshna; Mondal, Naba K.
2016-07-01
In the context of track fitting problems by a Kalman filter, the appropriate functional forms of the elements of the random process noise matrix are derived for tracking through thick layers of dense material in a magnetic field. This work complements the form of the process noise matrix obtained by Mankel [1].
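For orientation, the process noise matrix enters the filter in the prediction step, P -> F P F^T + Q. The sketch below uses the standard continuous multiple-scattering form of Q for a (position, slope) state as an illustration; the scattering variance value is hypothetical, and the paper's contribution is precisely the refined functional form of these elements, which is not reproduced here.

```python
import numpy as np

# One Kalman prediction step for a 1D track state (position, slope),
# propagated across a material layer of thickness dz.
dz = 1.0
F = np.array([[1.0, dz],
              [0.0, 1.0]])          # straight-line propagation between layers

theta2 = 1e-4                        # assumed scattering-angle variance per unit length
Q = theta2 * np.array([[dz**3 / 3, dz**2 / 2],
                       [dz**2 / 2, dz]])

x = np.array([0.0, 0.01])            # position, slope
P = np.eye(2) * 1e-2                 # state covariance

x_pred = F @ x                       # predicted state
P_pred = F @ P @ F.T + Q             # covariance inflated by the process noise
```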
Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Chotard, N.; Copin, Y.; Gangler, E.; and others
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
A Selective Refinement Approach for Computing the Distance Functions of Curves
Laney, D A; Duchaineau, M A; Max, N L
2000-12-01
We present an adaptive signed distance transform algorithm for curves in the plane. A hierarchy of bounding boxes is required for the input curves. We demonstrate the algorithm on the isocontours of a turbulence simulation. The algorithm provides guaranteed error bounds with a selective refinement approach. The domain over which the signed distance function is desired is adaptively triangulated, and piecewise discontinuous linear approximations are constructed within each triangle. The resulting transform performs work only where requested and does not rely on a preset sampling rate or other constraints.
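The quantity being approximated is the distance from a query point to a curve. A brute-force version for a polyline is a few lines; the paper's bounding-box hierarchy, sign convention, and adaptive triangulation are refinements over this baseline and are omitted from the sketch.

```python
import numpy as np

def point_segment_distance(p, a, b):
    # Distance from point p to segment ab (clamped projection).
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def polyline_distance(p, vertices):
    # Unsigned distance to a polyline; a sign can be attached afterwards
    # from the curve's orientation (inside vs. outside).
    return min(point_segment_distance(p, vertices[i], vertices[i + 1])
               for i in range(len(vertices) - 1))

# Unit square boundary; the centre is 0.5 away from every edge.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], float)
d = polyline_distance(np.array([0.5, 0.5]), square)
print(d)  # 0.5
```

The adaptive transform replaces this O(segments) scan per query with hierarchical pruning and evaluates only where the requested error bound demands it.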
NASA Technical Reports Server (NTRS)
Degnan, J. J.; Walker, H. E.; Mcelroy, J. H.; Mcavoy, N.; Zagwodski, T.
1972-01-01
A least squares curve-fitting algorithm is derived which allows the simultaneous estimation of the small signal gain and the saturation intensity from an arbitrary number of data points relating power output to the incidence angle of an internal coupling plate. The method is used to study the dependence of the two parameters on tube pressure and discharge current in a waveguide CO2 laser having a 2 mm diameter capillary. It is found that, at pressures greater than 28 torr, rising CO2 temperature degrades the small signal gain at current levels as low as three milliamperes.
A Simplified Approach To The Control System On A 4-Axis Curve Generator
NASA Astrophysics Data System (ADS)
Langdon, Wayne R.
1989-12-01
The simplest approach to developing a Computerized Numerical Control (CNC) machine tool would be to take a manually operated machine and fit it with a standard, commercially available computer control system and drive packages, or so it would appear from a machine designer's viewpoint. This approach does tend to minimize the overall machine/system development costs; however, it can result in a control system that is more sophisticated, complex, and expensive than necessary to control the machine.
ATWS Analysis with an Advanced Boiling Curve Approach within COBRA 3-CP
Gensler, A.; Knoll, A.; Kuehnel, K.
2007-07-01
In 2005 the German Reactor Safety Commission issued specific requirements on core coolability demonstration for PWR ATWS (anticipated transients without scram). Thereupon AREVA NP performed detailed analyses for all German PWRs. For a German KONVOI plant the results of an ATWS licensing analysis are presented. The plant dynamic behavior is calculated with NLOOP, while the hot channel analysis is performed with the thermal hydraulic computer code COBRA 3-CP. The application of the fuel rod model included in COBRA 3-CP is essential for this type of analysis. Since DNB (departure from nucleate boiling) occurs, the advanced post-DNB model (advanced boiling curve approach) of COBRA 3-CP is used. The results are compared with those gained with the standard BEEST model. The analyzed ATWS case is the emergency power case 'loss of main heat sink with station service power supply unavailable'. Due to the decreasing coolant flow rate during the transient, the core attains film boiling conditions. The results of the hot channel analysis strongly depend on the performance of the boiling curve model. The BEEST model is based on pool boiling conditions, whereas typical PWR conditions - even in most transients - are characterized by forced flow, for which the advanced boiling curve approach is particularly suitable. Compared with the BEEST model, the advanced boiling curve approach in COBRA 3-CP yields earlier rewetting, i.e. a shorter period in film boiling. Consequently, the fuel rod cladding temperatures, which increase significantly due to film boiling, drop back earlier and the high temperature oxidation is significantly diminished. The Baker-Just correlation was used to calculate the value of equivalent cladding reacted (ECR), i.e. the reduction of cladding thickness due to corrosion throughout the transient. Based on the BEEST model the ECR value amounts to 0.4%, whereas the advanced boiling curve only leads to an ECR value of 0.2%. Both values provide large margins to the 17
Hill, K.
1988-06-01
The use of energy (calories) as the currency to be maximized per unit time in optimal foraging models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest that men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.
Calzado, Carmen J
2013-01-21
This paper reports a theoretical analysis of the electronic structure and magnetic properties of a tetranuclear Cu(II) complex, [Cu4(HL)4], which has a 4+2 cubane-like structure (H3L = N,N'-(2-hydroxypropane-1,3-diyl)bis(acetylacetoneimine)). These theoretical calculations indicate a quintet (S=2) ground state; the energy-level distribution of the magnetic states confirms Heisenberg behaviour and corresponds to an S4 spin-spin interaction model. The dominant interaction is the ferromagnetic coupling between the pseudo-dimeric units (J1 = 22.2 cm^-1), whilst a weak and ferromagnetic interaction is found within the pseudo-dimeric units (J2 = 1.4 cm^-1). The amplitude and sign of these interactions are consistent with the structure and arrangement of the magnetic Cu 3d orbitals; they accurately simulate the thermal dependence of the magnetic susceptibility, but do not agree with the reported J values (J1 = 38.4 cm^-1, J2 = -18.0 cm^-1) that result from the experimental fitting. This result is not an isolated case; many other polynuclear systems, in particular 4+2 Cu(II) cubanes, have been reported in which the fitted magnetic terms are not consistent with the geometrical features of the system. In this context, theoretical evaluation can be considered a valuable tool in the interpretation of the macroscopic behaviour, thus providing clues for a rational and directed design of new materials with specific properties.
NASA Astrophysics Data System (ADS)
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured data point cloud (acquired, e.g., by laser scanner) by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results which are seriously degraded by the occurrence of data gaps and outliers. Optimal and robust B-spline fitting depends, however, on an optimal selection of the knot vector. Hence, in our approach we combine Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
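With the knot vector fixed, B-spline fitting is an ordinary least-squares problem for the control points. The sketch below uses a clamped uniform knot vector and Cox-de Boor basis evaluation on synthetic data; the paper's actual contributions (Monte-Carlo knot placement and robust M-estimation) are deliberately omitted.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    # Cox-de Boor recursion for the i-th B-spline basis of degree k.
    if k == 0:
        return np.where((knots[i] <= t) & (t < knots[i + 1]), 1.0, 0.0)
    out = np.zeros_like(t)
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out

# Cubic B-spline with a clamped uniform knot vector (the knot choice is
# exactly what the paper optimizes; here it is simply fixed).
degree, n_ctrl = 3, 8
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0, 1, n_ctrl - degree + 1),
                        np.ones(degree)])

t = np.linspace(0, 1, 200, endpoint=False)  # stay inside the half-open support
data = np.sin(2 * np.pi * t)                # synthetic "point cloud" profile

B = np.column_stack([bspline_basis(i, degree, t, knots) for i in range(n_ctrl)])
ctrl, *_ = np.linalg.lstsq(B, data, rcond=None)
fit = B @ ctrl
```

Replacing `lstsq` with an iteratively reweighted solve turns this into the robust M-estimation step described in the abstract.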
Lebon, M; Reiche, I; Fröhlich, F; Bahain, J-J; Falguères, C
2008-12-01
Derivative Fourier transform infrared (FTIR) spectroscopy and curve fitting have been used to investigate the effect of thermal treatment on the ν1ν3 PO4 domain of modern bones. This method was efficient for identifying mineral matter modifications during heating. In particular, the 961, 1022, 1061, and 1092 cm^-1 components show an important wavenumber shift between 120 and 700 °C, attributed to the decrease of the distortions induced by the removal of CO3^2- and HPO4^2- ions from the mineral lattice. The so-called 1030/1020 ratio was used to evaluate crystalline growth above 600 °C. The same analytical protocol was applied to Magdalenian fossil bones from the Bize-Tournal Cave (France). Although the band positions seem to have been affected by diagenetic processes, a wavenumber index, established by summing the 961, 1022, and 1061 cm^-1 peak positions, discriminated heated bones better than the 1030/1020 ratio and the splitting factor frequently used to identify burnt bones in an archaeological context. This study suggests that the combination of derivative and curve-fitting analysis may afford a sensitive evaluation of the maximum temperature reached, and thus contribute to the fossil-derived knowledge of human activities related to the use of fire.
NASA Astrophysics Data System (ADS)
Westberg, Jonas; Wang, Junyang; Axner, Ove
2012-11-01
Wavelength modulation (WM) produces lock-in signals that are proportional to various Fourier coefficients of the modulated lineshape function of the molecular transition targeted. Unlike the case for the Lorentzian lineshape function, there is no known analytical expression for the Fourier coefficients of a modulated Voigt lineshape function; they consist of nested integrals that have to be solved numerically, which is often time-consuming and prevents real-time curve fitting. Previous attempts to overcome these limitations have consisted of approximations of the Voigt lineshape function, which introduce inaccuracies. In this paper we demonstrate a new means to calculate the lineshape of nf-WM absorption signals from a transition with a Voigt profile. It is shown that the signal can conveniently be expressed as a convolution of one or several Fourier coefficients of a modulated Lorentzian lineshape function, for which there are analytical expressions, and the Maxwell-Boltzmann velocity distribution for the system under study. Mathematically, the procedure involves no approximations, so its accuracy is limited only by the numerical precision of the software used (in this case ~10^-16), while the calculation time is reduced by roughly three orders of magnitude (10^3) as compared to the conventional methodology, i.e. typically from the second to the millisecond range. This makes real-time curve fitting to lock-in output signals from modulated Voigt profiles feasible.
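The underlying identity, that a Voigt profile is the convolution of a Lorentzian with the Gaussian Doppler (Maxwell-Boltzmann) distribution, can be checked numerically in a few lines. This is only the unmodulated-profile analogue of the paper's method; the widths below are arbitrary, and the Fourier-coefficient machinery of the WM signals is not reproduced.

```python
import numpy as np

nu = np.linspace(-10, 10, 4001)   # detuning grid
dnu = nu[1] - nu[0]
gamma_L, sigma_G = 0.5, 0.8       # assumed Lorentzian HWHM and Gaussian width

# Unit-area Lorentzian and Gaussian components.
lorentz = (gamma_L / np.pi) / (nu**2 + gamma_L**2)
gauss = np.exp(-nu**2 / (2 * sigma_G**2)) / (sigma_G * np.sqrt(2 * np.pi))

# Discrete convolution approximates the Voigt profile.
voigt = np.convolve(lorentz, gauss, mode="same") * dnu
area = voigt.sum() * dnu          # ~1, up to truncation of the Lorentzian wings
```

In the paper's scheme the Lorentzian is replaced by the analytical Fourier coefficient of the modulated Lorentzian, so the same single convolution yields the nf-WM Voigt signal without nested integrals.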
Understanding the distribution of fitness effects of mutations by a biophysical-organismal approach
NASA Astrophysics Data System (ADS)
Bershtein, Shimon
2011-03-01
The distribution of fitness effects of mutations is central to many questions in evolutionary biology. However, it remains poorly understood, primarily because the fundamental connection between the fitness of organisms and the molecular properties of the proteins encoded by their genomes is largely overlooked by traditional research approaches. Past efforts to bridge this gap followed the "evolution first" paradigm, whereby populations were subjected to selection under certain conditions, and the mutations which emerged in adapted populations were analyzed using genomic approaches. The results obtained in the framework of this approach, while often useful, are not easily interpretable because mutations become fixed due to a convolution of multiple causes. We have undertaken a conceptually opposite strategy: mutations with known biophysical and biochemical effects on E. coli's essential proteins (based on computational analysis and in vitro measurements) were introduced into the organism's chromosome, and the resulting fitness effects were monitored. Studying the distribution of fitness effects of such fully controlled replacements revealed a very complex fitness landscape, in which the impact of the microscopic properties of the mutated proteins (folding, stability, and function) is modulated on a macroscopic, whole-genome level. Furthermore, the magnitude of the cellular response to the introduced mutations seems to depend on the thermodynamic status of the mutant.
Liu, G H; Wu, J T
1998-01-01
The measurement of PSA is recommended for men over 50 years of age for screening of prostate cancer. However, proper differentiation of prostate cancer from benign prostate hyperplasia (BPH) relies on an accurate measurement of free PSA (fPSA) and a correct calculation of percent fPSA. Because of the extremely low concentration of fPSA in the serum, any slight deviation from its true value may produce large errors in the calculated percent fPSA. Therefore, we undertook a study examining carefully those parameters of the fPSA assay which might affect the fPSA determination. We found that the integrity of the calibrator, the computer curve-fitting program selected, the source of the calibrator, and the total PSA or fPSA + PSA complexes (tPSA) concentration of the specimen all had an impact on the accuracy of the assayed fPSA value. We found that an examination of the slope of the calibration curve was important to reveal whether the calibrator had been denatured during storage. We also found that the four-parameter curve-fitting program was best suited for plotting the fPSA calibration curve. The calibrator we isolated from LNCaP cells was acceptable for our assay because it had an affinity for the assay antibody very similar to that of serum fPSA. We also determined the effect of tPSA concentration on the fPSA determinations and found that within the concentration range of 4-10 ng/mL the impact on the calculated percent fPSA was not significant. We believe that our assay produces accurate fPSA values when all these assay parameters are well controlled.
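The four-parameter curve-fitting program mentioned above is the four-parameter logistic (4PL) commonly used for immunoassay calibration. A minimal sketch of the curve and its inverse (reading a concentration back off the calibration curve) follows; all parameter values are hypothetical, not the assay's actual calibration.

```python
# Four-parameter logistic calibration curve:
#   a = response at zero dose, d = response at infinite dose,
#   c = inflection point (EC50), b = slope factor.
def four_pl(x, a, b, c, d):
    return d + (a - d) / (1 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    # Invert the curve to recover the concentration from a measured signal.
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

# Hypothetical calibration parameters for illustration.
a, b, c, d = 0.02, 1.2, 1.5, 3.0
signal = four_pl(0.8, a, b, c, d)          # signal for a 0.8 ng/mL sample
conc = inverse_four_pl(signal, a, b, c, d)  # round-trips back to 0.8
```

In practice the four parameters would themselves be fitted to the calibrator measurements, which is why a denatured calibrator (visible as an abnormal slope b) corrupts every fPSA value read off the curve.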
Duval, M; Guilarte Moreno, V; Grün, R
2013-12-01
This work deals with three main sources of uncertainty in electron spin resonance (ESR) dosimetry/dating of fossil tooth enamel: (1) the precision of the ESR measurements, (2) long-term signal fading and (3) the selection of the fitting function. Each has a different influence on the equivalent dose (D(E)) estimates. Repeated ESR measurements were performed on 17 different samples: results show a mean coefficient of variation of the ESR intensities of 1.20 ± 0.23 %, inducing a mean relative variability of 3.05 ± 2.29 % in the D(E) values. ESR signal fading over 5 y was also observed: its magnitude seems to be quite sample dependent but is nevertheless especially important for the most irradiated aliquots. This fading has an apparently random effect on the D(E) estimates. Finally, the authors provide new insights and recommendations about the fitting of ESR dose-response curves of fossil enamel with a double saturating exponential (DSE) function. The potential of a new variation of the DSE was also explored. Results of this study also show that the choice of the fitting function is of major importance, perhaps more so than the other sources mentioned above, for obtaining accurate final D(E) values.
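A common parameterization of a double saturating exponential (DSE) dose response is the sum of two saturating components. The sketch below evaluates such a curve and recovers an equivalent dose by bisection; the amplitudes and characteristic doses are hypothetical, and the authors' exact functional form and fitting procedure may differ:

```python
import math

def dse(dose, a1, d1, a2, d2):
    """Double saturating exponential (DSE) dose response: two saturating
    components with amplitudes a1, a2 and characteristic doses d1, d2
    (an illustrative parameterization, not necessarily the paper's)."""
    return a1 * (1 - math.exp(-dose / d1)) + a2 * (1 - math.exp(-dose / d2))

def solve_dose(target, a1, d1, a2, d2, lo=0.0, hi=1e6):
    """Bisection: find the dose whose DSE intensity equals `target`
    (the DSE is monotonically increasing, so bisection is safe)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dse(mid, a1, d1, a2, d2) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

params = (10.0, 500.0, 4.0, 5000.0)  # hypothetical amplitudes / doses (Gy)
natural = dse(750.0, *params)        # simulate a "natural" ESR intensity
d_e = solve_dose(natural, *params)   # recover the equivalent dose
```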
Konzen, Kevin; Brey, Richard
2012-05-01
²²²Rn (radon) and ²²⁰Rn (thoron) progeny are known to interfere with determining the presence of long-lived transuranic radionuclides, such as plutonium and americium, and require from several hours up to several days of decay for conclusive results. Methods are proposed that should expedite the analysis of air samples for determining the amount of transuranic radionuclides present using the low-resolution alpha spectroscopy systems available in typical alpha continuous air monitors (CAMs) with multi-channel analyzer (MCA) capabilities. An alpha spectra simulation program was developed in Microsoft Excel Visual Basic that employed Monte Carlo numerical methods and serial-decay differential equations to produce spectra resembling actual measurements. Transuranic radionuclides could be quantified with statistical certainty by applying peak fitting equations using the method of least squares. Initial favorable results were achieved when samples containing radon progeny were allowed to decay for 15 to 30 min, and samples containing both radon and thoron progeny were allowed to decay for at least 60 min. The effort indicates that timely decisions can be made when determining transuranic activity using available alpha CAMs with alpha spectroscopy capabilities for counting retrospective air samples, if accompanied by analyses that consider the characteristics of serial decay.
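The serial-decay differential equations referenced above have a closed-form solution for a two-member chain (the Bateman equations). A minimal sketch, with half-lives only loosely approximating the short-lived radon progeny ²¹⁸Po (~3.1 min) and ²¹⁴Pb (~26.8 min), is:

```python
import math

def daughter_atoms(n1_0, lam1, lam2, t):
    """Bateman solution: number of daughter atoms at time t for a
    two-member decay chain, starting from n1_0 parent atoms and no
    daughter atoms (lam1, lam2 are the parent/daughter decay constants)."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Decay constants from half-lives in minutes; the half-lives loosely
# approximate ²¹⁸Po and ²¹⁴Pb and are for illustration only.
lam_parent = math.log(2) / 3.1
lam_daughter = math.log(2) / 26.8
n2_30 = daughter_atoms(1e6, lam_parent, lam_daughter, 30.0)  # daughters after 30 min
```

Evaluating the chain at successive times is the ingredient a Monte Carlo spectrum simulator needs to apportion counts among the progeny as they grow in and decay away.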
Nair, S P; Righetti, R
2015-05-01
Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. The strain-versus-time relationship for tissues undergoing creep compression is non-linear, and in non-linear cases devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
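The RoN idea, estimating the spread of parameter estimates from a single realization, can be sketched for a one-parameter time-constant fit: fit once, estimate the noise level from the residuals, resimulate synthetic noisy data sets from the fitted curve, refit each, and report the spread. The single-exponential model, the grid-search fit, and all numerical values below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def model(t, tau):
    """Single-exponential strain rise toward an asymptote (illustrative)."""
    return 1.0 - math.exp(-t / tau)

def fit_tau(times, data, grid):
    """Least-squares fit of the time constant by scanning a tau grid."""
    best_tau, best_sse = None, float("inf")
    for tau in grid:
        sse = sum((model(t, tau) - y) ** 2 for t, y in zip(times, data))
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau

def ron_spread(times, data, grid, n_resim=200, rng=None):
    """Resimulation of Noise (RoN), sketched: the returned standard
    deviation of refitted estimates is the reliability measure."""
    rng = rng or random.Random(0)
    tau_hat = fit_tau(times, data, grid)
    residuals = [y - model(t, tau_hat) for t, y in zip(times, data)]
    sigma = math.sqrt(sum(r * r for r in residuals) / max(len(residuals) - 1, 1))
    estimates = []
    for _ in range(n_resim):
        fake = [model(t, tau_hat) + rng.gauss(0.0, sigma) for t in times]
        estimates.append(fit_tau(times, fake, grid))
    mean = sum(estimates) / len(estimates)
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / (len(estimates) - 1))

# Synthetic creep-like data with known tau = 2.0 and additive noise
rng = random.Random(42)
times = [0.2 * i for i in range(1, 51)]
data = [model(t, 2.0) + rng.gauss(0.0, 0.02) for t in times]
grid = [0.01 * k for k in range(100, 301)]   # tau in [1.0, 3.0]
spread = ron_spread(times, data, grid, rng=rng)
```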
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
ERIC Educational Resources Information Center
Chen, Zheng; Powell, Gary N.; Greenhaus, Jeffrey H.
2009-01-01
This study adopted a person-environment fit approach to examine whether greater congruence between employees' preferences for segmenting their work domain from their family domain (i.e., keeping work matters at work) and what their employers' work environment allowed would be associated with lower work-to-family conflict and higher work-to-family…
An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.
2014-01-01
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
B-737 flight test of curved-path and steep-angle approaches using MLS guidance
NASA Technical Reports Server (NTRS)
Branstetter, J. R.; White, W. F.
1989-01-01
A series of flight tests was conducted to collect data for jet transport aircraft flying curved-path and steep-angle approaches using Microwave Landing System (MLS) guidance. During the test, 432 approaches comprising seven different curved paths and four glidepath angles varying from 3 to 4 degrees were flown in NASA Langley's Boeing 737 aircraft (Transport Systems Research Vehicle) using an MLS ground station at the NASA Wallops Flight Facility. Subject pilots from Piedmont Airlines flew the approaches using conventional cockpit instrumentation (flight director and Horizontal Situation Indicator (HSI)). The data collected will be used by FAA procedures specialists to develop standards and criteria for designing MLS terminal approach procedures (TERPS). The use of flight simulation techniques greatly aided the preliminary stages of approach development work and saved a significant amount of costly flight time. This report is intended to complement a data report to be issued by the FAA Office of Aviation Standards, which will contain all detailed data analysis and statistics.
CADAVERIC STUDY ON THE LEARNING CURVE OF THE TWO-APPROACH GANZ PERIACETABULAR OSTEOTOMY
Ferro, Fernando Portilho; Ejnisman, Leandro; Miyahara, Helder Souza; Trindade, Christiano Augusto de Castro; Faga, Antônio; Vicente, José Ricardo Negreiros
2016-01-01
Objective: The Bernese periacetabular osteotomy (PAO) is a widely used technique for the treatment of non-arthritic, dysplastic, painful hips. It is considered a highly complex procedure with a steep learning curve. In an attempt to minimize complications, a double anterior-posterior approach has been described. We report on our experience while performing this technique on cadaveric hips followed by meticulous dissection to verify possible complications. Methods: We operated on 15 fresh cadaveric hips using a combined posterior Kocher-Langenbeck and an anterior Smith-Petersen approach, without fluoroscopic control. The PAO cuts were performed and the acetabular fragment was mobilized. A meticulous dissection was carried out to verify the precision of the cuts. Results: Complications were observed in seven specimens (46%). They included a posterior column fracture, and posterior and anterior articular fractures. The incidence of complications decreased over time, from 60% in the first five procedures to 20% in the last five procedures. Conclusions: We concluded that PAO using a combined anterior-posterior approach is a reproducible technique that allows all cuts to be done under direct visualization. The steep learning curve described in the classic single-incision approach was also observed when using two approaches. Evidence Level: IV, Cadaveric Study. PMID:26981046
A projection-based approach to diffraction tomography on curved boundaries
NASA Astrophysics Data System (ADS)
Clement, Gregory T.
2014-12-01
An approach to diffraction tomography is investigated for two-dimensional image reconstruction of objects surrounded by an arbitrarily-shaped curve of sources and receivers. Based on the integral theorem of Helmholtz and Kirchhoff, the approach relies upon a valid choice of the Green's functions for selected conditions along the (possibly irregular) boundary. This allows field projections from the receivers to an arbitrary external location. When performed over all source locations, it will be shown that the field caused by a hypothetical source at this external location is also known along the boundary. This field can then be projected to new external points that may serve as virtual receivers. Under such a reformulation, data may be put in a form suitable for image construction by synthetic aperture methods. Foundations of the approach are shown, followed by a mapping technique optimized for the approach. Examples formed from synthetic data are provided.
Zhang, Gang-Chun; Lin, Hong-Liang; Lin, Shan-Yang
2012-07-01
The cocrystal formation of indomethacin (IMC) and saccharin (SAC) by mechanical cogrinding or thermal treatment was investigated. The formation mechanism and stability of IMC-SAC cocrystal prepared by the cogrinding process were explored. Typical IMC-SAC cocrystal was also prepared by the solvent evaporation method. All the samples were identified and characterized using differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) microspectroscopy with curve-fitting analysis. The physical stability of different IMC-SAC ground mixtures before and after storage for 7 months was examined. Stepwise measurements carried out at specific intervals over a continuous cogrinding process showed a continuous growth in cocrystal formation between IMC and SAC. The main IR spectral shifts from 3371 to 3347 cm(-1) and 1693 to 1682 cm(-1) for IMC, as well as from 3094 to 3136 cm(-1) and 1718 to 1735 cm(-1) for SAC, suggested that the OH and NH groups in both chemical structures took part in hydrogen bonding, leading to the formation of IMC-SAC cocrystal. A melting point at 184 °C for the 30-min IMC-SAC ground mixture was almost the same as that of the solvent-evaporated IMC-SAC cocrystal. Curve-fitting analysis of the IR spectra also confirmed that the 30-min IMC-SAC ground mixture had components and contents similar to those of the solvent-evaporated IMC-SAC cocrystal. The thermally induced IMC-SAC cocrystal formation was also found to depend on the treatment temperature. Different IMC-SAC ground mixtures stored at 25 °C/40% RH for 7 months showed an increased tendency toward IMC-SAC cocrystallization.
ERIC Educational Resources Information Center
McDonald, Stefanie R.; Ing, Marsha; Marcoulides, George A.
2010-01-01
This study examined the developmental effects of early parental intrinsic and extrinsic motivational strategies on mathematics achievement scores obtained from White students compared to underrepresented minority students. A latent growth curve model was fit to data from the Longitudinal Study of American Youth (LSAY) with mathematics achievement…
Guidance studies for curved, descending approaches using the Microwave Landing System (MLS)
NASA Technical Reports Server (NTRS)
Feather, J. B.
1986-01-01
Results for the Microwave Landing System (MLS) guidance algorithm development conducted under the Advanced Transport Operating Systems (ATOPS) Technology Studies (NAS1-16202) are documented. The study consisted of evaluating guidance laws for vertical and lateral path control, as well as speed control, for approaches not possible with present Instrument Landing System (ILS) equipment. Several specific approaches were simulated using the MD-80 aircraft simulation program, including curved, descending (segmented glide slope), and decelerating paths. Emphasis was placed on development of guidance algorithms specifically for approaches at Burbank, where flight demonstrations are planned. Results of this simulation phase are suitable for use in future fixed-base simulator evaluations employing actual hardware (autopilot and a performance management system).
Aerobic fitness ecological validity in elite soccer players: a metabolic power approach.
Manzi, Vincenzo; Impellizzeri, Franco; Castagna, Carlo
2014-04-01
The aim of this study was to examine the association between match metabolic power (MP) categories and aerobic fitness in elite-level male soccer players. Seventeen male professional soccer players were tested for VO2max, maximal aerobic speed (MAS), VO2 at ventilatory threshold (VO2VT and %VO2VT), and speed at a selected blood lactate concentration (4 mmol·L(-1), V(L4)). Aerobic fitness tests were performed at the end of preseason and after 12 and 24 weeks during the championship. Aerobic fitness and MP variables were calculated as the mean of all seasonal testing and of 16 Championship home matches, respectively. Results showed that VO2max (from 0.55 to 0.68), MAS (from 0.52 to 0.72), VO2VT (from 0.72 to 0.83), %VO2maxVT (from 0.62 to 0.65), and V(L4) (from 0.56 to 0.73) had significant (p < 0.05 to 0.001), large to very large associations with MP variables. These results provide evidence for the ecological validity of aerobic fitness in male professional soccer. Strength and conditioning professionals should consider aerobic fitness in their training programs when dealing with professional male soccer players. The MP method proved to be an interesting approach for tracking external load in male professional soccer players.
NASA Astrophysics Data System (ADS)
Koohestani, Behrooz; Corne, David W.
2009-04-01
The Bandwidth Minimization Problem (BMP) is a graph layout problem which is known to be NP-complete. Since 1960, a considerable number of algorithms have been developed for addressing the BMP. At present, meta-heuristics (such as evolutionary algorithms and tabu search) are popular and successful approaches to the BMP. In such algorithms, the design of the fitness function (i.e. the metric that attempts to guide the search towards high-quality solutions) plays a key role in performance; the fitness function, along with the operators, induces the `search landscape', and careful attention to these issues may lead to landscapes that are more amenable to successful search. For example, rather than simply use the most obvious quality measure (in this case, the bandwidth itself), it is often helpful to design a more informative measure, indicating not only a solution's quality but also encapsulating (for example) an indication of how distant this particular solution is from even better solutions. In this paper, a new fitness function and an associated new mutation operator are presented for the BMP. These are incorporated within a simple Evolutionary Algorithm (EA) and evaluated on a set of 27 instances of the BMP (from the Harwell-Boeing sparse matrix collection). The results of this EA are compared with results obtained using the standard fitness function (used in almost all previous research on meta-heuristics applied to the BMP). The results indicate clearly that the new fitness function and operator provide significantly superior results in the reduction of bandwidth.
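The distinction between the obvious quality measure and a more informative one can be made concrete. The sketch below computes the plain bandwidth of a layout alongside a hypothetical refined fitness that breaks ties by counting how many edges attain the maximum distance; this tie-breaking scheme is an illustration of the general idea, not the paper's actual fitness function:

```python
def bandwidth(layout, edges):
    """Bandwidth of a graph under a vertex ordering: the maximum distance
    between the positions of any two adjacent vertices."""
    pos = {v: i for i, v in enumerate(layout)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

def informative_fitness(layout, edges):
    """Illustrative refined fitness (hypothetical, not the paper's):
    the bandwidth plus a fractional penalty for the number of edges
    attaining it, so layouts with the same bandwidth but fewer
    'critical' edges score better (lower is better)."""
    pos = {v: i for i, v in enumerate(layout)}
    dists = [abs(pos[u] - pos[v]) for u, v in edges]
    bw = max(dists)
    critical = dists.count(bw)
    return bw + critical / (len(edges) + 1.0)

# A path graph 0-1-2-3: laid out in order, its bandwidth is 1
edges = [(0, 1), (1, 2), (2, 3)]
```

The refined measure gives the search a gradient between layouts that the raw bandwidth would score identically.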
Estimating flood-frequency curves with scarce data: a physically-based analytic approach
NASA Astrophysics Data System (ADS)
Basso, Stefano; Schirmer, Mario; Botter, Gianluca
2016-04-01
Predicting the magnitude and frequency of floods is a key issue for hazard assessment and mitigation. While observations and statistical methods provide good estimates when long data series are available, their performance deteriorates with limited data. Moreover, the outcome of varying hydroclimatic drivers can hardly be evaluated by these methods. Physically-based approaches embodying the mechanics of streamflow generation provide a valuable alternative that may improve purely statistical estimates and cope with human-induced alteration of climate and landscape. In this work, a novel analytic approach is proposed to derive seasonal flood-frequency curves and to estimate the recurrence intervals of seasonal maxima. The method builds on a stochastic description of daily streamflows, arising from rainfall and soil moisture dynamics in the catchment. The limited number of parameters involved in the formulation embodies climate and landscape attributes of the contributing catchment, and can be specified based on daily rainfall and streamflow data. The application to two case studies suggests the model's ability to provide reliable estimates of seasonal flood-frequency curves in different climatic settings, and to mimic the shapes of flood-frequency curves emerging in persistent and erratic flow regimes. The method is especially valuable when only short data series are available (e.g. newly or temporarily gauged catchments, modified climatic or landscape features). Indeed, estimates provided by the model for high flow events characterized by recurrence times greater than the available sample size do not deteriorate significantly, as compared to the performance of purely statistical methods. The proposed physically-based analytic approach represents a first step toward a probabilistic characterization of extremes based on climate and landscape attributes, which may be especially valuable to assess flooding hazard in data scarce regions and support the development of reliable mitigation
Using a Space Filling Curve Approach for the Management of Dynamic Point Clouds
NASA Astrophysics Data System (ADS)
Psomadaki, S.; van Oosterom, P. J. M.; Tijssen, T. P. M.; Baart, F.
2016-10-01
Point cloud usage has increased over the years. The development of low-cost sensors now makes it possible to acquire frequent point cloud measurements over short time periods (day, hour, second). Based on requirements coming from the coastal monitoring domain, we have developed, implemented and benchmarked a spatio-temporal point cloud data management solution. For this reason, we make use of the flat model approach (one point per row) in an Index Organised Table within an RDBMS and an improved spatio-temporal organisation using a Space Filling Curve approach. Two variants coming from two extremes of the space-time continuum are also taken into account, along with two treatments of the z dimension: as attribute or as part of the space filling curve. Through executing a benchmark we elaborate on the performance (loading and querying time) and the storage required by those different approaches. Finally, we validate the correctness and suitability of our method through an out-of-the-box way of managing dynamic point clouds.
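A common space filling curve for this kind of organisation is the Morton (Z-order) curve, obtained by interleaving coordinate bits so that points close in space tend to be close in the one-dimensional key. A minimal sketch of 2D and space-time (x, y, t) encodings follows; whether this matches the paper's exact key layout and bit budget is an assumption:

```python
def interleave2(x, y, bits=16):
    """2D Morton (Z-order) code: interleave the bits of integer
    coordinates x and y, x occupying the even bit positions."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def interleave3(x, y, t, bits=16):
    """Space-time variant: time treated as a third interleaved
    dimension, one possible organisation among those benchmarked
    (the dimension ordering here is illustrative)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((t >> i) & 1) << (3 * i + 2)
    return code
```

Sorting or indexing rows by such a code clusters spatially (and temporally) nearby points in storage, which is what makes range queries on an Index Organised Table efficient.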
Age-Infusion Approach to Derive Injury Risk Curves for Dummies from Human Cadaver Tests
Yoganandan, Narayan; Banerjee, Anjishnu; Pintar, Frank A.
2015-01-01
Injury criteria and risk curves are needed for anthropomorphic test devices (dummies) to assess injuries for improving human safety. The present state of knowledge is based on using injury outcomes and biomechanical metrics from post-mortem human subject (PMHS) tests and mechanical records from dummy tests. Data from these models are combined to develop dummy injury assessment risk curves (IARCs)/dummy injury assessment risk values (IARVs). This simple substitution approach involves duplicating dummy metrics for PMHS tested under similar conditions and pairing them with PMHS injury outcomes. It does not directly account for the age of each specimen tested in the PMHS group. Current substitution methods for injury risk assessments use age as a covariate, and dummy metrics (e.g., accelerations) are not modified so that age can be directly included in the model. The age-infusion methodology presented in this perspective article accommodates an annual rate factor that modifies the dummy injury risk assessment responses to account for the age of the PMHS that the injury data were based on. The annual rate factor is determined using human injury risk curves. The dummy metrics are modulated based on individual PMHS age and the rate factor, thus “infusing” age into the dummy data. Using PMHS injuries and accelerations from side-impact experiments, matched-pair dummy tests, and logistic regression techniques, the methodology demonstrates the process of age-infusion to derive the IARCs and IARVs. PMID:26697422
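The two ingredients of the approach, a logistic risk curve with age as a covariate and a metric scaled by an annual rate factor, can be sketched as below. The coefficients, the rate value, and the multiplicative scaling form are hypothetical placeholders for illustration, not values or formulas from the study:

```python
import math

def injury_risk(metric, age, b0, b1, b2):
    """Logistic injury risk with age as a covariate:
    p = 1 / (1 + exp(-(b0 + b1*metric + b2*age)))."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * metric + b2 * age)))

def age_infused_metric(metric, specimen_age, target_age, annual_rate):
    """Sketch of the age-infusion idea: scale the mechanical metric by an
    annual rate factor for each year the PMHS specimen differs from the
    target age, so age enters through the metric itself rather than only
    as a covariate. Rate value and functional form are illustrative."""
    return metric * (1.0 + annual_rate) ** (specimen_age - target_age)

# Hypothetical coefficients and an acceleration metric (g)
b0, b1, b2 = -8.0, 0.06, 0.03
raw = injury_risk(60.0, 70, b0, b1, b2)
adjusted = injury_risk(age_infused_metric(60.0, 70, 45, 0.01), 45, b0, b1, b2)
```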
Bouabidi, A; Talbi, M; Bourichi, H; Bouklouze, A; El Karbane, M; Boulanger, B; Brik, Y; Hubert, Ph; Rozet, E
2012-12-01
An innovative and versatile strategy using Total Error has been proposed to decide about a method's validity; it controls the risk of accepting an unsuitable assay together with the ability to predict the reliability of future results. This strategy is based on the simultaneous combination of the systematic (bias) and random (imprecision) errors of analytical methods. Using validation standards, both types of error are combined through the use of a prediction interval or β-expectation tolerance interval. Finally, an accuracy profile is built by connecting, on one hand, all the upper tolerance limits, and on the other hand, all the lower tolerance limits. This profile, combined with pre-specified acceptance limits, allows the evaluation of the validity of any quantitative analytical method and thus its fitness for its intended purpose. In this work, the accuracy profile approach was evaluated on several types of analytical methods encountered in the pharmaceutical industrial field, covering different pharmaceutical matrices. The four studied examples depicted the flexibility and applicability of this approach for different matrices ranging from tablets to syrups, different techniques such as liquid chromatography or UV spectrophotometry, and different categories of assays commonly encountered in the pharmaceutical industry, i.e. content assays, dissolution assays, and quantitative impurity assays. The accuracy profile approach assesses the fitness for purpose of these methods for their future routine application. It also allows the selection of the most suitable calibration curve, the adequate evaluation of a potential matrix effect with proposal of efficient solutions, and the correct definition of the limits of quantification of the studied analytical procedures. PMID:22615163
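The β-expectation tolerance interval at the core of the accuracy profile can be sketched for a single concentration level as follows. The recovery values are hypothetical, and the Student t quantile is supplied by the caller as a tabulated constant rather than computed; a full accuracy profile would repeat this at each level and compare the connected limits against the acceptance limits:

```python
import math

def beta_expectation_interval(values, t_quantile):
    """β-expectation tolerance interval for a single future result:
    mean ± t * s * sqrt(1 + 1/n), where t is the Student quantile at
    (1 + β)/2 with n - 1 degrees of freedom (caller-supplied)."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    half = t_quantile * s * math.sqrt(1.0 + 1.0 / n)
    return mean - half, mean + half

# Hypothetical recoveries (%) at one concentration level of a validation run
recoveries = [98.2, 101.5, 99.8, 100.9, 97.6, 100.2, 99.1, 101.0, 98.8, 100.4]
t_95_df9 = 2.262  # two-sided 95% Student t quantile, 9 degrees of freedom
lo, hi = beta_expectation_interval(recoveries, t_95_df9)
# The method would pass at this level if [lo, hi] lies inside pre-specified
# acceptance limits (e.g. 95%-105% for a content assay).
```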
Lin, Shan-Yang; Lin, Hong-Liang; Chi, Ying-Ting; Huang, Yu-Ting; Kao, Chi-Yu; Hsieh, Wei-Hsien
2015-12-30
The amorphous form of a drug has higher water solubility and a faster dissolution rate than its crystalline form. However, the amorphous form is less thermodynamically stable and may recrystallize during manufacturing and storage. Maintaining the amorphous state of a drug in a solid dosage form is extremely important to ensure product quality. The purpose of this study was to quantitatively determine the amount of amorphous indomethacin (INDO) formed in Soluplus® solid dispersions using thermoanalytical and Fourier transform infrared (FTIR) spectral curve-fitting techniques. INDO/Soluplus® solid dispersions with various weight ratios of both components were prepared by air-drying and heat-drying processes. A predominant IR peak at 1683cm(-1) for amorphous INDO was selected as a marker for monitoring the solid state of INDO in the INDO/Soluplus® solid dispersions. The physical stability of amorphous INDO in the INDO/Soluplus® solid dispersions prepared by both drying processes was also studied under accelerated conditions. A typical endothermic peak at 161°C for the γ-form of INDO (γ-INDO) disappeared from all the differential scanning calorimetry (DSC) curves of the INDO/Soluplus® solid dispersions, suggesting amorphization of INDO caused by Soluplus® after drying. In addition, two unique IR peaks at 1682 (1681) and 1593 (1591)cm(-1), corresponding to the amorphous form of INDO, were observed in the FTIR spectra of all the INDO/Soluplus® solid dispersions. The amount of amorphous INDO formed in the INDO/Soluplus® solid dispersions, determined by applying the curve-fitting technique, increased with the amount of γ-INDO loaded. However, the intermolecular hydrogen bonding interaction between Soluplus® and INDO was only observed in the samples prepared by the heat-drying process, as evidenced by a marked spectral shift from 1636 to 1628cm(-1) in the INDO/Soluplus® solid dispersions. The INDO/Soluplus® solid
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net CO2 assimilation versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimates of the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimation of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
NASA Astrophysics Data System (ADS)
Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.
2015-12-01
Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures was analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
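The cost abatement curve construction described above amounts to ranking measures by cost per unit of resource saved and accumulating the savings. A minimal sketch with invented figures (the measure names, costs, and savings are illustrative placeholders, not results from the study):

```python
def abatement_curve(measures):
    """Order efficiency measures by annualized cost per unit of water
    saved and accumulate the savings, giving the points of a cost
    abatement curve. Each measure is a tuple of
    (name, net_annualized_cost, annual_savings)."""
    ranked = sorted(measures, key=lambda m: m[1] / m[2])
    curve, cumulative = [], 0.0
    for name, cost, saved in ranked:
        cumulative += saved
        curve.append((name, cost / saved, cumulative))
    return curve

# Hypothetical household measures: cost in $/yr, savings in kgal/yr.
# A negative cost means the measure pays for itself.
measures = [
    ("low-flow showerhead", -15.0, 10.0),
    ("efficient clothes washer", 40.0, 8.0),
    ("solar water heater", 120.0, 6.0),
]
curve = abatement_curve(measures)
```

Reading the curve left to right gives the cheapest path to any target reduction, which is exactly how such plots support investment decisions.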
Corvettes, Curve Fitting, and Calculus
ERIC Educational Resources Information Center
Murawska, Jaclyn M.; Nabb, Keith A.
2015-01-01
Sometimes the best mathematics problems come from the most unexpected situations. Last summer, a Corvette raced down a local quarter-mile drag strip. The driver, a family member, provided the spectators with time and distance-traveled data from his time slip and asked "Can you calculate how many seconds it took me to go from 0 to 60…
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2015-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…
Beyond the SCS curve number: A new stochastic spatial runoff approach
NASA Astrophysics Data System (ADS)
Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.
2015-12-01
The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
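For reference, the traditional SCS-CN method that this theory extends reduces to a short calculation (the standard formulation in inches, with the customary initial abstraction ratio of 0.2):

```python
def scs_cn_runoff(p_in, cn, ia_ratio=0.2):
    """Classic SCS curve number runoff (depths in inches):
    S = 1000/CN - 10, Ia = ia_ratio * S,
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 1000.0 / cn - 10.0      # potential maximum retention
    ia = ia_ratio * s           # initial abstraction
    if p_in <= ia:
        return 0.0              # all rainfall abstracted, no runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)

q = scs_cn_runoff(5.0, 80)      # 5 inches of rain on a CN = 80 watershed
```

For CN = 80, S = 2.5 in and Ia = 0.5 in, so a 5 in storm yields about 2.89 in of runoff; the single CN parameter is exactly the "low parametric complexity" the abstract refers to.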
Sui Wenbo; Zhu Jingyi; Li Jinyun; Chai Guozhi; Jiang Changjun; Fan Xiaolong; Xue Desheng
2011-05-15
Rotational magnetization curves of exchange-bias bilayers were investigated on the basis of the Stoner-Wohlfarth model; the curves can be grouped into three cases according to the magnetization reversal process. The unidirectional anisotropy field H_E = 41.4 Oe, the uniaxial anisotropy field H_k = 4.2 Oe, and the accurate direction of the easy axis of our FeNi/FeMn exchange-bias bilayers were obtained by fitting their experimental rotational magnetization curves. During the rotational process the magnetization reversal of the bilayers is a coherent rotation with a critical magnetization reversal field H_1 = 41.372 Oe.
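A coherent-rotation calculation in the spirit of this Stoner-Wohlfarth analysis can be sketched by minimizing the free energy over the magnetization angle at each field angle. The three energy terms below (Zeeman, unidirectional, uniaxial) are the standard model; the grid-based minimization and the field values used in the example are our own illustrative assumptions, not the authors' fitting code:

```python
import numpy as np

def rotational_curve(h, h_e, h_k, thetas, n_phi=3600):
    """Normalized magnetization component along the field, M.H/(Ms H),
    versus field angle theta for a Stoner-Wohlfarth film with a
    unidirectional field h_e and uniaxial field h_k (same units as h).
    The easy axis is taken along angle 0; the energy is minimized on a
    dense grid of magnetization angles phi (coherent rotation only).
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    m = np.empty_like(thetas)
    for i, th in enumerate(thetas):
        e = (-h * np.cos(phi - th)             # Zeeman term
             - h_e * np.cos(phi)               # unidirectional (exchange bias)
             + 0.5 * h_k * np.sin(phi) ** 2)   # uniaxial anisotropy
        m[i] = np.cos(phi[np.argmin(e)] - th)
    return m
```

In the high-field limit the magnetization follows the field and the curve flattens toward 1; fitting measured curves would amount to adjusting h_e, h_k, and the easy-axis direction to match data such as these.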
Till, Kevin; Cobley, Steve; Oʼhara, John; Chapman, Chris; Cooke, Carlton
2013-05-01
This study evaluated the development of anthropometric and fitness characteristics of 3 individual adolescent junior rugby league players and compared their characteristics with a cross-sectional population matched by age and skill level. Cross-sectional anthropometric and fitness assessments were conducted on 1,172 players selected to the Rugby Football League's talent development program (i.e., the Player Performance Pathway) between 2005 and 2008. Three players of differing relative age, maturational status, and playing position were measured and tracked once per year on 3 occasions (Under 13s, 14s, 15s age categories) and compared against the cross-sectional population. Results demonstrated that the later maturing players increased height (player 1 = 9.2%; player 2 = 7.8%) and a number of fitness characteristics (e.g., 60-m speed: player 1 = -14.9%; player 2 = -9.9%) more than the earlier maturing player (player 3: height = 2.0%, 60-m sprint = -0.7%) over the 2-year period. The variation in the development of anthropometric and fitness characteristics between the 3 players highlights the importance of longitudinally monitoring individual characteristics during adolescence to assess the dynamic changes in growth, maturation, and fitness. Findings showcase the limitations of short-term performance assessments at one-off time points within annual-age categories, instead advocating individual development and progression tracking without deselection. Coaches should consider using an individual approach, comparing data with population averages, to assist in the prescription of appropriate training and lifestyle interventions to aid the development of junior athletes.
Connecting thermal performance curve variation to the genotype: a multivariate QTL approach.
Latimer, C A L; Foley, B R; Chenoweth, S F
2015-01-01
Thermal performance curves (TPCs) are continuous reaction norms that describe the relationship between organismal performance and temperature and are useful for understanding trade-offs involved in thermal adaptation. Although thermal trade-offs such as those between generalists and specialists or between hot- and cold-adapted phenotypes are known to be genetically variable and evolve during thermal adaptation, little is known of the genetic basis of TPCs - specifically, the loci involved and the directionality of their effects across different temperatures. To address this, we took a multivariate approach, mapping quantitative trait loci (QTL) for locomotor activity TPCs in the fly, Drosophila serrata, using a panel of 76 recombinant inbred lines. The distribution of additive genetic (co)variance in the mapping population was remarkably similar to the distribution of mutational (co)variance for these traits. We detected 11 TPC QTL in females and 4 in males. Multivariate QTL effects were closely aligned with the major axes of genetic (co)variation between temperatures; most QTL effects corresponded to variation for either overall increases or decreases in activity, with a smaller number indicating possible trade-offs between activity at high and low temperatures. QTL representing changes in curve shape such as the 'generalist-specialist' trade-off, thought key to thermal adaptation, were poorly represented in the data. We discuss these results in the light of genetic constraints on thermal adaptation.
NASA Astrophysics Data System (ADS)
Afshar, Abbas; Emami Skardi, Mohammad J.; Masoumi, Fariborz
2015-09-01
Efficient reservoir management requires the implementation of generalized optimal operating policies that manage storage volumes and releases while optimizing a single objective or multiple objectives. Reservoir operating rules stipulate the actions that should be taken under the current state of the system. This study develops a set of piecewise linear operating rule curves for water supply and hydropower reservoirs, employing an imperialist competitive algorithm in a parameterization-simulation-optimization approach. The adaptive penalty method is used for constraint handling and proved to work efficiently in the proposed scheme. Its performance is tested by deriving an operating rule for the Dez reservoir in Iran. The proposed modelling scheme converged to near-optimal solutions efficiently in the case examples. It was shown that the proposed optimum piecewise linear rule may perform quite well in reservoir operation optimization as the operating period extends from very short to fairly long periods.
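The operating rules being optimized here are, at bottom, piecewise linear maps from system state (e.g., storage) to release. A minimal sketch of evaluating such a rule curve; the breakpoint values are invented for illustration and are not the study's fitted rules for the Dez reservoir:

```python
import numpy as np

def release(storage, breakpoints, releases):
    """Release prescribed by a piecewise linear operating rule curve:
    linear interpolation between (storage, release) breakpoints,
    clamped to the first/last release outside the breakpoint range.
    An optimizer (such as the paper's imperialist competitive
    algorithm) would search over the breakpoint coordinates.
    """
    return float(np.interp(storage, breakpoints, releases))
```

For example, with breakpoints [0, 50, 100] mapped to releases [0, 10, 40], a storage of 75 yields a release of 25: halfway along the second linear segment.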
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Eghbalnia, Hamid
2000-06-01
In standard differential geometry, the Fundamental Theorem of Space Curves states that two differential invariants of a curve, namely curvature and torsion, determine its geometry, or equivalently, the isometry class of the curve up to rigid motions in Euclidean three-dimensional space. Consider a physical model of a space curve made from a sufficiently thin, yet visible rigid wire, and the problem of perceptual identification (by a human observer or a robot) of two given physical model curves. In a previous paper (perceptual geometry) we emphasized a learning-theoretic approach to constructing a perceptual geometry of the surfaces in the environment. In particular, we described a computational method for mathematical representation of objects in the perceptual geometry inspired by the ecological theory of Gibson, and adhering to the principles of Gestalt in perceptual organization of vision. In this paper, we continue our learning-theoretic treatment of perceptual geometry of objects, focusing on the case of physical models of space curves. In particular, we address the question of perceptually distinguishing two possibly novel space curves based on the observer's prior visual experience of physical models of curves in the environment. The Fundamental Theorem of Space Curves inspires an analogous result in perceptual geometry as follows. We apply learning theory to the statistics of a sufficiently rich collection of physical models of curves to derive two statistically independent local functions that we call, by analogy, the curvature and torsion. This pair of invariants distinguishes physical models of curves in the sense of perceptual geometry. That is, at an appropriate resolution, an observer can distinguish two perceptually identical physical models in different locations. If these pairs of functions are approximately the same for two given space curves, then after possibly some changes of viewing planes, the observer confirms the two are the same.
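The classical theorem invoked above can be made concrete numerically: curvature and torsion estimated from a sampled curve identify it up to rigid motion. A sketch using the standard formulas kappa = |r' x r''|/|r'|^3 and tau = (r' x r'')·r'''/|r' x r''|^2 with discrete derivatives; the helix example and sampling density are our own choices:

```python
import numpy as np

def curvature_torsion(r, t):
    """Discrete curvature and torsion of a sampled space curve.

    r: array of shape (n, 3) with curve points; t: parameter values.
    Derivatives are taken numerically, so values near the endpoints
    are less accurate than in the interior.
    """
    d1 = np.gradient(r, t, axis=0)
    d2 = np.gradient(d1, t, axis=0)
    d3 = np.gradient(d2, t, axis=0)
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1)
    kappa = np.linalg.norm(cross, axis=1) / speed ** 3
    tau = np.einsum('ij,ij->i', cross, d3) / np.einsum('ij,ij->i', cross, cross)
    return kappa, tau

# A helix of radius a and pitch parameter b has constant invariants:
# kappa = a/(a^2 + b^2), tau = b/(a^2 + b^2).
t = np.linspace(0.0, 4.0 * np.pi, 2000)
a, b = 2.0, 0.5
helix = np.column_stack([a * np.cos(t), a * np.sin(t), b * t])
kappa, tau = curvature_torsion(helix, t)
```

Two wires whose (kappa, tau) profiles agree along arc length are, by the theorem, the same curve up to a rigid motion, which is the classical analogue of the perceptual pair of invariants described above.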
NASA Technical Reports Server (NTRS)
White, W. F. (Compiler)
1978-01-01
The Terminal Configured Vehicle (TCV) program operates a Boeing 737 modified to include a second cockpit and a large amount of experimental navigation, guidance, and control equipment for research on advanced avionics systems. Demonstration flights, including curved approaches and automatic landings, were tracked by a phototheodolite system. For 50 approaches during the demonstration flights, the following results were obtained: the navigation system, using TRSB guidance, delivered the aircraft onto the 3-nautical-mile final approach leg with an average overshoot of 25 feet past centerline, subject to a 2-sigma dispersion of 90 feet. Lateral tracking data showed a mean error of 4.6 feet left of centerline at the category 1 decision height (200 feet) and 2.7 feet left of centerline at the category 2 decision height (100 feet). These values were subject to a sigma dispersion of about 10 feet. Finally, the glidepath tracking errors were 2.5 feet and 3.0 feet high at the category 1 and 2 decision heights, respectively, with a 2-sigma value of 6 feet.
The BestFIT trial: A SMART approach to developing individualized weight loss treatments.
Sherwood, Nancy E; Butryn, Meghan L; Forman, Evan M; Almirall, Daniel; Seburg, Elisabeth M; Lauren Crain, A; Kunin-Batson, Alicia S; Hayes, Marcia G; Levy, Rona L; Jeffery, Robert W
2016-03-01
Behavioral weight loss programs help people achieve clinically meaningful weight losses (8-10% of starting body weight). Despite data showing that only half of participants achieve this goal, a "one size fits all" approach is normative. This weight loss intervention science gap calls for adaptive interventions that provide the "right treatment at the right time for the right person." Sequential Multiple Assignment Randomized Trials (SMART) use experimental design principles to answer questions for building adaptive interventions, including whether, how, or when to alter treatment intensity, type, or delivery. This paper describes the rationale and design of the BestFIT study, a SMART designed to evaluate the optimal timing for intervening with sub-optimal responders to weight loss treatment and the relative efficacy of two treatments that address self-regulation challenges which impede weight loss: 1) augmenting treatment with portion-controlled meals (PCM), which decrease the need for self-regulation; and 2) switching to acceptance-based behavior treatment (ABT), which boosts capacity for self-regulation. The primary aim is to evaluate the benefit of changing treatment with PCM versus ABT. The secondary aim is to evaluate the best time to intervene with sub-optimal responders. BestFIT results will lead to the empirically-supported construction of an adaptive intervention that will optimize weight loss outcomes and associated health benefits.
Burnham, A K
2006-05-17
Chemical kinetic modeling has been used for many years in process optimization, estimating real-time material performance, and lifetime prediction. Chemists have tended towards developing detailed mechanistic models, while engineers have tended towards global or lumped models. Many, if not most, applications use global models by necessity, since it is impractical or impossible to develop a rigorous mechanistic model. Model fitting acquired a bad name in the thermal analysis community after that community realized, a decade after other disciplines, that deriving kinetic parameters for an assumed model from a single heating rate produced unreliable and sometimes nonsensical results. In its place, advanced isoconversional methods (1), which have their roots in the Friedman (2) and Ozawa-Flynn-Wall (3) methods of the 1960s, have become increasingly popular. In fact, as pointed out by the ICTAC kinetics project in 2000 (4), valid kinetic parameters can be derived by both isoconversional and model-fitting methods as long as a diverse set of thermal histories is used to derive the kinetic parameters. The current paper extends the understanding from that project to give a better appreciation of the strengths and weaknesses of isoconversional and model-fitting approaches. Examples are given from a variety of sources, including the former and current ICTAC round-robin exercises, data sets for materials of interest, and simulated data sets.
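The Friedman method cited above estimates activation energy without assuming a reaction model: at a fixed conversion, ln(rate) versus 1/T across several thermal histories has slope -E/R. A self-contained sketch on synthetic first-order data; the activation energy, pre-exponential factor, and heating rates are invented for illustration:

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
E_TRUE = 150e3   # activation energy used to generate the synthetic data, J/mol
A = 1e12         # pre-exponential factor, 1/s

def simulate(beta, t0=300.0, t1=900.0, n=60000):
    """First-order conversion curve alpha(T) at constant heating rate beta (K/s)."""
    T = np.linspace(t0, t1, n)
    k = A * np.exp(-E_TRUE / (R * T))
    # d(alpha)/dT = (k/beta)(1 - alpha)  =>  alpha = 1 - exp(-Int k dT / beta)
    integral = np.cumsum(k) * (T[1] - T[0]) / beta
    return T, 1.0 - np.exp(-integral)

# Friedman method: at fixed conversion, ln(rate) vs 1/T has slope -E/R.
alpha_star = 0.5
inv_T, ln_rate = [], []
for beta in (0.02, 0.05, 0.1, 0.2):              # heating rates, K/s
    T, alpha = simulate(beta)
    T_star = np.interp(alpha_star, alpha, T)     # temperature reaching alpha_star
    rate = A * np.exp(-E_TRUE / (R * T_star)) * (1.0 - alpha_star)
    inv_T.append(1.0 / T_star)
    ln_rate.append(np.log(rate))

E_est = -np.polyfit(inv_T, ln_rate, 1)[0] * R    # recovered activation energy
```

Because several heating rates are used, the slope recovers E without committing to a reaction model, which is exactly the diversity-of-thermal-histories point made by the ICTAC project.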
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter, called Lambda, which describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automate a number of tasks, including contour recognition, optimization of the contour fit via hill-climbing, derivation of the path curves, computation of Lambda, and blinding of the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2015-01-01
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…
Baghani, Ali; Salcudean, Septimiu; Honarvar, Mohammad; Sahebjavaher, Ramin S; Rohling, Robert; Sinkus, Ralph
2011-08-01
In this paper, a novel approach to the problem of elasticity reconstruction is introduced. In this approach, the solution of the wave equation is expanded as a sum of waves travelling in different directions sharing a common wave number. In particular, the solutions for the scalar and vector potentials which are related to the dilatational and shear components of the displacement respectively are expanded as sums of travelling waves. This solution is then used as a model and fitted to the measured displacements. The value of the shear wave number which yields the best fit is then used to find the elasticity at each spatial point. The main advantage of this method over direct inversion methods is that, instead of taking the derivatives of noisy measurement data, the derivatives are taken on the analytical model. This improves the results of the inversion. The dilatational and shear components of the displacement can also be computed as a byproduct of the method, without taking any derivatives. Experimental results show the effectiveness of this technique in magnetic resonance elastography. Comparisons are made with other state-of-the-art techniques. PMID:21813354
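The fitting idea above, scanning a shared wave number and keeping the value whose travelling-wave model best matches the measured displacement, can be sketched in one dimension. The density, excitation frequency, grid, and synthetic displacement below are illustrative assumptions, not the paper's experimental values:

```python
import numpy as np

def fit_wavenumber(x, u, k_grid):
    """For each candidate wave number k, fit u ~ a*cos(kx) + b*sin(kx)
    by linear least squares and return the k with the smallest residual.
    Derivatives are never taken on the (noisy) data, only amplitudes are
    fitted, which is the robustness argument made in the paper."""
    best_k, best_res = k_grid[0], np.inf
    for k in k_grid:
        G = np.column_stack([np.cos(k * x), np.sin(k * x)])
        coef = np.linalg.lstsq(G, u, rcond=None)[0]
        res = np.sum((u - G @ coef) ** 2)
        if res < best_res:
            best_res, best_k = res, k
    return best_k

rho = 1000.0                        # assumed tissue density, kg/m^3
omega = 2.0 * np.pi * 100.0         # assumed 100 Hz mechanical excitation
x = np.linspace(0.0, 0.1, 400)      # 10 cm measurement line, metres
k_true = 150.0                      # rad/m, used to synthesize "measurements"
u = 1e-6 * np.cos(k_true * x + 0.3)
k_hat = fit_wavenumber(x, u, np.linspace(50.0, 300.0, 2001))
mu = rho * omega ** 2 / k_hat ** 2  # shear modulus from the fitted wave number
```

With the shear wave number in hand, elasticity follows algebraically from mu = rho * omega^2 / k^2 at each spatial point.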
Fit for purpose? Introducing a rational priority setting approach into a community care setting.
Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale
2016-06-20
Purpose - Program budgeting and marginal analysis (PBMA) is a priority setting approach that assists decision makers with allocating resources. Previous PBMA work establishes its efficacy and indicates that contextual factors complicate priority setting, which can hamper PBMA effectiveness. The purpose of this paper is to gain qualitative insight into PBMA effectiveness. Design/methodology/approach - A Canadian case study of PBMA implementation. Data consist of decision-maker interviews pre (n=20), post year-1 (n=12) and post year-2 (n=9) of PBMA to examine perceptions of baseline priority setting practice vis-à-vis desired practice, and perceptions of PBMA usability and acceptability. Findings - Fit emerged as a key theme in determining PBMA effectiveness. Fit herein refers to being of suitable quality and form to meet the intended purposes and needs of the end-users, and includes desirability, acceptability, and usability dimensions. Results confirm decision-maker desire for rational approaches like PBMA. However, most participants indicated that the timing of the exercise and the form in which PBMA was applied were not well suited to this case study. Participant acceptance of and buy-in to PBMA changed during the study: a leadership change, limited organizational commitment, and concerns with organizational capacity were key barriers to PBMA adoption and thereby effectiveness. Practical implications - These findings suggest that a potential way forward includes adding a contextual readiness/capacity assessment stage to PBMA, recognizing organizational complexity, and considering incremental adoption of PBMA's approach. Originality/value - These insights help us to better understand and work with priority setting conditions to advance evidence-informed decision making.
ERIC Educational Resources Information Center
Voydanoff, Patricia
2005-01-01
Using person-environment fit theory, this article formulates a conceptual model that links work, family, and boundary-spanning demands and resources to work and family role performance and quality. Linking mechanisms include 2 dimensions of perceived work-family fit (work demands--family resources fit and family demands--work resources fit) and a…
Aldridge, Cameron L; Boyce, Mark S
2007-03-01
Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Secondly, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel
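The two-stage logic above, crossing an occurrence model with a survival (risk) model to separate source habitat from attractive sinks, reduces to a simple decision rule once both model outputs are in hand. A toy sketch with invented threshold values (the study's actual models are fitted logistic and Cox regressions, not reproduced here):

```python
def classify_habitat(p_use, hazard, use_thresh=0.5, risk_thresh=1.0):
    """Toy two-stage habitat classification.

    p_use: relative probability of use from an occurrence model.
    hazard: relative risk score from a proportional-hazards model.
    Thresholds are illustrative; in practice they would be chosen
    from the fitted models' distributions across the landscape.
    """
    if p_use >= use_thresh and hazard < risk_thresh:
        return "source"            # selected and relatively safe
    if p_use >= use_thresh:
        return "attractive sink"   # selected but risky (ecological trap)
    return "non-habitat"
```

The "attractive sink" branch is the ecological-trap case the authors flag: habitat animals select despite elevated mortality risk.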
Pluto and Charon Color Light Curves from New Horizons on Approach
NASA Astrophysics Data System (ADS)
Ennico, Kimberly; Howett, C. J. A.; Olkin, C. B.; Reuter, D. C.; Buratti, B. J.; Buie, M. W.; Grundy, W. M.; Parker, A. H.; Zangari, A. M.; Binzel, R. P.; Cook, J. C.; Cruikshank, D. P.; Dalle Ore, C. M.; Earle, A. M.; Jennings, D. E.; Linscott, I. R.; Parker, J. Wm.; Protopapa, S.; Singer, K. N.; Spencer, J. R.; Stern, S. A.; Tsang, C. C. C.; Verbiscer, A. J.; Weaver, H. A.; Young, L. A.
2015-11-01
On approach to the Pluto system, New Horizons’ Ralph Instrument’s Multicolor Visible Imaging Camera (MVIC) observed Pluto and Charon, spatially separated, between April 9 and June 23, 2015. In this period, Pluto and Charon were observed to transition from unresolved objects to resolved and their integrated disk intensities were measured in four MVIC filters: blue (400-550 nm), red (540-700 nm), near-infrared (780-975 nm), and methane (860-910 nm). The measurement suite sampled the bodies over all longitudes. We will present the color rotational light curves for Pluto and Charon and compare them to previous (Buie, M. et al. 2010 AJ 139, 1117; Buratti, B.J. et al 2015 ApJ 804, L6) and concurrent ground-based BVR monitoring. We will also compare these data to color images of the encounter hemisphere taken during New Horizons’ July 14, 2015 Pluto and Charon flyby, as this data set provides a unique bridge between Pluto & Charon as viewed as astronomical targets versus the complex worlds that early data from New Horizons has revealed them to be. This work was supported by NASA’s New Horizons project.
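Rotational light curves like those described above are built by phase-folding disk-integrated fluxes on the body's rotation period. A minimal sketch; the 6.3872-day period used in the example is Pluto's well-known rotation period, quoted only as an illustrative value, and the binning scheme is our own:

```python
import numpy as np

def fold_lightcurve(t_days, flux, period_days, n_bins=12):
    """Phase-fold flux measurements on a rotation period and average
    within equal phase bins, giving a rotational light curve.

    t_days, flux: NumPy arrays of observation times and fluxes.
    """
    phase = (t_days / period_days) % 1.0
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, bins) - 1
    return np.array([flux[idx == b].mean() for b in range(n_bins)])
```

Running this in each filter would yield per-color rotational curves analogous to the MVIC blue/red/near-infrared/methane curves described above.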
Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus
2015-09-01
Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define for the given high-dimensional spectroscopic data a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. PMID:26388368
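The coarse-to-fine strategy can be sketched with a plain alternating-least-squares factorization standing in for the constrained multivariate curve resolution step. The subsampling factor, level count, SVD-based initialization, and ALS details are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def nmf_als(D, r, C0=None, iters=200):
    """Alternating least squares with nonnegativity clipping, standing in
    for the constrained pure-component factorization D ~ C @ S
    (C: concentration profiles, S: pure component spectra)."""
    if C0 is None:
        # deterministic start: absolute values of the leading left singular vectors
        C0 = np.abs(np.linalg.svd(D, full_matrices=False)[0][:, :r])
    C = C0
    for _ in range(iters):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

def multires_factorize(D, r, levels=3):
    """Solve the coarsest (most subsampled) spectral problem first, then
    reuse the concentration factor C as the initial value one level finer,
    mirroring the paper's multiresolution idea."""
    C = None
    for lv in reversed(range(levels)):   # coarsest level first
        Dl = D[:, ::2 ** lv]             # subsample the spectral axis
        C, S = nmf_als(Dl, r, C0=C)
    return C, S
```

The coarse levels are cheap because the spectral dimension shrinks geometrically, and the carried-over C gives the refined levels a warm start, which is the claimed source of the convergence acceleration.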
Introducing a Bayesian Approach to Determining Degree of Fit With Existing Rorschach Norms.
Giromini, Luciano; Viglione, Donald J; McCullaugh, Joseph
2015-01-01
This article offers a new methodological approach to investigate the degree of fit between an independent sample and 2 existing sets of norms. Specifically, with a new adaptation of a Bayesian method, we developed a user-friendly procedure to compare the mean values of a given sample to those of 2 different sets of Rorschach norms. To illustrate our technique, we used a small, U.S. community sample of 80 adults and tested whether it resembled more closely the standard Comprehensive System norms (CS 600; Exner, 2003) or a recently introduced, internationally based set of Rorschach norms (Meyer, Erdberg, & Shaffer, 2007). Strengths and limitations of this new statistical technique are discussed. PMID:25257792
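One simple way to quantify "degree of fit" between a sample and two candidate norm sets is a posterior model probability for the sample mean under each set. This sketch assumes normal sampling of the mean with known normative SDs and equal priors; the numeric values in the example are invented, not the published CS 600 or international reference values:

```python
import math

def log_likelihood(xbar, n, mu, sd):
    """Log-likelihood of a sample mean under Normal(mu, sd/sqrt(n))."""
    se = sd / math.sqrt(n)
    return -0.5 * math.log(2.0 * math.pi * se * se) - (xbar - mu) ** 2 / (2.0 * se * se)

def posterior_prob_norm_a(xbar, n, mu_a, sd_a, mu_b, sd_b, prior_a=0.5):
    """Posterior probability that the sample mean was drawn from norm set A
    rather than norm set B (equal prior odds by default)."""
    la = math.exp(log_likelihood(xbar, n, mu_a, sd_a))
    lb = math.exp(log_likelihood(xbar, n, mu_b, sd_b))
    return prior_a * la / (prior_a * la + (1.0 - prior_a) * lb)
```

Applied per variable, such probabilities indicate which norm set a community sample of n = 80 resembles more closely; a full treatment would also handle the multivariate structure of the scores.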
Lifting a veil on diversity: a Bayesian approach to fitting relative-abundance models.
Golicher, Duncan J; O'Hara, Robert B; Ruíz-Montoya, Lorena; Cayuela, Luis
2006-02-01
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation. PMID:16705973
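The MCMC fitting described above can be illustrated with the log-series model, whose single parameter x (0 < x < 1) can be sampled with a random-walk Metropolis step under a flat prior. The abundance data in the example are invented; a full analysis would, as the paper does, build a hierarchical model with Poisson sampling error:

```python
import math
import random

def loglik_logseries(counts, x):
    """Log-likelihood of per-species abundances under a log-series model:
    P(n) = x**n / (n * c) with normalizing constant c = -ln(1 - x)."""
    c = -math.log(1.0 - x)
    return sum(n * math.log(x) - math.log(n) - math.log(c) for n in counts)

def metropolis_x(counts, n_iter=5000, step=0.02, seed=1):
    """Random-walk Metropolis sampler for x with a flat prior on (0, 1)."""
    rng = random.Random(seed)
    x, ll = 0.5, loglik_logseries(counts, 0.5)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        if 0.0 < prop < 1.0:                    # flat prior support
            llp = loglik_logseries(counts, prop)
            if math.log(rng.random()) < llp - ll:
                x, ll = prop, llp               # accept the proposal
        samples.append(x)
    return samples
```

The retained samples (after burn-in) approximate the posterior of x, from which credible intervals for derived diversity indices follow directly.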
Exploring Person Fit with an Approach Based on Multilevel Logistic Regression
ERIC Educational Resources Information Center
Walker, A. Adrienne; Engelhard, George, Jr.
2015-01-01
The idea that test scores may not be valid representations of what students know, can do, and should learn next is well known. Person fit provides an important aspect of validity evidence. Person fit analyses at the individual student level are not typically conducted and person fit information is not communicated to educational stakeholders. In…
Johann, C; Garidel, P; Mennicke, L; Blume, A
1996-01-01
A simulation program using least-squares minimization was developed to calculate and fit heat capacity (cp) curves to experimental thermograms of dilute aqueous dispersions of phospholipid mixtures determined by high-sensitivity differential scanning calorimetry. We analyzed cp curves and phase diagrams of the pseudobinary aqueous lipid systems 1,2-dimyristoyl-sn-glycero-3-phosphatidylglycerol/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPG/DPPC) and 1,2-dimyristoyl-sn-glycero-3-phosphatidic acid/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPA/DPPC) at pH 7. The simulation of the cp curves is based on regular solution theory using two nonideality parameters, rho g and rho l, for symmetric nonideal mixing in the gel and the liquid-crystalline phases. The broadening of the cp curves owing to limited cooperativity is incorporated into the simulation by convolution of the cp curves calculated for infinite cooperativity with a broadening function derived from a simple two-state transition model with the cooperative unit size n = delta H_vH/delta H_cal as an adjustable parameter. The nonideality parameters and the cooperative unit size turn out to be functions of composition. In a second step, phase diagrams were calculated and fitted to the experimental data by use of regular solution theory with four different model assumptions. The best fits were obtained with a four-parameter model based on nonsymmetric, nonideal mixing in both phases. The simulations of the phase diagrams show that the absolute values of the nonideality parameters can be changed in a certain range without large effects on the shape of the phase diagram as long as the difference between the nonideality parameters rho g for the gel and rho l for the liquid-crystalline phase remains constant. The miscibility in DMPG/DPPC and DMPA/DPPC mixtures differs remarkably because, for DMPG/DPPC, delta rho = rho l - rho g is negative, whereas for DMPA/DPPC this difference is positive. For DMPA/DPPC, this
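The broadening model described above, a two-state transition whose peak sharpness is set by the cooperative unit size n = delta H_vH/delta H_cal, has a closed-form excess heat capacity that such a fitting program evaluates repeatedly: cp(T) = dH_cal * dH_vH * K / (R T^2 (1+K)^2) with K(T) = exp(-(dH_vH/R)(1/T - 1/Tm)). A sketch with invented Tm and enthalpy values (not the DMPG/DPPC fit parameters):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def cp_two_state(T, Tm, dH_cal, n):
    """Excess heat capacity of a two-state transition.

    Tm: midpoint temperature (K); dH_cal: calorimetric enthalpy (J/mol);
    n: cooperative unit size, so the van't Hoff enthalpy is n * dH_cal.
    Larger n gives a narrower peak; finite n is the limited-cooperativity
    broadening the fitting program models.
    """
    dH_vH = n * dH_cal
    K = np.exp(-(dH_vH / R) * (1.0 / T - 1.0 / Tm))
    return dH_cal * dH_vH * K / (R * T ** 2 * (1.0 + K) ** 2)
```

The curve peaks essentially at Tm and its integral over the transition returns the calorimetric enthalpy, which is why dH_cal and the peak width (via n) can be fitted independently.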
A Unified Conformational Selection and Induced Fit Approach to Protein-Peptide Docking
Trellet, Mikael; Melquiond, Adrien S. J.; Bonvin, Alexandre M. J. J.
2013-01-01
Protein-peptide interactions are vital for the cell. They mediate, inhibit or serve as structural components in nearly 40% of all macromolecular interactions, and are often associated with diseases, making them interesting leads for protein drug design. In recent years, large-scale technologies have enabled exhaustive studies on the peptide recognition preferences for a number of peptide-binding domain families. Yet, the paucity of data regarding their molecular binding mechanisms together with their inherent flexibility makes the structural prediction of protein-peptide interactions very challenging. This leaves flexible docking as one of the few amenable computational techniques to model these complexes. We present here an ensemble, flexible protein-peptide docking protocol that combines conformational selection and induced fit mechanisms. Starting from an ensemble of three peptide conformations (extended, α-helix, polyproline-II), flexible docking with HADDOCK generates 79.4% of high-quality models for bound/unbound and 69.4% for unbound/unbound docking when tested against the largest protein-peptide complex benchmark dataset available to date. Conformational selection at the rigid-body docking stage successfully recovers the most relevant conformation for a given protein-peptide complex, and the subsequent flexible refinement further improves the interface by up to 4.5 Å interface RMSD. Cluster-based scoring of the models results in a selection of near-native solutions in the top three for ∼75% of the successfully predicted cases. This unified conformational selection and induced fit approach to protein-peptide docking should open the route to the modeling of challenging systems such as disorder-order transitions taking place upon binding, significantly expanding the applicability limit of biomolecular interaction modeling by docking. PMID:23516555
NASA Astrophysics Data System (ADS)
Harrington, Seán T.; Harrington, Joseph R.
2013-03-01
This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the south of Ireland, are underlain by sandstones, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. In such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by fitting a power curve to the untransformed data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are produced and compared to the turbidity-based loads for each river. Loads are also estimated using stage- and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage-separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% on the River Owenabue. The average monthly error on the River Bandon is -12%, with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on
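The rating-curve construction described above amounts to ordinary least squares on log-transformed flow and concentration. A minimal sketch (the function names and synthetic data are illustrative; note that back-transformed log-log fits are known to underestimate loads unless a bias correction is applied):

```python
import math

def fit_rating_curve(flows, concs):
    """Fit SSC = a * Q**b by ordinary least squares on log-transformed data."""
    x = [math.log(q) for q in flows]
    y = [math.log(c) for c in concs]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

def predict(a, b, q):
    """Suspended sediment concentration predicted from flow rate q."""
    return a * q ** b

# Synthetic check: data generated from a known power law are recovered.
flows = [1.0, 2.0, 5.0, 10.0, 20.0]
concs = [0.5 * q ** 1.3 for q in flows]
a, b = fit_rating_curve(flows, concs)
```

Stage- or seasonally separated curves, as assessed in the paper, simply apply the same routine to the corresponding data subsets.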
Blocker, Alexander W.; Protopapas, Pavlos; Alcock, Charles R.
2009-08-20
We present a new approach to the analysis of time symmetry in light curves, such as those in the X-ray at the center of the Scorpius X-1 occultation debate. Our method uses a new parameterization for such events (the bilogistic event profile) and provides a clear, physically relevant characterization of each event's key features. We also demonstrate a Markov chain Monte Carlo algorithm to carry out this analysis, including a novel independence chain configuration for the estimation of each event's location in the light curve. These tools are applied to the Scorpius X-1 light curves presented in Chang et al., providing additional evidence based on the time series that the events detected thus far are most likely not occultations by trans-Neptunian objects.
More basic approach to the analysis of multiple specimen R-curves for determination of Jc
Carlson, K.W.; Williams, J.A.
1980-02-01
Multiple specimen J-R curves were developed for groups of 1T compact specimens with different a/W values and depths of side grooving. The purpose of this investigation was to determine Jc (J at onset of crack extension) for each group. Judicious selection of points on the load versus load-line deflection record at which to unload and heat tint specimens permitted direct observation of the approximate onset of crack extension. It was found that the presently recommended procedure for determining Jc from multiple specimen R-curves, which is being considered for standardization, consistently yielded nonconservative Jc values. A more basic approach to analyzing multiple specimen R-curves is presented, applied, and discussed. This analysis determined Jc values that closely corresponded to the actual observed onset of crack extension.
Zhang, J George; Ho, Thuy; Callendrello, Alanna L; Clark, Robert J; Santone, Elizabeth A; Kinsman, Sarah; Xiao, Deqing; Fox, Lisa G; Einolf, Heidi J; Stresser, David M
2014-09-01
Cytochrome P450 (P450) induction is often considered a liability in drug development. Using calibration curve-based approaches, we assessed the induction parameters R3 (a term indicating the amount of P450 induction in the liver, expressed as a ratio between 0 and 1), relative induction score (RIS), Cmax/EC50, and area under the curve (AUC)/F2 (F2 being the concentration causing a 2-fold increase from the baseline of the dose-response curve), derived from concentration-response curves of CYP3A4 mRNA and enzyme activity data in vitro, as predictors of CYP3A4 induction potential in vivo. Plated cryopreserved human hepatocytes from three donors were treated with 20 test compounds, including several clinical inducers and noninducers of CYP3A4. After the 2-day treatment, CYP3A4 mRNA levels and testosterone 6β-hydroxylase activity were determined by real-time reverse transcription polymerase chain reaction and liquid chromatography-tandem mass spectrometry analysis, respectively. Our results demonstrated a strong and predictive relationship between the extent of midazolam AUC change in humans and the various parameters calculated from both CYP3A4 mRNA and enzyme activity. The relationships exhibited with non-midazolam in vivo probes were, in aggregate, unsatisfactory. In general, the models yielded better fits when unbound rather than total plasma Cmax was used to calculate the induction parameters, as evidenced by higher R² and lower root mean square error (RMSE) and geometric mean fold error. With midazolam, the R3 cut-off value of 0.9, as suggested by US Food and Drug Administration guidance, effectively categorized strong inducers but was less effective in classifying midrange or weak inducers. This study supports the use of calibration curves generated from in vitro mRNA induction response curves to predict CYP3A4 induction potential in humans. With the caveat that most compounds evaluated here were not strong inhibitors of enzyme activity, testosterone 6β-hydroxylase activity was
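The induction parameters named above can be computed once Emax and EC50 have been fitted to the concentration-response data. A hedged sketch, assuming the FDA-style definition R3 = 1/(1 + d·Emax·C/(C + EC50)) with d = 1 and RIS = Emax·Cmax,u/(EC50 + Cmax,u); the function and variable names are illustrative, not the study's code:

```python
def r3(conc, emax, ec50, d=1.0):
    """FDA-style R3 term from a fitted Emax model: values near 1 predict
    little induction; values below the 0.9 cut-off flag a potential inducer.
    (emax and ec50 are assumed to come from a prior curve fit.)"""
    return 1.0 / (1.0 + d * emax * conc / (conc + ec50))

def relative_induction_score(cmax_u, emax, ec50):
    """RIS = Emax * Cmax,u / (EC50 + Cmax,u), computed with unbound Cmax."""
    return emax * cmax_u / (ec50 + cmax_u)

# A compound with Emax = 9 and EC50 = 1 uM evaluated at unbound Cmax = 1 uM:
flagged = r3(1.0, 9.0, 1.0) < 0.9
```

Using unbound rather than total Cmax in these expressions is exactly the choice the abstract reports as giving better fits.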
Naegelen, Isabelle; Beaume, Nicolas; Plançon, Sébastien; Schenten, Véronique; Tschirhart, Eric J.; Bréchard, Sabrina
2015-01-01
Neutrophils participate in the maintenance of host integrity by releasing various cytotoxic proteins during degranulation. Due to recent advances, a major role has been attributed to neutrophil-derived cytokine secretion in the initiation, exacerbation, and resolution of inflammatory responses. Because the release of neutrophil-derived products orchestrates the action of other immune cells at the infection site and, thus, can contribute to the development of chronic inflammatory diseases, we aimed to investigate in more detail the spatiotemporal regulation of neutrophil-mediated release mechanisms of proinflammatory mediators. Purified human neutrophils were stimulated for different time points with lipopolysaccharide. Cells and supernatants were analyzed by flow cytometry techniques and used to establish secretion profiles of granules and cytokines. To analyze the link between cytokine release and degranulation time series, we propose an original strategy based on linear fitting, which may be used as a guideline, to (i) define the relationship of granule proteins and cytokines secreted to the inflammatory site and (ii) investigate the spatial regulation of neutrophil cytokine release. The model approach presented here aims to predict the correlation between neutrophil-derived cytokine secretion and degranulation and may easily be extrapolated to investigate the relationship between other types of time series of functional processes. PMID:26579547
Gender and Marital Satisfaction Early in Marriage: A Growth Curve Approach
ERIC Educational Resources Information Center
Kurdek, Lawrence A.
2005-01-01
The purpose of this study is to assess differences between husbands and wives (N= 526 couples at the first assessment) on (a) growth curves over the first 4 years of marriage for psychological distress, marriage-specific appraisals, spousal interactions, social support, and marital satisfaction; (b) the strength of intraspouse links and…
R-Curve Approach to Describe the Fracture Resistance of Tool Steels
NASA Astrophysics Data System (ADS)
Picas, Ingrid; Casellas, Daniel; Llanes, Luis
2016-06-01
This work addresses the events involved in the fracture of tool steels, aiming to understand the effect of primary carbides, inclusions, and the metallic matrix on their effective fracture toughness and strength. Microstructurally different steels were investigated. It is found that cracks nucleate on carbides or inclusions at stress values lower than the fracture resistance. It is experimentally evidenced that such cracks exhibit increasing growth resistance as they progressively extend, i.e., R-curve behavior. Ingot-cast steels present a rising R-curve, which implies that the effective toughness developed by small cracks is lower than that determined with long artificial cracks. On the other hand, cracks grow steadily in the powder metallurgy tool steel, yielding a flat R-curve. Accordingly, effective toughness for this material is mostly independent of crack size. Thus, differences in fracture toughness values measured using short and long cracks must be considered when assessing the fracture resistance of tool steels, especially when tool performance is controlled by short cracks. Hence, material selection for tools or development of new steel grades should take R-curve concepts into consideration, in order to avoid unexpected tool failures or to optimize the microstructural design of tool steels, respectively.
Approach-Avoidance Motivational Profiles in Early Adolescents to the PACER Fitness Test
ERIC Educational Resources Information Center
Garn, Alex; Sun, Haichun
2009-01-01
The use of fitness testing is a practical means for measuring components of health-related fitness, but there is currently substantial debate over the motivating effects of these tests. Therefore, the purpose of this study was to examine the cross-fertilization of achievement and friendship goal profiles for early adolescents involved in the…
ERIC Educational Resources Information Center
Pargament, Kenneth I.; Sweeney, Patrick J.
2011-01-01
This article describes the development of the spiritual fitness component of the Army's Comprehensive Soldier Fitness (CSF) program. Spirituality is defined in the human sense as the journey people take to discover and realize their essential selves and higher order aspirations. Several theoretically and empirically based reasons are articulated…
Fabijańska, Anna
2016-06-01
This paper considers the problem of automatic quantification of DCE-MRI curve shape patterns. In particular, a semi-quantitative approach that classifies DCE time-intensity curves into clusters representing the three main shape patterns is proposed. The approach combines heuristic rules with a naive Bayes classifier. Descriptive parameters are first derived from pixel-by-pixel analysis of the DCE time-intensity curves and then used to recognise the curves that unambiguously represent the three main shape patterns. These curves are next used to train the naive Bayes classifier intended to classify the remaining curves within the dataset. Results of applying the proposed approach to DCE-MRI scans of patients with prostate cancer are presented and discussed. Additionally, the overall performance of the approach is estimated through comparison with the ground truth provided by an expert. PMID:27107675
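The two-stage scheme above, heuristic rules to label the unambiguous curves and then a naive Bayes classifier trained on them, can be sketched with a minimal Gaussian naive Bayes over curve-shape descriptors (the class labels and the single descriptor here are illustrative stand-ins, not the paper's):

```python
import math
from collections import defaultdict

def train_gnb(samples, labels):
    """Per-class prior, mean and variance of each curve descriptor."""
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    model = {}
    for y, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[y] = (n / len(samples), means, varis)
    return model

def classify(model, x):
    """Maximum a posteriori class under the naive (independence) assumption."""
    best, best_lp = None, float("-inf")
    for y, (prior, means, varis) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            lp -= 0.5 * math.log(2 * math.pi * s2) + (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Toy training set: curves reduced to one descriptor (e.g. late-phase slope),
# labelled only for the cases the heuristic rules consider unambiguous.
model = train_gnb([[0.0], [0.1], [1.0], [1.1]],
                  ["washout", "washout", "persistent", "persistent"])
```

The remaining, ambiguous curves in a dataset would then be assigned by `classify`.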
BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies
NASA Astrophysics Data System (ADS)
Han, Yunkun; Han, Zhanwen
2014-11-01
We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks -selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks -selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
A Person-Centered Approach to P-E Fit Questions Using a Multiple-Trait Model.
ERIC Educational Resources Information Center
De Fruyt, Filip
2002-01-01
Employed college students (n=401) completed the Self-Directed Search and NEO Personality Inventory-Revised. Person-environment fit across Holland's six personality types predicted job satisfaction and skill development. Five-Factor Model traits significantly predicted intrinsic career outcomes. Use of the five-factor, person-centered approach to…
Curve aligning approach for gait authentication based on a wearable accelerometer.
Sun, Hu; Yuao, Tao
2012-06-01
Gait authentication based on a wearable accelerometer is a novel biometric which can be used for identity identification, medical rehabilitation and early detection of neurological disorders. The method used for matching gait patterns bears heavily on authentication performance. In this paper, curve aligning is introduced as a new method for matching gait patterns and is compared with correlation and dynamic time warping (DTW). A support vector machine (SVM) is proposed to fuse pattern-matching methods at the decision level. Accelerations collected from the ankles of 22 walking subjects are processed for authentication in our experiments. The fusion of curve aligning with backward-forward accelerations and DTW with vertical accelerations improves authentication performance substantially and consistently. This fusion algorithm was tested repeatedly: its equal error rate (EER) has a mean of 0.794% and a standard deviation of 0.696%, whereas the best of the non-fusion algorithms presented shows an EER of 3.03%. PMID:22621972
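Dynamic time warping, one of the pattern-matching methods compared above, can be sketched in a few lines (a textbook implementation, not the authors' code):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D acceleration sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of insertion, deletion or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A genuine gait sample warps onto the enrolled template with low cost even when individual strides are locally stretched or compressed, which is why elastic measures such as DTW can outperform plain correlation on misaligned cycles.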
3D Modeling of Spectra and Light Curves of Hot Jupiters with PHOENIX; a First Approach
NASA Astrophysics Data System (ADS)
Jiménez-Torres, J. J.
2016-04-01
A detailed global circulation model was used to feed the PHOENIX code and calculate 3D spectra and light curves of hot Jupiters. Cloud-free and dusty radiative fluxes for the planet HD179949b were modeled to show the differences between them. The PHOENIX simulations can explain the broad features of the observed 8 μm light curves, including the fact that the planet-star flux ratio peaks before the secondary eclipse. The PHOENIX reflection spectrum matches the Spitzer secondary-eclipse depth at 3.6 μm and underpredicts the eclipse depths at 4.5, 5.8 and 8.0 μm. These discrepancies result from the chemical composition and suggest the incorporation of different metallicities in future studies.
ROC-curve approach for determining the detection limit of a field chemical sensor.
Fraga, Carlos G; Melville, Angela M; Wright, Bob W
2007-03-01
The detection limit of a field chemical sensor under realistic operating conditions is determined by receiver operating characteristic (ROC) curves. The chemical sensor is an ion mobility spectrometry (IMS) device used to detect a chemical marker in diesel fuel. The detection limit is the lowest concentration of the marker in diesel fuel that achieves the desired true-positive probability (TPP) and false-positive probability (FPP). A TPP of 0.90 and an FPP of 0.10 were selected as acceptable levels for the field sensor in this study. The detection limit under realistic operating conditions is found to be between 2 and 4 ppm (w/w); the upper value is the detection limit under challenging conditions. The ROC-based detection limit is very reliable because it is determined from multiple, repetitive sensor analyses under realistic circumstances. ROC curves also clearly illustrate and gauge the effects that data preprocessing and sampling environments have on the sensor's detection limit.
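Operationally, the ROC-based detection limit reduces to finding the lowest spiked concentration whose replicate analyses reach the target operating point (TPP ≥ 0.90, FPP ≤ 0.10). A sketch with illustrative toy scores (not the study's data):

```python
def tpp_fpp(pos_scores, neg_scores, threshold):
    """One ROC operating point: true- and false-positive probabilities."""
    tpp = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    fpp = sum(s >= threshold for s in neg_scores) / len(neg_scores)
    return tpp, fpp

def detection_limit(runs_by_conc, neg_scores, threshold,
                    tpp_min=0.90, fpp_max=0.10):
    """Lowest concentration whose replicate runs reach the target point."""
    for conc in sorted(runs_by_conc):
        tpp, fpp = tpp_fpp(runs_by_conc[conc], neg_scores, threshold)
        if tpp >= tpp_min and fpp <= fpp_max:
            return conc
    return None

# Toy data: IMS marker scores from spiked (positive) and blank (negative) runs.
neg = [0.1] * 9 + [0.6]                  # one false alarm at threshold 0.5
runs = {2.0: [0.6] * 8 + [0.3, 0.3],     # 2 ppm: TPP = 0.8, below target
        4.0: [0.7] * 9 + [0.3]}          # 4 ppm: TPP = 0.9
limit = detection_limit(runs, neg, threshold=0.5)
```

Sweeping the threshold instead of fixing it traces out the full ROC curve, which is what makes the effect of preprocessing choices visible.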
NASA Astrophysics Data System (ADS)
Zenzerovic, I.; Kropp, W.; Pieringer, A.
2016-08-01
Curve squeal is a strong tonal sound that may arise when a railway vehicle negotiates a tight curve. In contrast to frequency-domain models, time-domain models are able to capture the nonlinear and transient nature of curve squeal. However, these models are computationally expensive due to the fine spatial and temporal discretization they require. In this paper, a computationally efficient engineering model for curve squeal in the time domain is proposed. It is based on a steady-state point-contact model for the tangential wheel/rail contact and a Green's function approach for wheel and rail dynamics. The squeal model also includes a simple model of sound radiation from the railway wheel taken from the literature. A validation of the tangential point-contact model against Kalker's transient variational contact model reveals that the point-contact model performs well within the squeal model up to at least 5 kHz. The proposed squeal model is applied to investigate the influence of lateral creepage, friction and wheel/rail contact position on squeal occurrence and amplitude. The study indicates a significant influence of the wheel/rail contact position on squeal frequencies and amplitudes. Friction and lateral creepage also influence squeal occurrence and amplitude, but only secondarily to the contact position.
Schumacher, Jonathan A; Scott Reading, N; Szankasi, Philippe; Matynia, Anna P; Kelley, Todd W
2015-08-01
Acute myeloid leukemia patients with recurrent cytogenetic abnormalities including inv(16);CBFB-MYH11 and t(15;17);PML-RARA may be assessed by monitoring the levels of the corresponding abnormal fusion transcripts by quantitative reverse transcription-PCR (qRT-PCR). Such testing is important for evaluating the response to therapy and for the detection of early relapse. Existing qRT-PCR methods are well established and in widespread use in clinical laboratories but they are laborious and require the generation of standard curves. Here, we describe a new method to quantitate fusion transcripts in acute myeloid leukemia by qRT-PCR without the need for standard curves. Our approach uses a plasmid calibrator containing both a fusion transcript sequence and a reference gene sequence, representing a perfect normalized copy number (fusion transcript copy number/reference gene transcript copy number; NCN) of 1.0. The NCN of patient specimens can be calculated relative to that of the single plasmid calibrator using experimentally derived PCR efficiency values. We compared the data obtained using the plasmid calibrator method to commercially available assays using standard curves and found that the results obtained by both methods are comparable over a broad range of values with similar sensitivities. Our method has the advantage of simplicity and is therefore lower in cost and may be less subject to errors that may be introduced during the generation of standard curves.
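The single-calibrator calculation can be sketched as an efficiency-corrected ratio in the style of Pfaffl; the exact formula and variable names below are assumptions for illustration, not the authors' published equations:

```python
def normalized_copy_number(e_fus, e_ref,
                           ct_cal_fus, ct_cal_ref,
                           ct_sample_fus, ct_sample_ref,
                           ncn_calibrator=1.0):
    """Efficiency-corrected NCN of a patient specimen relative to the single
    plasmid calibrator, whose fusion/reference ratio is 1.0 by construction.
    e_* are experimentally derived amplification efficiencies
    (2.0 = perfect doubling per cycle); ct_* are threshold cycles."""
    fusion_ratio = e_fus ** (ct_cal_fus - ct_sample_fus)
    reference_ratio = e_ref ** (ct_cal_ref - ct_sample_ref)
    return ncn_calibrator * fusion_ratio / reference_ratio
```

For example, a specimen amplifying the fusion target three cycles earlier than the calibrator but the reference gene only one cycle earlier would, at perfect efficiency, have NCN = 2³/2¹ = 4.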
Phase resetting for a network of oscillators via phase response curve approach.
Efimov, D
2015-02-01
The problem of phase regulation for a population of oscillating systems is considered. The proposed control strategy is based on a phase response curve (PRC) model of an oscillator (the first-order reduced model obtained for a linearized system and inputs of infinitesimal amplitude). It is proven that the control provides phase resetting for the original nonlinear system. Next, the problem of phase resetting for a network of oscillators is considered when a common control input is applied. The performance of the obtained solutions is demonstrated via computer simulation for three different models of circadian/neural oscillators. PMID:25246107
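The first-order PRC reduction mentioned above, dθ/dt = ω + Z(θ)u(t), is straightforward to simulate; a sketch with an illustrative sinusoidal PRC (not one of the paper's oscillator models):

```python
import math

def simulate_phase(prc, u, omega=2 * math.pi, t_end=1.0, dt=1e-3, theta0=0.0):
    """Euler integration of the reduced model dtheta/dt = omega + Z(theta)*u(t)."""
    theta = theta0
    for k in range(int(round(t_end / dt))):
        theta += (omega + prc(theta) * u(k * dt)) * dt
    return theta

# Free-running oscillator: the phase simply advances by omega * t_end.
free = simulate_phase(lambda th: math.sin(th), lambda t: 0.0)
# A brief pulse applied where the PRC is positive advances (resets) the phase.
pulsed = simulate_phase(lambda th: math.sin(th),
                        lambda t: 5.0 if 0.1 < t < 0.2 else 0.0)
```

Applying the same `u` to every oscillator in a population is the common-input setting the abstract describes.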
Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S
2010-07-12
A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and analyzed by MCR-ALS. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, a set of six validation mixtures was used to evaluate the prediction ability of the model. The lack of fit obtained was 4.3%, the explained variance 99.8% and the overall prediction error 5.5%. Tablets of commercial samples were analyzed and the results were validated against the pharmacopoeia method (high-performance liquid chromatography). No significant differences were found (alpha = 0.05) between the reference values and those obtained with the proposed method. Notably, a single chemometric model made it possible to determine both analytes simultaneously. PMID:20630175
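The fit diagnostics quoted above (lack of fit 4.3%, explained variance 99.8%) follow the standard MCR-ALS definitions, computed from the residuals between the measured and reconstructed data:

```python
import math

def lack_of_fit(d, d_hat):
    """MCR-ALS lack of fit (%): residual norm relative to the raw data norm."""
    resid = sum((x - y) ** 2 for x, y in zip(d, d_hat))
    total = sum(x ** 2 for x in d)
    return 100.0 * math.sqrt(resid / total)

def explained_variance(d, d_hat):
    """Explained variance (%), the companion figure of merit."""
    resid = sum((x - y) ** 2 for x, y in zip(d, d_hat))
    total = sum(x ** 2 for x in d)
    return 100.0 * (1.0 - resid / total)
```

Here `d` would be the flattened spectral data matrix and `d_hat` its MCR-ALS reconstruction (the product of the resolved concentration and spectral profiles); a 4.3% lack of fit means the residuals amount to about 4% of the data norm.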
SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH
Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook
2012-04-10
The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
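The FM machinery can be illustrated on a toy straight-line model with Gaussian errors: build F_ij = Σ (∂m/∂θ_i)(∂m/∂θ_j)/σ², invert, and read the 1σ uncertainties off the diagonal (a sketch under these assumptions; the paper's models are full SED grids, not lines):

```python
import math

def fisher_uncertainties(xs, sigma):
    """1-sigma uncertainties for the model m(x) = a + b*x with homoscedastic
    Gaussian errors: F_ij = sum over points of dm/di * dm/dj / sigma**2,
    and the parameter variances are the diagonal of F^-1."""
    f00 = len(xs) / sigma ** 2               # dm/da * dm/da = 1
    f01 = sum(xs) / sigma ** 2               # dm/da * dm/db = x
    f11 = sum(x * x for x in xs) / sigma ** 2
    det = f00 * f11 - f01 * f01
    return math.sqrt(f11 / det), math.sqrt(f00 / det)

# Three observations at x = -1, 0, 1 with unit errors.
sigma_a, sigma_b = fisher_uncertainties([-1.0, 0.0, 1.0], 1.0)
```

Because the forecast needs only model derivatives, not a likelihood scan, it is fast enough to compare many candidate survey configurations, which is the use case the abstract targets.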
Effect of irrigation on the Budyko curve: a process-based stochastic approach
NASA Astrophysics Data System (ADS)
Vico, Giulia; Destouni, Georgia
2015-04-01
Currently, 40% of food production is provided by irrigated agriculture. Irrigation ensures higher and less variable yields, but such water input alters the balance of transpiration and other losses from the soil. Thus, accounting for the impact of irrigation is crucial for the understanding of the local water balance. A probabilistic model of the soil water balance is employed to explore the effects of different irrigation strategies within the Budyko framework. Shifts in the Budyko curve are explained in a mechanistic way. At the field level and assuming unlimited irrigation water, irrigation shifts the Budyko curve upward towards the upper limit imposed by energy availability, even in dry climates. At the watershed scale and assuming that irrigation water is obtained from sources within the same watershed, the application of irrigation over a fraction of the watershed area allows a more efficient use of water resources made available through precipitation. In this case, however, mean transpiration remains upper-bounded by rainfall over the whole watershed.
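A common closed form for the Budyko curve is Fu's parameterization, E/P = 1 + φ − (1 + φ^ω)^(1/ω) with aridity index φ = PET/P; the upward shift under irrigation described above corresponds to a larger catchment parameter ω. (Using Fu's form here is an assumption for illustration; the paper derives its curves mechanistically from a stochastic soil water balance.)

```python
def budyko_fu(aridity, omega=2.6):
    """Fu's form of the Budyko curve: evaporative fraction E/P as a function
    of the aridity index PET/P. omega > 1 is a catchment parameter; larger
    omega pushes the curve up toward the water and energy limits, the
    direction of the irrigation-induced shift described in the abstract."""
    return 1.0 + aridity - (1.0 + aridity ** omega) ** (1.0 / omega)
```

The curve respects both limits by construction: E/P → φ (energy limit) as φ → 0 and E/P → 1 (water limit) as φ grows large.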
Srivastava, Aneesh; Sureka, Sanjoy Kumar; Vashishtha, Saurabh; Agarwal, Shikhar; Ansari, Md Saleh; Kumar, Manoj
2016-01-01
CONTEXT: The retroperitoneoscopic or retroperitoneal (RP) surgical approach has not become as popular as the transperitoneal (TP) one due to its steeper learning curve. AIMS: Our single-institution experience focuses on the feasibility, advantages and complications of retroperitoneoscopic surgeries (RS) performed over the past 10 years. Tips and tricks for overcoming the steep learning curve are discussed and emphasised. SETTINGS AND DESIGN: This study made a retrospective analysis of computerised hospital data of patients who underwent RP urological procedures from 2003 to 2013 at a tertiary care centre. PATIENTS AND METHODS: Between 2003 and 2013, 314 RS were performed for various urological procedures. We analysed the operative time, peri-operative complications, time to return of bowel sounds, length of hospital stay, and the advantages and difficulties involved. Post-operative complications were stratified into five grades using the modified Clavien classification (MCC). RESULTS: RS were successfully completed in 95.5% of patients, with 4% of the procedures electively performed by the combined approach (both RP and TP); 3.2% required open conversion and 1.3% were converted to the TP approach. The most common cause for conversion was bleeding. Mean hospital stay was 3.2 ± 1.2 days and the mean time to return of bowel sounds was 16.5 ± 5.4 h. Of the patients, 1.4% required peri-operative blood transfusion. A total of 16 patients (5%) had post-operative complications, the majority grade I or II as per MCC. The rates of intra-operative and post-operative complications depended on the difficulty of the procedure, but the complications diminished over the years with the increasing experience of the surgeons. CONCLUSION: Retroperitoneoscopy has proven an excellent approach, with certain advantages. The tips and tricks provided and emphasised here should help to minimise the steep learning curve. PMID:27073300
Global-fit approach to the analysis of limb-scanning atmospheric measurements
NASA Astrophysics Data System (ADS)
Carlotti, Massimo
1988-08-01
A method for the retrieval of concentration profiles of atmospheric constituents from spectra recorded by balloon-borne spectrometers with the limb-scanning technique is presented. The method uses a nonlinear least-squares fit procedure to fit the whole concentration profile simultaneously on a limb-scanning sequence of spectra. Its use in interferometric measurements of stratospheric emission is demonstrated, and a comparison is made with results obtained by analyzing the same data set with the onion-peeling method, in which error propagation over the concentrations is taken into account. The global fit yields smaller error bars than the onion-peeling analysis. Computational details are also discussed.
Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach
Hawkley, Louise C.; Capitanio, John P.
2015-01-01
Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationship—as indexed by perceived social isolation (i.e. loneliness)—with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic pituitary–adrenocortical activity, a pro-inflammatory gene expression profile and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and its physiological and psychological underpinnings, in non-human species may be a valuable direction to better understand the persistence of a ‘lonely’ phenotype in social species, and its consequences for health and fitness. PMID:25870400
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review, Cohen and Friedman raised several statistical questions on modeling labor curves. This article illustrates that asking the data to fit a preconceived model, versus letting a sufficiently flexible model fit the observed data, is the main difference in statistical modeling principles between the original Friedman curve and our average labor curve. An evidence-based approach to constructing a labor curve and establishing normal values should allow the statistical model to fit the observed data. In addition, the presence of a deceleration phase in the active phase of an average labor curve has been questioned. Forcing a deceleration phase to be part of the labor curve may artificially raise the apparent speed of progression in the active phase, with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor, because of variation in individual labor patterns and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and the contemporary obstetric population into account.
Souza, Michele; Eisenmann, Joey; Chaves, Raquel; Santos, Daniel; Pereira, Sara; Forjaz, Cláudia; Maia, José
2016-10-01
In this paper, three different statistical approaches were used to investigate short-term tracking of cardiorespiratory and performance-related physical fitness among adolescents. Data were obtained from the Oporto Growth, Health and Performance Study and comprised 1203 adolescents (549 girls) divided into two age cohorts (10-12 and 12-14 years) followed for three consecutive years, with annual assessments. Cardiorespiratory fitness was assessed with the 1-mile run/walk test; the 50-yard dash, standing long jump, handgrip, and shuttle run tests were used to rate performance-related physical fitness. Tracking was expressed in three different ways: auto-correlations, multilevel modelling with crude and adjusted models (for biological maturation, body mass index, and physical activity), and Cohen's kappa (κ), computed in IBM SPSS 20.0, HLM 7.01 and Longitudinal Data Analysis software, respectively. Tracking of physical fitness components was (1) moderate-to-high when described by auto-correlations; (2) low-to-moderate when crude and adjusted models were used; and (3) low according to Cohen's kappa (κ). These results demonstrate that different methods should be considered when describing tracking, since they provide distinct and more comprehensive views of physical fitness stability patterns.
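The kappa-based view of tracking can be illustrated with a toy cross-tabulation of fitness tertiles at two time points; the assignments below are made up, not the Oporto data:

```python
# Sketch of Cohen's kappa as a tracking statistic: chance-corrected
# agreement between categorical fitness groups at two assessments.
import numpy as np

def cohens_kappa(a, b, k):
    """Unweighted Cohen's kappa for two length-n arrays of labels 0..k-1."""
    n = len(a)
    obs = np.sum(a == b) / n                     # observed agreement
    pa = np.bincount(a, minlength=k) / n         # marginal of rater/time 1
    pb = np.bincount(b, minlength=k) / n         # marginal of rater/time 2
    exp = np.sum(pa * pb)                        # agreement expected by chance
    return (obs - exp) / (1 - exp)

t1 = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])    # tertile at first assessment
t2 = np.array([0, 1, 1, 1, 2, 2, 0, 0, 2, 1])    # tertile one year later
kappa = cohens_kappa(t1, t2, 3)
print(round(kappa, 2))
```

Values near 0 indicate chance-level stability, values near 1 strong tracking.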
NASA Astrophysics Data System (ADS)
Guo, Feng; Zhang, Hong; Hu, Hai-Quan; Cheng, Xin-Lu; Zhang, Li-Yan
2015-11-01
We investigate the Hugoniot curve, shock-particle velocity relations, and Chapman-Jouguet conditions of the hot dense system through molecular dynamics (MD) simulations. The detailed pathways from crystal nitromethane to the reacted state under shock compression are simulated. A phase transition of the N2 and CO mixture is found at about 10 GPa, mainly because dissociation of the C-O bond and formation of the C-C bond start at 10.0-11.0 GPa. The unreacted-state simulations of nitromethane are consistent with shock Hugoniot data. The complete pathway from the unreacted to the reacted state is discussed. Through chemical species analysis, we find that C-N bond breaking is the main event of shock-induced nitromethane decomposition. Project supported by the National Natural Science Foundation of China (Grant No. 11374217) and the Shandong Provincial Natural Science Foundation, China (Grant No. ZR2014BQ008).
Pre-Service Music Teachers' Satisfaction: Person-Environment Fit Approach
ERIC Educational Resources Information Center
Perkmen, Serkan; Cevik, Beste; Alkan, Mahir
2012-01-01
Guided by three theoretical frameworks in vocational psychology (the theory of work adjustment, two-factor theory, and value discrepancy theory), this study investigated Turkish pre-service music teachers' values and the role of fit between person and environment in understanding vocational satisfaction. Participants…
Health and Fitness Courses in Higher Education: A Historical Perspective and Contemporary Approach
ERIC Educational Resources Information Center
Bjerke, Wendy
2013-01-01
The prevalence of obesity among 18- to 24-year-olds has steadily increased. Given that the majority of young American adults are enrolled in colleges and universities, the higher education setting could be an appropriate environment for health promotion programs. Historically, health and fitness in higher education have been provided via…
Pellicer-Chenoll, Maite; Garcia-Massó, Xavier; Morales, Jose; Serra-Añó, Pilar; Solana-Tramunt, Mònica; González, Luis-Millán; Toca-Herrera, José-Luis
2015-06-01
The relationship among physical activity, physical fitness and academic achievement in adolescents has been widely studied; however, controversy concerning this topic persists. According to the available literature, the methods used thus far to analyse the relationship between these variables have mostly been traditional linear analyses. The aim of this study was to perform a visual analysis of this relationship with self-organizing maps and to monitor the subjects' evolution during the 4 years of secondary school. Four hundred and forty-four students participated in the study. The physical activity and physical fitness of the participants were measured, and the participants' grade point averages were obtained from the five participating institutions. Four main clusters representing two primary student profiles, with few differences between boys and girls, were observed. The clustering demonstrated that students with higher energy expenditure and better physical fitness exhibited lower body mass index (BMI) and higher academic performance, whereas adolescents with lower energy expenditure exhibited worse physical fitness, higher BMI and lower academic performance. With respect to the evolution of the students during the 4 years, ∼25% of the students originally clustered in a negative profile moved to a positive profile, and there was no movement in the opposite direction. PMID:25953972
On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis
ERIC Educational Resources Information Center
Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas
2011-01-01
The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…
Lu, Yehu; Song, Guowen; Li, Jun
2014-11-01
Garment fit plays an important role in protective performance, comfort and mobility. The purpose of this study was to quantify the air gap between clothing and the human body in order to characterize three-dimensional (3-D) garment fit using a 3-D body scanning technique. A method for processing the scanned data was developed to investigate the size and distribution of the air gap between the clothing and the body. Mesh models formed from the nude and clothed body were aligned, superimposed and sectioned using Rapidform software. The air gap size and distribution over the body surface were analyzed, and the total air volume was calculated. The effects of fabric properties and garment size on air gap distribution were explored. The results indicated that the average air gap of well-fitting clothing was around 25-30 mm and that the overall air gap distribution was similar across garments. The air gap was unevenly distributed over the body and was strongly associated with body part, fabric properties and garment size. The research will help explain overall clothing fit and its association with protection, thermal and movement comfort, and provides guidelines for clothing engineers seeking to improve thermal performance and reduce physiological burden.
ERIC Educational Resources Information Center
Beheshti, Behzad; Desmarais, Michel C.
2015-01-01
This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Lee, M. G.
1985-01-01
The navigation and flight director guidance systems implemented in the NASA/FAA helicopter microwave landing system (MLS) curved-approach flight test program are described. Flight tests were conducted at the U.S. Navy's Crows Landing facility, using the NASA Ames UH-1H helicopter equipped with the V/STOLAND avionics system. The purpose of these tests was to investigate the feasibility of flying complex, curved and descending approaches to a landing using MLS flight director guidance. A description of the navigation aids, the avionics system, cockpit instrumentation and on-board navigation equipment used for the flight test is provided. Three generic reference flight paths were developed and flown during the test: U-turn, S-turn and straight-in profiles. These profiles and their geometries are described in detail. A three-cue flight director was implemented on the helicopter, and the formulation and implementation of the flight director laws are presented. Performance data and analysis are presented for one pilot conducting the flight director approaches.
Hwang, Beom Seuk; Chen, Zhen
2015-01-01
In estimating ROC curves of multiple tests, some a priori constraints may exist, either between the healthy and diseased populations within a test or between tests within a population. In this paper, we proposed an integrated modeling approach for ROC curves that jointly accounts for stochastic and variability orders. The stochastic order constrains the distributional centers of the diseased and healthy populations within a test, while the variability order constrains the distributional spreads of the tests within each of the populations. Under a Bayesian nonparametric framework, we used features of the Dirichlet process mixture to incorporate these order constraints in a natural way. We applied the proposed approach to data from the Physician Reliability Study that investigated the accuracy of diagnosing endometriosis using different clinical information. To address the issue of no gold standard in the real data, we used a sensitivity analysis approach that exploited diagnosis from a panel of experts. To demonstrate the performance of the methodology, we conducted simulation studies with varying sample sizes, distributional assumptions and order constraints. Supplementary materials for this article are available online. PMID:26839441
NASA Astrophysics Data System (ADS)
Askari, H.; Esmailzadeh, E.; Barari, A.
2015-09-01
A novel procedure for the nonlinear vibration analysis of curved beams is presented. Non-Uniform Rational B-Splines (NURBS) are combined with Euler-Bernoulli beam theory to define the curvature of the structure. The governing equation of motion and a general frequency formula, expressed in NURBS variables and applicable to any type of curvature, are developed. The Galerkin procedure is implemented to obtain the nonlinear ordinary differential equation of the curved system, and the method of multiple time scales is used to find the corresponding frequency responses. As a case study, the nonlinear vibration of carbon nanotubes with different shapes of curvature is investigated. The effects of oscillation amplitude and waviness on the natural frequency of the curved nanotube are evaluated, and the primary resonance of the system with respect to variations of different parameters is discussed. The natural frequencies evaluated with the proposed approach are compared with those reported in the literature from molecular dynamics simulations of several types of carbon nanotubes.
Curved descending landing approach guidance and control. M.S. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Crawford, D. J.
1974-01-01
Linear optimal regulator theory is applied to a nonlinear simulation of a transport aircraft performing a helical landing approach. A closed form expression for the quasi-steady nominal flight path is presented along with the method for determining the corresponding constant nominal control inputs. The Jacobian matrices and the weighting matrices in the cost functional are time varying. A method of solving for the optimal feedback gains is reviewed. The control system is tested on several alternative landing approaches using both three and six degree flight path angles. On each landing approach, the aircraft was subjected to large random initial state errors and to randomly directed crosswinds. The system was also tested for sensitivity to changes in the parameters of the aircraft and of the atmosphere. Performance of the optimal controller on all the three degree approaches was very good, and the control system proved to be reasonably insensitive to parametric uncertainties.
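The regulator machinery the abstract applies can be sketched in a drastically simplified, time-invariant form; the double-integrator plant and unit weights below are stand-ins, not the transport-aircraft model with time-varying Jacobians and weighting matrices:

```python
# Hedged sketch of linear optimal regulator (LQR) design: solve the
# algebraic Riccati equation and form the state-feedback gain.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # state weighting in the cost
R = np.array([[1.0]])                    # control weighting

P = solve_continuous_are(A, B, Q, R)     # Riccati solution
K = np.linalg.inv(R) @ B.T @ P           # optimal feedback gain, u = -K x
closed_loop = A - B @ K
print(np.round(K, 3))                    # analytically K = [1, sqrt(3)]
```

In the time-varying setting of the flight test, the gains come instead from integrating a Riccati differential equation along the nominal helical path.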
Effect of motion cues during complex curved approach and landing tasks: A piloted simulation study
NASA Technical Reports Server (NTRS)
Scanlon, Charles H.
1987-01-01
A piloted simulation study was conducted to examine the effect of motion cues, using a high-fidelity simulation of a commercial aircraft performing complex approach and landing tasks in the Microwave Landing System (MLS) signal environment. The data from these tests indicate that in a high-complexity MLS approach task with moderate turbulence and wind, the pilot uses motion cues to improve path tracking performance. No significant differences in tracking accuracy were noted for the low- and medium-complexity tasks, regardless of the presence of motion cues. Higher control input rates were measured for all tasks when motion was used. Pilot eye scan, as measured by instrument dwell time, was faster when motion cues were used, regardless of the complexity of the approach task. Pilot comments indicated a preference for motion. With motion cues, pilots appeared to work harder at all levels of task complexity and to improve tracking performance in the most complex approach task.
Omarjee, Saleha; Walker, Bruce D.; Chakraborty, Arup; Ndung'u, Thumbi
2014-01-01
Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitnesses of the corresponding mutants are strongly correlated for the original Ising model (r = −0.74, p = 3.6×10−6), and the correlation was further strengthened in the regularized Ising model (r = −0.83, p = 3.7×10−12). Performance of the Potts model (r = −0.73, p = 9.7×10−9) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.
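The Ising-style scoring described above can be sketched as a tiny energy function over binary sequences; the fields, couplings and three sites below are toy values, not the fitted HIV-1 Gag landscape:

```python
# Hedged sketch of an Ising-type fitness-landscape model: a strain is a
# binary vector s (1 = mutated site), scored by fields h and couplings J.
import numpy as np

h = np.array([0.2, 1.5, 0.1])                  # toy per-site mutational costs
J = np.zeros((3, 3))
J[0, 2] = J[2, 0] = -0.25                      # a compensatory pair of sites

def energy(s):
    """Higher energy corresponds to a larger predicted fitness cost."""
    s = np.asarray(s, dtype=float)
    return float(h @ s + 0.5 * s @ J @ s)      # 0.5 avoids double-counting pairs

e_single = energy([1, 0, 0])                   # lone mutation at site 0
e_double = energy([1, 0, 1])                   # plus its compensatory partner
print(e_single, e_double)
```

The negative coupling makes the double mutant cheaper than the sum of its single mutations, which is the kind of epistatic structure the model is fit to capture; a Potts model extends `s` from binary to one amino-acid state per site.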
A unified approach for a posteriori high-order curved mesh generation using solid mechanics
NASA Astrophysics Data System (ADS)
Poya, Roman; Sevilla, Ruben; Gil, Antonio J.
2016-09-01
The paper presents a unified approach for the a posteriori generation of arbitrary high-order curvilinear meshes via a solid mechanics analogy. The approach encompasses a variety of methodologies, ranging from the popular incremental linear elastic approach to very sophisticated non-linear elasticity. In addition, an intermediate consistent incrementally linearised approach is also presented and applied for the first time in this context. Utilising a consistent derivation from energy principles, a theoretical comparison of the various approaches is presented which enables a detailed discussion regarding the material characterisation (calibration) employed for the different solid mechanics formulations. Five independent quality measures are proposed and their relations with existing quality indicators, used in the context of a posteriori mesh generation, are discussed. Finally, a comprehensive range of numerical examples, both in two and three dimensions, including challenging geometries of interest to the solids, fluids and electromagnetics communities, are shown in order to illustrate and thoroughly compare the performance of the different methodologies. This comparison considers the influence of material parameters and number of load increments on the quality of the generated high-order mesh, overall computational cost and, crucially, the approximation properties of the resulting mesh when considering an isoparametric finite element formulation.
An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.
Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard
2016-09-01
Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially; that is, time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUC_logd and AUC_ord, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUC_logd and AUC_ord. Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting. PMID:27566660
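The conventional index and the two proposed variants can be sketched directly from the description above; the delays and indifference points below are invented, not data from the paper:

```python
# Sketch of trapezoidal AUC for delay discounting, plus the log10- and
# ordinally-scaled variants that rebalance short vs long delays.
import numpy as np

delays = np.array([1.0, 7.0, 30.0, 90.0, 365.0])   # days (made-up task)
indiff = np.array([0.95, 0.85, 0.60, 0.40, 0.20])  # indifference points
                                                   # (fraction of full amount)

def auc(x, y):
    """Trapezoidal AUC with x normalized to [0, 1]."""
    xn = (x - x.min()) / (x.max() - x.min())
    return float(np.sum((xn[1:] - xn[:-1]) * (y[1:] + y[:-1]) / 2.0))

auc_linear = auc(delays, indiff)                            # conventional AUC
auc_log = auc(np.log10(delays), indiff)                     # AUC_logd
auc_ord = auc(np.arange(len(delays), dtype=float), indiff)  # AUC_ord
print(round(auc_linear, 3), round(auc_log, 3), round(auc_ord, 3))
```

Note how the last (longest-delay) trapezoid dominates the conventional index, while the log and ordinal scalings give short delays comparable weight.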
Jiang, Bin; Guo, Hua
2014-07-21
The permutation invariant polynomial-neural network (PIP-NN) method for constructing highly accurate potential energy surfaces (PESs) for gas phase molecules is extended to molecule-surface interaction PESs. The symmetry adaptation in the NN fitting of a PES is achieved by employing as the input symmetry functions that fulfill both the translational symmetry of the surface and permutation symmetry of the molecule. These symmetry functions are low-order PIPs of the primitive symmetry functions containing the surface periodic symmetry. It is stressed that permutationally invariant cross terms are needed to avoid oversymmetrization. The accuracy and efficiency are demonstrated in fitting both a model PES for the H2 + Cu(111) system and density functional theory points for the H2 + Ag(111) system.
Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene
2011-10-28
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represent a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins.
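The idea of a GPC calibration curve can be sketched as a log-linear fit of molecular weight against elution volume; the standards and volumes below are hypothetical, not the sainfoin measurements:

```python
# Sketch of GPC calibration: log10(molecular weight) is approximately
# linear in elution volume over the column's working range, so known
# standards define a line used to predict unknown samples.
import numpy as np

stds_mw = np.array([170.0, 484.0, 940.0, 1740.0, 2280.0])  # hypothetical Da
stds_ve = np.array([14.2, 13.1, 12.4, 11.7, 11.4])         # elution volume, mL

slope, icpt = np.polyfit(stds_ve, np.log10(stds_mw), 1)    # calibration line

def predict_mw(ve):
    """Back-transform the fitted line to a molecular weight estimate."""
    return 10.0 ** (slope * ve + icpt)

print(round(predict_mw(12.0)))   # an unknown eluting at 12.0 mL
```

Larger molecules elute earlier (smaller volume), so the fitted slope is negative.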
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
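A PPCC check of the kind described can be sketched with scipy's probability plots; the sighting record below is synthetic, not the North American or Hawaiian bird data:

```python
# Sketch of a probability-plot correlation coefficient (PPCC) comparison:
# the distribution whose quantiles correlate best with the sorted data
# provides the better fit to the sighting record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sightings = np.sort(rng.uniform(1850, 1950, size=30))  # hypothetical years

def ppcc(data, dist):
    """Correlation between ordered data and the distribution's quantiles."""
    (osm, osr), (slope, intercept, r) = stats.probplot(data, dist=dist)
    return r

r_uniform = ppcc(sightings, stats.uniform)
r_expon = ppcc(sightings, stats.expon)
# The data were drawn uniform, so the uniform PPCC should be the higher.
print(round(r_uniform, 3), round(r_expon, 3))
```

A formal test would compare each PPCC against critical values from simulation, as in the paper's hypothesis-testing framework.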
Craniodental features in male Mandrillus may signal size and fitness: an allometric approach.
Klopp, Emily B
2012-04-01
According to a hypothesis in the broader mammalian literature, secondary sexual characteristics that have evolved to signal fitness and size to other conspecifics should exhibit positive allometry across adult males within a species. Here this hypothesis is tested in the genus Mandrillus. The overbuilding of bony features in larger individuals necessitates a functional explanation as bone is metabolically expensive to produce and maintain. Canine size and size of the maxillary ridge are scaled against a body size surrogate in intraspecific samples of male Mandrillus sphinx (mandrills) and Mandrillus leucophaeus (drills). Areal dimensions are weighted more heavily as they represent the size of a feature as it is viewed by individuals. Measures of the maxillary ridge and canine tooth are significantly correlated with the size surrogate and scale with positive allometry in both samples supporting the hypothesis that these features function to advertise a male's body size and fitness to other males competing for mates and potential discerning females. This is the first study in primates to test for intraspecific positive allometric scaling of bony facial features in adult males based on a theory of fitness signaling and sexual selection. PMID:22328467
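The positive-allometry test at the heart of the study reduces to a log-log regression slope; the sizes and trait values below are fabricated for illustration, not Klopp's Mandrillus measurements:

```python
# Sketch of an allometric scaling test: regress log(trait size) on
# log(body-size surrogate); a slope above 1 indicates positive allometry,
# i.e. the trait grows disproportionately in larger individuals.
import numpy as np

rng = np.random.default_rng(2)
log_size = np.log(rng.uniform(20.0, 45.0, 40))            # body-size proxy
log_trait = 1.6 * log_size - 2.0 + rng.normal(0, 0.05, 40)  # built-in slope 1.6

slope, intercept = np.polyfit(log_size, log_trait, 1)
print(round(slope, 2))   # slope > 1 is consistent with a signaling trait
```

In practice the slope's confidence interval must exclude 1 (isometry) before positive allometry is claimed.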
Zhao, Gang
2010-01-01
A universal self-adaptive time-varying function of the extracellular concentration history during osmotic shift, for measuring cell membrane permeability, is presented in this study. The feasibility and accuracy of the assumed function were verified against experimental data obtained with the microperfusion chamber method. The assumed function consistently gave very satisfactory coefficients of determination, and there were no significant differences between the hydraulic conductivity values fitted using the laser-interferometer-measured extracellular concentration profile and those fitted using the profile predicted by the assumed piecewise function (Student's t test, p > 0.05). Owing to the adaptive nature of the assumed function with respect to the extracellular solution concentration, the function is suggested for all similar studies measuring cell membrane permeability by osmotic shift. PMID:20919457
Melman, Wietse P R; Mollen, Bas P; Kollen, Boudewijn J; Verheyen, Cees C P M
2015-01-01
The direct anterior approach (DAA) in the supine position for hip arthroplasty has been reported to suffer from high initial complication rates. The DAA with the patient in the lateral decubitus position is believed to provide better visibility, and especially better femoral accessibility, with potentially fewer complications. The first cohort of total hip prostheses implanted by a single surgeon using the DAA in the lateral decubitus position more than 1 year previously was analysed retrospectively. In total, 182 hip prostheses (172 patients) were analysed. Three consecutive time periods based on equal numbers of surgical procedures were compared. The technical complication rate and operating time improved significantly across the 3 consecutive groups. The 1-year infection rate was 0.5% and the survival rate 98.9%. This is the first series of patients who underwent hip replacement via the direct anterior approach in the lateral decubitus position in which decreasing complication rates suggest a learning curve for surgeons performing this type of surgery. The initial complication rate was high but decreased significantly over time and was certainly acceptable in the third group of our cohort. An unacceptable complication rate with the straight uncemented stem forced us to discontinue this configuration after only 7 surgical procedures. Complication, infection and revision rates were acceptable for all cemented hip replacements using a curved anatomical stem. PMID:25684251
NASA Technical Reports Server (NTRS)
Hindson, W. S.; Hardy, G. H.
1978-01-01
The control, display, and procedural features are described for a flight experiment conducted to assess the feasibility of piloted STOL approaches along predefined, steep, curved, and decelerating approach profiles. It proved particularly important to use the flight director's computing capability to assist the pilot with the lower-frequency control-related tasks, such as monitoring and adjusting configuration trim as influenced by atmospheric effects, and preventing the system from exceeding powerplant and SAS authority limitations. Many of the technical and pilot-related issues identified in the course of this flight investigation are representative of similarly demanding operational tasks that are thought to be possible only through the use of sophisticated control and display systems.
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas
2003-06-01
14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André
2015-07-01
The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany. PMID:26061620
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
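The core idea of detecting moving objects as detections that line up across equally spaced frames can be sketched as follows. This is a simplified constant-velocity triplet check, not the actual MILD algorithm; the coordinates and thresholds are invented for illustration:

```python
import math

def is_linear_motion(p1, p2, p3, tol):
    """Three detections from equally spaced frames are consistent with
    constant-velocity motion if the middle detection lies (within tol
    pixels) at the midpoint of the outer two."""
    mx = (p1[0] + p3[0]) / 2.0
    my = (p1[1] + p3[1]) / 2.0
    return abs(p2[0] - mx) <= tol and abs(p2[1] - my) <= tol

def find_movers(frames, tol=0.5, min_move=2.0):
    """Brute-force search over detection triplets in three frames,
    rejecting static sources (stars) via a minimum total displacement."""
    movers = []
    for p1 in frames[0]:
        for p2 in frames[1]:
            for p3 in frames[2]:
                if (math.hypot(p3[0] - p1[0], p3[1] - p1[1]) >= min_move
                        and is_linear_motion(p1, p2, p3, tol)):
                    movers.append((p1, p2, p3))
    return movers

frames = [
    [(10.0, 10.0), (50.0, 80.0)],  # frame 1: a star and an asteroid
    [(10.1, 9.9), (55.0, 82.0)],   # frame 2: star jitters, asteroid moves
    [(10.0, 10.1), (60.0, 84.0)],  # frame 3
]
movers = find_movers(frames)
```

Only the triplet tracing a straight, steadily moving path survives both the linearity and minimum-displacement tests; the jittering star is rejected as static.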
NASA Astrophysics Data System (ADS)
Perevalov, V. I.; Lyulin, O. M.; Jacquemart, D.; Claveau, C.; Teffo, J.-L.; Dana, V.; Mandin, J.-Y.; Valentin, A.
2003-04-01
The method of effective operators has been applied to the global fitting of line intensities of the acetylene molecule in the middle infrared. Simultaneous fittings of recently observed line intensities in the cold and hot bands lying in the 13.6, 7.8, and 5 μm regions have been performed. The eigenfunctions of the effective Hamiltonian developed for the global treatment of the vibration-rotation line positions of acetylene [O.M. Lyulin, V.I. Perevalov, S.A. Tashkun, J.-L. Teffo, in: Leonid N. Sinitsa (Ed.), 13th Symposium and School on High Resolution Molecular Spectroscopy, Proceedings of SPIE, vol. 4063, 2000, pp. 126-133] have been used in the calculations. The sets of effective dipole moment parameters obtained reproduce the observed line intensities within the experimental accuracy. The importance of l-type resonance, responsible for some large differences between intensities of the same lines in subbands having opposite parities, is exhibited and discussed.
Computer-aided fit testing: an approach for examining the user/equipment interface
NASA Astrophysics Data System (ADS)
Corner, Brian D.; Beecher, Robert M.; Paquette, Steven
1997-03-01
Developments in laser digitizing technology now make it possible to capture very accurate 3D images of the surface of the human body in less than 20 seconds. Applications for the images range from animation of movie characters to the design and visualization of clothing and individual equipment (CIE). In this paper we focus on modeling the user/equipment interface. Defining the relative geometry between user and equipment provides a better understanding of equipment performance and can make the design cycle more efficient. Computer-aided fit testing (CAFT) is the application of graphical and statistical techniques to visualize and quantify the human/equipment interface in virtual space. In short, CAFT measures the relative geometry between a user and his or her equipment. The design cycle changes with the introduction of CAFT: some evaluation may now be done in the CAD environment prior to prototyping. CAFT may be applied in two general ways: (1) to aid in the creation of new equipment designs and (2) to evaluate current designs for compliance with performance specifications. We demonstrate the application of CAFT with two examples. First, we show how a prototype helmet may be evaluated for fit, and second we demonstrate how CAFT may be used to measure body armor coverage.
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
NASA Astrophysics Data System (ADS)
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models, specifically for the liver and kidney, that accurately represent patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
An Epistemology of Leadership Perspective: Examining the Fit for a Critical Pragmatic Approach
ERIC Educational Resources Information Center
Bourgeois, Nichole
2011-01-01
In this article the author examines the meaning of epistemology in relation to educational leadership. Argued is the position that generalizing the intent and tendencies of modernistic and postmodernistic approaches to educational reform and leadership preparation makes space for a critical pragmatic approach. Critical pragmatists as…
Inward leakage variability between respirator fit test panels - Part II. Probabilistic approach.
Liu, Yuewei; Zhuang, Ziqing; Coffey, Christopher C; Rengasamy, Samy; Niezgoda, George
2016-08-01
This study aimed to quantify the variability between different anthropometric panels in determining the inward leakage (IL) of N95 filtering facepiece respirators (FFRs) and elastomeric half-mask respirators (EHRs). We enrolled 144 experienced and non-experienced users as subjects in this study. Each subject was assigned five randomly selected FFRs and five EHRs, and performed quantitative fit tests to measure IL. Based on the NIOSH bivariate fit test panel, we randomly sampled 10,000 pairs of anthropometric 35- and 25-member panels without replacement from the 144 study subjects. For each pair of sampled panels, a chi-square test was used to test the hypothesis that the passing rates for the two panels were not different. The probability of passing the IL test for each respirator was also determined from the 20,000 panels and by binomial calculation. We also randomly sampled 500,000 panels with replacement to estimate the coefficient of variation (CV) for inter-panel variability. For both 35- and 25-member panels, the probability that passing rates were not significantly different between two randomly sampled pairs of panels was higher than 95% for all respirators. All efficient (passing rate ≥80%) and inefficient (passing rate ≤60%) respirators yielded consistent results (probability >90%) for two randomly sampled panels. Somewhat efficient respirators (passing rate between 60% and 80%) yielded inconsistent results. The passing probabilities and error rates were found to be significantly different between the simulation and the binomial calculation. The CV for the 35-member panel was 16.7%, slightly lower than that for the 25-member panel (19.8%). Our results suggest that IL inter-panel variability exists but is relatively small. The variability may be affected by passing level and passing rate. Facial dimension-based fit test panel stratification was also found to have a significant impact on inter-panel variability, i.e., it can reduce alpha
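The panel-resampling procedure can be sketched as follows. The per-subject pass/fail outcomes, seed, and CV computation are illustrative assumptions; the study's chi-square testing and binomial calculations are not reproduced:

```python
import random
import statistics

random.seed(42)

# Hypothetical pass/fail outcomes (1 = passed the IL criterion) for one
# respirator model across 144 subjects; the ~80% passing rate is assumed.
subjects = [1] * 115 + [0] * 29

def panel_passing_rates(subjects, panel_size, n_panels):
    """Draw panels without replacement and record each panel's passing rate."""
    rates = []
    for _ in range(n_panels):
        panel = random.sample(subjects, panel_size)
        rates.append(sum(panel) / panel_size)
    return rates

rates = panel_passing_rates(subjects, 35, 10000)
cv_percent = statistics.stdev(rates) / statistics.mean(rates) * 100
```

The spread of passing rates across resampled panels, summarized by the CV, is one way to quantify inter-panel variability in the spirit of the study's 500,000-panel simulation.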
Li, Jun; Jiang, Bin; Guo, Hua
2013-11-28
A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.
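The permutation-invariance idea behind the PIP-NN input layer can be illustrated for a hypothetical A3 system: feeding the network elementary symmetric polynomials of Morse-like bond variables guarantees that relabeling identical atoms leaves the input, and hence the fitted PES, unchanged. The variable choice and alpha value are illustrative assumptions:

```python
import math

def pip_inputs(r12, r13, r23, alpha=1.0):
    """Permutation-invariant network inputs for a hypothetical A3 system:
    elementary symmetric polynomials of Morse-like variables
    p_i = exp(-alpha * r_i). Permuting the three identical atoms permutes
    the bond lengths but leaves these polynomials unchanged."""
    p = [math.exp(-alpha * r) for r in (r12, r13, r23)]
    e1 = p[0] + p[1] + p[2]
    e2 = p[0] * p[1] + p[0] * p[2] + p[1] * p[2]
    e3 = p[0] * p[1] * p[2]
    return (e1, e2, e3)

a = pip_inputs(1.0, 1.5, 2.0)
b = pip_inputs(2.0, 1.0, 1.5)  # same geometry with atoms relabeled
```

Because the two relabeled geometries map to the same input vector, any network trained on these inputs is automatically permutation symmetric; the paper's larger symmetry-function sets extend this idea to systems with more atoms.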
A-Track: A New Approach for Detection of Moving Objects in FITS Images
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat
2016-07-01
Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, where we have made use of various scientific modules in Python to process the FITS images. We tested the code on photometrical data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
AGNfitter: SED-fitting code for AGN and galaxies from a MCMC approach
NASA Astrophysics Data System (ADS)
Calistro Rivera, Gabriela; Lusso, Elisabeta; Hennawi, Joseph F.; Hogg, David W.
2016-07-01
AGNfitter is a fully Bayesian MCMC method to fit the spectral energy distributions (SEDs) of active galactic nuclei (AGN) and galaxies from the sub-mm to the UV; it enables robust disentanglement of the physical processes responsible for the emission of sources. Written in Python, AGNfitter makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the nuclear and host galaxy emission simultaneously. The model consists of four physical emission components: an accretion disk, a torus of AGN heated dust, stellar populations, and cold dust in star forming regions. AGNfitter determines the posterior distributions of numerous parameters that govern the physics of AGN with a fully Bayesian treatment of errors and parameter degeneracies, allowing one to infer integrated luminosities, dust attenuation parameters, stellar masses, and star formation rates.
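The Bayesian sampling at the heart of this kind of SED fitting can be sketched with a minimal Metropolis sampler on a toy one-parameter model (an overall normalization of fixed template fluxes). AGNfitter's actual model library, priors, and ensemble sampler are far richer; everything below is an illustrative assumption:

```python
import math
import random

random.seed(1)

def log_posterior(amp, data):
    """Toy log-posterior: Gaussian likelihood for a single normalization
    parameter scaling fixed template fluxes, with a flat prior on [0, 10]."""
    if not 0.0 <= amp <= 10.0:
        return -math.inf
    return -0.5 * sum((y - amp * m) ** 2 / 0.1 ** 2 for m, y in data)

def metropolis(logp, data, start, step, n_steps):
    """Minimal Metropolis sampler standing in for AGNfitter's ensemble MCMC."""
    chain, current, lp = [], start, logp(start, data)
    for _ in range(n_steps):
        prop = current + random.gauss(0.0, step)
        lp_prop = logp(prop, data)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            current, lp = prop, lp_prop
        chain.append(current)
    return chain

# Synthetic "photometry": template fluxes scaled by a true amplitude of 2.5.
template = [1.0, 0.8, 0.5, 0.3]
data = [(m, 2.5 * m) for m in template]
chain = metropolis(log_posterior, data, start=1.0, step=0.3, n_steps=5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

Discarding a burn-in and averaging the remaining chain recovers the true normalization; the same machinery, scaled up to many parameters, yields the posterior distributions of luminosities, masses, and attenuation parameters described above.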
AGNfitter: An MCMC Approach to Fitting SEDs of AGN and galaxies
NASA Astrophysics Data System (ADS)
Calistro Rivera, Gabriela; Lusso, Elisabeta; Hennawi, Joseph; Hogg, David W.
2016-08-01
I will present AGNfitter: a tool to robustly disentangle the physical processes responsible for the emission of active galactic nuclei (AGN). AGNfitter is the first open-source algorithm based on a Markov Chain Monte Carlo method to fit the spectral energy distributions of AGN from the sub-mm to the UV. The code makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the host galaxy and the nuclear emission simultaneously. The model consists of four physical components comprising stellar populations, cold dust distributions in star forming regions, accretion disk, and hot dust torus emissions. AGNfitter is well suited to infer numerous parameters that rule the physics of AGN, with proper handling of their confidence levels through sampling and assumption-free calculation of their posterior probability distributions. The resulting parameters are, among many others, accretion disk luminosities, dust attenuation for both galaxy and accretion disk, stellar masses, and star formation rates. We describe the relevance of this fitting machinery and the technicalities of the code, and show its capabilities in the context of unobscured and obscured AGN. The analyzed data comprise a sample of 714 X-ray selected AGN of the XMM-COSMOS survey, spectroscopically classified as Type1 and Type2 sources by their optical emission lines. The inference of independent obscuration parameters allows AGNfitter to find a classification strategy in close agreement with the spectroscopic classification for ˜86% of Type1 and ˜70% of Type2 AGNs, respectively. The variety and large number of physical properties inferred by AGNfitter have the potential to contribute to a wide scope of science cases related to both active and quiescent galaxy studies.
Analytical approach to the recovery of short fluorescence lifetimes from fluorescence decay curves.
Bajzer, Z; Zelić, A; Prendergast, F G
1995-01-01
Considerable effort in instrument development has made possible the detection of picosecond fluorescence lifetimes by time-correlated single-photon counting. In particular, efforts have been made to markedly narrow the instrument response function (IRF). Less attention has been paid to analytical methods, especially to the problem of discretization of the convolution integral, on which the detection and quantification of short lifetimes critically depend. We show that better discretization methods can yield acceptable results for short lifetimes even with an IRF several times wider than necessary for the standard discretization based on linear approximation (LA). A general approach to discretization, also suitable for nonexponential models, is developed. The zero-time shift is explicitly included. Using simulations, we compared LA, quadratic, and cubic approximations. The latter two proved much better for detection of short lifetimes and, in that respect, do not differ except when the zero-time shift exceeds two channels, when one can benefit from using the cubic approximation. We showed that for LA, in some cases, narrowing the IRF beyond FWHM = 150 ps is actually counterproductive. This is not so for the quadratic and cubic approximations, which we recommend for general use. PMID:8519969
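The convolution integral whose discretization is at issue can be sketched numerically. The rectangular rule below corresponds to the simplest discretization; the paper's quadratic and cubic schemes are not reproduced, and the IRF shape, channel width, and lifetime are assumed values:

```python
import math

def convolve_decay(irf, tau, dt):
    """Discretized convolution F(k) = sum_{j<=k} irf[j]*exp(-(k-j)*dt/tau)*dt
    of the instrument response with a single-exponential decay, using the
    simple rectangular rule."""
    out = []
    for k in range(len(irf)):
        acc = 0.0
        for j in range(k + 1):
            acc += irf[j] * math.exp(-(k - j) * dt / tau)
        out.append(acc * dt)
    return out

# Assumed Gaussian IRF with FWHM ~150 ps, centered on channel 20,
# on a 10 ps/channel grid, convolved with a 500 ps lifetime.
dt = 10.0
sigma = 150.0 / 2.355
irf = [math.exp(-0.5 * ((i - 20) * dt / sigma) ** 2) for i in range(100)]
decay = convolve_decay(irf, tau=500.0, dt=dt)
```

The simulated curve rises through the IRF and then decays with the chosen lifetime; the fitting problem the paper addresses is inverting this construction from noisy counts, where the discretization rule controls how short a lifetime can still be recovered.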
Kim, Young S; Lo, Celia C
2016-10-01
The present study investigates how adolescents' experiences of violent victimization exert short- and mid-term effects on their involvement in delinquency. The study compares and contrasts delinquency trajectories of youths whose experiences of violent victimization differ. A multilevel growth-curve modeling approach is applied to analyze data from five waves of the National Youth Survey. The results show that, although delinquency involvement increases as youths experience violent victimization, delinquency trajectories differ with the type of violent victimization, specifically, parental versus non-parental victimization. Violent victimization by parents produced a sharp initial decline in delinquency (short-term effect) followed by a rapid acceleration (mid-term effect). In turn, non-parental violence showed a stable trend over time. The findings have important implications for prevention and treatment services. PMID:25888502
Inward Leakage Variability between Respirator Fit Test Panels – Part I. Deterministic Approach
Zhuang, Ziqing; Liu, Yuewei; Coffey, Christopher C.; Miller, Colleen; Szalajda, Jonathan
2015-01-01
Inter-panel variability has never been investigated. The objective of this study was to determine the variability between different anthropometric panels used to determine the inward leakage (IL) of N95 filtering facepiece respirators (FFRs) and elastomeric half-mask respirators (EHRs). A total of 144 subjects, who were both experienced and non-experienced N95 FFR users, were recruited. Five N95 FFRs and five N95 EHRs were randomly selected from among those models tested previously in our laboratory. The PortaCount Pro+ (without N95-Companion) was used to measure IL of the ambient particles with a detectable size range of 0.02 to 1 μm. The Occupational Safety and Health Administration standard fit test exercises were used for this study. IL tests were performed for each subject using each of the 10 respirators. Each respirator/subject combination was tested in duplicate, resulting in a total of 20 IL tests for each subject. Three 35-member panels were randomly selected without replacement from the 144 study subjects, stratified by the National Institute for Occupational Safety and Health bivariate panel cell, for conducting statistical analyses. The geometric mean (GM) IL values for all 10 studied respirators were not significantly different among the three randomly selected 35-member panels. Passing rate was not significantly different among the three panels for all respirators combined or by each model. This was true for all IL pass/fail levels of 1%, 2%, and 5%. Using 26 or more subjects to pass the IL test, all three panels had consistent passing/failing results for pass/fail levels of 1% and 5%. Some disagreement was observed for the 2% pass/fail level. Inter-panel variability exists, but it is small relative to the other sources of variation in fit testing data. The concern about inter-panel variability and other types of variability can be alleviated by properly selecting: pass/fail level (IL 1–5%); panel size (e.g., 25 or 35); and minimum number of subjects
The genetic basis of the fitness costs of antimicrobial resistance: a meta-analysis approach
Vogwill, Tom; MacLean, R Craig
2015-01-01
The evolution of antibiotic resistance carries a fitness cost, expressed in terms of reduced competitive ability in the absence of antibiotics. This cost plays a key role in the dynamics of resistance by generating selection against resistance when bacteria encounter an antibiotic-free environment. Previous work has shown that the cost of resistance is highly variable, but the underlying causes remain poorly understood. Here, we use a meta-analysis of the published resistance literature to determine how the genetic basis of resistance influences its cost. We find that, on average, chromosomal resistance mutations carry a larger cost than acquiring resistance via a plasmid. This may explain why resistance often evolves by plasmid acquisition. Second, we find that the cost of plasmid acquisition increases with the breadth of its resistance range. This suggests a potentially important limit on the evolution of extensive multidrug resistance via plasmids. We also find that epistasis can significantly alter the cost of mutational resistance. Overall, our study shows that the cost of antimicrobial resistance can be partially explained by its genetic basis. It also highlights both the danger associated with plasmid-borne resistance and the need to understand why resistance plasmids carry a relatively low cost. PMID:25861386
Electrically detected magnetic resonance modeling and fitting: An equivalent circuit approach
Leite, D. M. G.; Batagin-Neto, A.; Nunes-Neto, O.; Gómez, J. A.; Graeff, C. F. O.
2014-01-21
The physics of electrically detected magnetic resonance (EDMR) quadrature spectra is investigated. An equivalent circuit model is proposed in order to retrieve crucial information in a variety of different situations. This model allows the discrimination and determination of spectroscopic parameters associated with distinct resonant spin lines responsible for the total signal. The model considers not just the electrical response of the sample but also features of the measuring circuit and their influence on the resulting spectral lines. As a consequence, the model makes it possible to separate different regimes, which depend basically on the modulation frequency and the RC constant of the circuit. In what is called the high-frequency regime, it is shown that the sign of the signal can be determined. Recent EDMR spectra from Alq3-based organic light emitting diodes, as well as from a-Si:H, reported in the literature, were successfully fitted by the model. Accurate values of the g-factor and linewidth of the resonant lines were obtained.
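The role of the measuring circuit's RC constant can be sketched with the transfer function of a first-order low-pass, X + iY = 1/(1 + i·2πf·RC), which splits the detected signal into in-phase and quadrature components. This is a generic single-pole illustration, not the paper's full equivalent circuit:

```python
import math

def quadrature_components(f_mod, rc):
    """In-phase (X) and quadrature (Y) parts of a signal seen through a
    first-order RC low-pass: X + iY = 1 / (1 + i*2*pi*f*RC)."""
    w = 2.0 * math.pi * f_mod * rc
    denom = 1.0 + w * w
    return 1.0 / denom, -w / denom

x_lo, y_lo = quadrature_components(10.0, 1e-4)  # low-frequency regime
x_hi, y_hi = quadrature_components(1e5, 1e-4)   # high-frequency regime
```

At low modulation frequency the signal is almost entirely in phase; once the modulation frequency exceeds 1/(2πRC), the quadrature component dominates, which is the kind of regime separation the equivalent circuit model exploits.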
NASA Astrophysics Data System (ADS)
Xia, Qiangwei; Wang, Tiansong; Park, Yoonsuk; Lamont, Richard J.; Hackett, Murray
2007-01-01
Differential analysis of whole cell proteomes by mass spectrometry has largely been applied using various forms of stable isotope labeling. While metabolic stable isotope labeling has been the method of choice, it is often not possible to apply such an approach. Four different label free ways of calculating expression ratios in a classic "two-state" experiment are compared: signal intensity at the peptide level, signal intensity at the protein level, spectral counting at the peptide level, and spectral counting at the protein level. The quantitative data were mined from a dataset of 1245 qualitatively identified proteins, about 56% of the protein encoding open reading frames from Porphyromonas gingivalis, a Gram-negative intracellular pathogen being studied under extracellular and intracellular conditions. Two different control populations were compared against P. gingivalis internalized within a model human target cell line. The q-value statistic, a measure of false discovery rate previously applied to transcription microarrays, was applied to proteomics data. For spectral counting, the most logically consistent estimate of random error came from applying the locally weighted scatter plot smoothing procedure (LOWESS) to the most extreme ratios generated from a control technical replicate, thus setting upper and lower bounds for the region of experimentally observed random error.
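Spectral counting at the protein level can be sketched as follows. The total-count normalization and pseudo-count are common heuristics assumed for illustration; the paper's LOWESS-based error estimation is not reproduced, and the protein names and counts are invented:

```python
import math

def spectral_count_ratios(counts_a, counts_b, pseudo=0.5):
    """Protein-level log2 expression ratios from spectral counts, with
    total-count normalization and a small pseudo-count so that zero-count
    proteins stay finite (a common heuristic)."""
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    ratios = {}
    for protein in set(counts_a) | set(counts_b):
        a = (counts_a.get(protein, 0) + pseudo) / total_a
        b = (counts_b.get(protein, 0) + pseudo) / total_b
        ratios[protein] = math.log2(a / b)
    return ratios

# Invented counts for a two-state comparison (e.g. extracellular vs internalized).
state1 = {"protein_A": 40, "protein_B": 10, "protein_C": 0}
state2 = {"protein_A": 10, "protein_B": 10, "protein_C": 5}
ratios = spectral_count_ratios(state1, state2)
```

Positive log2 ratios indicate enrichment in the first state and negative ratios depletion; in practice a false-discovery procedure such as the q-value statistic is applied on top of these raw ratios.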
ERIC Educational Resources Information Center
Marsh, Herbert W.; Hau, Kit-Tai; Wen, Zhonglin
2004-01-01
Goodness-of-fit (GOF) indexes provide "rules of thumb": recommended cutoff values for assessing fit in structural equation modeling. Hu and Bentler (1999) proposed a more rigorous approach to evaluating decision rules based on GOF indexes and, on this basis, proposed new and more stringent cutoff values for many indexes. This article discusses…
NASA Astrophysics Data System (ADS)
Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.
2016-10-01
Using a 4D grid of ˜2 million model parameters (Δz = 0.005) adapted from Cosmological Origins Survey photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov Chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384 662 objects, we show our approach samples ˜40 times more efficiently compared to a `brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.
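The combination of Metropolis-style sampling with a cooling temperature can be sketched on a toy multimodal photo-z objective with a shallow minimum at z = 0.5 and a deep one at z = 2.0. The objective, cooling schedule, and step size are illustrative assumptions:

```python
import math
import random

random.seed(7)

def neg_log_like(z):
    """Toy multimodal objective: a shallow likelihood peak at z = 0.5 and a
    deeper one at z = 2.0 (the 'true' redshift)."""
    like = (0.4 * math.exp(-0.5 * ((z - 0.5) / 0.3) ** 2)
            + 1.0 * math.exp(-0.5 * ((z - 2.0) / 0.3) ** 2))
    return -math.log(like + 1e-12)

def anneal(f, z0, n_steps=8000, t0=5.0, cooling=0.999, step=0.1):
    """Metropolis moves with a geometrically cooling temperature, so the
    walker can hop out of local minima early and settle late."""
    z, best = z0, z0
    for i in range(n_steps):
        temp = t0 * cooling ** i
        prop = min(6.0, max(0.0, z + random.gauss(0.0, step)))
        delta = f(prop) - f(z)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            z = prop
        if f(z) < f(best):
            best = z
    return best

best_z = anneal(neg_log_like, z0=0.5)  # start trapped in the shallow minimum
```

A pure gradient-descent walker started at z = 0.5 would stay there; the early high-temperature phase lets the annealed walker cross the likelihood barrier and find the deeper minimum, the behavior that makes these samplers robust on degenerate likelihood surfaces.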
Ecosystems Biology Approaches To Determine Key Fitness Traits of Soil Microorganisms
NASA Astrophysics Data System (ADS)
Brodie, E.; Zhalnina, K.; Karaoz, U.; Cho, H.; Nuccio, E. E.; Shi, S.; Lipton, M. S.; Zhou, J.; Pett-Ridge, J.; Northen, T.; Firestone, M.
2014-12-01
Theoretical approaches such as trait-based modeling represent powerful tools to explain and perhaps predict complex patterns in microbial distribution and function across environmental gradients in space and time. These models are mostly deterministic and, where available, are built upon a detailed understanding of microbial physiology and response to environmental factors. However, as most soil microorganisms have not been cultivated, our understanding of the majority is limited to insights from environmental 'omic information. Information gleaned from 'omic studies of complex systems should be regarded as providing hypotheses, and these hypotheses should be tested under controlled laboratory conditions if they are to be propagated into deterministic models. In a semi-arid Mediterranean grassland system we are attempting to dissect microbial communities into functional guilds with defined physiological traits, and are using a range of 'omics approaches to characterize their metabolic potential and niche preference. Initially, two physiologically relevant time points (peak plant activity and prior to wet-up) were sampled and metagenomes were sequenced deeply (600-900 Gbp). Following assembly, differential coverage and nucleotide frequency binning were carried out to yield draft genomes. In addition, using a range of cultivation media we have isolated a broad range of bacteria representing abundant bacterial genotypes and, with genome sequences of almost 40 isolates, are testing genomic predictions regarding growth rate, temperature and substrate utilization in vitro. This presentation will discuss the opportunities and challenges in parameterizing microbial functional guilds from environmental 'omic information for use in trait-based models.
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation. PMID:27391255
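A minimal sketch of the embedding-based derivative estimation that GLLA-style methods use: the series is time-delay embedded and local polynomial loadings recover the function value and its derivatives by least squares. The window size and the sine test signal are arbitrary choices for illustration.

```python
import math
import numpy as np

def glla_derivatives(x, dt, embed=5, order=2):
    """GLLA-style derivative estimates from a time-delay embedding
    (sketch after Boker et al.): each embedded row is modelled as a
    local Taylor polynomial, so the least-squares coefficients are
    [f, f', f'', ...] at the window centre."""
    n = len(x)
    X = np.column_stack([x[i:n - embed + 1 + i] for i in range(embed)])
    offsets = (np.arange(embed) - (embed - 1) / 2.0) * dt
    L = np.column_stack([offsets ** k / math.factorial(k)
                         for k in range(order + 1)])
    W = L @ np.linalg.inv(L.T @ L)       # least-squares projection
    return X @ W                         # columns: f, f', f'' estimates

dt = 0.05
t = np.arange(0, 4, dt)
D = glla_derivatives(np.sin(t), dt, embed=5, order=2)
mid_t = t[2:len(t) - 2]                  # times at the embedding centres
err = np.max(np.abs(D[:, 1] - np.cos(mid_t)))
print(f"max first-derivative error vs cos(t): {err:.5f}")
```

The recovered derivative columns could then feed the second stage of a two-stage ODE fit.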
Enlightening Volumes: Curve Fitting to Approximate Volumes
ERIC Educational Resources Information Center
Buhl, David; O'Neal, Judy
2008-01-01
The current mantra in education is "technology, technology, technology." Many teachers and prospective teachers become frustrated with their lack of knowledge regarding the "appropriate" use of technology in the classroom. Prospective teachers need training in their education to understand how technology can be used "appropriately" in the…
Physics from the News: Curve Fitting.
ERIC Educational Resources Information Center
Bartlett, Albert A.
1994-01-01
Attempts to determine, using estimation and known power-law relationships, whether a newspaper report that a single 40,000-kg truck does as much damage to the highway as 9,600 cars is true. Provides a possible mathematical and graphical solution. (MVL)
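Assuming, purely for illustration, per-axle loads of about 8000 kg for the truck (five axles) and 800 kg for a car (two axles) — guesses, not figures from the article — the power-law exponent implied by the claimed 9,600:1 damage ratio can be solved for directly:

```python
import math

damage_ratio = 9600            # claimed trucks-to-cars damage ratio
load_ratio = 8000 / 800        # assumed per-axle load ratio (illustrative)

# If damage scales as (load)^n, then damage_ratio = load_ratio^n, so
# n = log(damage_ratio) / log(load_ratio).
n = math.log(damage_ratio) / math.log(load_ratio)
print(f"implied power-law exponent: n = {n:.2f}")   # n = 3.98
```

Under these assumed loads the claim is consistent with the well-known "fourth-power law" of pavement damage.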
NASA Astrophysics Data System (ADS)
Bailly-Comte, V.; Pistre, S.
2011-12-01
Strategies for groundwater protection mostly rely on maps of vulnerability to contamination; consequently, many methods have been developed since the 1990s, with particular attention to operational techniques. These easy-to-use methods are based on the superposition of relative rating systems applied to critical hydrogeological factors; their major drawback is the subjectivity of the rating scales and weighting coefficients. Thus, in addition to vulnerability mapping, empirical results given by tracer tests are often needed to better assess groundwater vulnerability to accidental contamination in complex hydrosystems such as karst aquifers. This means that a large body of data on tracer breakthrough curves (BTCs) in karst areas is now available to hydrologists. In this context, we propose a physical approach to spatially distributed simulation of tracer BTCs based on 1D macrodispersive transport. A new interpretation of tracer tests performed in various media is shown as a validation of our theoretical development. The vulnerability map is then given by the properties of the simulated tracer BTC (modal time, mean residence time, duration over a given concentration threshold, etc.). In this way, our method expresses vulnerability in physical units, which makes comparison from one system to another possible. In addition, previous or new tracer tests can be used to validate the map for the same hydrological conditions. Although this methodology is not limited to karst hydrosystems, it seems particularly suitable for these complex environments, for which understanding the origin of accidental contamination is crucial.
Gregg, Evan O.; Minet, Emmanuel
2013-01-01
There are established guidelines for bioanalytical assay validation and qualification of biomarkers. In this review, they were applied to a panel of urinary biomarkers of tobacco smoke exposure as part of a "fit for purpose" approach to the assessment of smoke constituents exposure in groups of tobacco product smokers. Clinical studies have allowed the identification of a group of tobacco exposure biomarkers demonstrating a good dose-response relationship, whilst others, such as dihydroxybutyl mercapturic acid and 2-carboxy-1-methylethylmercapturic acid, did not reproducibly discriminate smokers and non-smokers. Furthermore, there are currently no agreed common reference standards to measure absolute concentrations and few inter-laboratory trials have been performed to establish consensus values for interim standards. Thus, we also discuss in this review additional requirements for the generation of robust data on urinary biomarkers, including toxicant metabolism and disposition, method validation and qualification for use in tobacco products comparison studies. PMID:23902266
NASA Astrophysics Data System (ADS)
Nigmatullin, R.; Rakhmatullin, R.
2014-12-01
yields the description of the identified QP process. To suggest some computing algorithm for fitting of the QP data to the analytical function that follows from the solution of the corresponding functional equation. The content of this paper is organized as follows. In Section 2 we try to find answers to the problem posed in this introductory section; it also contains the mathematical description of the QP process and an interpretation of the meaning of the generalized Prony spectrum (GPS). The GPS includes the conventional Fourier decomposition as a special case. Section 3 contains the experimental details associated with acquiring the desired data. Section 4 includes some important details explaining specific features of the application of the general algorithm to concrete data. In Section 5 we summarize the results and outline the perspectives of this approach for the quantitative description of time-dependent random data registered in different complex systems and experimental devices. Here we should note that by a complex system we mean a system for which a conventional model is absent [6]. By simplicity of the acceptable model we mean the proper hypothesis (the "best fit" model) containing the minimal number of fitting parameters that describes the behavior of the system considered quantitatively. The different approaches that exist nowadays for the description of such systems are collected in the recent review [7].
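The elementary Prony step underlying a generalized Prony spectrum can be sketched on a clean two-component synthetic signal (the poles and amplitudes below are invented): linear prediction recovers the poles, and a second least-squares pass recovers the amplitudes.

```python
import numpy as np

# Synthetic signal x[k] = sum_i a_i * z_i^k with two damped-oscillatory poles.
n = 200
true_z = np.array([0.97 * np.exp(0.3j), 0.99 * np.exp(-0.1j)])
true_a = np.array([1.0, 0.5])
x = np.vander(true_z, n, increasing=True).T @ true_a

# Prony step 1: linear prediction x[k] = c1*x[k-1] + c2*x[k-2].
A = np.column_stack([x[1:-1], x[:-2]])
c = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Prony step 2: the poles are the roots of z^2 - c1*z - c2.
z_est = np.roots([1.0, -c[0], -c[1]])

# Prony step 3: amplitudes by least squares on the Vandermonde system.
a_est = np.linalg.lstsq(np.vander(z_est, n, increasing=True).T, x,
                        rcond=None)[0]
print("recovered poles:", np.round(z_est, 4))
```

On noisy data this basic scheme becomes ill-conditioned, which is one motivation for the more elaborate fitting machinery the paper describes.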
2014-01-01
Background The prevalence of obesity increased while certain measures of physical fitness deteriorated in preschool children in China over the past decade. This study tested the effectiveness of a multifaceted intervention that integrated childcare center, families, and community to promote healthy growth and physical fitness in preschool Chinese children. Methods This 12-month study was conducted using a quasi-experimental pretest/posttest design with a comparison group. The participants were 357 children (mean age = 4.5 years) enrolled in three grade levels in two childcare centers in Beijing, China. The intervention included: 1) childcare center intervention (physical activity policy changes, teacher training, physical education curriculum and food services training), 2) family intervention (parent education, internet website for support, and family events), and 3) community intervention (playground renovation and community health promotion events). The study outcome measures included body composition (percent body fat, fat mass, and muscle mass), Body Mass Index (BMI) and BMI z-score, and physical fitness scores in the 20-meter agility run (20M-AR), broad jump for distance (BJ), timed 10-jumps, tennis ball throwing (TBT), sit and reach (SR), balance beam walk (BBW), 20-meter crawl (20M-C), and 30-meter sprint (30M-S) from a norm-referenced test. Measures of process evaluation included monitoring of children's physical activity (activity time and intensity) and food preparation records, and fidelity of intervention protocol implementation. Results Children in the intervention center significantly lowered their body fat percent (−1.2%, p < 0.0001), fat mass (−0.55 kg, p < 0.0001), and body weight (0.36 kg, p < 0.02) and increased muscle mass (0.48 kg, p < 0.0001), compared to children in the control center. They also improved all measures of physical fitness except timed 10-jumps (20M-AR: −0.74 seconds, p < 0.0001; BJ: 8.09 cm, p < 0.0001; TBT: 0
ERIC Educational Resources Information Center
Williams, Kevin M.; Zumbo, Bruno D.
2003-01-01
Developed an item characteristic curve estimation of signal detection theory based personality data. Results for 266 college students taking the Overclaiming Questionnaire (D. Paulhus and N. Bruce, 1990) suggest that this method is a reasonable approach to describing item functioning and that there are advantages to this method over traditional…
NASA Technical Reports Server (NTRS)
Hindson, W. S.; Hardy, G. H.; Innis, R. C.
1981-01-01
Flight tests were carried out to assess the feasibility of piloted steep, curved, and decelerating approach profiles in powered-lift STOL aircraft. Several STOL control concepts representative of a variety of aircraft were evaluated in conjunction with suitably designed flight directors. The tests were carried out in a real navigation environment, employed special electronic cockpit displays, and included documentation of the performance achieved and the control utilization involved in flying 180-deg turning, descending, and decelerating approach profiles to landing. The results suggest that such moderately complex piloted instrument approaches may indeed be feasible from a pilot-acceptance point of view, given an acceptable navigation environment. Systems with the capability of those used in this experiment can provide the potential of achieving instrument operations on curved, descending, and decelerating landing approaches to weather minima corresponding to CTOL Category 2 criteria, while also providing a means of realizing more efficient operations during visual flight conditions.
Iozzi, Fabrizio; Trusiano, Francesco; Chinazzi, Matteo; Billari, Francesco C.; Zagheni, Emilio; Merler, Stefano; Ajelli, Marco; Del Fava, Emanuele; Manfredi, Piero
2010-01-01
Knowledge of social contact patterns still represents the most critical step for understanding the spread of directly transmitted infections. Data on social contact patterns are, however, expensive to obtain. A major issue is then whether the simulation of synthetic societies might be helpful to reliably reconstruct such data. In this paper, we compute a variety of synthetic age-specific contact matrices through simulation of a simple individual-based model (IBM). The model is informed by Italian Time Use data and routine socio-demographic data (e.g., school and workplace attendance, household structure, etc.). The model is named "Little Italy" because each artificial agent is a clone of a real person. In other words, each agent's daily diary is the one observed in a corresponding real individual sampled in the Italian Time Use Survey. We also generated contact matrices from the socio-demographic model underlying the Italian IBM for pandemic prediction. These synthetic matrices are then validated against recently collected Italian serological data for Varicella (VZV) and ParvoVirus (B19). Their performance in fitting sero-profiles is compared with other matrices available for Italy, such as the Polymod matrix. Synthetic matrices show the same qualitative features of the ones estimated from sample surveys: for example, strong assortativeness and the presence of super- and sub-diagonal stripes related to contacts between parents and children. Once validated against serological data, Little Italy matrices fit worse than the Polymod one for VZV, but better than concurrent matrices for B19. This is the first occasion where synthetic contact matrices are systematically compared with real ones, and validated against epidemiological data. The results suggest that simple, carefully designed, synthetic matrices can provide a fruitful complementary approach to questionnaire-based matrices. The paper also supports the idea that, depending on the transmissibility level of
NASA Astrophysics Data System (ADS)
Wassmann, A.; Borsdorff, T.; aan de Brugh, J. M. J.; Hasekamp, O. P.; Aben, I.; Landgraf, J.
2015-10-01
We present a sensitivity study of the direct fitting approach to retrieve total ozone columns from the clear sky Global Ozone Monitoring Experiment 2/MetOp-A (GOME-2/MetOp-A) measurements between 325 and 335 nm in the period 2007-2010. The direct fitting of the measurement is based on adjusting the scaling of a reference ozone profile and requires accurate simulation of GOME-2 radiances. In this context, we study the effect of three aspects that introduce forward model errors if not addressed appropriately: (1) the use of a clear sky model atmosphere in the radiative transfer demanding cloud filtering, (2) different approximations of Earth's sphericity to address the influence of the solar zenith angle, and (3) the need of polarization in radiative transfer modeling. We conclude that cloud filtering using the operational GOME-2 FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A band) cloud product, which is part of level 1B data, and the use of pseudo-spherical scalar radiative transfer is fully sufficient for the purpose of this retrieval. A validation with ground-based measurements at 36 stations confirms this, showing a global mean bias of -0.1 % with a standard deviation (SD) of 2.7 %. The regularization effect inherent to the profile scaling approach is thoroughly characterized by the total column averaging kernel for each individual retrieval. It characterizes the effect of the particular choice of the ozone profile to be scaled by the inversion and is part of the retrieval product. Two different interpretations of the data product are possible: first, regarding the retrieval product as an estimate of the true column, a direct comparison of the retrieved column with total ozone columns from ground-based measurements can be done. This requires accurate a priori knowledge of the reference ozone profile and the column averaging kernel is not needed. Alternatively, the retrieval product can be interpreted as an effective column defined by the total column
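In a linearised toy setting, the profile-scaling idea reduces to estimating a single scale factor by least squares. Everything below is synthetic: the Jacobian K, the reference profile shape, and the noise level are stand-ins, not a GOME-2 forward model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_wave, n_lay = 40, 20
K = rng.uniform(0.1, 1.0, size=(n_wave, n_lay))        # stand-in Jacobian
x_ref = np.exp(-((np.arange(n_lay) - 12) / 4.0) ** 2)  # reference profile shape
c_true = 1.15
# "Measured" radiance-like vector: scaled reference profile plus noise.
y = K @ (c_true * x_ref) + rng.normal(scale=0.01, size=n_wave)

# Direct fit of the scale factor c minimising ||y - c * K x_ref||^2.
f = K @ x_ref
c_hat = (f @ y) / (f @ f)
column = c_hat * x_ref.sum()   # retrieved total column, in profile units
print(f"retrieved scale factor: {c_hat:.3f}  (true {c_true})")
```

The actual retrieval wraps this scaling inside a full radiative-transfer forward model and characterises it with a column averaging kernel, but the one-parameter structure is the same.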
Wissmann, F; Reginatto, M; Möller, T
2010-09-01
The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
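A stripped-down version of such a Bayesian fit: random-walk Metropolis sampling of the coefficients of a model linear in altitude and exponential in cut-off rigidity, giving a posterior distribution (mean and uncertainty) for each coefficient. The solar-activity term is omitted and the data, noise level, and step sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.uniform(8, 12, 300)        # barometric altitude, km
rc = rng.uniform(0, 15, 300)       # vertical cut-off rigidity, GV
a0, a1, b = 5.0, 0.4, 0.12         # "true" coefficients for synthetic data
H = (a0 + a1 * (h - 10.0)) * np.exp(-b * rc) + rng.normal(scale=0.05, size=300)

def log_post(p):
    """Log-posterior: flat priors, Gaussian noise with sigma = 0.05."""
    model = (p[0] + p[1] * (h - 10.0)) * np.exp(-p[2] * rc)
    return -0.5 * np.sum((H - model) ** 2) / 0.05 ** 2

p = np.array([4.0, 0.0, 0.10])     # deliberately poor starting point
lp = log_post(p)
step = np.array([0.01, 0.01, 0.0005])
samples = []
for i in range(20000):
    q = p + rng.normal(size=3) * step
    lq = log_post(q)
    if np.log(rng.random()) < lq - lp:   # Metropolis accept/reject
        p, lp = q, lq
    if i >= 5000:                        # discard burn-in
        samples.append(p)
samples = np.array(samples)
for name, m, s in zip(("a0", "a1", "b"), samples.mean(0), samples.std(0)):
    print(f"{name} = {m:.4f} +/- {s:.4f}")
```

The posterior standard deviations are exactly the "reliable uncertainties for each expansion coefficient" the abstract refers to; model comparison would add an evidence calculation on top.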
2010-01-01
Background Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and by an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. Methods First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. Results We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefit as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of probability of disease. Conclusions We present a novel method for eliciting decision makers' preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more
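The net-benefit quantity from the original DCA, to which the paper's Net Expected Regret Difference is shown to be equivalent, is simple to compute: NB(pt) = TP/n − (pt/(1−pt))·FP/n at threshold probability pt. The toy risk model and prevalence below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
disease = rng.random(n) < 0.3
# Toy predicted risks: centred higher for diseased patients.
risk = np.clip(np.where(disease, 0.65, -0.05)
               + rng.normal(scale=0.15, size=n), 0.01, 0.99)

def net_benefit(pt):
    """DCA net benefit of 'treat if predicted risk >= pt'."""
    treat = risk >= pt
    tp = np.sum(treat & disease) / n
    fp = np.sum(treat & ~disease) / n
    return tp - fp * pt / (1 - pt)

prev = disease.mean()
for pt in (0.1, 0.2, 0.3, 0.4):
    nb_all = prev - (1 - prev) * pt / (1 - pt)   # "treat all" strategy
    print(f"pt={pt:.1f}: model {net_benefit(pt):+.3f} vs treat-all {nb_all:+.3f}")
```

Plotting net benefit against pt for the model, treat-all, and treat-none (always 0) strategies gives the decision curve itself.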
Fraser, H. S.; Naimi, S.; Long, W. J.
2000-01-01
INTRODUCTION: Evaluation of computer programs which generate multiple diagnoses can be hampered by a lack of effective, well recognized performance metrics. We have developed a method to calculate mean sensitivity and specificity for multiple diagnoses and generate ROC curves. METHODS: Data came from a clinical evaluation of the Heart Disease Program (HDP). Sensitivity, specificity, positive and negative predictive value (PPV, NPV) were calculated for each diagnosis type in the study. A weighted mean of overall sensitivity and specificity was derived and used to create an ROC curve. Alternative metrics Comprehensiveness and Relevance were calculated for each case and compared to the other measures. RESULTS: Weighted mean sensitivity closely matched Comprehensiveness and mean PPV matched Relevance. Plotting the Physician's sensitivity and specificity on the ROC curve showed that their discrimination was similar to the HDP but sensitivity was significantly lower. CONCLUSIONS: These metrics give a clear picture of a program's diagnostic performance and allow straightforward comparison between different programs and different studies. PMID:11079884
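The weighted-mean summary can be reproduced with invented per-diagnosis counts (not the HDP study data): compute sensitivity, specificity, and PPV per diagnosis, then weight by the number of true cases to place the program at one point in ROC space.

```python
import numpy as np

# Per-diagnosis confusion counts, invented for illustration:
#                diagnosis:  (TP, FN, FP, TN)
counts = {
    "CHF":        (40, 10,  8, 142),
    "ischemia":   (25, 15,  5, 155),
    "arrhythmia": (30,  5, 12, 153),
}

sens, spec, weights = [], [], []
for dx, (tp, fn, fp, tn) in counts.items():
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ppv = tp / (tp + fp)
    print(f"{dx:10s} sens={se:.2f} spec={sp:.2f} ppv={ppv:.2f}")
    sens.append(se)
    spec.append(sp)
    weights.append(tp + fn)            # weight by number of true cases

w = np.array(weights) / sum(weights)
w_sens = float(np.dot(w, sens))
w_spec = float(np.dot(w, spec))
print(f"weighted mean sensitivity={w_sens:.2f}, specificity={w_spec:.2f}")
```

Sweeping a ranking threshold over the program's diagnosis list and recomputing this pair at each threshold traces out the ROC curve described in the abstract.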
ERIC Educational Resources Information Center
Wimmers, Paul F.; Lee, Ming
2015-01-01
To determine the direction and extent to which medical student scores (as observed by small-group tutors) on four problem-based-learning-related domains change over nine consecutive blocks during a two-year period (Domains: Problem Solving/Use of Information/Group Process/Professionalism). Latent growth curve modeling is used to analyze…
NASA Technical Reports Server (NTRS)
Allard, Robert; Calve, Andrew; Pastreck, Edwin; Padden, Edward
1992-01-01
Tool for use in electrical-discharge machining (EDM) guides EDM electrode in making curved holes. Guide rod fits in slot in arm, which moves through arc. Motion drives electrode into workpiece along desired curved path. Electrode burns into workpiece while arm rotates on spindle. Discharge cuts hole of same radius of curvature.
Probing exoplanet clouds with optical phase curves.
Muñoz, Antonio García; Isaak, Kate G
2015-11-01
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve--from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4-0.5.
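As a hedged illustration of a reflected-light phase curve, the sketch below uses a Lambert sphere (isotropic scattering), the simplest stand-in for the paper's multiple-scattering treatment; the albedo and radius-to-orbit ratio are rough Kepler-7b-like guesses, not fitted values.

```python
import numpy as np

# Lambert-sphere reflected-light phase curve:
#   Fp/F* = Ag * (Rp/a)^2 * (sin(alpha) + (pi - alpha)*cos(alpha)) / pi
# at phase angle alpha (0 = full phase, pi = new phase).
Ag, rp_over_a = 0.35, 0.017          # illustrative guesses

def phase_curve(alpha):
    lam = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi
    return Ag * rp_over_a ** 2 * lam

for a in np.linspace(0, np.pi, 5):
    print(f"phase angle {np.degrees(a):5.1f} deg: "
          f"Fp/F* = {phase_curve(a) * 1e6:6.1f} ppm")
```

Real cloud maps break the Lambertian symmetry, which is precisely why the measured phase-curve shape carries information about cloud coverage and particle scattering properties.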
Lien, Laura L; Steggell, Carmen D; Iwarsson, Susanne
2015-09-23
Older adults prefer to age in place, necessitating a match between person and environment, or person-environment (P-E) fit. In occupational therapy practice, home modifications can support independence, but more knowledge is needed to optimize interventions targeting the housing situation of older adults. In response, this study aimed to explore the accessibility and usability of the home environment to further understand adaptive environmental behaviors. Mixed methods data were collected using objective and perceived indicators of P-E fit among 12 older adults living in community-dwelling housing. Quantitative data described objective P-E fit in terms of accessibility, while qualitative data explored perceived P-E fit in terms of usability. While accessibility problems were prevalent, participants' perceptions of usability revealed a range of adaptive environmental behaviors employed to meet functional needs. A closer examination of the P-E interaction suggests that objective accessibility does not always stipulate perceived usability, which appears to be malleable with age, self-perception, and functional competency. Findings stress the importance of evaluating both objective and perceived indicators of P-E fit to provide housing interventions that support independence. Further exploration of adaptive processes in older age may serve to deepen our understanding of both P-E fit frameworks and theoretical models of aging well.
NASA Astrophysics Data System (ADS)
Cotton, W. D.
Fringe Fitting Theory; Correlator Model Delay Errors; Fringe Fitting Techniques; Baseline; Baseline with Closure Constraints; Global; Solution Interval; Calibration Sources; Source Structure; Phase Referencing; Multi-band Data; Phase-Cals; Multi- vs. Single-band Delay; Sidebands; Filtering; Establishing a Common Reference Antenna; Smoothing and Interpolating Solutions; Bandwidth Synthesis; Weights; Polarization; Fringe Fitting Practice; Phase Slopes in Time and Frequency; Phase-Cals; Sidebands; Delay and Rate Fits; Signal-to-Noise Ratios; Delay and Rate Windows; Details of Global Fringe Fitting; Multi- and Single-band Delays; Phase-Cal Errors; Calibrator Sources; Solution Interval; Weights; Source Model; Suggested Procedure; Bandwidth Synthesis
Forbes, Thomas P; Najarro, Marcela
2016-07-21
The discriminative potential of an ion mobility spectrometer (IMS) for trace detection of illicit narcotics relative to environmental background was investigated with a receiver operating characteristic (ROC) curve framework. The IMS response of cocaine, heroin, methamphetamine, 3,4-methylenedioxymethamphetamine (MDMA), and Δ(9)-tetrahydro-cannabinol (THC) was evaluated against environmental background levels derived from the screening of incoming delivery vehicles at a federal facility. Over 20 000 samples were collected over a multiyear period under two distinct sets of instrument operating conditions, a baseline mode and an increased desorption/drift tube temperature and sampling time mode. ROC curves provided a quantifiable representation of the interplay between sensitivity (true positive rate, TPR) and specificity (1 - false positive rate, FPR). A TPR of 90% and minimized FPR were targeted as the detection limits of IMS for the selected narcotics. MDMA, THC, and cocaine demonstrated single nanogram sensitivity at 90% TPR and <10% FPR, with improvements to both MDMA and cocaine in the elevated temperature/increased sampling mode. Detection limits in the tens of nanograms with poor specificity (FPR ≈ 20%) were observed for methamphetamine and heroin under baseline conditions. However, elevating the temperature reduced the background in the methamphetamine window, drastically improving its response (90% TPR and 3.8% FPR at 1 ng). On the contrary, the altered mode conditions increased the level of background for THC and heroin, partially offsetting observed enhancements to desorption. The presented framework demonstrated the significant effect environmental background distributions have on sensitivity and specificity.
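The trade-off at a 90%-sensitivity operating point can be sketched with synthetic score distributions (log-normal stand-ins, not IMS amplitudes): pick the threshold that passes 90% of positives and read off the resulting false positive rate against background.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic detector scores: analyte present vs clean-vehicle background.
signal = rng.lognormal(mean=2.0, sigma=0.6, size=5000)
background = rng.lognormal(mean=0.5, sigma=0.8, size=5000)

thr = np.quantile(signal, 0.10)       # threshold passing 90% of positives
tpr = np.mean(signal >= thr)          # true positive rate (sensitivity)
fpr = np.mean(background >= thr)      # false positive rate (1 - specificity)
print(f"threshold={thr:.2f}: TPR={tpr:.1%}, FPR={fpr:.1%}")
```

Sweeping the threshold over all scores and plotting TPR against FPR yields the full empirical ROC curve; the study's instrument-condition comparison amounts to comparing such curves under the two operating modes.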
Wardenaar, Klaas J; Wanders, Rob B K; Roest, Annelieke M; Meijer, Rob R; De Jonge, Peter
2015-06-01
Observed associations between depression following myocardial infarction (MI) and adverse cardiac outcomes could be overestimated due to patients' tendency to over-report somatic depressive symptoms. This study investigated this issue with modern psychometrics, using item response theory (IRT) and person-fit statistics to determine whether the Beck Depression Inventory (BDI) measures depression or something else among MI patients. An IRT model was fit to BDI data of 1135 MI patients. Patients' adherence to this IRT model was investigated with person-fit statistics. Subgroups of "atypical" (low person-fit) and "prototypical" (high person-fit) responders were identified and compared in terms of item-response patterns, psychiatric diagnoses, socio-demographics and somatic factors. In the IRT model, somatic items had lower thresholds than depressive mood/cognition items. Empirically identified "atypical" responders (n = 113) had more depressive mood/cognitions, scored lower on somatic items and more often had a Composite International Diagnostic Interview (CIDI) depressive diagnosis than "prototypical" responders (n = 147). Additionally, "atypical" responders were younger and more likely to smoke. In conclusion, the BDI measures somatic symptoms in most MI patients, but measures depression in a subgroup of patients with atypical response patterns. The presented approach to account for interpersonal differences in item responding could help improve the validity of depression assessments in somatic patients. PMID:25994207
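The study above uses IRT-based person-fit statistics; a simpler nonparametric analogue, counting Guttman errors, conveys the same idea of flagging atypical response patterns. This is a hedged stand-in with invented endorsement rates, not the authors' method.

```python
# Person-fit screening sketch: with items ordered from most- to least-endorsed,
# a "Guttman error" is endorsing a rare item while skipping a more common one.
# High error counts flag atypical responders for closer inspection.

def guttman_errors(response, endorsement_rates):
    """Count item pairs (common, rare) where the rare item is endorsed but the common one is not."""
    # order item indices from most to least frequently endorsed
    order = sorted(range(len(response)), key=lambda i: -endorsement_rates[i])
    errors = 0
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            if response[order[b]] == 1 and response[order[a]] == 0:
                errors += 1
    return errors

rates = [0.8, 0.6, 0.4, 0.2]   # hypothetical population endorsement rate per item
prototypical = [1, 1, 0, 0]    # endorses the common items only
atypical = [0, 0, 1, 1]        # endorses the rare items while skipping common ones
```

The prototypical pattern produces zero errors; the atypical one produces the maximum of four for this item set.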
Probing exoplanet clouds with optical phase curves
Muñoz, Antonio García; Isaak, Kate G.
2015-01-01
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve—from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4–0.5. PMID:26489652
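A homogeneous Lambert-scattering sphere is the simplest stand-in for the cloud maps and multiple-scattering treatment used in the paper. The sketch below assumes that simplification; the albedo and radius ratio are illustrative, not fitted Kepler-7b values.

```python
# Reflected-light phase curve of a Lambertian sphere: the planet/star flux
# ratio scales with the geometric albedo Ag, the (Rp/a)^2 dilution factor,
# and the Lambert phase function of the star-planet-observer angle alpha.
import math

def lambert_phase(alpha):
    """Lambert phase function; alpha in radians (0 = full phase)."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

def phase_curve(ag, rp_over_a, alpha):
    """Planet/star flux ratio for a Lambertian sphere (illustrative inputs)."""
    return ag * rp_over_a**2 * lambert_phase(alpha)
```

At full phase the function equals 1 and the contrast reduces to Ag (Rp/a)^2; at alpha = pi the planet's dayside is hidden and the reflected signal vanishes.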
ERIC Educational Resources Information Center
Phelps, Joshua; Smith, Amanda; Parker, Stephany; Hermann, Janice
2016-01-01
Oklahoma Cooperative Extension Service provided elementary school students with a program that included a noncompetitive physical activity component: circuit training that combined cardiovascular, strength, and flexibility activities without requiring high skill levels. The intent was to improve fitness without focusing on body mass index as an…
A novel approach to fit testing the N95 respirator in real time in a clinical setting.
Or, Peggy; Chung, Joanne; Wong, Thomas
2016-02-01
The instant measurements provided by the Portacount fit-test instrument have been used as the gold standard in predicting the protection of an N95 respirator in a laboratory environment. The conventional Portacount fit-test method, however, cannot deliver real-time measurements of face-seal leakage when the N95 respirator is in use in clinical settings. This research was divided into two stages. Stage 1 involved developing and validating a new quantitative fit-test method called the Personal Respiratory Sampling Test (PRST). In Stage 2, PRST was evaluated in use during nursing activities in clinical settings. Eighty-four participants were divided randomly into four groups and were tested while performing bedside nursing procedures. In Stage 1, a new PRST method was successfully devised and validated. Results of Stage 2 showed that the new PRST method could detect different concentrations and different particle sizes inside the respirator while the wearer performed different nursing activities. This new fit-test method, PRST, can detect face seal leakage of an N95 respirator being worn while the wearer performs clinical activities. Thus, PRST can help ensure that the N95 respirator actually fulfils its function of protecting health-care workers from airborne pathogens. PMID:24828795
Choi, Eunhee; Tang, Fengyan; Kim, Sung-Geun; Turk, Phillip
2016-10-01
This study examined the longitudinal relationships between functional health in later years and three types of productive activities: volunteering, full-time work, and part-time work. Using data from five waves (2000-2008) of the Health and Retirement Study, we applied multivariate latent growth curve modeling to examine the longitudinal relationships among individuals aged 50 or over. Functional health was measured by limitations in activities of daily living. Individuals who volunteered or worked either full time or part time exhibited a slower decline in functional health than nonparticipants. Significant associations were also found between initial functional health and longitudinal changes in productive activity participation. This study provides additional support for the benefits of productive activities later in life; engagement in volunteering and employment is indeed associated with better functional health in middle and old age. PMID:27461262
NASA Astrophysics Data System (ADS)
Dobberschütz, Sören; Böhm, Michael
2010-02-01
The behaviour of a free fluid flow above a porous medium, both separated by a curved interface, is investigated. By carrying out a coordinate transformation, we obtain the description of the flow in a domain with a straight interface. Using periodic homogenisation, the effective behaviour of the transformed partial differential equations in the porous part is given by a Darcy law with non-constant permeability matrix. Then the fluid behaviour at the porous-liquid interface is obtained with the help of generalised boundary-layer functions: Whereas the velocity in normal direction is continuous across the interface, a jump appears in tangential direction. Its magnitude seems to be related to the slope of the interface. Therefore the results indicate a generalised law of Beavers and Joseph.
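The tangential jump described above generalises the classical Beavers-Joseph interface condition. For orientation, a sketch of the flat-interface form (the paper derives a curvature-dependent analogue; the notation here is assumed, not the authors'):

```latex
% Classical Beavers--Joseph slip condition at a flat fluid/porous interface
% Gamma; u_tau is the tangential free-fluid velocity, u_{D,tau} the Darcy
% velocity, K the permeability and alpha a dimensionless slip coefficient.
\frac{\partial u_\tau}{\partial n}\bigg|_{\Gamma}
  = \frac{\alpha}{\sqrt{K}}\,\bigl(u_\tau - u_{\mathrm{D},\tau}\bigr),
\qquad u_n \ \text{continuous across } \Gamma .
```

In the curved-interface setting of the paper, the effective slip coefficient additionally depends on the interface geometry, which is the source of the slope-dependent jump magnitude reported above.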
Testing MONDian dark matter with galactic rotation curves
Edmonds, Doug; Farrah, Duncan; Minic, Djordje; Takeuchi, Tatsu; Ho, Chiu Man; Ng, Y. Jack
2014-09-20
MONDian dark matter (MDM) is a new form of dark matter quantum that naturally accounts for Milgrom's scaling, usually associated with modified Newtonian dynamics (MOND), and theoretically behaves like cold dark matter (CDM) at cluster and cosmic scales. In this paper, we provide the first observational test of MDM by fitting rotation curves to a sample of 30 local spiral galaxies (z ≈ 0.003). For comparison, we also fit the galactic rotation curves using MOND and CDM. We find that all three models fit the data well. The rotation curves predicted by MDM and MOND are virtually indistinguishable over the range of observed radii (∼1 to 30 kpc). The best-fit MDM and CDM density profiles are compared. We also compare with MDM the dark matter density profiles arising from MOND if Milgrom's formula is interpreted as Newtonian gravity with an extra source term instead of as a modification of inertia. We find that discrepancies between MDM and MOND will occur near the center of a typical spiral galaxy. In these regions, instead of continuing to rise sharply, the MDM mass density turns over and drops as we approach the center of the galaxy. Our results show that MDM, which restricts the nature of the dark matter quantum by accounting for Milgrom's scaling, accurately reproduces observed rotation curves.
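The MOND comparison curves above follow from Milgrom's scaling. A toy rotation curve for a point-mass galaxy, using the commonly quoted "simple" interpolating function, illustrates the flattening; the mass and the use of a point source are assumptions for illustration, whereas the paper fits real mass profiles.

```python
# MOND rotation-curve sketch: Newtonian acceleration g_N = GM/r^2 is boosted
# by the "simple" interpolating function nu(y) = 1/2 + sqrt(1/4 + 1/y) with
# y = g_N/a0. In the deep-MOND limit the circular speed tends to (G M a0)^(1/4).
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
A0 = 1.2e-10    # Milgrom's acceleration scale, m s^-2

def v_circ(r, mass):
    """Circular speed (m/s) at radius r (m) around a point mass (kg) in MOND."""
    g_newton = G * mass / r**2
    y = g_newton / A0
    nu = 0.5 + math.sqrt(0.25 + 1.0 / y)
    return math.sqrt(g_newton * nu * r)

M = 1.0e41                     # ~5e10 solar masses, illustrative
kpc = 3.086e19                 # metres per kiloparsec
v_flat = (G * M * A0) ** 0.25  # deep-MOND asymptotic speed, ~168 km/s here
```

Evaluating v_circ at tens of kiloparsecs shows the curve settling onto v_flat rather than falling off Keplerian-style.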
Testing MONDian Dark Matter with Galactic Rotation Curves
NASA Astrophysics Data System (ADS)
Farrah, Duncan; Edmonds, Doug; Ho, Chiu Man; Minic, Djordje; Ng, Jack; Takeuchi, Tatsu
2015-01-01
MONDian dark matter (MDM) is a new form of dark matter quantum that naturally accounts for Milgrom's scaling, usually associated with modified Newtonian dynamics (MOND), and theoretically behaves like cold dark matter (CDM) at cluster and cosmic scales. In this paper, we provide the first observational test of MDM by fitting rotation curves to a sample of 30 local spiral galaxies (z=0.003). For comparison, we also fit the galactic rotation curves using MOND and CDM. We find that all three models fit the data well. The rotation curves predicted by MDM and MOND are virtually indistinguishable over the range of observed radii (1 to 30 kpc). The best-fit MDM and CDM density profiles are compared. We also compare with MDM the dark matter density profiles arising from MOND if Milgrom's formula is interpreted as Newtonian gravity with an extra source term instead of as a modification of inertia. We find that discrepancies between MDM and MOND will occur near the center of a typical spiral galaxy. In these regions, instead of continuing to rise sharply, the MDM mass density turns over and drops as we approach the center of the galaxy. Our results show that MDM, which restricts the nature of the dark matter quantum by accounting for Milgrom's scaling, accurately reproduces observed rotation curves.
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression through cubic spline approximation is taken and extended by investigating the additional compressibility achievable through exploiting curve-to-curve structure. One of the models under investigation is reported on.
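The along-curve compression idea can be sketched as: keep a sparse subset of contour samples and reconstruct the rest with a piecewise-cubic interpolant. Piecewise-cubic Hermite interpolation with finite-difference slopes is used below as a stand-in for the paper's spline scheme, on a synthetic contour.

```python
# Compress a densely sampled contour by keeping every 10th point, then
# reconstruct with a cubic Hermite interpolant and measure the worst error.
import math

def hermite_reconstruct(xs, ys, query):
    """Evaluate a cubic Hermite interpolant through knots (xs, ys) at query points."""
    # finite-difference slopes at the knots (one-sided at the ends)
    m = [0.0] * len(xs)
    for i in range(len(xs)):
        lo, hi = max(i - 1, 0), min(i + 1, len(xs) - 1)
        m[i] = (ys[hi] - ys[lo]) / (xs[hi] - xs[lo])
    out = []
    for x in query:
        i = max(0, min(len(xs) - 2, max(j for j in range(len(xs) - 1) if xs[j] <= x)))
        h = xs[i + 1] - xs[i]
        t = (x - xs[i]) / h
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        out.append(h00 * ys[i] + h10 * h * m[i] + h01 * ys[i + 1] + h11 * h * m[i + 1])
    return out

# dense contour: 101 samples of a sine arc; keep only 11 knots (~9x compression)
dense_x = [i * 2 * math.pi / 100 for i in range(101)]
dense_y = [math.sin(x) for x in dense_x]
knot_x, knot_y = dense_x[::10], dense_y[::10]
recon = hermite_reconstruct(knot_x, knot_y, dense_x)
max_err = max(abs(a - b) for a, b in zip(recon, dense_y))
```

Here a 9:1 reduction in stored points reconstructs the contour to within about a percent of its amplitude; curve-to-curve exploitation, as in the paper, would compress the knots themselves across neighbouring contours.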
Ema, T
1992-01-01
In general, most vehicles can be modelled as multi-variable systems with interacting variables. An aircraft's velocity and altitude responses to stick and/or throttle control clearly exhibit such interaction. In particular, if the flight condition falls on the backside of the drag curve during approach and landing of an STOL aircraft, the ratio of drag variation to velocity change becomes negative (ΔD/Δu < 0) and the system exhibits non-minimum-phase behaviour. The interaction between the velocity and altitude responses then becomes so complicated that it affects the pilot's control actions, and the STOL aircraft may be difficult to control at approach and landing. In this paper, experimental results on a pilot's ability to control an STOL aircraft are presented for a multi-variable manual control system using a fixed-base ground simulator, and the pilot's control ability on the backside of the drag curve at approach and landing is discussed.
NASA Astrophysics Data System (ADS)
Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.
2009-04-01
The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling, and few previous studies have addressed the water balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the Generalized Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and to handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that uncertainties in precipitation data and in model structure also have a significant effect on predictability.
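The limits-of-acceptability evaluation reduces to a per-time-step containment check. A minimal sketch, with an illustrative ±20% band standing in for the rating-curve-derived limits (the actual limits in the study come from the fuzzy rating regression):

```python
# Score a simulated discharge series by the fraction of time steps whose
# value falls inside the observation uncertainty band for that step.
# Bands and series below are illustrative stand-ins.

def containment_fraction(simulated, lower, upper):
    inside = sum(1 for s, lo, hi in zip(simulated, lower, upper) if lo <= s <= hi)
    return inside / len(simulated)

observed = [10.0, 12.0, 30.0, 22.0, 15.0]    # discharge, m^3/s
lower = [o * 0.8 for o in observed]          # assumed -20% rating uncertainty
upper = [o * 1.2 for o in observed]          # assumed +20% rating uncertainty
simulated = [9.5, 13.1, 24.5, 21.0, 19.0]    # one candidate model run
score = containment_fraction(simulated, lower, upper)
```

A run scoring 0.75-0.80 here corresponds to the "within the limits 75-80% of the time" behaviour reported for the best simulations.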
NASA Astrophysics Data System (ADS)
Ravenna, M.; Lebedev, S.
2015-12-01
A reliable approach to quantify non-uniqueness and to provide error estimates in nonlinear inversion problems, such as the inversion of surface-wave dispersion curves for the seismic velocity structure of the Earth, is Monte Carlo sampling in a Bayesian statistical framework. We develop a Markov chain Monte Carlo (MCMC) method for joint inversion of Rayleigh- and Love-wave dispersion curves that yields robust radially and azimuthally anisotropic shear-velocity profiles, with resolution down to transition-zone depths. The inversion technique does not involve any linearization procedure or strong a priori bounds around a reference model. In a fixed-dimensional Bayesian formulation, we set the number of parameters relatively high, with a denser parametrization in the uppermost mantle in order to resolve the Lithosphere-Asthenosphere Boundary region well. We apply the MCMC algorithm to the inversion of surface-wave phase velocities accurately determined over broad period ranges in a few test regions. In the Baikal-Mongolia region we invert Rayleigh- and Love-wave dispersion curves for the radially anisotropic structure (Vsv, Vsh) of the crust and upper mantle. In the Tuscany region, where we have phase-velocity data with good azimuthal coverage, a different implementation of the algorithm is applied that resolves azimuthal anisotropy; the Rayleigh-wave dispersion curves measured at different azimuths have been inverted for the Vsv structure and the depth distribution of the 2ψ azimuthal anisotropy of the region, with good resolution down to asthenospheric and transition-zone depths.
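The sampling engine behind such an inversion is the Metropolis rule. Below is a deliberately trivial one-parameter toy (a constant "phase velocity" fit to fake data with Gaussian noise), not the authors' parametrization or forward model, to show the accept/reject mechanics.

```python
# Minimal Metropolis sampler: one unknown shear velocity, Gaussian misfit
# between "observed" and predicted phase velocities. The forward model is a
# placeholder (prediction == vs at every period); real dispersion forward
# modelling would replace log_likelihood.
import math
import random

random.seed(42)

observed = [3.1, 3.3, 3.2, 3.25, 3.15]   # fake phase velocities, km/s
sigma = 0.1                              # assumed data noise, km/s

def log_likelihood(vs):
    return -0.5 * sum(((c - vs) / sigma) ** 2 for c in observed)

def metropolis(n_steps, start=3.0, step=0.05):
    samples, current, cur_ll = [], start, log_likelihood(start)
    for _ in range(n_steps):
        proposal = current + random.gauss(0.0, step)
        prop_ll = log_likelihood(proposal)
        if math.log(random.random()) < prop_ll - cur_ll:   # Metropolis accept
            current, cur_ll = proposal, prop_ll
        samples.append(current)
    return samples

chain = metropolis(20000)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
```

The retained samples approximate the posterior without any linearization, which is the property the abstract emphasises; posterior width, not just the mean, comes for free from the chain.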
Muñoz-Mas, Rafael; Martínez-Capel, Francisco; Schneider, Matthias; Mouton, Ans M
2012-12-01
The implementation of the Water Framework Directive implies the determination of an environmental flow (E-flow) in each running water body. In Spain, many minimum flow assessments were performed with the physical habitat simulation system based on univariate habitat suitability curves. Multivariate habitat suitability models, widely applied in habitat assessment, are potentially more accurate than univariate suitability models. This article analyses microhabitat selection by medium-sized (10-20 cm) brown trout (Salmo trutta fario) in three streams of the Jucar River Basin District (eastern Iberian Peninsula). The data were collected with an equal-effort sampling approach. Univariate habitat suitability curves were built with a data-driven process for depth, mean velocity and substrate classes; three types of data-driven fuzzy models were generated with the FISH software: two presence-absence models and one abundance model. FISH applies a hill-climbing algorithm to optimize the fuzzy rules. A hydraulic model was calibrated with the tool River-2D in a segment of the Cabriel River (Jucar River Basin). The fuzzy-logic models and three methods to produce a suitability index from the three univariate curves were applied to evaluate the river habitat in the tool CASiMiR©. The comparison of results was based on the spatial arrangement of habitat suitability and the curves of weighted usable area versus discharge. The differences were relevant in several respects, e.g. in the estimated minimum environmental flow according to the Spanish legal norm for hydrological planning. This work demonstrates the impact of model selection on habitat suitability modelling and the assessment of environmental flows, based on an objective data-driven procedure; the conclusions are important for water management in the Jucar River Basin and other river systems in Europe, where environmental flows are a keystone for the achievement of the goals established
Laffosse, Jean-Michel; Chiron, Philippe; Accadbled, Franck; Molinier, François; Tricoire, Jean-Louis; Puget, Jean
2006-12-01
We analysed the learning curve of an anterolateral minimally invasive (ALMI) approach for primary total hip replacement (THR). The first 42 THRs with large-diameter heads implanted through this approach (group 1) were compared to a cohort of 58 THRs with a 28-mm head performed through a standard-incision posterior approach (group 2). No selection was made and the groups were comparable. Implant positioning as well as early clinical results were satisfactory and comparable in the two groups. In group 1, the rate of intraoperative complications was significantly higher (greater trochanter fracture in 4 cases, cortical perforation in 3 cases, calcar fracture in one case, nerve palsy in one case, secondary tilting of the metal back in 2 cases) than in group 2 (one nerve palsy and one calcar crack). At 6 months, one revision of the acetabular cup was performed in group 1 for persistent pain, whereas in group 2 we noted 3 dislocations (2 of which were revised) and 2 periprosthetic femoral fractures. Our study showed a high rate of intra- and perioperative complications during the learning curve of an ALMI approach. These are more likely to occur in obese or osteoporotic patients, and in those with bulky muscles or very stiff hips. Postoperative complications were rare. The early clinical results are excellent, and we may expect to achieve better results with a more standardised procedure. During the initial period of the learning curve, it would be preferable to select patients with an appropriate morphology.
Carvalho, Humberto M
2015-01-01
The aim of this paper was to outline a multilevel modeling approach to fit individual angle-specific torque curves describing concentric knee extension and flexion isokinetic muscular actions in Master athletes. The potential of the analytical approach to examine between-individual differences across the angle-specific torque curves was illustrated, including between-individual variation due to gender differences at a higher level. Torques in concentric muscular actions of knee extension and flexion at 60º·s−1 were considered within a range of motion between 5º and 85º (only torques that were "truly" isokinetic). Multilevel time-series models with autoregressive covariance structures were superior fits compared with standard multilevel models for repeated measures in fitting angle-specific torque curves. Third- and fourth-order polynomial models were the best fits to describe angle-specific torque curves of isokinetic knee flexion and extension concentric actions, respectively. The fixed exponents allow interpretations for initial acceleration, the angle at peak torque and the decrement of torque after peak torque. The multilevel models were also flexible enough to illustrate the influence of gender differences on the shape of torque production throughout the range of motion. The presented multilevel regression models may afford a general framework to examine angle-specific torque curves obtained by isokinetic dynamometry, and add to the understanding of the mechanisms of strength development, particularly the force-length relationship, both related to performance and injury prevention. PMID:26839603
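The fixed part of such models is a polynomial in joint angle. A single-athlete sketch of that step, with synthetic torque data and a pure-Python least-squares fit (the multilevel random effects are beyond this illustration):

```python
# Fit a 4th-order polynomial torque-angle curve (angles rescaled to [0, 1]
# for numerical conditioning) and locate the angle at peak torque.

def polyfit(xs, ys, degree):
    """Least-squares polynomial coefficients (ascending powers) via normal equations."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def poly_eval(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

# synthetic knee-extension torques (N·m) peaking near 60 degrees
angles = [a / 85.0 for a in range(5, 86, 5)]
torques = [100.0 - 150.0 * (a - 60.0 / 85.0) ** 2 for a in angles]
coef = polyfit(angles, torques, 4)
peak_angle = max((a for a in range(5, 86)), key=lambda a: poly_eval(coef, a / 85.0))
```

The fitted coefficients carry the interpretations named in the abstract: the low-order terms shape the initial rise, and the stationary point gives the angle at peak torque.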
Rickard, K A; Gallahue, D L; Gruen, G E; Tridle, M; Bewley, N; Steele, K
1995-10-01
An alternative paradigm for nutrition and fitness education centers on understanding and developing skill in implementing a play approach to learning about healthful eating and promoting active play in the context of the child, the family, and the school. The play approach is defined as a process for learning that is intrinsically motivated, enjoyable, freely chosen, nonliteral, safe, and actively engaged in by young learners. Making choices, assuming responsibility for one's decisions and actions, and having fun are inherent components of the play approach to learning. In this approach, internal cognitive transactions and intrinsic motivation are the primary forces that ultimately determine healthful choices and life habits. Theoretical models of children's learning--the dynamic systems theory and the cognitive-developmental theory of Jean Piaget--provide a theoretical basis for nutrition and fitness education in the 21st century. The ultimate goal is to develop partnerships of children, families, and schools in ways that promote the well-being of children and translate into healthful life habits. The play approach is an ongoing process of learning that is applicable to learners of all ages. PMID:7560683
PRO_LIGAND: an approach to de novo molecular design. 6. Flexible fitting in the design of peptides.
Murray, C W; Clark, D E; Byrne, D G
1995-10-01
This paper describes the further development of the functionality of our in-house de novo design program, PRO_LIGAND. In particular, attention is focused on the implementation and validation of the 'directed tweak' method for the construction of conformationally flexible molecules, such as peptides, from molecular fragments. This flexible fitting method is compared to the original method based on libraries of prestored conformations for each fragment. It is shown that the directed tweak method produces results of comparable quality, with significant time savings. By removing the need to generate a set of representative conformers for any new library fragment, the flexible fitting method increases the speed and simplicity with which new fragments can be included in a fragment library and also reduces the disk space required for library storage. A further improvement to the molecular construction process within PRO_LIGAND is the inclusion of a constrained minimisation procedure which relaxes fragments onto the design model and can be used to reject highly strained structures during the structure generation phase. This relaxation is shown to be very useful in simple test cases, but restricts diversity for more realistic examples. The advantages and disadvantages of these additions to the PRO_LIGAND methodology are illustrated by three examples: similar design to an alpha-helix region of dihydrofolate reductase, complementary design to the active site of HIV-1 protease and similar design to an epitope region of lysozyme. PMID:8594156
NASA Astrophysics Data System (ADS)
Tarana, Michal; Čurík, Roman
2016-05-01
We introduce a computational method developed for the study of long-range molecular Rydberg states of systems that can be approximated by two electrons in a model potential of the atomic cores. The method is based on a two-electron R-matrix approach inside a sphere centered on one of the atoms. The wave function is then connected to a Coulomb region outside the sphere via a multichannel version of the Coulomb Green's function. This approach is applied to a study of Rydberg states of Rb2 for internuclear separations R from 40 to 320 bohrs and energies corresponding to n from 7 to 30. We report bound states associated with the low-lying 3Po resonance and with the virtual state of the rubidium atom that turn into ion-pair-like bound states in the Coulomb potential of the atomic Rydberg core. The results are compared with previous calculations based on single-electron models employing a zero-range contact potential and a short-range model potential. This work was supported by the Czech Science Foundation (Project No. P208/14-15989P).
Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun
2015-01-01
There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as “reliable” or “unreliable” based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance (1H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named “cluster-aided MCR-ALS,” will facilitate the attainment of more reliable results in the metabolomics datasets. PMID:26531245
Triballeau, Nicolas; Acher, Francine; Brabet, Isabelle; Pin, Jean-Philippe; Bertrand, Hugues-Olivier
2005-04-01
The "receiver operating characteristic" (ROC) curve method is a well-recognized metric used as an objective way to evaluate the ability of a given test to discriminate between two populations. This facilitates decision-making in a plethora of fields in which a wrong judgment may have serious consequences, including clinical diagnosis, public safety, travel security, and economic strategies. When virtual screening is used to speed up the drug discovery process in pharmaceutical research, making the right decision when selecting or discarding a molecule prior to in vitro evaluation is of paramount importance. Characterizing both the ability of a virtual screening workflow to select active molecules and its ability to discard inactive ones, the ROC curve approach is well suited for this critical decision gate. As a case study, the first virtual screening workflow focused on metabotropic glutamate receptor subtype 4 (mGlu4R) agonists is reported here. Six compounds out of 38 selected and tested in vitro were shown to have agonist activity on this target of therapeutic interest.
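For ranking-based virtual screening, the ROC curve is often summarised by its area (AUC): the probability that a randomly chosen active ranks ahead of a randomly chosen inactive. A pairwise-comparison sketch with invented docking scores (lower = better, as in many scoring functions):

```python
# AUC by direct pairwise comparison of actives vs. inactives; ties count 0.5.
# Equivalent to the Mann-Whitney U statistic normalised by the pair count.

def roc_auc(actives, inactives, lower_is_better=True):
    wins = 0.0
    for a in actives:
        for d in inactives:
            if a == d:
                wins += 0.5
            elif (a < d) == lower_is_better:
                wins += 1.0
    return wins / (len(actives) * len(inactives))

actives = [-9.2, -8.7, -8.9, -7.5, -8.1, -6.9]     # hypothetical scores
inactives = [-6.5, -7.0, -5.9, -6.2, -7.8, -6.4]
auc = roc_auc(actives, inactives)
```

An AUC of 0.5 is random ranking; values approaching 1 indicate the workflow reliably places actives ahead of inactives, which is the property exploited at the selection gate described above.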
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
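The final step described above, fitting recovered concentration profiles to a kinetic model, can be sketched for the simplest case: a first-order decay linearised as ln c = ln c0 - k t and solved by least squares. This is a hedged stand-in with synthetic data; the epoxy-amine system in the study requires a richer (autocatalytic) model.

```python
# Recover the rate constant k from a concentration profile c(t) = c0*exp(-k t)
# by log-linear least squares, mimicking the "fit recovered profiles to a
# kinetic model" step after MCR-ALS.
import math

def fit_first_order(times, concentrations):
    """Return (c0, k) from a log-linear least-squares fit."""
    xs, ys = times, [math.log(c) for c in concentrations]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return math.exp(mean_y - slope * mean_x), -slope

k_true, c0_true = 0.15, 2.0
times = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
profile = [c0_true * math.exp(-k_true * t) for t in times]   # noiseless profile
c0_fit, k_fit = fit_first_order(times, profile)
```

With noiseless synthetic data the fit recovers c0 and k essentially exactly; with real MCR-ALS profiles the residuals would instead guide the choice of kinetic model, as the abstract notes.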
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2014-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) on a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive due to the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using Mass Differential Tags for Relative and Absolute Quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass-difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective due to their synthetic simplicity, and increase throughput compared to previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), while the second enables standard curve creation and analyte quantification in one run (<8% error). PMID:25377360
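The one-run quantification described above boils down to a four-point standard curve read back at the unknown's peak area. A minimal sketch with illustrative (not measured) areas and spike amounts:

```python
# Fit a linear standard curve (peak area vs. spiked amount) through four
# standard channels, then invert it at the unknown channel's peak area.

def linear_fit(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# four standard-channel spikes (fmol) and their integrated peak areas
amounts = [10.0, 50.0, 100.0, 500.0]
areas = [2.1e4, 9.8e4, 2.02e5, 9.9e5]
slope, intercept = linear_fit(amounts, areas)

unknown_area = 4.0e5
unknown_amount = (unknown_area - intercept) / slope   # read back through the curve
```

Because all five channels are co-injected, the curve and the unknown share one chromatographic run, which is the throughput gain the abstract highlights.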
Qiu, Xiao-han; Zhang, Yu-jun; Yin, Gao-fang; Shi, Chao-yi; Yu, Xiao-ya; Zhao, Nan-jing; Liu, Wen-qing
2015-08-01
The fast chlorophyll fluorescence induction curve contains rich information of photosynthesis. It can reflect various information of vegetation, such as, the survival status, the pathological condition and the physiology trends under the stress state. Through the acquisition of algae fluorescence and induced optical signal, the fast phase of chlorophyll fluorescence kinetics curve was fitted. Based on least square fitting method, we introduced adaptive minimum error approaching method for fast multivariate nonlinear regression fitting toward chlorophyll fluorescence kinetics curve. We realized Fo (fixedfluorescent), Fm (maximum fluorescence yield), σPSII (PSII functional absorption cross section) details parameters inversion and the photosynthetic parameters inversion of Chlorella pyrenoidosa. And we also studied physiological variation of Chlorella pyrenoidosa under the stress of Cu(2+). PMID:26672292
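As a rough sketch of this kind of nonlinear regression, the fast fluorescence rise can be approximated by a single-exponential model and fitted by ordinary least squares. This does not reproduce the paper's adaptive minimum-error algorithm; the model form, parameter values, and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified single-exponential model of the fast fluorescence rise:
# F(t) = Fo + (Fm - Fo) * (1 - exp(-k * t)).
# Fo, Fm, and k are illustrative stand-ins for the inverted parameters.
def induction(t, Fo, Fm, k):
    return Fo + (Fm - Fo) * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 2.0, 200)                      # time (ms), synthetic
rng = np.random.default_rng(0)
data = induction(t, 0.2, 1.0, 5.0) + rng.normal(0, 0.01, t.size)

popt, pcov = curve_fit(induction, t, data, p0=[0.1, 0.8, 1.0])
Fo_hat, Fm_hat, k_hat = popt
print(f"Fo={Fo_hat:.3f}  Fm={Fm_hat:.3f}  k={k_hat:.3f}")
```

With good starting values, `curve_fit` recovers the three parameters; a real OJIP curve would need a multi-phase model rather than this single exponential.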
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liang, Chunlei
2015-08-01
This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach for the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method while remaining very efficient in terms of computational cost. This novel sliding-mesh interface method is well suited to parallel processing with domain decomposition and can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft and wind turbines, and oscillating-wing power generators.
NASA Technical Reports Server (NTRS)
Kiser, J. Douglas; Singh, Mrityunjay; Lei, Jin-Fen; Martin, Lisa C.
1999-01-01
A novel attachment approach for positioning sensor lead wires on silicon carbide-based monolithic ceramic and fiber reinforced ceramic matrix composite (FRCMC) components has been developed. This approach is based on an affordable, robust ceramic joining technology, named ARCJoinT, which was developed for the joining of silicon carbide-based ceramics and fiber reinforced composites. The ARCJoinT technique has previously been shown to produce joints with tailorable thickness and good high temperature strength. In this study, silicon carbide-based ceramic and FRCMC attachments of different shapes and sizes were joined onto silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) composites having flat and curved surfaces. Based on results obtained in previous joining studies, the joined attachments should maintain their mechanical strength and integrity at temperatures up to 1350 C in air. Therefore they can be used to position and secure sensor lead wires on SiC/SiC components that are being tested in programs focused on developing FRCMCs for a number of demanding high temperature applications in aerospace and ground-based systems. This approach, which is suitable for installing attachments on large and complex shaped monolithic ceramic and composite components, should enhance the durability of minimally intrusive high temperature sensor systems. The technology could also be used to reinstall attachments on ceramic components that were damaged in service.
The Circle of Trust[R] Approach and a Counselor Training Program: A Hand in Glove Fit
ERIC Educational Resources Information Center
Goodell, Judith A.
2012-01-01
The Circle of Trust[R] approach (www.couragerenewal.org) is dedicated to principles and practices that support exploration of the inner landscape of one's life. Participants share time in a trustworthy environment, connect with inner wisdom, and seek harmony in their personal and professional selves. In this chapter, the author describes her…
Wu, Xin-bo; Fan, Guo-xin; Gu, Xin; Shen, Tu-gang; Guan, Xiao-fei; Hu, An-nan; Zhang, Hai-long; He, Shi-sheng
2016-01-01
Objectives: This study aimed to compare the learning curves of percutaneous endoscopic lumbar discectomy (PELD) in a transforaminal approach at the L4/5 and L5/S1 levels. Methods: We retrospectively reviewed the first 60 cases at the L4/5 level (Group I) and the first 60 cases at the L5/S1 level (Group II) of PELD performed by one spine surgeon. The patients were divided into subgroups A, B, and C (Group I: A cases 1–20, B cases 21–40, C cases 41–60; Group II: A cases 1–20, B cases 21–40, C cases 41–60). Operation time was thoroughly analyzed. Results: Compared with the L4/5 level, the learning curve of transforaminal PELD at the L5/S1 level was flatter. The mean operation times of Groups IA, IB, and IC were (88.75±17.02), (67.75±6.16), and (64.85±7.82) min, respectively. There was a significant difference between Groups A and B (P<0.05), but no significant difference between Groups B and C (P=0.20). The mean operation times of Groups IIA, IIB, and IIC were (117.25±13.62), (109.50±11.20), and (92.15±11.94) min, respectively. There was no significant difference between Groups A and B (P=0.06), but there was a significant difference between Groups B and C (P<0.05). There were 6 cases of postoperative dysesthesia (POD) in Group I and 2 cases in Group IIA (P=0.27). There were 2 cases of residual disc in Group I, and 4 cases in Group II (P=0.67). There were 3 cases of recurrence in Group I, and 2 cases in Group II (P>0.05). Conclusions: Compared with the L5/S1 level, the learning curve of PELD in a transforaminal approach at the L4/5 level was steeper, suggesting that the L4/5 level might be easier to master after short-term professional training. PMID:27381732
Langevin Equation on Fractal Curves
NASA Astrophysics Data System (ADS)
Satin, Seema; Gangal, A. D.
2016-07-01
We analyze the random motion of a particle on a fractal curve using a Langevin approach. This involves defining a new velocity in terms of the mass of the fractal curve, as defined in recent work. The geometry of the fractal curve plays an important role in this analysis. A Langevin equation with a particular model of noise is proposed and solved using techniques of the Fα-Calculus.
Christensen, S. W.; Goodyear, C. P.; Kirk, B. L.
1982-03-01
This report addresses the validity of the utilities' use of the Ricker stock-recruitment model to extrapolate the combined entrainment-impingement losses of young fish to reductions in the equilibrium population size of adult fish. In our testimony, a methodology was developed and applied to address a single fundamental question: if the Ricker model really did apply to the Hudson River striped bass population, could the utilities' curve-fitting estimates of the parameter alpha (which controls the impact) be considered reliable? In addition, an analysis is included of the efficacy of an alternative means of estimating alpha, termed the technique of prior estimation of beta (used by the utilities in a report prepared for regulatory hearings on the Cornwall Pumped Storage Project). This validation methodology should also be useful in evaluating inferences drawn in the literature from fits of stock-recruitment models to data obtained from other fish stocks.
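For concreteness, the Ricker model R = αS·exp(−βS) can be fitted by linearizing to ln(R/S) = ln α − βS and applying ordinary least squares, one common curve-fitting route to α. The synthetic data below are illustrative and unrelated to the Hudson River analysis.

```python
import numpy as np

# Ricker stock-recruitment model: R = alpha * S * exp(-beta * S).
# A generic illustration of estimating alpha by curve fitting; it does not
# reproduce the report's validation methodology or any real fishery data.
rng = np.random.default_rng(1)
alpha_true, beta_true = 4.0, 0.002
S = np.linspace(100.0, 1500.0, 40)                  # spawning stock (synthetic)
R = alpha_true * S * np.exp(-beta_true * S) * rng.lognormal(0.0, 0.05, S.size)

# Linearize: ln(R/S) = ln(alpha) - beta * S, then ordinary least squares.
y = np.log(R / S)
slope, intercept = np.polyfit(S, y, 1)
alpha_hat, beta_hat = np.exp(intercept), -slope
print(f"alpha={alpha_hat:.2f}  beta={beta_hat:.5f}")
```

The linearization makes the error structure multiplicative, which is itself an assumption; the report's point is precisely that such curve-fitting estimates of alpha can be unreliable.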
Duun Rohde, Palle; Krag, Kristian; Loeschcke, Volker; Overgaard, Johannes; Sørensen, Peter; Nygaard Kristensen, Torsten
2016-01-01
The ability of natural populations to withstand environmental stresses relies partly on their adaptive ability. In this study, we used a subset of the Drosophila Genetic Reference Panel, a population of inbred, genome-sequenced lines derived from a natural population of Drosophila melanogaster, to investigate whether this population harbors genetic variation for a set of stress resistance and life history traits. Using a genomic approach, we found substantial genetic variation for metabolic rate, heat stress resistance, expression of a major heat shock protein, and egg-to-adult viability investigated at a benign and a higher stressful temperature. This suggests that these traits will be able to evolve. In addition, we outline an approach to conduct pathway associations based on genomic linear models, which has potential to identify adaptive genes and pathways, and therefore can be a valuable tool in conservation genomics. PMID:27274984
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
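A minimal frequentist sketch of the N-mixture likelihood (the paper studies a Bayesian fit) marginalizes the latent abundance N at each site up to a truncation bound K. All simulation settings below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

# Royle's (2004) N-mixture model: N_i ~ Poisson(lam), y_ij ~ Binomial(N_i, p).
# Maximum-likelihood sketch; settings are illustrative, not from the paper.
rng = np.random.default_rng(2)
lam_true, p_true, n_sites, n_reps = 5.0, 0.4, 60, 4
N = rng.poisson(lam_true, n_sites)                  # latent abundances
y = rng.binomial(N[:, None], p_true, (n_sites, n_reps))  # replicated counts

def neg_loglik(theta, y, K=60):
    lam = np.exp(theta[0])                          # unconstrained -> positive
    p = 1.0 / (1.0 + np.exp(-theta[1]))             # unconstrained -> (0, 1)
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)                    # P(N = n)
    ll = 0.0
    for yi in y:                                    # marginalize N per site
        like = np.prod(binom.pmf(yi[:, None], Ns, p), axis=0)
        ll += np.log(np.dot(like, prior) + 1e-300)
    return -ll

res = minimize(neg_loglik, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
print(f"lambda={lam_hat:.2f}  p={p_hat:.2f}")
```

The closure assumption enters through the shared N_i across replicates; pseudo-replication, as studied in the paper, violates exactly that structure.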
Bank, Claudia; Hietpas, Ryan T.; Wong, Alex; Bolon, Daniel N.; Jensen, Jeffrey D.
2014-01-01
The role of adaptation in the evolutionary process has been contentious for decades. At the heart of the century-old debate between neutralists and selectionists lies the distribution of fitness effects (DFE)—that is, the selective effect of all mutations. Attempts to describe the DFE have been varied, occupying theoreticians and experimentalists alike. New high-throughput techniques stand to make important contributions to empirical efforts to characterize the DFE, but the usefulness of such approaches depends on the availability of robust statistical methods for their interpretation. We here present and discuss a Bayesian MCMC approach to estimate fitness from deep sequencing data and use it to assess the DFE for the same 560 point mutations in a coding region of Hsp90 in Saccharomyces cerevisiae across six different environmental conditions. Using these estimates, we compare the differences in the DFEs resulting from mutations covering one-, two-, and three-nucleotide steps from the wild type—showing that multiple-step mutations harbor more potential for adaptation in challenging environments, but also tend to be more deleterious in the standard environment. All observations are discussed in the light of expectations arising from Fisher’s geometric model. PMID:24398421
Gilkey, Roderick; Kilts, Clint
2007-11-01
Recent neuroscientific research shows that the health of your brain isn't, as experts once thought, just the product of childhood experiences and genetics; it reflects your adult choices and experiences as well. Professors Gilkey and Kilts of Emory University's medical and business schools explain how you can strengthen your brain's anatomy, neural networks, and cognitive abilities, and prevent functions such as memory from deteriorating as you age. The brain's alertness is the result of what the authors call cognitive fitness, a state of optimized ability to reason, remember, learn, plan, and adapt. Certain attitudes, lifestyle choices, and exercises enhance cognitive fitness. Mental workouts are the key. Brain-imaging studies indicate that acquiring expertise in areas as diverse as playing a cello, juggling, speaking a foreign language, and driving a taxicab expands your neural systems and makes them more communicative. In other words, you can alter the physical makeup of your brain by learning new skills. The more cognitively fit you are, the better equipped you are to make decisions, solve problems, and deal with stress and change. Cognitive fitness will help you be more open to new ideas and alternative perspectives. It will give you the capacity to change your behavior and realize your goals. You can delay senescence for years and even enjoy a second career. Drawing from the rapidly expanding body of neuroscience research as well as from well-established research in psychology and other mental health fields, the authors have identified four steps you can take to become cognitively fit: understand how experience makes the brain grow, work hard at play, search for patterns, and seek novelty and innovation. Together these steps capture some of the key opportunities for maintaining an engaged, creative brain. PMID:18159786
Test fittings for dimensionally critical tubes
NASA Technical Reports Server (NTRS)
Hagler, R.
1980-01-01
Method using a lightweight fitting protects tubes and tube stubs during testing and through to final welding. The fitting does not interfere with final welding or brazing like temporary test fittings, and is not heavy like machined-on integral fittings with face-seal O-rings. The fitting approach is adaptable to many types of components, including valves, transducers, and filters.
Jack, B Kelsey; Leimona, Beria; Ferraro, Paul J
2009-04-01
To supply ecosystem services, private landholders incur costs. Knowledge of these costs is critical for the design of conservation-payment programs. Estimating these costs accurately is difficult because the minimum acceptable payment to a potential supplier is private information. We describe how an auction of payment contracts can be designed to elicit this information during the design phase of a conservation-payment program. With an estimate of the ecosystem-service supply curve from a pilot auction, conservation planners can explore the financial, ecological, and socioeconomic consequences of alternative scaled-up programs. We demonstrate the potential of our approach in Indonesia, where soil erosion on coffee farms generates downstream ecological and economic costs. Bid data from a small-scale, uniform-price auction for soil-conservation contracts allowed estimates of the costs of a scaled-up program, the gain from integrating biophysical and economic data to target contracts, and the trade-offs between poverty alleviation and supply of ecosystem services. Our study illustrates an auction-based approach to revealing private information about the costs of supplying ecosystem services. Such information can improve the design of programs devised to protect and enhance ecosystem services.
Ho, Shirley S; Lee, Edmund W J; Ng, Kaijie; Leong, Grace S H; Tham, Tiffany H M
2016-09-01
Based on the influence of presumed media influence (IPMI) model as the theoretical framework, this study examines how injunctive norms and personal norms mediate the influence of healthy lifestyle media messages on public intentions to engage in two types of healthy lifestyle behaviors: physical activity and healthy diet. Nationally representative data collected from 1,055 adults in Singapore demonstrate partial support for the key hypotheses that make up the extended IPMI model, highlighting the importance of a norms-based approach in health communication. Our results indicate that perceived media influence on others indirectly shaped public intentions to engage in healthy lifestyle behaviors through personal norms and attitude, providing partial theoretical support for the extended IPMI model. Practical implications for health communicators in designing health campaign media messages to motivate the public to engage in healthy lifestyles are discussed. PMID:26799846
Schulz, Douglas A.
2007-10-08
A biometric system suitable for validating user identity using only mouse movements and no specialized equipment is presented. Mouse curves (mouse movements with little or no pause between them) are individually classified and used to develop classification histograms, which are representative of an individual's typical mouse use. These classification histograms can then be compared to validate identity. This classification approach is suitable for providing continuous identity validation during an entire user session.
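The histogram-comparison step might be sketched as follows: each session yields a histogram of mouse-curve class frequencies, and a distance to the enrolled profile decides validation. The class counts, the chi-squared distance, and the decision rule are illustrative assumptions, not the paper's classifier.

```python
import numpy as np

# Comparing classification histograms for identity validation: each session
# yields a histogram over mouse-curve classes; a small distance to the
# enrolled profile supports the claimed identity. Counts are hypothetical.
def hist_distance(p, q):
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))  # chi-squared distance

enrolled = np.array([40, 25, 20, 10, 5], float)     # counts per curve class
same_user = np.array([38, 27, 18, 11, 6], float)    # similar class mix
other_user = np.array([10, 12, 30, 28, 20], float)  # dissimilar class mix

d_same = hist_distance(enrolled, same_user)
d_other = hist_distance(enrolled, other_user)
print(d_same < d_other)
```

In a deployed system the accept/reject threshold would be calibrated from enrollment data rather than fixed by hand.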
Invasion fitness, inclusive fitness, and reproductive numbers in heterogeneous populations.
Lehmann, Laurent; Mullon, Charles; Akçay, Erol; Van Cleve, Jeremy
2016-08-01
How should fitness be measured to determine which phenotype or "strategy" is uninvadable when evolution occurs in a group-structured population subject to local demographic and environmental heterogeneity? Several fitness measures, such as basic reproductive number, lifetime dispersal success of a local lineage, or inclusive fitness have been proposed to address this question, but the relationships between them and their generality remain unclear. Here, we ascertain uninvadability (all mutant strategies always go extinct) in terms of the asymptotic per capita number of mutant copies produced by a mutant lineage arising as a single copy in a resident population ("invasion fitness"). We show that from invasion fitness uninvadability is equivalently characterized by at least three conceptually distinct fitness measures: (i) lineage fitness, giving the average individual fitness of a randomly sampled mutant lineage member; (ii) inclusive fitness, giving a reproductive value weighted average of the direct fitness costs and relatedness weighted indirect fitness benefits accruing to a randomly sampled mutant lineage member; and (iii) basic reproductive number (and variations thereof) giving lifetime success of a lineage in a single group, and which is an invasion fitness proxy. Our analysis connects approaches that have been deemed different, generalizes the exact version of inclusive fitness to class-structured populations, and provides a biological interpretation of natural selection on a mutant allele under arbitrary strength of selection.
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
ERIC Educational Resources Information Center
Yates, Robert C.
This volume, a reprinting of a classic first published in 1952, presents detailed discussions of 26 curves or families of curves, and 17 analytic systems of curves. For each curve the author provides a historical note, a sketch or sketches, a description of the curve, a discussion of pertinent facts, and a bibliography. Depending upon the curve,…
Inclusive fitness in agriculture
Kiers, E. Toby; Denison, R. Ford
2014-01-01
Trade-offs between individual fitness and the collective performance of crop and below-ground symbiont communities are common in agriculture. Plant competitiveness for light and soil resources is key to individual fitness, but higher investments in stems and roots by a plant community to compete for those resources ultimately reduce crop yields. Similarly, rhizobia and mycorrhizal fungi may increase their individual fitness by diverting resources to their own reproduction, even if they could have benefited collectively by providing their shared crop host with more nitrogen and phosphorus, respectively. Past selection for inclusive fitness (benefits to others, weighted by their relatedness) is unlikely to have favoured community performance over individual fitness. The limited evidence for kin recognition in plants and microbes changes this conclusion only slightly. We therefore argue that there is still ample opportunity for human-imposed selection to improve cooperation among crop plants and their symbionts so that they use limited resources more efficiently. This evolutionarily informed approach will require a better understanding of how interactions among crops, and interactions with their symbionts, affected their inclusive fitness in the past and what that implies for current interactions. PMID:24686938
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e. the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least-squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding for a high-dimensional training data set.
Deming's General Least Square Fitting
Rinard, Phillip
1992-02-18
DEM4-26 is a generalized least square fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. If the plot is to be copied to a printer, GRAPHICS should be called from the operating system disk before the BASIC interpreter is loaded.
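For the linear case, Deming's method has a closed-form solution that accounts for error in both x and y; delta is the ratio of the y to x error variances (delta = 1 gives orthogonal regression). This is a generic sketch of the method, not the DEM4-26 program.

```python
import numpy as np

# Deming regression: least-squares line when BOTH x and y carry error.
# delta = var(y_err) / var(x_err); delta = 1 gives orthogonal regression.
def deming_fit(x, y, delta=1.0):
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm                   # (slope, intercept)

rng = np.random.default_rng(3)
x_true = np.linspace(0, 10, 50)
x = x_true + rng.normal(0, 0.2, 50)                 # x observed with error
y = 2.0 * x_true + 1.0 + rng.normal(0, 0.2, 50)     # y observed with error
slope, intercept = deming_fit(x, y)
print(f"slope={slope:.2f}  intercept={intercept:.2f}")
```

Unlike ordinary least squares, which attenuates the slope when x is noisy, the Deming estimate remains consistent when delta is specified correctly.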
2014-01-01
Background: Recommendations for secondary hyperparathyroidism (SHPT) consider that a "one-size-fits-all" target enables efficacy of care. In routine clinical practice, SHPT continues to pose diagnosis and treatment challenges. One hypothesis that could explain these difficulties is that the dialysis population with SHPT is not homogeneous. Methods: EPHEYL is a prospective, multicenter, pharmacoepidemiological study including chronic dialysis patients (≥3 months) with a new SHPT diagnosis, i.e. parathyroid hormone (PTH) ≥500 ng/L for the first time, or initiation of cinacalcet, or parathyroidectomy. Multiple correspondence analysis and ascendant hierarchical clustering on clinico-biological variables (symptoms, PTH, plasma phosphorus, and alkaline phosphatase) and SHPT treatment (cinacalcet, vitamin D, calcium, or calcium-free phosphate binder) were performed to identify distinct phenotypes. Results: 305 patients (261 with incident PTH ≥ 500 ng/L; 44 with cinacalcet initiation) were included. Their mean age was 67 ± 15 years; 60% were men, 92% on hemodialysis, and 8% on peritoneal dialysis. Four subgroups of SHPT patients were identified: 1/ an "intermediate" phenotype with hyperphosphatemia without hypocalcemia (n = 113); 2/ younger patients with severe comorbidities, hyperphosphatemia, and hypocalcemia despite multiple SHPT medical treatments, suggesting poor adherence (n = 73); 3/ elderly patients with few cardiovascular comorbidities, controlled phospho-calcium balance, higher PTH, and few treatments (n = 75); 4/ patients who initiated cinacalcet (n = 43). The quality criterion of the model had a cut-off of 14 (>2), suggesting a relevant classification. Conclusion: In real life, dialysis patients with newly diagnosed SHPT constitute a very heterogeneous population. A "one-size-fits-all" target approach is probably not appropriate; therapeutic management needs to be adjusted to the 4 different phenotypes. PMID:25123022
Kim, Kyong-Soo; Lin, Zhiqun; Shrotriya, Pranav; Sundararajan, Sriram; Zou, Qingze
2008-08-01
Force-distance curve measurement using the atomic force microscope (AFM) has been widely used in a broad range of areas. However, force-curve measurements are currently hampered by the low speed of AFM. In this article, a novel inversion-based iterative control technique is proposed to dramatically increase the speed of force-curve measurements. Experimental results are presented to show that by using the proposed control technique, the speed of force-curve measurements can be increased by over 80 times, with no loss of spatial resolution, on a commercial AFM platform with a standard cantilever. High-speed force-curve measurements using this control technique are utilized to quantitatively study the time-dependent elastic modulus of poly(dimethylsiloxane) (PDMS). The force curves employ a broad spectrum of push-in (load) rates spanning two orders of magnitude. The elastic modulus measured at low speed compares well with the value obtained from a dynamic mechanical analysis (DMA) test, and the value of the elastic modulus increases as the push-in rate increases, signifying that a faster external deformation rate transitions the viscoelastic response of PDMS from that of a rubbery material toward a glassy one. PMID:18467033
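One common way to extract an elastic modulus from a force-indentation curve is a Hertz-model fit; the Hertz model, tip radius, Poisson ratio, and synthetic data below are assumptions for illustration, not the method or values of the PDMS study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hertz model for a spherical tip indenting a flat elastic sample:
# F = (4/3) * E / (1 - nu^2) * sqrt(R) * d^1.5.
# R, nu, E_true, and the noise level are hypothetical values.
R, nu = 5e-6, 0.5                                   # tip radius (m), Poisson ratio

def hertz(d, E):
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * d ** 1.5

d = np.linspace(0.0, 200e-9, 100)                   # indentation depth (m)
E_true = 2.5e6                                      # Pa, rubbery-regime guess
rng = np.random.default_rng(4)
F = hertz(d, E_true) + rng.normal(0, 1e-9, d.size)  # force (N) with noise

E_hat, = curve_fit(hertz, d, F, p0=[1e6])[0]
print(f"E = {E_hat / 1e6:.2f} MPa")
```

Repeating such a fit on curves taken at different push-in rates is one way to expose the rate dependence of the apparent modulus discussed in the abstract.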
Anatomical curve identification
Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise
2015-01-01
Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three-dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943
Carr, Steven A.; Abbatiello, Susan E.; Ackermann, Bradley L.; Borchers, Christoph; Domon, Bruno; Deutsch, Eric W.; Grant, Russell P.; Hoofnagle, Andrew N.; Hüttenhain, Ruth; Koomen, John M.; Liebler, Daniel C.; Liu, Tao; MacLean, Brendan; Mani, DR; Mansfield, Elizabeth; Neubert, Hendrik; Paulovich, Amanda G.; Reiter, Lukas; Vitek, Olga; Aebersold, Ruedi; Anderson, Leigh; Bethem, Robert; Blonder, Josip; Boja, Emily; Botelho, Julianne; Boyne, Michael; Bradshaw, Ralph A.; Burlingame, Alma L.; Chan, Daniel; Keshishian, Hasmik; Kuhn, Eric; Kinsinger, Christopher; Lee, Jerry S.H.; Lee, Sang-Won; Moritz, Robert; Oses-Prieto, Juan; Rifai, Nader; Ritchie, James; Rodriguez, Henry; Srinivas, Pothur R.; Townsend, R. Reid; Van Eyk, Jennifer; Whiteley, Gordon; Wiita, Arun; Weintraub, Susan
2014-01-14
Adoption of targeted mass spectrometry (MS) approaches such as multiple reaction monitoring (MRM) to study biological and biomedical questions is well underway in the proteomics community. Successful application depends on the ability to generate reliable assays that uniquely and confidently identify target peptides in a sample. Unfortunately, there is a wide range of criteria being applied to say that an assay has been successfully developed. There is no consensus on what criteria are acceptable and little understanding of the impact of variable criteria on the quality of the results generated. Publications describing targeted MS assays for peptides frequently do not contain sufficient information for readers to establish confidence that the tests work as intended or to be able to apply the tests described in their own labs. Guidance must be developed so that targeted MS assays with established performance can be widely distributed and applied by many labs worldwide. To begin to address the problems and their solutions, a workshop was held at the National Institutes of Health with representatives from the multiple communities developing and employing targeted MS assays. Participants discussed the analytical goals of their experiments and the experimental evidence needed to establish that the assays they develop work as intended and are achieving the required levels of performance. Using this “fit-for-purpose” approach, the group defined three tiers of assays distinguished by their performance and extent of analytical characterization. Computational and statistical tools useful for the analysis of targeted MS results were described. Participants also detailed the information that authors need to provide in their manuscripts to enable reviewers and readers to clearly understand what procedures were performed and to evaluate the reliability of the peptide or protein quantification measurements reported. This paper presents a summary of the meeting and
Cho, Sunggoo
2016-09-01
Conics and Cartesian ovals are extremely important curves in various fields of science. In addition, aspheric curves based on conics are useful in optical design. Superconic curves, recently suggested by Greynolds, are extensions of both conics and Cartesian ovals and have been applied to optical design. However, they are not extensions of aspheric curves based on conics. In this work, we investigate another type of superconic curves. These superconic curves are extensions of not only conics and Cartesian ovals but also aspheric curves based on conics. Moreover, these are represented in explicit form, while Greynolds's superconic curves are in implicit form. PMID:27607506
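The "aspheric curves based on conics" that the abstract extends are conventionally written with the standard conic sag equation of optical design, z(r) = r^2 / (R (1 + sqrt(1 - (1 + k) r^2 / R^2))). The sketch below evaluates that baseline form (not Cho's superconic extension) and checks that the conic constant k = 0 reduces it to a sphere:

```python
import numpy as np

# Standard conic sag of optical design: radius of curvature R, conic
# constant k (k = 0 sphere, k = -1 parabola, etc.).
def conic_sag(r, R, k):
    return r**2 / (R * (1 + np.sqrt(1 - (1 + k) * r**2 / R**2)))

r = np.linspace(0.0, 5.0, 6)
sphere = conic_sag(r, R=50.0, k=0.0)       # k = 0 should reduce to a sphere
exact = 50.0 - np.sqrt(50.0**2 - r**2)     # circular sag, for comparison
print(np.allclose(sphere, exact))  # True
```

The rationalised denominator form avoids the cancellation that the naive R - sqrt(R^2 - r^2) expression suffers for small r/R.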
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.; Ford, C. P., III
1975-01-01
The geometry of general three-dimensional bodies was generated from coordinates of points in several cross sections. Since these points may not be on smooth curves, they are divided into groups forming segments, and general conic sections are curve-fitted in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction through longitudinal curves. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines or specifying slopes at selected points. This method was used to surface-fit a 70 deg slab delta wing and the HL-10 Lifting Body. The results for the delta wing were very close to the exact geometry. Although there is no exact solution for the lifting body, the surface fit generated a smooth surface with cross-sectional planes very close to prescribed coordinate points.
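A general conic section a x^2 + b x y + c y^2 + d x + e y + f = 0 can be least-squares fitted to cross-section points by taking the smallest-singular-value direction of the design matrix, a common approach (the NTRS routine's exact formulation may differ):

```python
import numpy as np

# Sketch: algebraic least-squares fit of a general conic to 2-D points.
# The coefficient vector is the right singular vector of the design matrix
# with the smallest singular value (unit-norm constraint).
def fit_conic(x, y):
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                    # (a, b, c, d, e, f)

t = np.linspace(0, 2*np.pi, 40, endpoint=False)
x, y = 3*np.cos(t), 2*np.sin(t)      # points on an exact ellipse
coef = fit_conic(x, y)
residual = np.abs(np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)]) @ coef)
print(residual.max() < 1e-8)  # True
```

This algebraic-distance fit is fast but not geometrically optimal for noisy data; constrained variants (e.g. ellipse-specific fitting) are preferred when the segment type is known.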
Magnetism in curved geometries
NASA Astrophysics Data System (ADS)
Streubel, Robert; Fischer, Peter; Kronast, Florian; Kravchuk, Volodymyr P.; Sheka, Denis D.; Gaididei, Yuri; Schmidt, Oliver G.; Makarov, Denys
2016-09-01
Extending planar two-dimensional structures into the three-dimensional space has become a general trend in multiple disciplines, including electronics, photonics, plasmonics and magnetics. This approach provides a means to modify conventional functionalities or to launch novel ones by tailoring the geometry of an object, e.g. its local curvature. In a generic electronic system, curvature results in the appearance of scalar and vector geometric potentials inducing anisotropic and chiral effects. In the specific case of magnetism, even in the simplest case of a curved anisotropic Heisenberg magnet, the curvilinear geometry manifests two exchange-driven interactions, namely effective anisotropy and antisymmetric exchange, i.e. a Dzyaloshinskii–Moriya-like interaction. As a consequence, a family of novel curvature-driven effects emerges, which includes magnetochiral effects and topologically induced magnetization patterning, resulting in theoretically predicted unlimited domain wall velocities, chirality symmetry breaking and Cherenkov-like effects for magnons. The broad range of altered physical properties makes these curved architectures appealing in view of fundamental research on e.g. skyrmionic systems, magnonic crystals or exotic spin configurations. In addition to this rich physics, the application potential of three-dimensionally shaped objects is currently being explored as magnetic field sensorics for magnetofluidic applications, spin-wave filters, advanced magneto-encephalography devices for diagnosis of epilepsy or for energy-efficient racetrack memory devices. These recent developments, ranging from theoretical predictions through the fabrication of three-dimensionally curved magnetic thin films, hollow cylinders or wires, to their characterization using integral means as well as the development of advanced tomography approaches, are the focus of this review.
Alternative Forms of Fit in Contingency Theory.
ERIC Educational Resources Information Center
Drazin, Robert; Van de Ven, Andrew H.
1985-01-01
This paper examines the selection, interaction, and systems approaches to fit in structural contingency theory. The concepts of fit evaluated may be applied not only to structural contingency theory but to contingency theories in general. (MD)
UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE
Sanders, N. E.; Soderberg, A. M.; Betancourt, M.
2015-02-10
Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.
fits2hdf: FITS to HDFITS conversion
NASA Astrophysics Data System (ADS)
Price, D. C.; Barsdell, B. R.; Greenhill, L. J.
2015-05-01
fits2hdf ports FITS files to Hierarchical Data Format (HDF5) files in the HDFITS format. HDFITS allows faster reading of data, higher compression ratios, and higher throughput. HDFITS formatted data can be presented transparently as an in-memory FITS equivalent by changing the import lines in Python-based FITS utilities. fits2hdf includes a utility to port MeasurementSets (MS) to HDF5 files.
Progress curve analysis of qRT-PCR reactions using the logistic growth equation.
Liu, Meile; Uhde-Stone, Claudia; Goudar, Chetan T
2011-01-01
We present an alternate approach for analyzing data from real-time reverse transcription polymerase chain reaction (qRT-PCR) experiments by fitting individual fluorescence vs. cycle number (F vs. C) curves to the logistic growth equation. The best fit parameters determined by nonlinear least squares were used to compute the second derivative of the logistic equation and the cycle threshold, C(t), was determined from the maximum value of the second derivative. This C(t) value was subsequently used to determine ΔΔC(t) and the amplification efficiency, E(n), thereby completing the analysis on a qRT-PCR data set. The robustness of the logistic approach was verified by testing ~600 F vs. C curves using both new and previously published data sets. In most cases, comparisons were made between the logistic estimates and those from the standard curve and comparative C(t) methods. Deviations between the logistic and standard curve method ranged between 3-10% for C(t) estimates, 2-10% for ΔΔC(t) estimates, and 1-11% for E(n) estimates. The correlations between C(t) estimates from the logistic and standard curve methods were very high, often >0.95. When compared with five other established methods of qRT-PCR data analysis to predict initial concentrations of two genes encompassing a total of 500 F vs. C curves, the logistic estimates were of comparable accuracy. This reliable performance of the logistic approach comes without the need to construct standard curves which can be a laborious undertaking. Also, no a priori assumptions for E(n) are necessary while some other methods assume equal E(n) values for the reference and target genes, an assumption that is not universally valid. In addition, by accurately describing the data in the plateau region of the F vs. C curve, the logistic method overcomes the limitations of the sigmoidal curve fitting method. The streamlined nature of the logistic approach makes it ideal for complete automation on a variety of computing
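The Ct-determination step described above (the cycle at which the second derivative of the fitted logistic is maximal) can be sketched numerically. Here the logistic parameters are hypothetical fixed values rather than the result of a nonlinear least-squares fit:

```python
import numpy as np

# Sketch: cycle threshold from the maximum of the 2nd derivative of a
# logistic amplification curve F(c) = F0 + Fmax / (1 + exp(-k (c - c0))).
# Parameters below are assumed, standing in for fitted values.
F0, Fmax, k, c0 = 0.1, 10.0, 0.6, 22.0
c = np.linspace(0.0, 40.0, 40001)
F = F0 + Fmax / (1.0 + np.exp(-k * (c - c0)))
d2F = np.gradient(np.gradient(F, c), c)    # numerical 2nd derivative
Ct = c[np.argmax(d2F)]
# For a logistic, the 2nd-derivative maximum sits at c0 - ln(2 + sqrt(3)) / k.
print(Ct)
```

Because the maximum has a closed form for the logistic, the numerical result can be checked analytically, which is a useful sanity test when automating the pipeline.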
Hamiltonian inclusive fitness: a fitter fitness concept
Costa, James T.
2013-01-01
In 1963–1964 W. D. Hamilton introduced the concept of inclusive fitness, the only significant elaboration of Darwinian fitness since the nineteenth century. I discuss the origin of the modern fitness concept, providing context for Hamilton's discovery of inclusive fitness in relation to the puzzle of altruism. While fitness conceptually originates with Darwin, the term itself stems from Spencer and crystallized quantitatively in the early twentieth century. Hamiltonian inclusive fitness, with Price's reformulation, provided the solution to Darwin's ‘special difficulty’—the evolution of caste polymorphism and sterility in social insects. Hamilton further explored the roles of inclusive fitness and reciprocation to tackle Darwin's other difficulty, the evolution of human altruism. The heuristically powerful inclusive fitness concept ramified over the past 50 years: the number and diversity of ‘offspring ideas’ that it has engendered render it a fitter fitness concept, one that Darwin would have appreciated. PMID:24132089
Properties of Rasch residual fit statistics.
Wu, Margaret; Adams, Richard J
2013-01-01
This paper examines the residual-based fit statistics commonly used in Rasch measurement. In particular, the paper analytically examines some of the theoretical properties of the residual-based fit statistics with a view to establishing the inferences that can be made using these fit statistics. More specifically, the relationships between the distributional properties of the fit statistics and sample size are discussed; some research that erroneously concludes that residual-based fit statistics are unstable is reviewed; and finally, it is analytically illustrated that, for dichotomous items, residual-based fit statistics provide a measure of the relative slope of empirical item characteristic curves. With a clear understanding of the theoretical properties of the fit statistics, the use and limitations of these statistics can be placed in the right light.
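For a dichotomous Rasch item, the residual-based statistics discussed above have simple forms: outfit is the mean squared standardized residual and infit its information-weighted counterpart. A minimal simulation sketch (abilities and difficulty assumed known rather than estimated) shows both statistics near their expected value of 1 when the data fit the model:

```python
import numpy as np

# Sketch: Rasch residual fit statistics for one dichotomous item under
# model-consistent simulated data (assumed known parameters).
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 5000)            # person abilities
b = 0.3                                       # item difficulty
p = 1.0 / (1.0 + np.exp(-(theta - b)))        # Rasch success probabilities
x = (rng.random(5000) < p).astype(float)      # simulated 0/1 responses

sq_resid = (x - p)**2
outfit = (sq_resid / (p * (1 - p))).mean()    # unweighted mean-square
infit = sq_resid.sum() / (p * (1 - p)).sum()  # information-weighted
print(outfit, infit)
```

Both values should hover around 1 for fitting data; the paper's point is that their sampling distributions (and hence usable cut-offs) depend on sample size.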
Baldassarre, Maurizio; Li, Chenge; Eremina, Nadejda; Goormaghtigh, Erik; Barth, Andreas
2015-01-01
Infrared spectroscopy is a powerful tool in protein science due to its sensitivity to changes in secondary structure or conformation. In order to take advantage of the full power of infrared spectroscopy in structural studies of proteins, complex band contours, such as the amide I band, have to be decomposed into their main component bands, a process referred to as curve fitting. In this paper, we report on an improved curve fitting approach in which absorption spectra and second derivative spectra are fitted simultaneously. Our approach, which we name co-fitting, leads to a more reliable modelling of the experimental data because it uses more spectral information than the standard approach of fitting only the absorption spectrum. It also prevents the fitting routine from becoming trapped in local minima. We have tested the proposed approach using infrared absorption spectra of three mixed α/β proteins with different degrees of spectral overlap in the amide I region: ribonuclease A, pyruvate kinase, and aconitase. PMID:26184143
a Semiclassical Direct Potential Fitting Scheme for Diatomics
NASA Astrophysics Data System (ADS)
Tellinghuisen, J.
2011-06-01
For decades the standard procedure for obtaining diatomic potential curves from spectroscopic data involved fitting the data to expressions for the vibrational energy G_υ and rotational constant B_υ as functions of the vibrational quantum number υ, and then employing the RKR method to compute potential curves from these. Within the first-order semiclassical formalism of RKR, this "inversion" procedure is exact. However, the resulting potentials are limited in their quantum mechanical reliability, as has been demonstrated frequently by using the numerical Numerov method to compute the quantal properties of the potentials and comparing these with the starting spectroscopic information. A particularly troubling region is the repulsive wall of the potential near dissociation, where RKR curves often show unphysical wiggles and flares. Such behavior has generally been attributed to limitations inherent in the G_υ/B_υ-to-potential construct; by extension, such limitations are similarly responsible for some of the RKR-quantal disparities in other regions of the potential. In recent years this procedure has begun to be replaced by methods in which the potential curves are the directly fitted quantities in the least-squares analysis of the data, and the so-derived potentials often match the precision of the input data. The question naturally arises, how good might semiclassical methods be in a similar approach? Since the semiclassical calculations are two orders of magnitude faster than the quantal, they might be better than RKR for obtaining approximate potentials and could aid in deciding matters such as best functional forms and appropriate numbers of adjustable parameters. In this paper I will discuss tests of a semiclassical direct potential fitting method.
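The G_v expansions at the heart of the traditional RKR route are polynomials in (v + 1/2). As a toy illustration of that first step (with made-up constants, and a Morse oscillator for which the quadratic form is exact), the vibrational constants can be recovered from term values by a simple polynomial fit:

```python
import numpy as np

# Sketch: recovering we and wexe from Morse vibrational term values
# G_v = we*(v + 1/2) - wexe*(v + 1/2)^2 by a quadratic fit (synthetic data;
# hypothetical constants in cm^-1).
we, wexe = 200.0, 1.5
v = np.arange(0, 20)
x = v + 0.5
G = we * x - wexe * x**2
coef = np.polyfit(x, G, 2)         # expect [-wexe, we, ~0]
we_hat, wexe_hat = coef[1], -coef[0]
print(we_hat, wexe_hat)
```

Direct potential fitting, by contrast, skips these intermediate constants and varies the potential parameters against the spectroscopic data themselves, which is the comparison the abstract sets up.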
Gray, William G.; Miller, Cass T.
2010-01-01
This work is the eighth in a series that develops the fundamental aspects of the thermodynamically constrained averaging theory (TCAT) that allows for a systematic increase in the scale at which multiphase transport phenomena are modeled in porous medium systems. In these systems, the explicit locations of interfaces between phases and common curves, where three or more interfaces meet, are not considered at scales above the microscale. Rather, the densities of these quantities arise as areas per volume or length per volume. Modeling of the dynamics of these measures is an important challenge for robust models of flow and transport phenomena in porous medium systems, as the extent of these regions can have important implications for mass, momentum, and energy transport between and among phases, and formulation of a capillary pressure relation with minimal hysteresis. These densities do not exist at the microscale, where the interfaces and common curves correspond to particular locations. Therefore, it is necessary for a well-developed macroscale theory to provide evolution equations that describe the dynamics of interface and common curve densities. Here we point out the challenges and pitfalls in producing such evolution equations, develop a set of such equations based on averaging theorems, and identify the terms that require particular attention in experimental and computational efforts to parameterize the equations. We use the evolution equations developed to specify a closed two-fluid-phase flow model. PMID:21197134
Analysis of Exoplanet Light Curves
NASA Astrophysics Data System (ADS)
Erdem, A.; Budding, E.; Rhodes, M. D.; Püsküllü, Ç.; Soydugan, F.; Soydugan, E.; Tüysüz, M.; Demircan, O.
2015-07-01
We have applied the close binary system analysis package WINFITTER to a variety of exoplanet transiting light curves taken both from the NASA Exoplanet Archive and our own ground-based observations. WINFITTER has parameter options for a realistic physical model, including gravity brightening and structural parameters derived from Kopal's applications of the relevant Radau equation, and it includes appropriate tests for determinacy and adequacy of its best-fitting parameter sets. We discuss a number of issues related to empirical checking of models for stellar limb darkening, surface maculation, Doppler beaming, microvariability, and transit time variation (TTV) effects. The Radau coefficients used in the light curve modeling, in principle, allow structural models of the component stars to be tested.
Replication and Analysis of Ebbinghaus’ Forgetting Curve
Murre, Jaap M. J.; Dros, Joeri
2015-01-01
We present a successful replication of Ebbinghaus’ classic forgetting curve from 1880 based on the method of savings. One subject spent 70 hours learning lists and relearning them after 20 min, 1 hour, 9 hours, 1 day, 2 days, or 31 days. The results are similar to Ebbinghaus' original data. We analyze the effects of serial position on forgetting and investigate what mathematical equations present a good fit to the Ebbinghaus forgetting curve and its replications. We conclude that the Ebbinghaus forgetting curve has indeed been replicated and that it is not completely smooth but most probably shows a jump upwards starting at the 24 hour data point. PMID:26148023
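One of the candidate equations commonly fitted to forgetting data is a power law in retention time. As a hedged sketch (the replication study compares several functional forms, and the savings values below are synthetic), such a curve can be fitted by linear regression in log-log space:

```python
import numpy as np

# Sketch: power-law forgetting curve Q(t) = a * t**(-b) fitted to savings
# scores via log-log linear regression. Retention intervals follow
# Ebbinghaus' schedule (in hours); Q values are synthetic, not the data.
t = np.array([1/3, 1, 9, 24, 48, 744])
Q = 0.7 * t**-0.12
slope, intercept = np.polyfit(np.log(t), np.log(Q), 1)
a_hat, b_hat = np.exp(intercept), -slope
print(a_hat, b_hat)
```

Fitting in log space weights relative rather than absolute errors, which is often appropriate for savings scores spanning an order of magnitude in time.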
Sontag, C A; Stafford, W F; Correia, J J
2004-03-01
Analysis of sedimentation velocity data for indefinite self-associating systems is often achieved by fitting of weight average sedimentation coefficients (s(20,w)). However, this method discriminates poorly between alternative models of association and is biased by the presence of inactive monomers and irreversible aggregates. Therefore, a more robust method for extracting the binding constants for indefinite self-associating systems has been developed. This approach utilizes a set of fitting routines (SedAnal) that perform global non-linear least squares fits of up to 10 sedimentation velocity experiments, corresponding to different loading concentrations, by a combination of finite element simulations and a fitting algorithm that uses a simplex convergence routine to search parameter space. Indefinite self-association is analyzed with the software program isodesfitter, which incorporates user-provided functions for sedimentation coefficients as a function of the degree of polymerization for spherical, linear and helical polymer models. The computer program hydro was used to generate the sedimentation coefficient values for the linear and helical polymer assembly mechanisms. Since this curve fitting method directly fits the shape of the sedimenting boundary, it is in principle very sensitive to alternative models and the presence of species not participating in the reaction. This approach is compared with traditional fitting of weight average data and applied to the initial stages of Mg(2+)-induced tubulin self-association into small curved polymers, and vinblastine-induced tubulin spiral formation. The appropriate use and limitations of the methods are discussed. PMID:15043931
A causal dispositional account of fitness.
Triviño, Vanessa; Nuño de la Rosa, Laura
2016-09-01
The notion of fitness is usually equated with reproductive success. However, this actualist approach presents some difficulties, mainly the explanatory circularity problem, which have led philosophers of biology to offer alternative definitions in which fitness and reproductive success are distinguished. In this paper, we argue that none of these alternatives is satisfactory and, inspired by Mumford and Anjum's dispositional theory of causation, we offer a definition of fitness as a causal dispositional property. We argue that, under this framework, the distinctiveness that biologists usually attribute to fitness (namely, the fact that fitness is something different from both the physical traits of an organism and the number of offspring it leaves) can be explained, and the main problems associated with the concept of fitness can be solved. Firstly, we introduce Mumford and Anjum's dispositional theory of causation and present our definition of fitness as a causal disposition. We explain in detail each of the elements involved in our definition, namely: the relationship between fitness and the functional dispositions that compose it, the emergent character of fitness, and the context-sensitivity of fitness. Finally, we explain how fitness and realized fitness, as well as expected and realized fitness, are distinguished in our approach to fitness as a causal disposition.
The measurement theory of fitness.
Wagner, Günter P
2010-05-01
In this article, an approach to measuring fitness is proposed that considers fitness as a measure of competitive ability among phenotypes or genotypes. This approach is based on pairwise competition tests and is related to measures of "utility" in mathematical economics. Extending the results from utility theory, it is possible to recover the classical Wrightian fitness measure without reference to models of population growth. A condition, quasi-BTL, similar to the Bradley-Terry-Luce condition of classical utility theory, is shown to be necessary for the existence of frequency- and context-independent fitness measures. Testing for violations of this quasi-BTL condition can be used to detect genotype-by-genotype interactions and frequency-dependent fitness. A method for the detection of genotype-by-environment interactions is proposed that avoids potential scaling artifacts. Furthermore, the measurement-theoretical approach allows one to derive Wright's selection equation. This shows that classical selection equations are entirely general and exact. It is concluded that measurement theory is able to give definite answers to a number of theoretical and practical questions. For instance, this theory identifies the correct scale for measuring gene interaction with respect to fitness and shows that different scales may lead to wrong conclusions. PMID:20002165
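The Bradley-Terry-Luce structure invoked above implies that if pairwise win probabilities obey p_ij = w_i / (w_i + w_j), then logit(p_ij) = log w_i - log w_j, so log-fitnesses are recoverable (up to an additive constant) from competition tests by least squares. A toy sketch with hypothetical fitness values:

```python
import numpy as np

# Sketch: recovering BTL-style fitness values from noiseless pairwise
# win probabilities via least squares on logit differences.
w_true = np.array([1.0, 2.0, 4.0])           # hypothetical fitnesses
n = len(w_true)
rows, rhs = [], []
for i in range(n):
    for j in range(i + 1, n):
        p = w_true[i] / (w_true[i] + w_true[j])
        row = np.zeros(n)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(np.log(p / (1 - p)))      # logit(p_ij) = log w_i - log w_j
A, y = np.array(rows), np.array(rhs)
logw, *_ = np.linalg.lstsq(A, y, rcond=None)
w_hat = np.exp(logw - logw[0])               # fix the scale at w_1 = 1
print(w_hat)
```

With noisy win counts one would fit the same linear structure by maximum likelihood; violations of the quasi-BTL condition then show up as systematic lack of fit, which is the diagnostic the article proposes.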
ERIC Educational Resources Information Center
Golding, Lawrence A.
1984-01-01
The YMCA has helped train and employ fitness leaders while educating the public on physical fitness. Colleges and universities can help develop careers in fitness while maintaining their traditional role of developing teachers and coaches. (DF)
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or another bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus
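One standard model-free feature of the kind the abstract describes is the preferred direction estimated as the circular (vector) mean of responses, with no Gaussian or von Mises fit involved. A sketch with synthetic responses peaked at 90 degrees:

```python
import numpy as np

# Sketch: model-free preferred-direction estimate via the circular mean.
# Responses are synthetic (von-Mises-shaped bump at 90 deg plus baseline);
# no curve model is fitted at any point.
directions = np.deg2rad(np.arange(0, 360, 30))        # 12 stimulus directions
responses = 5 + 10 * np.exp(np.cos(directions - np.deg2rad(90)) - 1)
vec = np.sum(responses * np.exp(1j * directions))     # response-weighted vector sum
preferred = np.rad2deg(np.angle(vec)) % 360
print(preferred)
```

Because the directions are evenly spaced, the constant baseline contributes nothing to the vector sum, so the estimate depends only on the tuned component; this robustness to baseline shifts is one practical advantage of such data-driven features.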
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or another bell-shaped curve, to the responses measured for a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine the appropriate model reliably from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well fitted, that the best model generally varies between neurons, and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach in which features of interest are extracted directly from the measured response data, without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features, such as preferred stimulus direction or attentional gain modulations, that agree with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, that cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus…
Initial Status in Growth Curve Modeling for Randomized Trials
Chou, Chih-Ping; Chi, Felicia; Weisner, Constance; Pentz, MaryAnn; Hser, Yih-Ing
2010-01-01
The growth curve modeling (GCM) technique has been widely adopted in longitudinal studies to investigate progression over time. The simplest growth profile involves two growth factors, initial status (intercept) and growth trajectory (slope). Conventionally, all repeated measures of outcome are included as components of the growth profile, and the first measure is used to reflect the initial status. Selection of the initial status, however, can greatly influence study findings, especially for randomized trials. In this article, we propose an alternative GCM approach involving only post-intervention measures in the growth profile and treating the first wave after intervention as the initial status. We discuss and empirically illustrate how choices of initial status may influence study conclusions in addressing research questions in randomized trials using two longitudinal studies. Data from two randomized trials are used to illustrate that the alternative GCM approach proposed in this article offers better model fitting and more meaningful results. PMID:21572585
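The intercept shift the authors describe can be seen in a toy calculation: the same outcome series is fitted as a straight-line growth profile twice, once with baseline as time zero and once with the first post-intervention wave as time zero. This is only an illustrative sketch with invented numbers, not the multilevel GCM used in the article.

```python
import numpy as np

# Five waves of an outcome; the intervention happens after wave 0.
y = np.array([10.0, 7.5, 7.0, 6.4, 6.0])

# Conventional coding: baseline (wave 0) is the initial status.
slope_all, icept_all = np.polyfit(np.arange(5), y, 1)

# Alternative coding: the first post-intervention wave is the initial status.
slope_post, icept_post = np.polyfit(np.arange(4), y[1:], 1)

# icept_all reflects pre-intervention status, icept_post the first
# post-intervention wave; the slopes differ too, because the sharp
# baseline-to-wave-1 drop is excluded from the post-only fit.
```

The point of the sketch is that "initial status" is a modeling choice, not a property of the data: both fits describe the same series, yet the intercept (and slope) estimates differ.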
Mixture Modeling of Individual Learning Curves
ERIC Educational Resources Information Center
Streeter, Matthew
2015-01-01
We show that student learning can be accurately modeled using a mixture of learning curves, each of which specifies error probability as a function of time. This approach generalizes Knowledge Tracing [7], which can be viewed as a mixture model in which the learning curves are step functions. We show that this generality yields order-of-magnitude…
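Knowledge Tracing, the special case mentioned in the abstract, predicts an error-probability curve from four parameters. A minimal sketch of the marginal (unconditioned) error curve follows; the parameter values are illustrative, not taken from the paper.

```python
# Bayesian Knowledge Tracing: error probability as a function of practice
# opportunity, the "step function" learning curve the mixture generalizes.

def bkt_error_curve(p_init, p_transit, p_slip, p_guess, n_steps):
    """Return the predicted error probability at each practice opportunity."""
    p_known = p_init
    errors = []
    for _ in range(n_steps):
        # P(incorrect) = P(known)*slip + P(not known)*(1 - guess)
        errors.append(p_known * p_slip + (1.0 - p_known) * (1.0 - p_guess))
        # Learning transition after each opportunity
        p_known = p_known + (1.0 - p_known) * p_transit
    return errors

curve = bkt_error_curve(p_init=0.2, p_transit=0.3, p_slip=0.1, p_guess=0.25,
                        n_steps=8)
# Error probability falls monotonically toward the slip-dominated floor.
```

A mixture model in this framing would combine several such curves (or non-step generalizations of them), each with its own parameters and mixing weight.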
A method for evaluating models that use galaxy rotation curves to derive the density profiles
NASA Astrophysics Data System (ADS)
de Almeida, Álefe O. F.; Piattella, Oliver F.; Rodrigues, Davi C.
2016-11-01
There are some approaches, based either on General Relativity (GR) or on modified gravity, that use galaxy rotation curves to derive the matter density of the corresponding galaxy; this procedure would indicate either a partial or a complete elimination of dark matter in galaxies. Here we review these approaches, clarify the difficulties of this inverted procedure, present a method for evaluating them, and use it to test two specific GR-based approaches: the Cooperstock-Tieu (CT) and the Balasin-Grumiller (BG) approaches. Using this new method, we find that neither of the tested approaches can satisfactorily fit the observational data without dark matter. The CT approach's results can be significantly improved if some dark matter is considered, while for the BG approach no usual dark matter halo can improve its results.
ERIC Educational Resources Information Center
Erickson, Tim
2008-01-01
We often look for a best-fit function to a set of data. This article describes how a "pretty good" fit might be better than a "best" fit when it comes to promoting conceptual understanding of functions. In a pretty good fit, students design the function themselves rather than choosing it from a menu; they use appropriate variable names; and they…
ERIC Educational Resources Information Center
Valdes, Alice
This document presents baseline data on physical fitness that provides an outline for assessing the physical fitness of students. It consists of 4 tasks and a 13-item questionnaire on fitness-related behaviors. The fitness test evaluates cardiorespiratory endurance by a steady state jog; muscular strength and endurance with a two-minute bent-knee…
Ghasemzadeh, Nasim; Nyberg, Fred; Hjertén, Stellan
2008-12-01
High selectivity of a biomarker is a basic requirement when it is used for diagnosis, prognosis and treatment of a disease. The artificial gel antibodies, which we synthesise by a molecular imprinting method, have this property not only for proteins, but also for bioparticles, such as viruses and bacteria. However, diagnosis of a disease requires not only that the biomarker can be "fished out" from a body fluid with high selectivity, but also that its concentration in the sample can be determined rapidly, and preferably by a simple technique. This paper deals primarily with the development of a spectrophotometric method, which is so simple and fast that it can be used with advantage in a doctor's office. The development of this method was not straightforward. However, by modifications of the performance of these measurements we can now design standard curves in the form of a straight line, when we plot the true (not the recorded "apparent") absorption against known protein concentrations. In an additional publication (see the following paper in this issue of JSS) we show an application of such a plot: determination of the concentration of albumin in serum and cerebrospinal fluid from patients with neurological disorders, to investigate whether albumin is a biomarker for these diseases.
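A straight-line standard curve of the kind described works by least-squares fitting absorbance against known concentrations and then inverting the line for an unknown sample. The sketch below is purely illustrative; the concentrations and absorbances are invented, not data from the paper.

```python
import numpy as np

# Known standards: true (corrected) absorbance at known protein concentrations.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])          # mg/mL
absorb = np.array([0.02, 0.27, 0.51, 1.01, 2.03])   # absorbance units

# Straight-line standard curve by least squares.
slope, intercept = np.polyfit(conc, absorb, 1)

def concentration(a_unknown):
    """Invert the standard curve: concentration for a measured absorbance."""
    return (a_unknown - intercept) / slope

c = concentration(1.50)  # unknown sample absorbance -> about 3 mg/mL
```

In practice one would also check linearity over the working range and restrict unknowns to absorbances bracketed by the standards.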
De Luca, Michele; Ioele, Giuseppina; Mas, Sílvia; Tauler, Romà; Ragno, Gaetano
2012-11-21
Amiloride photostability at different pH values was studied in depth by applying Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) to the UV spectrophotometric data from drug solutions exposed to stressing irradiation. Resolution of all degradation photoproducts was possible by simultaneous spectrophotometric analysis of kinetic photodegradation and acid-base titration experiments. Amiloride photodegradation showed to be strongly dependent on pH. Two hard modelling constraints were sequentially used in MCR-ALS for the unambiguous resolution of all the species involved in the photodegradation process. An amiloride acid-base system was defined by using the equilibrium constraint, and the photodegradation pathway was modelled taking into account the kinetic constraint. The simultaneous analysis of photodegradation and titration experiments revealed the presence of eight different species, which were differently distributed according to pH and time. Concentration profiles of all the species as well as their pure spectra were resolved and kinetic rate constants were estimated. The values of rate constants changed with pH and under alkaline conditions the degradation pathway and photoproducts also changed. These results were compared to those obtained by LC-MS analysis from drug photodegradation experiments. MS analysis allowed the identification of up to five species and showed the simultaneous presence of more than one acid-base equilibrium.
Wullschleger, Stan D; Gu, Lianhong; Pallardy, Stephen G.; Tu, Kevin; Law, Beverly E.
2010-01-01
The Farquhar-von Caemmerer-Berry (FvCB) model of photosynthesis is a change-point model and structurally overparameterized for interpreting the response of leaf net assimilation (A) to intercellular CO{sub 2} concentration (Ci). The use of conventional fitting methods may lead not only to incorrect parameters but also several previously unrecognized consequences. For example, the relationships between key parameters may be fixed computationally and certain fits may be produced in which the estimated parameters result in contradictory identification of the limitation states of the data. Here we describe a new approach that is better suited to the FvCB model characteristics. It consists of four main steps: (1) enumeration of all possible distributions of limitation states; (2) fitting the FvCB model to each limitation state distribution by minimizing a distribution-wise cost function that has desirable properties for parameter estimation; (3) identification and correction of inadmissible fits; and (4) selection of the best fit from all possible limitation state distributions. The new approach implemented theoretical parameter resolvability with numerical procedures that maximally use the information content of the data. It was tested with model simulations, sampled A/Ci curves, and chlorophyll fluorescence measurements of different tree species. The new approach is accessible through the automated website leafweb.ornl.gov.
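The core of steps (1)-(4) is that a change-point model requires deciding which observations belong to which limitation state before parameters can be estimated. The following is a toy analog only: a linear rise followed by a plateau stands in for the two limitation states, every admissible change point is enumerated, each segmentation is fitted, and the lowest-cost one is kept. The real FvCB fitting uses the actual biochemical rate equations; the data here are synthetic.

```python
import numpy as np

# Synthetic A/Ci-like data: linear rise capped at a plateau, plus noise.
ci = np.linspace(50, 1200, 24)
a_obs = (np.minimum(0.03 * (ci - 40), 20.0)
         + np.random.default_rng(3).normal(0, 0.3, 24))

best = None
for k in range(3, 22):                       # enumerate candidate change points
    s, b = np.polyfit(ci[:k], a_obs[:k], 1)  # "limitation state 1": linear
    plateau = a_obs[k:].mean()               # "limitation state 2": flat
    sse = (np.sum((a_obs[:k] - (s * ci[:k] + b)) ** 2)
           + np.sum((a_obs[k:] - plateau) ** 2))
    if best is None or sse < best[0]:
        best = (sse, k, s, b, plateau)       # keep the best segmentation

sse, k, s, b, plateau = best
# The selected change point sits near where 0.03*(Ci-40) reaches 20.
```

Step (3) of the paper, rejecting inadmissible fits, corresponds here to discarding segmentations whose fitted segments contradict the assumed state ordering; that check is omitted in this sketch.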
NASA Astrophysics Data System (ADS)
Coronado, Y.; López-Corona, O.; Mendoza, S.
2016-10-01
Knots or blobs observed in astrophysical jets are commonly interpreted as shock waves moving along them. Long-time observations of the HST-1 knot inside the jet of the galaxy M87 have produced detailed multiwavelength light curves. In this paper, we model these light curves using the semi-analytical approach developed by Mendoza et al. This model was developed to account for the light curves produced by working surfaces (blobs) moving along relativistic jets. These working surfaces are generated by periodic oscillations of the injected flow velocity and mass ejection rates at the base of the jet. Using genetic algorithms to fit the parameters of the model, we are able to explain the outbursts observed in the light curves of the HST-1 knot with an accuracy greater than a 2σ statistical confidence level.
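Genetic-algorithm parameter fitting of the sort used here can be sketched generically: a population of candidate parameter vectors is evolved under selection on chi-square. The "model" below is a single Gaussian outburst standing in for a light-curve feature; it is not the working-surface model of Mendoza et al., and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(t, amp, t0, width):
    """Toy outburst model: one Gaussian flare."""
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

t = np.linspace(0.0, 10.0, 200)
data = model(t, 2.0, 5.0, 1.0) + rng.normal(0.0, 0.05, t.size)

def chi2(p):
    return np.sum((data - model(t, *p)) ** 2)

lo = np.array([0.1, 0.0, 0.1])              # parameter bounds
hi = np.array([5.0, 10.0, 3.0])
pop = rng.uniform(lo, hi, (60, 3))          # initial random population

for _ in range(80):
    fit = np.array([chi2(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]     # selection: keep the best third
    i = rng.integers(0, 20, 40)
    j = rng.integers(0, 20, 40)
    w = rng.uniform(size=(40, 1))
    children = w * parents[i] + (1 - w) * parents[j]     # blend crossover
    children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)  # mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmin([chi2(p) for p in pop])]
# best converges toward the generating parameters (2.0, 5.0, 1.0).
```

Keeping the parents unmutated (elitism) guarantees the best chi-square never worsens between generations, which is what makes such a simple scheme converge reliably on low-dimensional fits like this.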
Accelerating Around an Unbanked Curve
NASA Astrophysics Data System (ADS)
Mungan, Carl E.
2006-02-01
The December 2004 issue of TPT presented a problem concerning how a car should accelerate around an unbanked curve of constant radius r starting from rest if it is to avoid skidding. Interestingly enough, two solutions were proffered by readers.2 The purpose of this note is to compare and contrast the two approaches. Further experimental investigation of various turning strategies using a remote-controlled car and overhead video analysis could make for an interesting student project.
Nanstad, Randy K
2009-01-01
The precracked Charpy single-edge notched bend, SE(B), specimen (PCC) is the most likely specimen type to be used for determination of the reference temperature, T0, with reactor pressure vessel (RPV) surveillance specimens. Unfortunately, for many RPV steels, significant differences have been observed between the T0 temperature for the PCC specimen and that obtained from the 25-mm thick compact specimen [1TC(T)], generally considered the standard reference specimen for T0. This difference in T0 has often been designated a specimen bias effect, and the primary focus for explaining this effect is loss of constraint in the PCC specimen. The International Atomic Energy Agency (IAEA) has developed a coordinated research project (CRP) to evaluate various issues associated with the fracture toughness Master Curve for application to light-water RPVs. Topic Area 1 of the CRP is focused on the issue of test specimen geometry effects, with emphasis on determination of T0 with the PCC specimen and the bias effect. Topic Area 1 has an experimental part and an analytical part. Participating organizations for the experimental part of the CRP performed fracture toughness testing of various steels, including the reference steel JRQ (A533-B-1) often used for IAEA studies, with various types of specimens under various conditions. Additionally, many of the participants took part in a round robin exercise on finite element modeling of the PCVN specimen, discussed in a separate paper. Results from fracture toughness tests are compared with regard to effects of specimen size and type on the reference temperature T0. It is apparent from the results presented that the bias observed between the PCC specimen and larger specimens for Plate JRQ is not nearly as large as that obtained for Plate 13B (-11 C vs -37 C) and for some of the results in the literature (bias values as much as -45 C). This observation is consistent with observations in the literature that show significant variations in…
Modelling of the toe trajectory during normal gait using circle-fit approximation.
Fang, Juan; Hunt, Kenneth J; Xie, Le; Yang, Guo-Yuan
2016-10-01
This work aimed to validate the approach of using a circle to fit the toe trajectory relative to the hip and to investigate linear regression models for describing such toe trajectories from normal gait. Twenty-four subjects walked at seven speeds. Best-fit circle algorithms were developed to approximate the relative toe trajectory using a circle. It was detected that the mean approximation error between the toe trajectory and its best-fit circle was less than 4 %. Regarding the best-fit circles for the toe trajectories from all subjects, the normalised radius was constant, while the normalised centre offset reduced when the walking cadence increased; the curve range generally had a positive linear relationship with the walking cadence. The regression functions of the circle radius, the centre offset and the curve range with leg length and walking cadence were definitively defined. This study demonstrated that circle-fit approximation of the relative toe trajectories is generally applicable in normal gait. The functions provided a quantitative description of the relative toe trajectories. These results have potential application for design of gait rehabilitation technologies.
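A common way to implement the kind of best-fit-circle approximation described above is the algebraic (Kåsa) least-squares fit, which linearizes the circle equation and solves it in one shot. The sketch below is a generic implementation on synthetic points, not the authors' specific algorithm.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic circle fit: solve x^2 + y^2 = A*x + B*y + C by
    least squares, then recover centre (cx, cy) and radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# Noisy points on a circle of radius 0.9 centred at (0.1, -0.8):
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 100)
x = 0.1 + 0.9 * np.cos(theta) + rng.normal(0, 0.01, 100)
y = -0.8 + 0.9 * np.sin(theta) + rng.normal(0, 0.01, 100)
cx, cy, r = fit_circle(x, y)   # recovers centre and radius closely
```

The Kåsa fit is biased toward smaller radii when the points cover only a short arc; for toe trajectories, which trace only part of a circle, a geometric (iterative) fit may be preferable in practice.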
A new methodology for free wake analysis using curved vortex elements
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Teske, Milton E.; Quackenbush, Todd R.
1987-01-01
A method using curved vortex elements was developed for helicopter rotor free wake calculations. The Basic Curve Vortex Element (BCVE) is derived from the approximate Biot-Savart integration for a parabolic arc filament. When used in conjunction with a scheme to fit the elements along a vortex filament contour, this method has a significant advantage in overall accuracy and efficiency when compared to the traditional straight-line element approach. A theoretical and numerical analysis shows that free wake flows involving close interactions between filaments should utilize curved vortex elements in order to guarantee a consistent level of accuracy. The curved element method was implemented into a forward flight free wake analysis, featuring an adaptive far wake model that utilizes free wake information to extend the vortex filaments beyond the free wake regions. The curved vortex element free wake, coupled with this far wake model, exhibited rapid convergence, even in regions where the free wake and far wake turns are interlaced. Sample calculations are presented for tip vortex motion at various advance ratios for single and multiple blade rotors. Cross-flow plots reveal that the overall downstream wake flow resembles a trailing vortex pair. A preliminary assessment shows that the rotor downwash field is insensitive to element size, even for relatively large curved elements.
ERIC Educational Resources Information Center
Nordmark, Arne; Essen, Hanno
2007-01-01
The equilibrium of a flexible inextensible string, or chain, in the centrifugal force field of a rotating reference frame is investigated. It is assumed that the end points are fixed on the rotation axis. The shape of the curve, the skipping rope curve or "troposkien", is given by the Jacobi elliptic function sn. (Contains 3 figures.)
Anodic Polarization Curves Revisited
ERIC Educational Resources Information Center
Liu, Yue; Drew, Michael G. B.; Liu, Ying; Liu, Lin
2013-01-01
An experiment published in this "Journal" has been revisited and it is found that the curve pattern of the anodic polarization curve for iron repeats itself successively when the potential scan is repeated. It is surprising that this observation has not been reported previously in the literature because it immediately brings into…
NASA Technical Reports Server (NTRS)
Dellacorte, Christopher; Howard, S. Adam
2015-01-01
Ball bearings require proper fit and installation into machinery structures (onto shafts and into bearing housings) to ensure optimal performance. For some applications, both the inner and outer race must be mounted with an interference fit, and care must be taken during assembly and disassembly to avoid placing heavy static loads between the balls and races; otherwise Brinell-type dent damage can occur. In this paper, a highly dent-resistant superelastic alloy, 60NiTi, is considered for rolling element bearing applications that encounter excessive static axial loading during assembly or disassembly. A small (R8) ball bearing is designed for an application in which access to the bearing races to apply disassembly tools is precluded. First-principles analyses show that by careful selection of materials, raceway curvature and land geometry, a bearing can be designed that allows blind assembly and disassembly without incurring raceway damage due to ball denting. Though such blind assembly applications are uncommon, the availability of bearings with unusually high static load capability may enable more such applications with additional benefits, especially for miniature bearings.
Alia, Kassandra; Wilson, Dawn K.; McDaniel, Tyler; St. George, Sara M.; Kitzman-Ulrich, Heather; Smith, Kelsey; Heatley, VaShawn; Wise, Courtney
2015-01-01
This study demonstrates how a multi-theoretical, multilevel process evaluation was used to assess implementation of Families Improving Together (FIT) for weight loss intervention. FIT is a randomized controlled trial evaluating a culturally tailored, motivational plus family-based program on weight loss in African American adolescents and their parents. Social Cognitive, Self Determination, Family Systems theories and cultural tailoring principles guided the conceptualization of essential elements across individual/family, facilitator, and group levels. Data collection included an observational rating tool, attendance records, and a validated psychosocial measure. Results. Attendance records (0=absent, 1=present, criteria=≥70%) indicated that 71.5% of families attended each session. The survey (1=false, 6=true, criteria=≥4.5) indicated that participants perceived a positive group climate (M=5.16, SD=.69). A trained evaluator reported that facilitator dose delivered (0=no, 1=yes, criteria=≥75%) was high (99.6%), and fidelity (1=none to 4=all, criteria=≥3) was adequate at facilitator (M=3.63, SD=.41) and group levels (M=3.35, SD=.49). Five cultural topics were raised by participants related to eating (n=3) and physical activity (n=2) behaviors and were integrated as part of the final curriculum. Discussion. Results identify areas for program improvement related to delivery of multi-theoretical and cultural tailoring elements. Findings may inform future strategies for implementing effective weight loss programs for ethnic minority families. PMID:25614139
Johnson, James H.; McKenna, James E.; Dropkin, David S.; Andrews, William D.
2005-01-01
We examined the growth characteristics of 303 Atlantic sturgeon, Acipenser oxyrinchus, caught in the commercial fishery off the New Jersey coast from 1992 to 1994 (fork length range: 93–219 cm). Sections taken from the leading pectoral fin ray were used to age each sturgeon. Ages ranged from 5–26 years. Von Bertalanffy growth models for males and females fit well, but test statistics (t-test, maximum likelihood) failed to reject the null hypothesis that growth was not significantly different between sexes. Consequently, all data were pooled, and the combined data gave L∞ and K estimates of 174.2 cm and 0.144, respectively. Our growth data do not fit the pattern of slower growth and increased size at more northerly latitudes observed for Atlantic sturgeon in other work. Lack of uniformity of our growth data may be due to (1) the sturgeon fishery harvesting multiple stocks having different growth rates, and (2) size limits for the commercial fishery having created a bias in estimating growth parameters.
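The pooled estimates reported above define the von Bertalanffy curve L(t) = L∞(1 − e^(−K(t − t0))). A minimal sketch using those values follows; the abstract does not report t0, so t0 = 0 is assumed here purely for illustration.

```python
import numpy as np

# Pooled von Bertalanffy parameters from the abstract; t0 is an assumption.
L_inf, K, t0 = 174.2, 0.144, 0.0

def vb_length(age):
    """Predicted fork length (cm) at a given age (years)."""
    return L_inf * (1.0 - np.exp(-K * (age - t0)))

# Lengths increase with age and asymptote below L_inf:
# vb_length(5) ~ 89 cm, vb_length(26) ~ 170 cm.
```

Because the oldest fish in the sample (26 years) is predicted well below L∞, the asymptotic length is an extrapolation, one reason the authors caution about fishery size limits biasing the parameter estimates.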
FIT3D: Fitting optical spectra
NASA Astrophysics Data System (ADS)
Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.
2016-09-01
FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.
ERIC Educational Resources Information Center
Grosse, Susan J.
2009-01-01
This article discusses how families can increase family togetherness and improve physical fitness. The author provides easy ways to implement family friendly activities for improving and maintaining physical health. These activities include: walking, backyard games, and fitness challenges.
Taxonomic level as a determinant of the shape of the Phanerozoic marine biodiversity curve.
Lane, Abigail; Benton, Michael J
2003-09-01
Key aims of recent paleobiological research have been the construction of Phanerozoic global biodiversity patterns and the formulation of models and mechanisms of diversification describing such patterns. Two conflicting theories of global diversification posit equilibrium versus expansionist growth of taxonomic diversity. These models, however, rely on accurate empirical data curves, and it is not clear to what extent the taxonomic level at which the data are analyzed controls the resulting pattern. Global Phanerozoic marine diversity curves are constructed at ordinal, familial, and generic levels using several fossil-range data sets. The fit of a single logistic model reduces from ordinal through familial to generic level, while conversely, that of an exponential growth model increases. Three sequential logistic equations, fitted to three time periods during which diversity appears to approach or reach an equilibrium state, provide the best description of the data at familial and generic levels. However, an exponential growth curve describes the diversification of marine life since the end-Permian extinction as well as a logistic model does. A species-level model of global Phanerozoic marine diversification, constructed by extrapolation of the trends from familial to generic level, suggests growth in numbers of marine species was broadly exponential. When smaller subsets of the data are analyzed, the effect of taxonomic level on the shape of the diversity curve becomes more pronounced. In the absence of species data, a consistent signal at more than one higher taxonomic level is required to predict a species-level pattern. PMID:12970836
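The exponential growth model compared above, D(t) = D0·e^(rt), can be fitted by linear regression on log-diversity (the logistic alternative requires nonlinear optimization). The sketch below uses a synthetic, idealized series standing in for a genus-level diversity curve; the rate and scale are invented.

```python
import numpy as np

# Synthetic diversity time series following exact exponential growth.
t = np.linspace(0.0, 500.0, 26)            # time in Myr from an arbitrary datum
diversity = 100.0 * np.exp(0.004 * t)      # D(t) = D0 * exp(r*t)

# Fit by linear regression on log-counts: log D = log D0 + r*t.
r, log_d0 = np.polyfit(t, np.log(diversity), 1)
d0 = np.exp(log_d0)
# Recovers r = 0.004 per Myr and D0 = 100 for this noise-free series.
```

With real fossil-range data the same regression is applied to noisy counts, and the relative fit of this line versus a logistic curve is what distinguishes expansionist from equilibrium growth.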
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1988-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger, it permits the development of a comprehensive program having libraries of driving force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
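The instability calculation rests on a tangency construction: crack growth becomes unstable where the driving-force curve is tangent to the R-curve, i.e. at the maximum load sustainable along the R-curve. The sketch below uses a power-law R-curve and a centre-crack driving force purely as illustrative stand-ins; they are not the model equations of the paper.

```python
import numpy as np

a0 = 10.0                                   # initial crack length, mm (assumed)
da = np.linspace(0.01, 20.0, 2000)          # candidate crack extensions, mm
KR = 50.0 * da ** 0.3                       # power-law R-curve (notional units)

# For each extension, the stress that places the driving force
# K = s*sqrt(pi*a) exactly on the R-curve; instability occurs at the
# maximum of this equilibrium stress (the tangency point).
s_equilibrium = KR / np.sqrt(np.pi * (a0 + da) / 1000.0)
i = np.argmax(s_equilibrium)
s_crit, da_crit = s_equilibrium[i], da[i]
# For these choices the tangency point is analytically at da = 1.5*a0 = 15 mm.
```

This is exactly the kind of calculation the abstract notes can run on a pocket calculator for a single material-structure pair, and that generalizes to libraries of driving-force and R-curve equations on larger machines.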
Chalam, K V; Shah, Vinay A; Tripathi, Ramesh C
2004-01-01
A curved vitrectomy probe for better accessibility of the peripheral retina in phakic eyes is described. The specially designed curved vitrectomy probe has a 20-gauge pneumatic cutter. The radius of curvature at the shaft is 19.4 mm and it is 25 mm long. The ora serrata is accessed through a 3.0- or 4.0-mm sclerotomy in phakic eyes without touching the crystalline lens. Use of this instrument avoids inadvertent trauma to the clear lens in phakic eyes requiring vitreous base excision. This curved vitrectomy instrument complements wide-angle viewing systems and endoscopes for safe surgical treatment of peripheral retinal pathology in phakic eyes. PMID:15185799
ERIC Educational Resources Information Center
Corradini, Deedee
1999-01-01
Too many U.S. children are out of shape. Parents must help them learn to improve their fitness by exercising with them. The U.S. Conference of Mayors recently made physical fitness of the nation's children a primary emphasis. A sidebar presents information on how to contact local mayors to start up programs to help children improve their fitness.…
NASA Astrophysics Data System (ADS)
Wassermann, J. M.; Wietek, A.; Hadziioannou, C.; Igel, H.
2014-12-01
Microzonation, i.e. the estimation of (shear) wave velocity profiles of the upper few hundred metres on dense 2D surface grids, is one of the key methods for understanding the variation in seismic hazard caused by ground shaking events. In this presentation we introduce a novel method for estimating the Love-wave phase velocity dispersion using ambient noise recordings. We use the vertical component of rotational motions inherently present in ambient noise and its well-established relation to simultaneous recordings of transverse acceleration. In this relation the frequency-dependent phase velocity of a plane SH (or Love)-type wave acts as a proportionality factor between the anti-correlated amplitudes of both measures. In a first step we used synthetic data sets of increasing complexity to evaluate the proposed technique and the developed algorithm, which extracts the direction and amplitude of the incoming ambient noise wavefield measured at a single site. Since reliable weak-motion rotational sensors are not yet readily available, we apply array-derived rotation measurements in order to test our method. We next use the technique to analyze different real data sets of ambient noise measurements as well as seismic recordings at active volcanoes, and compare these results with findings of the Spatial AutoCorrelation technique applied to the same data set. We demonstrate that the newly developed technique shows results comparable to more classical, strictly array-based methods. Furthermore, we show that as soon as portable weak-motion rotational sensors are available, a single 6C-station approach will be feasible, not only for microzonation but also for general array applications, with performance comparable to more classical techniques. An important advantage, especially in urban environments, is that with this approach the number of seismic stations needed is drastically reduced.
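The proportionality exploited above can be illustrated numerically: for a plane SH/Love wave, transverse acceleration and vertical rotation rate are anti-correlated with amplitude ratio 2c, where c is the phase velocity. The sketch below builds a synthetic plane-wave pair, adds noise, and recovers c from the amplitude relation; the signal, noise levels, and velocity are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
c_true = 3000.0                             # m/s, assumed Love-wave phase velocity
t = np.linspace(0.0, 10.0, 5000)

a = (2 * np.pi) * np.cos(2 * np.pi * t)     # transverse acceleration (1 Hz wave)
rot_rate = -a / (2 * c_true)                # plane-wave vertical rotation rate

# Add measurement noise to both channels.
a_obs = a + rng.normal(0.0, 0.06, t.size)
rot_obs = rot_rate + rng.normal(0.0, 1e-5, t.size)

# Phase velocity from the anti-correlated amplitude relation a = -2c * rot:
slope = np.polyfit(rot_obs, a_obs, 1)[0]    # ~ -2c
c_est = -slope / 2.0
```

In practice this regression is done per frequency band, yielding the dispersion curve c(f) used for the velocity-profile inversion.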
Optical conductivity of curved graphene.
Chaves, A J; Frederico, T; Oliveira, O; de Paula, W; Santos, M C
2014-05-01
We compute the optical conductivity for an out-of-plane deformation in graphene using an approach based on solutions of the Dirac equation in curved space. Different examples of periodic deformations along one direction translates into an enhancement of the optical conductivity peaks in the region of the far- and mid-infrared frequencies for periodicities ∼100 nm. The width and position of the peaks can be changed by dialling the parameters of the deformation profiles. The enhancement of the optical conductivity is due to intraband transitions and the translational invariance breaking in the geometrically deformed background. Furthermore, we derive an analytical solution of the Dirac equation in a curved space for a general deformation along one spatial direction. For this class of geometries, it is shown that curvature induces an extra phase in the electron wave function, which can also be explored to produce interference devices of the Aharonov-Bohm type.
Quasispecies on Fitness Landscapes.
Schuster, Peter
2016-01-01
Selection-mutation dynamics is studied as adaptation and neutral drift on abstract fitness landscapes. Various models of fitness landscapes are introduced and analyzed with respect to the stationary mutant distributions adopted by populations upon them. The concept of quasispecies is introduced, and the error threshold phenomenon is analyzed. Complex fitness landscapes with large scatter of fitness values are shown to sustain error thresholds. The phenomenological theory of the quasispecies introduced in 1971 by Eigen is compared to approximation-free numerical computations. The concept of strong quasispecies, understood as mutant distributions that are especially stable against changes in mutation rates, is presented. The role of fitness-neutral genotypes in quasispecies is discussed.
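The stationary mutant distribution the chapter analyzes can be computed directly for a small example: on a single-peak landscape, the quasispecies is the leading eigenvector of the mutation-selection matrix. The sketch below uses binary sequences of length 6 and illustrative parameter values (master fitness and per-site error rate are assumptions, not from the text).

```python
import numpy as np
from itertools import product

# Single-peak landscape: one master sequence with fitness sigma > 1,
# all other sequences with fitness 1; per-site copying error rate p.
L, sigma, p = 6, 10.0, 0.05
seqs = list(product([0, 1], repeat=L))
n = len(seqs)
f = np.ones(n)
f[0] = sigma                                # master sequence fitness

def q(i, j):
    """Probability that replicating sequence j yields sequence i."""
    d = sum(a != b for a, b in zip(seqs[i], seqs[j]))
    return (p ** d) * ((1 - p) ** (L - d))

# Stationary quasispecies = leading eigenvector of W = Q * diag(f).
W = np.array([[q(i, j) * f[j] for j in range(n)] for i in range(n)])
vals, vecs = np.linalg.eig(W)
v = np.abs(vecs[:, np.argmax(vals.real)].real)
quasispecies = v / v.sum()
# Below the error threshold ((1-p)^L > 1/sigma here), the master
# sequence dominates the stationary distribution.
```

Raising p until (1 − p)^L drops below 1/σ delocalizes the distribution over sequence space, which is the error threshold phenomenon the chapter describes.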
ERIC Educational Resources Information Center
Marini, Isabella
2005-01-01
Human salivary [alpha]-amylase is used in this experimental approach to introduce biology high school students to the concept of enzyme activity in a dynamic way. Through a series of five easy, rapid, and inexpensive laboratory experiments students learn what the activity of an enzyme consists of: first in a qualitative then in a semi-quantitative…
What Current Research Tells Us About Physical Fitness for Children.
ERIC Educational Resources Information Center
Cundiff, David E.
The author distinguishes between the terms "physical fitness" and "motor performance," summarizes the health and physical status of adults, surveys the physical fitness status of children, and proposes a lifestyle approach to the development and lifetime maintenance of health and physical fitness. The distinctions between "physical fitness" as…
The Very Essentials of Fitness for Trial Assessment in Canada
ERIC Educational Resources Information Center
Newby, Diana; Faltin, Robert
2008-01-01
Fitness for trial constitutes the most frequent referral to forensic assessment services. Several approaches to this evaluation exist in Canada, including the Fitness Interview Test and Basic Fitness for Trial Test. The following article presents a review of the issues and a method for basic fitness for trial evaluation.
A new curriculum for fitness education.
Boone, J L
1983-01-01
Regular exercise is important in a preventive approach to health care because it exerts a beneficial effect on many risk factors in the development of coronary heart disease. However, many Americans lack the skills required to devise and carry out a safe and effective exercise program appropriate for a life-time of fitness. This inability is partly due to the lack of fitness education during their school years. School programs in physical education tend to neglect training in the health-related aspects of fitness. Therefore, a new curriculum for fitness education is proposed that would provide seventh, eighth, and ninth grade students with (a) a basic knowledge of their physiological response to exercise, (b) the means to develop their own safe and effective physical fitness program, and (c) the motivation to incorporate regular exercise into their lifestyle. This special 4-week segment of primarily academic study is designed to be inserted into the physical education curriculum. Daily lessons cover health-related fitness, cardiovascular fitness, body fitness, and care of the back. A final written examination covering major areas of information is given to emphasize this academic approach to exercise. Competition in athletic ability is deemphasized, and motivational awards are given based on health-related achievements. The public's present lack of knowledge about physical fitness, coupled with the numerous anatomical and physiological benefits derived from regular, vigorous exercise, mandate an intensified curriculum of fitness education for school children. PMID:6414039
UBVRI Photometry of Mercury's Integral Phase Curve
NASA Astrophysics Data System (ADS)
Bergfors, Carolina; Warell, J.
2007-10-01
We present results from a photometric survey of Mercury's integral phase curve in the Johnson UBVRI system, obtained with the 0.9-m Westerlund Telescope in Uppsala, Sweden. CCD observations of the integrated disk have been obtained for the phase angle range 22-152 degrees. This is the first integral phase curve survey covering the extended visible spectrum of Mercury. We have derived absolute magnitudes and color indices from which the Bond albedo, geometric albedo, and phase integral have been determined for all bands. We have fitted the data in each band with the Mallama et al. (2002) V-band phase curve, which is based on a wider range of more densely sampled phase angles. A magnitude-scaled Mallama phase curve provided adequate fits for each band within the photometric error budget. This implies no evidence of phase reddening in any color, in contrast to the Moon, for which Lane and Irvine (1973) determined a phase reddening of 0.001 mag/degree between the wavelengths 445 nm and 550 nm. These data will be modeled to derive light-scattering properties of the regolith. Of particular interest is the prediction of a disk-averaged normal albedo at 1064 nm, with implications for the returned signal strength of the BepiColombo laser altimeter BELA (Gunderson et al., 2006).
Study of galactic rotation curves in wormhole spacetime
NASA Astrophysics Data System (ADS)
Rahaman, Farook; Sen, Banashree; Chakraborty, Koushik; Shit, G. C.
2016-03-01
The spacetime of the galactic halo region is described by a wormhole-like line element. We assume violation of the Null Energy Condition (NEC) in the galactic halo. The Einstein field equations are solved for two different conditions of pressure and density to obtain physical parameters such as the tangential velocity of test particles and parameters related to the wormhole geometry. The theoretical rotation curve of the test particles is plotted and compared with an observed rotation curve. We obtain a satisfactory fit between the observed curve and the curve obtained from the present theory for radial distances in the range 9 kpc to 100 kpc.
Mathematical analysis of polymerase chain reaction kinetic curves.
Sochivko, D G; Fedorov, A A; Varlamov, D A; Kurochkin, V E; Petrov, R V
2016-01-01
The paper reviews different approaches to the mathematical analysis of polymerase chain reaction (PCR) kinetic curves. The basic principles of PCR mathematical analysis are presented. Approximation of PCR kinetic curves and PCR efficiency curves by various functions is described. Several PCR models based on chemical kinetics equations are suggested. Decision criteria for an optimal function to describe PCR efficiency are proposed.
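As a hedged illustration of the kind of kinetic-curve analysis the review describes, the sketch below simulates a sigmoidal PCR curve with a toy efficiency model (the linear efficiency decline and all parameter values are assumptions for illustration, not a model from the paper) and then reads the initial efficiency off the exponential phase with a log-linear fit:

```python
import numpy as np

# Toy kinetic model (an assumption, not from the paper): per-cycle
# efficiency declines linearly as product accumulates,
# E(N) = E0 * (1 - N / Nmax), which yields the familiar sigmoidal curve.
E0, Nmax = 0.95, 1e10
N = [1e3]                        # starting template copies (illustrative)
for _ in range(40):
    n = N[-1]
    N.append(n * (1.0 + E0 * (1.0 - n / Nmax)))
N = np.array(N)

# Estimate the initial efficiency from the early (exponential) cycles via
# a log-linear fit, a standard first step in kinetic-curve analysis.
cycles = np.arange(N.size)
slope, _ = np.polyfit(cycles[:15], np.log(N[:15]), 1)
print(f"estimated initial efficiency: {np.exp(slope) - 1:.3f}")
```

The recovered efficiency is close to the assumed E0 because the first 15 cycles stay far below the plateau.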
NASA Astrophysics Data System (ADS)
Tiilikainen, J.; Tilli, J.-M.; Bosund, V.; Mattila, M.; Hakkarainen, T.; Airaksinen, V.-M.; Lipsanen, H.
2007-01-01
Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present.
Pickett, Patrick T.
1981-01-01
A hollow fitting for use in gas spectrometry leak testing of conduit joints is divided into two generally symmetrical halves along the axis of the conduit. A clip may quickly and easily fasten and unfasten the halves around the conduit joint under test. Each end of the fitting is sealable with a yieldable material, such as a piece of foam rubber. An orifice is provided in a wall of the fitting for the insertion or detection of helium during testing. One half of the fitting also may be employed to test joints mounted against a surface.
Leslie, Mark; Holloway, Charles A
2006-01-01
When a company launches a new product into a new market, the temptation is to immediately ramp up sales force capacity to gain customers as quickly as possible. But hiring a full sales force too early just causes the firm to burn through cash and fail to meet revenue expectations. Before it can sell an innovative product efficiently, the entire organization needs to learn how customers will acquire and use it, a process the authors call the sales learning curve. The concept of a learning curve is well understood in manufacturing. Employees transfer knowledge and experience back and forth between the production line and purchasing, manufacturing, engineering, planning, and operations. The sales learning curve unfolds similarly through the give-and-take between the company--marketing, sales, product support, and product development--and its customers. As customers adopt the product, the firm modifies both the offering and the processes associated with making and selling it. Progress along the manufacturing curve is measured by tracking cost per unit: The more a firm learns about the manufacturing process, the more efficient it becomes, and the lower the unit cost goes. Progress along the sales learning curve is measured in an analogous way: The more a company learns about the sales process, the more efficient it becomes at selling, and the higher the sales yield. As the sales yield increases, the sales learning process unfolds in three distinct phases--initiation, transition, and execution. Each phase requires a different size--and kind--of sales force and represents a different stage in a company's production, marketing, and sales strategies. Adjusting those strategies as the firm progresses along the sales learning curve allows managers to plan resource allocation more accurately, set appropriate expectations, avoid disastrous cash shortfalls, and reduce both the time and money required to turn a profit.
Do the Kepler AGN light curves need reprocessing?
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Vogeley, Michael S.; Richards, Gordon T.; Williams, Joshua; Carini, Michael T.
2015-10-01
We gauge the impact of spacecraft-induced effects on the inferred variability properties of the light curve of the Seyfert 1 AGN Zw 229-15 observed by Kepler. We compare the light curve of Zw 229-15 obtained from the Kepler MAST database with a reprocessed light curve constructed from raw pixel data. We use the first-order structure function, SF(δt), to fit both light curves to the damped power-law PSD (power spectral density) of Kasliwal et al. On short time-scales, we find a steeper log PSD slope (γ = 2.90 to within 10 per cent) for the reprocessed light curve as compared to the light curve found on MAST (γ = 2.65 to within 10 per cent) - both inconsistent with a damped random walk (DRW), which requires γ = 2. The log PSD slope inferred for the reprocessed light curve is consistent with previous results that study the same reprocessed light curve. The turnover time-scale is almost identical for both light curves (27.1 and 27.5 d for the reprocessed and MAST database light curves). Based on the obvious visual difference between the two versions of the light curve and on the PSD model fits, we conclude that there remain significant levels of spacecraft-induced effects in the standard pipeline reduction of the Kepler data. Reprocessing the light curves will change the model inferred from the data but is unlikely to change the overall scientific conclusions reached by Kasliwal et al. - not all AGN light curves are consistent with the DRW.
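The first-order structure function used in the abstract can be sketched in a few lines; the example below (synthetic data, all values invented) recovers the expected relation SF ∝ δt^((γ-1)/2), which gives slope 0.5 for a random walk (γ = 2, i.e. a DRW):

```python
import numpy as np

# First-order structure function of an evenly sampled light curve:
# SF(dt) = <|f(t + dt) - f(t)|>. Its log-log slope relates to the PSD
# power-law slope gamma via SF ~ dt**((gamma - 1)/2), for 1 < gamma < 3.
rng = np.random.default_rng(2)
f = np.cumsum(rng.normal(size=4096))    # random walk: gamma = 2 exactly

lags = np.arange(1, 100)
sf = np.array([np.abs(f[k:] - f[:-k]).mean() for k in lags])
slope, _ = np.polyfit(np.log(lags), np.log(sf), 1)
print(f"SF slope: {slope:.2f}")          # ~0.5 for a DRW-like process
```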
FPGA curved track fitter with very low resource usage
Wu, Jin-Yuan; Wang, M.; Gottschalk, E.; Shi, Z.; /Fermilab
2006-11-01
The standard least-squares curved-track fitting process is tailored for FPGA implementation. The coefficients in the fitting matrices are carefully chosen so that only shift and accumulation operations are used in the process; divisions and full multiplications are eliminated. Comparison in an application example shows that the fitting errors of the low-resource-usage implementation are less than 4% larger than those of the exact least-squares algorithm. The implementation is suitable for low-cost, low-power applications such as high-energy physics detector trigger systems.
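The shift-and-accumulate idea can be illustrated numerically. In this hypothetical sketch (the geometry, bit width, and track values are invented, not taken from the paper), each least-squares fit parameter is a fixed weighted sum of the measurements, and rounding those weights to dyadic fractions k/2^s makes every multiplication a shift-and-add:

```python
import numpy as np

# A parabola is fitted to hits at 8 equally spaced detector planes.
# Each fitted parameter is a row of the pseudo-inverse applied to the
# measurements; dyadic weights k/2^s need only shifts and accumulations.
x = np.arange(8.0)
A = np.vstack([np.ones_like(x), x, x**2]).T
M = np.linalg.pinv(A)                  # exact least-squares weights

s = 6                                  # keep 6 fractional bits
M_dyadic = np.round(M * 2**s) / 2**s   # weights realizable by shifts/adds

y = 1.0 + 0.5 * x - 0.1 * x**2 + 0.01 * np.sin(x)   # slightly bent "track"
p_exact = M @ y
p_dyadic = M_dyadic @ y
print(np.max(np.abs(p_exact - p_dyadic)))   # dyadic rounding costs little
```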
Molecular dynamics simulations of the melting curve of NiAl alloy under pressure
Zhang, Wenjin; Peng, Yufeng; Liu, Zhongli
2014-05-15
The melting curve of B2-NiAl alloy under pressure has been investigated using the molecular dynamics technique and the embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first simulates homogeneous melting, while the second involves heterogeneous melting of materials. Both approaches reduce superheating effectively, and their results are close to each other at the applied pressures. By fitting the well-known Simon equation to our melting data, we obtained the melting curves for NiAl: 1783(1 + P/9.801)^0.298 (one-phase approach) and 1850(1 + P/12.806)^0.357 (two-phase approach). The good agreement of the resulting equation of state and the zero-pressure melting point (calc. 1850 ± 25 K, exp. 1911 K) with experiment supports the correctness of these results. These melting data fill the absence of experimental high-pressure melting data for NiAl. To check the transferability of this EAM potential, we have also predicted the melting curves of pure nickel and pure aluminum. Results show that the calculated melting point of nickel agrees well with experiment at zero pressure, while that of aluminum is slightly higher than experiment.
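The two fitted Simon curves reported above are easy to evaluate directly; the sketch below does so (the pressure unit, assumed GPa, is not stated in the abstract), and shows that each reduces to its zero-pressure melting point at P = 0:

```python
# Simon melting equation, Tm(P) = T0 * (1 + P/A)**C, with the two fitted
# parameter sets reported for B2-NiAl (pressure unit assumed to be GPa).
def simon(P, T0, A, C):
    return T0 * (1.0 + P / A) ** C

one_phase = dict(T0=1783.0, A=9.801, C=0.298)   # one-phase fit
two_phase = dict(T0=1850.0, A=12.806, C=0.357)  # two-phase fit

for P in (0.0, 10.0, 50.0):
    print(P, simon(P, **one_phase), simon(P, **two_phase))
```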
Textbook Factor Demand Curves.
ERIC Educational Resources Information Center
Davis, Joe C.
1994-01-01
Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)
ERIC Educational Resources Information Center
Lawes, Jonathan F.
2013-01-01
Graphing polar curves typically involves a combination of three traditional techniques, all of which can be time-consuming and tedious. However, an alternative method--graphing the polar function on a rectangular plane--simplifies graphing, increases student understanding of the polar coordinate system, and reinforces graphing techniques learned…
ERIC Educational Resources Information Center
Paulton, Richard J. L.
1991-01-01
A procedure that allows students to view an entire bacterial growth curve during a two- to three-hour student laboratory period is described. Observations of the lag phase, logarithmic phase, maximum stationary phase, and phase of decline are possible. A nonpathogenic, marine bacterium is used in the investigation. (KR)
Comparing Item Characteristic Curves.
ERIC Educational Resources Information Center
Rosenbaum, Paul R.
1987-01-01
This paper develops and applies three nonparametric comparisons of the shapes of two item characteristic surfaces: (1) proportional latent odds; (2) uniform relative difficulty; and (3) item sensitivity. A method is presented for comparing the relative shapes of two item characteristic curves in two examinee populations who were administered an…
Straightening Out Learning Curves
ERIC Educational Resources Information Center
Corlett, E. N.; Morecombe, V. J.
1970-01-01
The basic mathematical theory behind learning curves is explained, together with implications for clerical and industrial training, evaluation of skill development, and prediction of future performance. Brief studies of textile worker and typist training are presented to illustrate such concepts as the reduction fraction (a consistent decrease in…
Physical Fitness and Counseling.
ERIC Educational Resources Information Center
Helmkamp, Jill M.
Human beings are a delicate balance of mind, body, and spirit, so an imbalance in one domain affects all others. The purpose of this paper is to examine the effects that physical fitness may have on such human characteristics as personality and behavior. A review of the literature reveals that physical fitness is related to, and can affect,…
ERIC Educational Resources Information Center
Williams, Neil F.; Germain, Jenna
2008-01-01
Physical fitness activities are often viewed as monotonous and tedious, so they fail to motivate students to become more physically active. This tedium could be relieved by using a "learning as play" strategy, widely used in other academic disciplines. This article describes how to incorporate fitness into a variety of games so that students do…
ERIC Educational Resources Information Center
Nordholm, Catherine R.
This document makes a number of observations about physical fitness in America. Among them are: (1) the symptoms of aging (fat accumulation, lowered basal metabolic rate, loss of muscular strength, reduction in motor fitness, reduction in work capacity, etc.) are not the result of disease but disuse; (2) society conditions the individual to…
ERIC Educational Resources Information Center
Farrell, Anne; Faigenbaum, Avery; Radler, Tracy
2010-01-01
The urgency to improve fitness levels and decrease the rate of childhood obesity has been at the forefront of physical education philosophy and praxis. Few would dispute that school-age youth need to participate regularly in physical activities that enhance and maintain both skill- and health-related physical fitness. Regular physical activity…
ERIC Educational Resources Information Center
Klahr, Gary Peter
1992-01-01
Although the 1980's fitness craze is wearing off and adults are again becoming "couch potatoes," this trend does not justify expansion of high school compulsory physical education requirements. To encourage commitment to lifetime physical fitness, the Phoenix (Arizona) Union High School District offers students private showers, relaxed uniform…
ERIC Educational Resources Information Center
Hennyey, Donna J.
1985-01-01
Factors contributing to the evolution of fitness are discussed, and some of the challenges these hold for those in the fields of food and nutrition are identified. This includes a discussion of basic concepts of nutrition and exercise, misconceptions of nutrition and exercise, and fitness instructors as nutrition educators. (Author/CT)
ERIC Educational Resources Information Center
Swoyer, Jesse O.
2008-01-01
The author, who has been a personal trainer for the past ten years, recently realized that all fitness centers are not equal. In February, he was able to participate in the grand opening of the Center for Independent Living of Central PA (CILCP), a fitness center that is designed to accommodate persons with disabilities living in the Central…
ERIC Educational Resources Information Center
Maiorano, Joseph J.
2001-01-01
Fit 2-B FATHERS is a parenting-skills education program for incarcerated adult males. The goals of this program are for participants to have reduced recidivism rates and a reduced risk of their children acquiring criminal records. These goals are accomplished by helping participants become physically, practically, and socially fit for the demands…
Computer program for fitting low-order polynomial splines by method of least squares
NASA Technical Reports Server (NTRS)
Smith, P. J.
1972-01-01
FITLOS is a computer program which implements a new curve-fitting technique. The main program reads input data, calls appropriate subroutines for curve fitting, calculates statistical analyses, and writes output data. The method was devised as a result of the need to suppress noise in the calibration of multiplier phototube capacitors.
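A minimal sketch of least-squares fitting of a low-order polynomial spline, in the spirit of FITLOS but not its actual code (the data, knot placement, and hinge basis are illustrative assumptions):

```python
import numpy as np

# Least-squares fit of a continuous piecewise-linear spline with fixed
# knots, using a hinge-function basis: 1, x, and (x - k)_+ per knot.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + rng.normal(0.0, 0.05, x.size)   # noisy "calibration" data

knots = np.linspace(1.0, 9.0, 9)
B = np.column_stack([np.ones_like(x), x]
                    + [np.maximum(x - k, 0.0) for k in knots])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)     # normal-equations solve
resid = y - B @ coef
print(f"rms residual: {np.sqrt(np.mean(resid**2)):.4f}")
```

The residual is dominated by the injected noise, showing how a low-order spline suppresses it while following the underlying curve.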
Trend analyses with river sediment rating curves
Warrick, Jonathan A.
2015-01-01
Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQb, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â (Q/QGM)b], where QGM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters, â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
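The two rating-curve forms can be sketched numerically. The example below (synthetic data, all parameter values invented) fits the conventional power law in log-log space and then converts the offset to the discharge-normalized â described above:

```python
import numpy as np

# Synthetic discharge (Q) and concentration (C) samples obeying C = a*Q^b
# with multiplicative lognormal noise.
rng = np.random.default_rng(0)
Q = rng.lognormal(mean=2.0, sigma=1.0, size=200)
a_true, b_true = 0.5, 1.3
C = a_true * Q**b_true * rng.lognormal(0.0, 0.1, size=200)

# Conventional rating curve: fit log(C) = log(a) + b*log(Q).
b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
a = np.exp(log_a)

# Discharge-normalized form C = â*(Q/QGM)^b, QGM the geometric mean of Q.
# â is the fitted concentration at Q = QGM, decoupling the offset from b.
QGM = np.exp(np.mean(np.log(Q)))
a_hat = a * QGM**b

print(f"a = {a:.3f}, b = {b:.3f}, QGM = {QGM:.2f}, a_hat = {a_hat:.3f}")
```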
Surface fitting three-dimensional bodies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1974-01-01
The geometry of general three-dimensional bodies is generated from coordinates of points in several cross sections. Since these points may not be smooth, they are divided into segments and general conic sections are curve fit in a least-squares sense to each segment of a cross section. The conic sections are then blended in the longitudinal direction by fitting parametric cubic-spline curves through coordinate points which define the conic sections in the cross-sectional planes. Both the cross-sectional and longitudinal curves may be modified by specifying particular segments as straight lines and slopes at selected points. Slopes may be continuous or discontinuous and finite or infinite. After a satisfactory surface fit has been obtained, cards may be punched with the data necessary to form a geometry subroutine package for use in other computer programs. At any position on the body, coordinates, slopes and second partial derivatives are calculated. The method is applied to a blunted 70 deg delta wing, and it was found to generate the geometry very well.
Analysis of selected Kepler Mission planetary light curves
NASA Astrophysics Data System (ADS)
Rhodes, M. D.; Budding, E.
2014-06-01
We have modified the graphical-user-interface close binary system analysis program CurveFit to the form WinKepler and applied it to 16 representative planetary candidate light curves found in the NASA Exoplanet Archive (NEA) at the Caltech website http://exoplanetarchive.ipac.caltech.edu, with an aim to compare different analytical approaches. WinKepler has parameter options for a realistic physical model, including gravity-brightening and structural parameters derived from the relevant Radau equation. We tested our best-fitting parameter-sets for formal determinacy and adequacy. A primary aim is to compare our parameters with those listed in the NEA. Although there are trends of agreement, small differences in the main parameter values are found in some cases, and there may be some relative bias towards a 90° value for the NEA inclinations. These are assessed against realistic error estimates. Photometric variability from causes other than planetary transits affects at least 6 of the data-sets studied, with small pulsational behaviour found in 3 of those. For the false positive KOI 4.01, we found that the eclipses could be modelled by a faint background classical Algol as effectively as by a transiting exoplanet. Our empirical checks of limb-darkening, in the cases of KOI 1.01 and 12.01, revealed that the assigned stellar temperatures are probably incorrect. For KOI 13.01, our empirical mass-ratio differs by about 7% from that of Mislis and Hodgkin (Mon. Not. R. Astron. Soc. 422:1512, 2012), who neglected structural effects and higher order terms in the tidal distortion. Such detailed parameter evaluation, additional to the usual main geometric ones, provides an additional objective for this work.
Investigation of learning and experience curves
Krawiec, F.; Thornton, J.; Edesess, M.
1980-04-01
The applicability of learning and experience curves for predicting future costs of solar technologies is assessed, and the major test case is the production economics of heliostats. Alternative methods for estimating cost reductions in systems manufacture are discussed, and procedures for using learning and experience curves to predict costs are outlined. Because adequate production data often do not exist, production histories of analogous products/processes are analyzed and learning and aggregated cost curves for these surrogates estimated. If the surrogate learning curves apply, they can be used to estimate solar technology costs. The steps involved in generating these cost estimates are given. Second-generation glass-steel and inflated-bubble heliostat design concepts, developed by MDAC and GE, respectively, are described; a costing scenario for 25,000 units/yr is detailed; surrogates for cost analysis are chosen; learning and aggregate cost curves are estimated; and aggregate cost curves for the GE and MDAC designs are estimated. However, an approach that combines a neoclassical production function with a learning-by-doing hypothesis is needed to yield a cost relation compatible with the historical learning curve and the traditional cost function of economic theory.
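The learning/experience-curve model underlying such cost predictions is conventionally written as a power law in cumulative production; a minimal sketch (all values illustrative, not heliostat data):

```python
import math

# Experience curve: unit cost falls by a fixed fraction with each doubling
# of cumulative production, C(n) = C1 * n**(-b). An "80% curve" (20% cost
# reduction per doubling) corresponds to b = -log2(0.80).
C1 = 100.0                 # cost of the first unit (illustrative)
lr = 0.80                  # 80% learning rate
b = -math.log2(lr)

def cost(n):
    return C1 * n ** (-b)

print(cost(1), cost(2), cost(4))   # each doubling multiplies cost by 0.80
```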
Prediction and extension of curves of distillation of vacuum residue using probability functions
NASA Astrophysics Data System (ADS)
León, A. Y.; Riaño, P. A.; Laverde, D.
2016-02-01
The use of the probability functions for the prediction of crude distillation curves has been implemented in different characterization studies for refining processes. The study of four functions of probability (Weibull extreme, Weibull, Kumaraswamy and Riazi), was analyzed in this work for the fitting of curves of distillation of vacuum residue. After analysing the experimental data was selected the Weibull extreme function as the best prediction function, the fitting capability of the best function was validated considering as criterions of estimation the AIC (Akaike Information Criterion), BIC (Bayesian information Criterion), and correlation coefficient R2. To cover a wide range of composition were selected fifty-five (55) vacuum residue derived from different hydrocarbon mixture. The parameters of the probability function Weibull Extreme were adjusted from simple measure properties such as Conradson Carbon Residue (CCR), and compositional analysis SARA (saturates, aromatics, resins and asphaltenes). The proposed method is an appropriate tool to describe the tendency of distillation curves and offers a practical approach in terms of classification of vacuum residues.
The learning curves of competitive programming
NASA Astrophysics Data System (ADS)
Garcia, Jose R.; Aguirre, Vanessa E.
2014-10-01
Universities around the world have implemented competitive programming as an approach to teach computer science courses. They have empirically validated this approach as a successful pedagogical tool. However, there are no conclusive results that describe the degree to which competitive programming affects the learning process of the students. In this paper, we report on the learning curves obtained from analyzing ten years of TopCoder algorithm competitions. We discuss how these learning curves apply to university courses and how they can help explain the influence of competitive programming in a class.
Factorization with genus 2 curves
NASA Astrophysics Data System (ADS)
Cosset, Romain
2010-04-01
The elliptic curve method (ECM) is one of the best factorization methods available. It is possible to use hyperelliptic curves instead of elliptic curves but it is in theory slower. We use special hyperelliptic curves and Kummer surfaces to reduce the complexity of the algorithm. Our implementation GMP-HECM is faster than GMP-ECM for factoring large numbers.
Gottschlich, Carsten
2012-04-01
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Limitations of inclusive fitness.
Allen, Benjamin; Nowak, Martin A; Wilson, Edward O
2013-12-10
Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed. PMID:24277847
AN Fitting Reconditioning Tool
NASA Technical Reports Server (NTRS)
Lopez, Jason
2011-01-01
A tool was developed to repair or replace AN fittings on the shuttle external tank (ET). (The AN thread is a type of fitting used to connect flexible hoses and rigid metal tubing that carry fluid. It is a U.S. military-derived specification agreed upon by the Army and Navy, hence AN.) The tool is used on a drill and is guided by a pilot shaft that follows the inside bore. The cutting edge of the tool is a standard-size replaceable insert. In the typical Post Launch Maintenance/Repair process for the AN fittings, the six fittings are removed from the ET's GUCP (ground umbilical carrier plate) for reconditioning. The fittings are inspected for damage to the sealing surface per standard operations maintenance instructions. When damage is found on the sealing surface, the condition is documented. A new AN reconditioning tool is set up to cut and remove the surface damage. The fitting is then inspected to verify that it still meets drawing requirements. The tool features a cone-shaped interior at 36.5 degrees, and may be adjusted at a precise angle with go-no-go gauges to ensure that the cutting edge could be adjusted as it wore down. One tool, one setting block, and one go-no-go gauge were fabricated. At the time of this reporting, the tool has reconditioned/returned to spec 36 AN fittings with 100-percent success of no leakage. This tool provides a quick solution to repair a leaky AN fitting. The tool could easily be modified with different-sized pilot shafts for different-sized fittings.
Global Expression for Representing Diatomic Potential-Energy Curves
NASA Technical Reports Server (NTRS)
Ferrante, John; Schlosser, Herbert; Smith, John R.
1991-01-01
A three-parameter expression is presented that gives an accurate fit to diatomic potential curves over the entire range of separation for charge transfers between 0 and 1. It is based on a generalization of the universal binding-energy relation of Smith et al. (1989) with a modification that describes the crossover from a partially ionic state to the neutral state at large separations. The expression is tested by comparison with first-principles calculations of potential curves ranging from covalently bonded to ionically bonded. The expression is also used to calculate spectroscopic constants from a curve fit to the first-principles curves. A comparison is made with experimental values of the spectroscopic constants.
Flared tube attachment fitting
NASA Technical Reports Server (NTRS)
Alkire, I. D.; King, J. P., Jr.
1980-01-01
Tubes can be flared first, then attached to valves and other flow line components, with new fitting that can be disassembled and reused. Installed fitting can be disassembled so parts can be inspected. It can be salvaged and reused without damaging flared tube; tube can be coated, tempered, or otherwise treated after it has been flared, rather than before, as was previously required. Fitting consists of threaded male portion with conical seating surface, hexagonal nut with hole larger than outer diameter of flared end of tube, and split ferrule.
NASA Astrophysics Data System (ADS)
Brandenburg, J. P.
2013-08-01
Fault-propagation folds form an important trapping element in both onshore and offshore fold-thrust belts, and as such benefit from reliable interpretation. Building an accurate geologic interpretation of such structures requires palinspastic restorations, which are made more challenging by the interplay between folding and faulting. Trishear (Erslev, 1991; Allmendinger, 1998) is a useful tool to unravel this relationship kinematically, but is limited by a restriction to planar fault geometries, or at least planar fault segments. Here, new methods are presented for trishear along continuously curved reverse faults defining a flat-ramp transition. In these methods, rotation of the hanging wall above a curved fault is coupled to translation along a horizontal detachment. Including hanging wall rotation allows for investigation of structures with progressive backlimb rotation. Applications of the new algorithms are shown for two fault-propagation fold structures: the Turner Valley Anticline in southwestern Alberta and the Alpha Structure in the Niger Delta.
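Classical trishear (for a planar fault, before the curved-fault extension above) can be sketched as a velocity field in fault-tip coordinates. The linear field below is only an illustration: it satisfies incompressibility and the hanging-wall/footwall boundary conditions stated in the docstring, but the coordinate convention, apical angle, and step sizes are arbitrary choices, not the paper's.

```python
import numpy as np

def trishear_velocity(x, y, slip_rate, phi):
    """Linear trishear velocity field in fault-tip coordinates: x along the
    fault, y normal to it, triangular zone half-angle phi. The hanging wall
    (y >= x*tan(phi)) moves fault-parallel at slip_rate, the footwall
    (y <= -x*tan(phi)) is fixed, and velocities interpolate linearly inside
    the zone while conserving area (div v = 0)."""
    t = np.tan(phi)
    zeta = np.clip(y / (x * t), -1.0, 1.0)   # -1 at footwall edge, +1 at hanging wall edge
    vx = 0.5 * slip_rate * (1.0 + zeta)
    vy = 0.25 * slip_rate * t * (zeta**2 - 1.0)
    return vx, vy

# March a material point through the zone with small slip increments.
x, y = 2.0, 0.0                              # a point ahead of the tip, mid-zone
for _ in range(1000):
    vx, vy = trishear_velocity(x, y, slip_rate=1.0, phi=np.radians(30))
    x += vx * 0.001
    y += vy * 0.001
print(round(x, 3), round(y, 3))
```

Restoration would run the same field with the slip sign reversed, which is what makes trishear useful for palinspastic work.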
NASA Astrophysics Data System (ADS)
Chamidah, Nur; Rifada, Marisa
2016-03-01
There is a significant correlation between the weight and height of children; therefore, simultaneous model estimation is better than a partial, single-response approach. In this study we investigate sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected from a longitudinal, representative sample of healthy children in the Surabaya population and consist of two response variables, weight (kg) and height (cm), with age (months) as the predictor variable. Based on the generalized cross-validation criterion, the biresponse model with a local linear estimator gives optimal bandwidths of 1.41 for boys and 1.56 for girls, with determination coefficients (R2) of 99.99% and 99.98%, respectively. Both the boys' and girls' curves satisfy the goodness-of-fit criterion, i.e., the determination coefficient tends to one. The growth patterns also differ by sex, with the boys' median growth curve lying above the girls'.
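The local linear estimator used in the study can be illustrated for a single response. The sketch below fits synthetic age/weight data with a Gaussian kernel and an arbitrary bandwidth; the actual study estimates weight and height simultaneously and selects the bandwidth by generalized cross-validation, neither of which is reproduced here.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel of
    bandwidth h: fit a weighted straight line centered at x0 and return
    its intercept (the fitted value at x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

# Toy growth data: age (months) vs. weight (kg), loosely mimicking infancy
rng = np.random.default_rng(0)
age = np.linspace(0, 24, 50)
weight = 3.5 + 7.0 * (1 - np.exp(-age / 6.0)) + rng.normal(0, 0.2, age.size)

fit = np.array([local_linear(age, weight, a, h=1.5) for a in age])
print(round(float(fit[-1]), 1))   # smoothed weight near 24 months
```

Unlike a simple kernel average, the local linear fit stays unbiased to first order at the boundaries (birth and 24 months), which matters for growth curves.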
Learning curves in health professions education.
Pusic, Martin V; Boutis, Kathy; Hatala, Rose; Cook, David A
2015-08-01
Learning curves, which graphically show the relationship between learning effort and achievement, are common in published education research but are not often used in day-to-day educational activities. The purpose of this article is to describe the generation and analysis of learning curves and their applicability to health professions education. The authors argue that the time is right for a closer look at using learning curves, given their desirable properties, to inform both self-directed instruction by individuals and education management by instructors. A typical learning curve is made up of a measure of learning (y-axis), a measure of effort (x-axis), and a mathematical linking function. At the individual level, learning curves make manifest a single person's progress towards competence, including his/her rate of learning, the inflection point where learning becomes more effortful, and the remaining distance to mastery attainment. At the group level, overlaid learning curves show the full variation of a group of learners' paths through a given learning domain. Specifically, they make overt the difference between time-based and competency-based approaches to instruction. Additionally, instructors can use learning curve information to more accurately target educational resources to those who most require them. The learning curve approach requires a fine-grained collection of data that will not be possible in all educational settings; however, the increased use of an assessment paradigm that explicitly includes effort and its link to individual achievement could result in increased learner engagement and more effective instructional design. PMID:25806621
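As a rough illustration of the "mathematical linking function", a negative-exponential learning curve can be fitted to effort/achievement data. This is a sketch on synthetic scores; the plateau/gain/rate parameterization is one common choice, not necessarily the article's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Negative-exponential learning curve: performance rises with effort, then plateaus.
def learning_curve(effort, plateau, gain, rate):
    return plateau - gain * np.exp(-rate * effort)

rng = np.random.default_rng(1)
effort = np.arange(1, 41)                        # e.g. number of practice cases
score = learning_curve(effort, 90.0, 50.0, 0.12) + rng.normal(0, 2.0, effort.size)

params, _ = curve_fit(learning_curve, effort, score, p0=[80, 40, 0.1])
plateau, gain, rate = params
half_life = np.log(2) / rate                     # effort needed to close half the gap
print(round(plateau, 1), round(half_life, 1))
```

The fitted plateau estimates eventual mastery level, and the rate (or its half-life) quantifies how quickly a learner approaches it, which is exactly the kind of information the authors argue instructors could use.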
Integrating the Levels of Person-Environment Fit: The Roles of Vocational Fit and Group Fit
ERIC Educational Resources Information Center
Vogel, Ryan M.; Feldman, Daniel C.
2009-01-01
Previous research on fit has largely focused on person-organization (P-O) fit and person-job (P-J) fit. However, little research has examined the interplay of person-vocation (P-V) fit and person-group (P-G) fit with P-O fit and P-J fit in the same study. This article advances the fit literature by examining these relationships with data collected…
Rotation curves of ultralight BEC dark matter halos with rotation
NASA Astrophysics Data System (ADS)
Guzmán, F. S.; Lora-Clavijo, F. D.
2015-03-01
We study the rotation curves of ultralight BEC dark matter halos. These halos are long lived solutions of initially rotating BEC fluctuations. In order to study the implications of the rotation characterizing these long-lived configurations we consider the particular case of a boson mass and no self-interaction. We find that these halos successfully fit samples of rotation curves of LSB galaxies.
Transit Model Fitting in the Kepler Science Operations Center Pipeline
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, C. J.; Jenkins, J. M.; Quintana, E. V.; Rowe, J. F.; Seader, S. E.; Tenenbaum, P.; Twicken, J. D.
2012-05-01
We describe the algorithm and performance of the transit model fitting of the Kepler Science Operations Center (SOC) Pipeline. Light curves of long cadence targets are subjected to the Transiting Planet Search (TPS) component of the Kepler SOC Pipeline. Those targets for which a Threshold Crossing Event (TCE) is generated in the transit search are subsequently processed in the Data Validation (DV) component. The light curves may span one or more Kepler observing quarters, and data may not be available for any given target in all quarters. Transit model parameters are fitted in DV to transit-like signatures in the light curves of target stars with TCEs. The fitted parameters are used to generate a predicted light curve based on the transit model. The residual flux time series of the target star, with the predicted light curve removed, is fed back to TPS to search for additional TCEs. The iterative process of transit model fitting and transiting planet search continues until no TCE is generated from the residual flux time series or a planet candidate limit is reached. The transit model includes five parameters to be fitted: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. The initial values of the fit parameters are determined from the TCE values provided by TPS. A limb darkening model is included in the transit model to generate the predicted light curve. The transit model fitting results are used in the diagnostic tests in DV, such as the centroid motion test, eclipsing binary discrimination tests, etc., which help to validate planet candidates and identify false positive detections. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
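The SOC pipeline fits a five-parameter limb-darkened model; as a much-reduced sketch of the same idea, the epoch of a single box-shaped transit can be recovered by minimizing chi-square over a grid of trial epochs, with the depth solved in closed form at each trial. All data and values here are synthetic, and the duration is assumed known.

```python
import numpy as np

# Synthetic light curve: unit flux plus noise, with one box-shaped transit.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)                       # time (days)
flux = 1.0 + rng.normal(0, 1e-3, t.size)
true_epoch, duration, depth = 4.3, 0.2, 0.01
flux[np.abs(t - true_epoch) < duration / 2] -= depth

def chi2(epoch):
    """Sum of squared residuals for a box transit at the trial epoch, with the
    depth set to its least-squares value (mean in-transit deficit)."""
    in_transit = np.abs(t - epoch) < duration / 2
    if not in_transit.any():
        return float(np.sum((flux - 1.0) ** 2))
    depth_hat = np.mean(1.0 - flux[in_transit])
    model = np.ones_like(t)
    model[in_transit] -= depth_hat
    return float(np.sum((flux - model) ** 2))

epochs = np.arange(0.5, 9.5, 0.01)
best_epoch = epochs[np.argmin([chi2(e) for e in epochs])]
print(round(best_epoch, 2))                         # recovered transit epoch
```

The pipeline's iteration (fit, subtract the model, search the residuals again) generalizes this single-signal recovery to multi-planet systems.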
Numerical reconstruction of spectral reflectance curves of oil painting on canvas
NASA Astrophysics Data System (ADS)
Valdivieso, L. G.; Osorio, C. A.; Guerrero, J. E.
2011-08-01
Unlike color, which involves a subjective estimation of the interaction of electromagnetic radiation with surfaces, spectral reflectance is a physical property that characterizes materials independently of the chromatic content of the illuminant and the spectral response of the sensor. This makes spectral reflectance a magnitude of particular interest for both reconstruction and reproduction in digital color systems. In this paper, two approaches to the numerical reconstruction of the spectral reflectance curves of oil-painting-on-canvas samples are presented. Both approaches require a set of spectral reflectance curves, given by a spectrophotometer, together with their respective sampling through color filters placed in front of a monochrome CCD camera. The first approach is based on interpolation of the camera response to each color filter. The second relies on obtaining a vectorial basis and appropriate coefficients with which to reconstruct the spectral reflectance curve. The goodness-of-fit coefficient (GFC) and the absolute mean error (ABE) are the metrics used to evaluate the performance of the proposed procedures.
ERIC Educational Resources Information Center
Simmons, Richard
1986-01-01
An excerpt is presented from a book offering fitness exercises for people with disabilities. The author reviews specific medical considerations of Down's Syndrome and examines nutritional concerns as well as precautions for a program of general exercise. (Author/CL)
NASA Astrophysics Data System (ADS)
Giardino, Pier Paolo; Kannike, Kristjan; Masina, Isabella; Raidal, Martti; Strumia, Alessandro
2014-05-01
We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a `universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections and couplings, and to analyse composite Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our `universal' fit. Latest data exclude the dilaton as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 ± 1.6 GeV.
NASA Astrophysics Data System (ADS)
Şenyurt, Süleyman; Altun, Yasin; Cevahir, Ceyda
2016-04-01
In this paper, we investigate the Smarandache curves according to the Sabban frame of the fixed pole curve drawn by the unit Darboux vector of the Bertrand partner curve. Some results have been obtained; these results are expressed in terms of the Bertrand curve.
Walpola, Ramesh L; Fois, Romano A; McLachlan, Andrew J; Chen, Timothy F
2015-01-01
Objective Despite the recognition that educating healthcare students in patient safety is essential, changing already full curricula can be challenging. Furthermore, institutions may lack the capacity and capability to deliver patient safety education, particularly from the start of professional practice studies. Using senior students as peer educators to deliver practice-based education can potentially overcome some of the contextual barriers in training junior students. Therefore, this study aimed to evaluate the effectiveness of a peer-led patient safety education programme for junior pharmacy students. Design A repeat cross-sectional design utilising a previously validated patient safety attitudinal survey was used to evaluate attitudes prior to, immediately after and 1 month after the delivery of a patient safety education programme. Latent growth curve (LGC) modelling was used to evaluate the change in attitudes of first-year students using second-year students as a comparator group. Setting Undergraduate university students in Sydney, Australia. Participants 175 first-year and 140 second-year students enrolled in the Bachelor of Pharmacy programme at the University of Sydney. Intervention An introductory patient safety programme was implemented into the first-year Bachelor of Pharmacy curriculum at the University of Sydney. The programme covered introductory patient safety topics including teamwork, communication skills, systems thinking and open disclosure. The programme consisted of 2 lectures, delivered by a senior academic, and a workshop delivered by trained final-year pharmacy students. Results A full LGC model was constructed including the intervention as a non-time-dependent predictor of change (χ2 (51)=164.070, root mean square error of approximation=0.084, comparative fit index=0.913, standardised root mean square=0.056). First-year students’ attitudes significantly improved as a result of the intervention, particularly in relation to
NASA Technical Reports Server (NTRS)
1993-01-01
NASA Langley recognizes the importance of healthy employees by committing itself to offering a complete fitness program. The scope of the program focuses on promoting overall health and wellness in an effort to reduce the risks of illness and disease and to increase productivity. This is accomplished through a comprehensive Health and Fitness Program offered to all NASA employees. Various aspects of the program are discussed.
NASA Astrophysics Data System (ADS)
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2011-07-01
Structural parameters are normally extracted from observed galaxies by fitting analytic light profiles to the observations. Obtaining accurate fits to high-resolution images is a computationally expensive task, requiring many model evaluations and convolutions with the imaging point spread function. While these algorithms contain high degrees of parallelism, current implementations do not exploit this property. With ever-growing volumes of observational data, an inability to make use of advances in computing power can act as a constraint on scientific outcomes. This is the motivation behind our work, which aims to implement the model-fitting procedure on a graphics processing unit (GPU). We begin by analysing the algorithms involved in model evaluation with respect to their suitability for modern many-core computing architectures like GPUs, finding them to be well-placed to take advantage of the high memory bandwidth offered by this hardware. Following our analysis, we briefly describe a preliminary implementation of the model fitting procedure using freely-available GPU libraries. Early results suggest a speed-up of around 10× over a CPU implementation. We discuss the opportunities such a speed-up could provide, including the ability to use more computationally expensive but better-performing fitting routines to increase the quality and robustness of fits.
Classification and properties of UV extinction curves
NASA Astrophysics Data System (ADS)
Barbaro, G.; Mazzei, P.; Morbidelli, L.; Patriarchi, P.; Perinotto, M.
2001-01-01
The catalog of Savage et al., reporting colour excesses of 1415 stars from ANS photometry, offers the opportunity to deeply investigate the characteristics of UV extinction curves which differ from the standard extinction of the diffuse interstellar medium. To this aim we have selected a sample of 252 curves, which have been compared with the relations derived by Cardelli et al. (CCM in the following) for a variety of R_V values in the range 2.4-5 and have been classified as normal if they fit at least one of the CCM curves or anomalous otherwise. We find that normal curves with small R_V are just as numerous as those with large R_V. The anomalous objects are arranged into two groups according to the strength of the bump at 0.217 μm. For a given value of c_2 this increases along the sequence: type A anomalous, normals and type B anomalous, suggesting that this sequence should correspond to an increase of the amount of small grains along the sightline. Considerations concerning the environmental characteristics indicate that the anomalous behaviour is not necessarily tied to the existence of dense gas clouds along the line of sight.
Light extraction block with curved surface
Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.
2016-03-22
Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k.sub.1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.
Baum, W M
1995-01-01
Behavior analysis risks intellectual isolation unless it integrates its explanations with evolutionary theory. Rule-governed behavior is an example of a topic that requires an evolutionary perspective for a full understanding. A rule may be defined as a verbal discriminative stimulus produced by the behavior of a speaker under the stimulus control of a long-term contingency between the behavior and fitness. As a discriminative stimulus, the rule strengthens listener behavior that is reinforced in the short run by socially mediated contingencies, but which also enters into the long-term contingency that enhances the listener's fitness. The long-term contingency constitutes the global context for the speaker's giving the rule. When a rule is said to be "internalized," the listener's behavior has switched from short- to long-term control. The fitness-enhancing consequences of long-term contingencies are health, resources, relationships, or reproduction. This view ties rules both to evolutionary theory and to culture. Stating a rule is a cultural practice. The practice strengthens, with short-term reinforcement, behavior that usually enhances fitness in the long run. The practice evolves because of its effect on fitness. The standard definition of a rule as a verbal statement that points to a contingency fails to distinguish between a rule and a bargain ("If you'll do X, then I'll do Y"), which signifies only a single short-term contingency that provides mutual reinforcement for speaker and listener. In contrast, the giving and following of a rule ("Dress warmly; it's cold outside") can be understood only by reference also to a contingency providing long-term enhancement of the listener's fitness or the fitness of the listener's genes. Such a perspective may change the way both behavior analysts and evolutionary biologists think about rule-governed behavior.
Mathematics Difficulties: Does One Approach Fit All?
ERIC Educational Resources Information Center
Gifford, Sue; Rockliffe, Freda
2012-01-01
This article reviews the nature of learning difficulties in mathematics and, in particular, the nature and prevalence of dyscalculia, a condition that affects the acquisition of arithmetical skills. The evidence reviewed suggests that younger children (under the age of 10) often display a combination of problems, including minor physical…
NASA Astrophysics Data System (ADS)
Mölder, S.
2016-07-01
Curved shock theory (CST) is introduced, developed and applied to relate pressure gradients, streamline curvatures, vorticity and shock curvatures in flows with planar or axial symmetry. Explicit expressions are given, in an influence coefficient format, that relate post-shock pressure gradient, streamline curvature and vorticity to pre-shock gradients and shock curvature in steady flow. The effect of pre-shock flow divergence/convergence, on vorticity generation, is related to the transverse shock curvature. A novel derivation for the post-shock vorticity is presented that includes the effects of pre-shock flow non-uniformities. CST applicability to unsteady flows is discussed.
NASA Astrophysics Data System (ADS)
Gottschlich, Carsten
2012-04-01
Gabor filters play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved Gabor filters which locally adapt their shape to the direction of flow. These curved Gabor filters enable the choice of filter parameters which increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved Gabor filters are applied to the curved ridge and valley structure of low-quality fingerprint images. First, we combine two orientation field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation and they are used for estimating the local ridge frequency. Lastly, curved Gabor filters are defined based on curved regions and they are applied for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison to state-of-the-art enhancement methods.
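A conventional (straight) Gabor kernel conveys the ingredients: a Gaussian envelope modulating an oriented sinusoidal carrier. The paper's contribution, bending the kernel's support along the locally estimated ridge flow, is not reproduced here; the parameter values below are arbitrary.

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma_x, sigma_y):
    """Oriented Gabor kernel: Gaussian envelope times a cosine carrier along
    the direction theta. The curved variant in the paper additionally bends
    the support to follow the local ridge flow."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    carrier = np.cos(2 * np.pi * freq * xr)
    return envelope * carrier

k = gabor_kernel(size=25, theta=np.pi / 4, freq=0.1, sigma_x=4.0, sigma_y=4.0)
k -= k.mean()          # zero-DC so uniform image regions map to zero response
print(k.shape)
```

Convolving a fingerprint image with kernels whose theta and freq track the local orientation and ridge frequency is what enhances ridges; larger sigmas smooth more but, for straight kernels, start smearing across curved ridges, which is the artifact the curved filters avoid.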
NASA Technical Reports Server (NTRS)
Pratt, Randy
1993-01-01
The Ames Fitness Program services 5,000 civil servants and contractors working at Ames Research Center. A 3,000 square foot fitness center, equipped with cardiovascular machines, weight training machines, and free weight equipment is on site. Thirty exercise classes are held each week at the Center. A weight loss program is offered, including individual exercise prescriptions, fitness testing, and organized monthly runs. The Fitness Center is staffed by one full-time program coordinator and 15 hours per week of part-time help. Membership is available to all employees at Ames at no charge, and there are no fees for participation in any of the program activities. Prior to using the Center, employees must obtain a physical examination and complete a membership package. Funding for the Ames Fitness Program was in jeopardy in December 1992; however, the employees circulated a petition in support of the program and collected more than 1500 signatures in only three days. Funding has been approved through October 1993.
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
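The paper's tensor-guided, blended-neighbor interpolation is not reproduced here; for contrast, a plain isotropic inverse-distance-weighted gridding of scattered depth samples, the kind of baseline such correlation-guided methods improve on, looks like this (all data synthetic):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered samples. Isotropic:
    unlike the strike-guided method in the paper, it has no notion of a
    preferred correlation direction."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_known

rng = np.random.default_rng(5)
pts = rng.uniform(0, 100, (50, 2))        # scattered (x, y) sample locations (km)
depth = 5.0 + 0.8 * pts[:, 0]             # toy slab deepening along x (km)

grid = np.array([[x, 50.0] for x in np.linspace(0, 100, 11)])
est = idw(pts, depth, grid)
print(round(float(est[5]), 1))            # interpolated depth near x = 50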
A Bayesian beta distribution model for estimating rainfall IDF curves in a changing climate
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; Kwon, Hyun-Han; Kim, Jin-Young
2016-09-01
The estimation of intensity-duration-frequency (IDF) curves for rainfall data comprises a classical task in hydrology studies to support a variety of water resources projects, including urban drainage and the design of flood control structures. In a changing climate, however, traditional approaches based on historical records of rainfall and on the stationary assumption can be inadequate and lead to poor estimates of rainfall intensity quantiles. Climate change scenarios built on General Circulation Models offer a way to access and estimate future changes in spatial and temporal rainfall patterns at the daily scale at the utmost, which is not as fine a temporal resolution as required (e.g. hours) to directly estimate IDF curves. In this paper we propose a novel methodology based on a four-parameter beta distribution to estimate IDF curves conditioned on the observed (or simulated) daily rainfall, which becomes the time-varying upper bound of the updated nonstationary beta distribution. The inference is conducted in a Bayesian framework that provides a better way to take into account the uncertainty in the model parameters when building the IDF curves. The proposed model is tested using rainfall data from four stations located in South Korea and projected climate change under Representative Concentration Pathway (RCP) scenarios 6 and 8.5 from the Met Office Hadley Centre HadGEM3-RA model. The results show that the developed model fits the historical data as well as the traditional Generalized Extreme Value (GEV) distribution but is able to produce future IDF curves that significantly differ from the historically based IDF curves. The proposed model predicts, for the stations and RCP scenarios analysed in this work, an increase in the intensity of extreme rainfalls of short duration with long return periods.
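Setting aside the Bayesian machinery, the core idea of a beta distribution bounded above by the daily total can be sketched with a maximum-likelihood fit in which the location and scale are pinned to the bounds. All numbers below are synthetic, and a fixed daily total stands in for the paper's time-varying upper bound.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy setting: hourly intensities on a wet day cannot exceed the day's total,
# which acts as the upper bound of the distribution.
daily_total = 80.0                                    # mm (hypothetical bound)
hourly = daily_total * rng.beta(2.0, 9.0, size=500)   # synthetic hourly intensities

# Four-parameter beta = beta(a, b) with location 0 and scale = upper bound;
# fixing floc/fscale leaves only the shape parameters to be estimated.
a, b, loc, scale = stats.beta.fit(hourly, floc=0.0, fscale=daily_total)

# An intensity quantile for a given non-exceedance probability — the basic
# ingredient of an IDF curve at one duration.
q99 = stats.beta.ppf(0.99, a, b, loc=loc, scale=scale)
print(round(a, 1), round(b, 1), round(float(q99), 1))
```

In the paper's nonstationary version, the scale follows the simulated daily rainfall under each climate scenario, so the fitted quantiles, and hence the IDF curves, shift over time.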
Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance.
ERIC Educational Resources Information Center
Cheung, Gordon W.; Rensvold, Roger B.
2002-01-01
Examined 20 goodness-of-fit indexes based on the minimum fit function using a simulation under the 2-group situation. Results support the use of the delta comparative fit index, delta Gamma hat, and delta McDonald's Noncentrality Index to evaluate measurement invariance. These three approaches are independent of model complexity and sample size.…
Aerobic Fitness for the Severely and Profoundly Mentally Retarded.
ERIC Educational Resources Information Center
Bauer, Dan
1981-01-01
The booklet discusses the aerobic fitness capacities of severely/profoundly retarded students and discusses approaches for improving their fitness. An initial section describes a method for determining the student's present fitness level on the basis of computations of height, weight, blood pressure, resting pulse, and Barach Index and Crampton…
Goodness-of-Fit Assessment of Item Response Theory Models
ERIC Educational Resources Information Center
Maydeu-Olivares, Alberto
2013-01-01
The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…
NASA Astrophysics Data System (ADS)
Levay, Z. G.
2004-12-01
A new, freely-available accessory for Adobe's widely-used Photoshop image editing software makes it much more convenient to produce presentable images directly from FITS data. It merges a fully-functional FITS reader with an intuitive user interface and includes fully interactive flexibility in scaling data. Techniques for producing attractive images from astronomy data using the FITS plugin will be presented, including the assembly of full-color images. These techniques have been successfully applied to producing colorful images for public outreach with data from the Hubble Space Telescope and other major observatories. Now it is much less cumbersome for students or anyone not experienced with specialized astronomical analysis software, but reasonably familiar with digital photography, to produce useful and attractive images.
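Interactive scaling of data is the step the plugin makes convenient. A typical stretch, sketched here on synthetic data with numpy only (no FITS I/O), compresses the high dynamic range with an asinh mapping before display; the percentile cut and stretch factor are arbitrary choices.

```python
import numpy as np

def asinh_stretch(data, percent=99.5):
    """Map raw image data to display values in [0, 1] with an asinh stretch,
    a scaling commonly used for astronomical images because it keeps faint
    structure visible without saturating bright sources too abruptly."""
    lo, hi = np.percentile(data, [100 - percent, percent])
    clipped = np.clip((data - lo) / (hi - lo), 0, 1)
    return np.arcsinh(10 * clipped) / np.arcsinh(10)

# Synthetic "star field": faint noisy background plus two bright point sources
rng = np.random.default_rng(6)
img = rng.normal(100, 5, (64, 64))
img[10, 10] = img[40, 25] = 5000.0
disp = asinh_stretch(img)
print(disp.min() >= 0.0 and disp.max() <= 1.0)
```

Composing three such scaled arrays into RGB channels is essentially how full-color outreach images are assembled from separate FITS exposures.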
NASA Astrophysics Data System (ADS)
Jeschke, Eric
2013-03-01
Ginga is a viewer for astronomical FITS (Flexible Image Transport System) data files; the viewer centers around a FITS display widget which supports zooming and panning, color and intensity mapping, a choice of several automatic cut-levels algorithms, and canvases for plotting scalable geometric forms. In addition to this widget, the FITS viewer provides a flexible plugin framework for extending the viewer with many different features. A fairly complete set of "standard" plugins are provided for expected features of a modern viewer: panning and zooming windows, star catalog access, cuts, star pick/fwhm, thumbnails, and others. This viewer was written by software engineers at Subaru Telescope, National Astronomical Observatory of Japan, and is in use at that facility.
Inclusive fitness from multitype branching processes.
Wild, Geoff
2011-05-01
I use multitype branching processes to study genetic models for the evolution of social behaviour, i.e. behaviours that, when acted out, affect the success of the actor's neighbours. Here, I suppose an individual bearing a mutant copy of a gene influences the reproductive success of a neighbour by altering its own competitive ability. Approximations based on assumptions about the rareness of the mutant allele and the strength of selection allow me to formulate statements concerning the probability of mutant extinction in terms of inclusive fitness. Inclusive fitness is an idea well known to biologists and can be thought of as a sum of an individual's fitness and the fitness of each of its relatives, weighted by some measure of genetic relatedness. Previous work has led to some confusion surrounding the definition of the inclusive-fitness effect of a mutant allele when individuals carrying that allele experience demographic conditions that fluctuate randomly. In this paper, I emphasise the link between inclusive fitness and the probability of mutant extinction. I recover standard results for populations of constant size, and I show that inclusive fitness can be used to determine the short-term fate of mutants in the face of stochastic demographic fluctuations. Overall, then, I provide a connection between certain inclusive-fitness-based approaches routinely applied in theoretical studies of social evolution.
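The link the abstract draws between inclusive fitness and the probability of mutant extinction can be illustrated with a single-type toy model. This is a deliberate simplification of the paper's multitype setting: for a branching process, the extinction probability is the smallest fixed point of the offspring probability generating function, here assumed Poisson for concreteness.

```python
import math

def extinction_probability(mean_offspring: float, tol: float = 1e-12) -> float:
    """Extinction probability of a branching process with Poisson(m)
    offspring: the smallest root of q = f(q), where f(q) = exp(m*(q-1))
    is the probability generating function. Iterating f from q = 0
    converges monotonically to that root.
    """
    q = 0.0
    while True:
        q_next = math.exp(mean_offspring * (q - 1.0))
        if abs(q_next - q) < tol:
            return q_next
        q = q_next

print(extinction_probability(0.9))  # subcritical: extinction is certain (~1.0)
print(extinction_probability(1.5))  # supercritical: extinction prob ~0.417
```

A mutant lineage whose mean reproductive success (its "fitness") falls at or below 1 goes extinct with probability 1; above 1 it still faces a positive extinction risk from demographic stochasticity, which is the short-term fate the paper analyses.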
The Characteristic Curves of Water
NASA Astrophysics Data System (ADS)
Neumaier, Arnold; Deiters, Ulrich K.
2016-09-01
In 1960, E. H. Brown defined a set of characteristic curves (also known as ideal curves) of pure fluids, along which some thermodynamic properties match those of an ideal gas. These curves are used for testing the extrapolation behaviour of equations of state. This work is revisited, and an elegant representation of the first-order characteristic curves as level curves of a master function is proposed. It is shown that Brown's postulate—that these curves are unique and dome-shaped in a double-logarithmic p, T representation—may fail for fluids exhibiting a density anomaly. A careful study of the Amagat curve (Joule inversion curve) generated from the IAPWS-95 reference equation of state for water reveals the existence of an additional branch.
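For simple equations of state, some of Brown's characteristic curves have closed forms. As an illustration (using the van der Waals equation, not the IAPWS-95 formulation studied in the paper), the Zeno curve, along which the compression factor Z = pV/RT equals its ideal-gas value of 1, follows from setting p = RT/V in the van der Waals equation, which gives V = ab/(a - bRT) for T < a/(Rb):

```python
R = 8.314  # universal gas constant, J/(mol K)

def zeno_point(T, a, b):
    """Molar volume and pressure on the Zeno curve (Z = 1) of a
    van der Waals fluid: p = RT/(V-b) - a/V^2 with p = RT/V reduces
    to V = a*b / (a - b*R*T), valid for T < a/(R*b).
    """
    V = a * b / (a - b * R * T)
    p = R * T / V
    return V, p

# van der Waals constants of roughly the textbook magnitude for water
# (illustrative values, not fitted to IAPWS-95):
# a in Pa m^6 mol^-2, b in m^3 mol^-1.
a, b = 0.5536, 3.049e-5
V, p = zeno_point(500.0, a, b)
print(V, p, p * V / (R * 500.0))  # the last value is 1 by construction
```

Repeating this over a range of temperatures traces the full curve; for the first-order curves (Amagat, Boyle, Charles) the analogous conditions involve derivatives of Z and generally require numerical root-finding.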
Titration Curves: Fact and Fiction.
ERIC Educational Resources Information Center
Chamberlain, John
1997-01-01
Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…
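The spreadsheet prediction the article describes amounts to solving the charge balance at each added volume. A minimal sketch for a weak acid titrated with a strong base, with illustrative concentrations and an acetic-acid-like Ka (all values assumed, not from the article):

```python
import math

def titration_pH(Vb, Va=25.0, Ca=0.1, Cb=0.1, Ka=1.8e-5, Kw=1e-14):
    """pH after adding Vb mL of strong base (Cb mol/L) to Va mL of a
    weak acid (Ca mol/L), found by solving the charge balance
    [Na+] + [H+] = [A-] + [OH-] for [H+] by bisection.
    """
    V = Va + Vb
    na, ca = Cb * Vb / V, Ca * Va / V  # concentrations after dilution

    def charge_imbalance(h):
        # Strictly increasing in h, so bisection is safe.
        return na + h - ca * Ka / (Ka + h) - Kw / h

    lo, hi = 1e-14, 1.0
    for _ in range(100):
        mid = (lo * hi) ** 0.5  # bisect on a log scale
        if charge_imbalance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10((lo * hi) ** 0.5)

for Vb in (0.0, 12.5, 25.0, 30.0):  # start, half-, full equivalence, excess
    print(round(titration_pH(Vb), 2))
```

Tabulating pH against Vb reproduces the familiar sigmoidal curve, including the buffer region around pH = pKa at half-equivalence and the basic equivalence point characteristic of a weak acid/strong base titration.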
Linking the Fits, Fitting the Links: Connecting Different Types of PO Fit to Attitudinal Outcomes
ERIC Educational Resources Information Center
Leung, Aegean; Chaturvedi, Sankalp
2011-01-01
In this paper we explore the linkages among various types of person-organization (PO) fit and their effects on employee attitudinal outcomes. We propose and test a conceptual model which links various types of fits--objective fit, perceived fit and subjective fit--in a hierarchical order of cognitive information processing and relate them to…
Generating Resources Supply Curves.
United States. Bonneville Power Administration. Division of Power Resources Planning.
1985-07-01
This report documents Pacific Northwest supply curve information for both renewable and other generating resources. Resources are characterized as "Renewable" and "Other" as defined in section 3 of the Pacific Northwest Electric Power Planning and Conservation Act. The following resources are described: renewable (cogeneration; geothermal; hydroelectric (new); hydroelectric (efficiency improvement); solar; and wind) and other, nonrenewable generation resources (coal; combustion turbines; and nuclear). Each resource has the following information documented in tabular format: (1) Technical Characteristics; (2) Costs (capital and O and M); (3) Energy Distribution by Month; and (4) Supply Forecast (energy). Combustion turbine (CT) energy supply is not forecast because of CTs' typical peaking application. Their supply is therefore unconstrained in order to facilitate analysis of their operation in the regional electrical supply system. The generic nuclear resource is considered unavailable to the region over the planning horizon.
ERIC Educational Resources Information Center
Dixon-Watmough, Rebecca; Keogh, Brenda; Naylor, Stuart
2012-01-01
For some time the Association for Science Education (ASE) has been aware that it would be useful to have some resources available to get children talking and thinking about issues related to health, sport and fitness. Some of the questions about pulse, breathing rate and so on are pretty obvious to everyone, and there is a risk of these being…
ERIC Educational Resources Information Center
Casey, Stephanie A.
2016-01-01
Statistical association between two variables is one of the fundamental statistical ideas in school curricula. Reasoning about statistical association has been deemed one of the most important cognitive activities that humans perform. Students are typically introduced to statistical association through the study of the line of best fit because it…
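The line of best fit through which students first meet statistical association has a simple closed form from the least-squares normal equations. A minimal sketch with illustrative data:

```python
def line_of_best_fit(xs, ys):
    """Ordinary least-squares slope a and intercept b for y = a*x + b:
    a = sum((x-mx)(y-my)) / sum((x-mx)^2), b = my - a*mx,
    where mx and my are the sample means.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Points scattered around the line y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]
a, b = line_of_best_fit(xs, ys)
print(round(a, 3), round(b, 3))  # 1.97 1.06
```

The fitted slope is the quantitative measure of association the curriculum builds toward: its sign and magnitude summarize the direction and strength of the linear relationship.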
NASA Technical Reports Server (NTRS)
Coleman, A. E.
1981-01-01
Training manual used for preflight conditioning of NASA astronauts is written for an audience with diverse backgrounds and interests. It suggests programs for various levels of fitness, including sample starter programs, safe progression schedules, and stretching exercises. Related information on equipment needs, environmental considerations, and precautions can help readers design safe and effective running programs.
ERIC Educational Resources Information Center
Vail, Kathleen
1999-01-01
Children who hate gym grow into adults who associate physical activity with ridicule and humiliation. Physical education is reinventing itself, stressing enjoyable activities that continue into adulthood: aerobic dance, weight training, fitness walking, mountain biking, hiking, inline skating, karate, rock-climbing, and canoeing. Cooperative,…
ERIC Educational Resources Information Center
Maione, Mary Jane
A description is given of a program that provides preventive measures to check obesity in children and young people. The 24-week program is divided into two parts--a nutrition component and an exercise component. At the start and end of the program, tests are given to assess the participants' height, weight, body composition, fitness level, and…
ERIC Educational Resources Information Center
Donovan, Edward P.
The major objective of this module is to help students understand how water from a source such as a lake is treated to make it fit to drink. The module, consisting of five major activities and a test, is patterned after Individualized Science Instructional System (ISIS) modules. The first activity (Planning) consists of a brief introduction and a…