ERIC Educational Resources Information Center
Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong
2013-01-01
Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e., four years) in young adolescents aged 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…
ERIC Educational Resources Information Center
Harper, Suzanne R.; Driskell, Shannon
2005-01-01
Graphic tips for using the Geometer's Sketchpad (GSP) are described. The methods to import an image into GSP, define a coordinate system, plot points, and curve fit the function using a graphing calculator are demonstrated; the graphic features of GSP allow teachers to expand the use of the technology application beyond the classroom.
A curve fitting approach to estimate the extent of fermentation of indigestible carbohydrates.
Wang, H; Weening, D; Jonkers, E; Boer, T; Stellaard, F; Small, A C; Preston, T; Vonk, R J; Priebe, M G
2008-11-01
Information about the extent of carbohydrate digestion and fermentation is critical to our ability to explore the metabolic effects of carbohydrate fermentation in vivo. We used cooked ¹³C-labelled barley kernels, which are rich in indigestible carbohydrates, to develop a method that makes it possible to distinguish between, and to assess, carbohydrate digestion and fermentation. Seventeen volunteers ingested 86 g (dry weight) of cooked, naturally ¹³C-enriched barley kernels after an overnight fast. ¹³CO₂ and H₂ in breath samples were measured every half hour for 12 h. The ¹³CO₂ breath data recorded before the start of fermentation were used to fit a curve representing the digestion phase. The difference between the area under the curve (AUC) of the fitted digestion curve and the AUC of the observed curve was taken to represent the fermentation part. Different approaches were applied to determine the proportion of the ¹³C dose available for digestion and fermentation. Four hours after intake of barley, H₂ excretion in breath started to rise. Within 12 h, 24-48% of the ¹³C dose was recovered as ¹³CO₂, of which 18-19% was derived from colonic fermentation and the rest from digestion. By extrapolating the curve to baseline, it was estimated that eventually 24-25% of the total available ¹³C in barley would be derived from colonic fermentation. Curve fitting, using ¹³CO₂ and H₂ breath data, is a feasible and non-invasive method to assess carbohydrate digestion and fermentation after consumption of ¹³C-enriched starchy food.
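The AUC-splitting idea in this abstract can be sketched numerically: fit a simple digestion-phase model to the early breath ¹³CO₂ data, extrapolate it over the whole window, and attribute the excess area under the observed curve to fermentation. The model form and every number below are invented for illustration; they are not the authors' data or their exact fitting function.

```python
import numpy as np

# Synthetic breath curve: digestion signal plus a delayed fermentation signal.
t = np.linspace(0.0, 12.0, 49)                        # hours
digestion = 8.0 * t * np.exp(-0.6 * t)                # "true" digestion phase
fermentation = np.where(t > 4.0,
                        3.0 * (t - 4.0) * np.exp(-0.5 * (t - 4.0)), 0.0)
observed = digestion + fermentation

# Fit y = a * t * exp(-b * t) on the pre-fermentation window (t < 4 h)
# via log-linearisation: log(y / t) = log(a) - b * t  (valid for t > 0).
mask = (t > 0.0) & (t < 4.0)
coef = np.polyfit(t[mask], np.log(observed[mask] / t[mask]), 1)
b_hat, log_a = -coef[0], coef[1]
fitted_digestion = np.exp(log_a) * t * np.exp(-b_hat * t)

def auc(y, x):
    """Trapezoid area under the curve."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# Excess AUC of the observed curve over the extrapolated digestion curve
# is attributed to fermentation.
ferm_fraction = (auc(observed, t) - auc(fitted_digestion, t)) / auc(observed, t)
```

Because the synthetic fermentation signal is zero before 4 h, the digestion-phase fit recovers the generating parameters exactly, and the AUC difference isolates the fermentation share.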
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
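The machinery the abstract describes (first-order expansion of chi-square, simultaneous linear equations solved by matrix algebra, meaningful starting values required for convergence) corresponds to a Gauss-Newton iteration on the weighted residuals. A minimal sketch with a hypothetical exponential model and synthetic data, not the NLINEAR code itself:

```python
import numpy as np

# Hypothetical model y = p0 * exp(-p1 * t) and synthetic weighted data.
def model(t, p):
    return p[0] * np.exp(-p[1] * t)

def jacobian(t, p):
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 40)
sigma = np.full_like(t, 0.05)                   # statistical weights
y = model(t, [2.0, 0.7]) + rng.normal(0.0, 0.05, t.size)

p = np.array([1.0, 0.3])                        # meaningful initial estimate
for _ in range(30):
    r = (y - model(t, p)) / sigma               # weighted residuals
    J = jacobian(t, p) / sigma[:, None]
    # Linearised chi-square -> normal equations, solved by matrix algebra.
    p = p + np.linalg.solve(J.T @ J, J.T @ r)

chi2 = float(np.sum(((y - model(t, p)) / sigma) ** 2))
```

With correctly specified weights, the converged chi-square should be of the order of the number of degrees of freedom, which is one of the goodness-of-fit checks such programs report.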
NASA Astrophysics Data System (ADS)
Tasel, Serdar F.; Hassanpour, Reza; Mumcuoglu, Erkan U.; Perkins, Guy C.; Martone, Maryann
2014-03-01
Mitochondria are sub-cellular components which are mainly responsible for synthesis of adenosine tri-phosphate (ATP) and involved in the regulation of several cellular activities such as apoptosis. The relation between some common diseases of aging and morphological structure of mitochondria is gaining strength by an increasing number of studies. Electron microscope tomography (EMT) provides high-resolution images of the 3D structure and internal arrangement of mitochondria. Studies that aim to reveal the correlation between mitochondrial structure and its function require the aid of special software tools for manual segmentation of mitochondria from EMT images. Automated detection and segmentation of mitochondria is a challenging problem due to the variety of mitochondrial structures, the presence of noise, artifacts and other sub-cellular structures. Segmentation methods reported in the literature require human interaction to initialize the algorithms. In our previous study, we focused on 2D detection and segmentation of mitochondria using an ellipse detection method. In this study, we propose a new approach for automatic detection of mitochondria from EMT images. First, a preprocessing step was applied in order to reduce the effect of nonmitochondrial sub-cellular structures. Then, a curve fitting approach was presented using a Hessian-based ridge detector to extract membrane-like structures and a curve-growing scheme. Finally, an automatic algorithm was employed to detect mitochondria which are represented by a subset of the detected curves. The results show that the proposed method is more robust in detection of mitochondria in consecutive EMT slices as compared with our previous automatic method.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify either the tolerable least-squares error in the fitting or the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least-squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least-squares polynomial in two steps. First, the data points are least-squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least-squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least-squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100-degree polynomial. All computations in the program are carried out in double precision for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
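The degree-escalation loop AKLSQF uses can be mimicked with NumPy's ordinary `polyfit` (the original works in orthogonal factorial polynomials, which are numerically gentler; this is only a sketch on made-up data):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 21)           # uniformly spaced data
y = 1.0 - 2.0 * x + 0.5 * x**3           # sampled cubic, no noise
tol = 1e-8                               # user-specified error tolerance

# Raise the degree from 1 until the least-squares error meets the tolerance.
for degree in range(1, 11):
    coeffs = np.polyfit(x, y, degree)
    err = float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))
    if err <= tol:
        break
```

For this data the loop stops at degree 3, the lowest degree that reproduces the underlying cubic to within the tolerance.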
Cubic spline functions for curve fitting
NASA Technical Reports Server (NTRS)
Young, J. D.
1972-01-01
A FORTRAN cubic spline routine mathematically fits a curve through a given ordered set of points so that the fitted curve closely approximates the curve generated by passing an infinitely thin spline through the set of points. The generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.
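A modern equivalent of such a routine is SciPy's `CubicSpline`, shown here with natural end conditions on synthetic points; this is not the original FORTRAN code, and the generalized trigonometric/hyperbolic variants are not part of it:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # ordered set of points
y = np.sin(x)
spline = CubicSpline(x, y, bc_type='natural')

# The spline passes exactly through the knots and is smooth in between.
at_knots = spline(x)
midpoint = float(spline(1.5))
```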
NASA Astrophysics Data System (ADS)
Martin, Y. L.
The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
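The uniqueness claim is easy to verify numerically: four points sampled from a cubic are reproduced exactly by a degree-3 fit. The points below come from the hypothetical cubic y = x³ + 1:

```python
import numpy as np

pts_x = np.array([-1.0, 0.0, 1.0, 2.0])
pts_y = pts_x**3 + 1.0                     # four points on a known cubic

# n + 1 = 4 points, degree n = 3: the fit is exact interpolation.
coeffs = np.polyfit(pts_x, pts_y, 3)
```

The recovered coefficients are those of the generating cubic, and the fitted polynomial passes through every point.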
Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R
2012-02-01
The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean ± SE; age: 22.1 ± 0.34 years; VO2max: 46.1 ± 0.82 mL/kg/min) volunteered to participate in this study. A VO2max test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2max prediction equation (coefficient of determination: 0.805; standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT two-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to the time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
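The CV construction referenced here is the standard linear distance-time model D = ARC + CV·t, in which critical velocity is the slope and anaerobic running capacity the intercept of a line fitted to the exhaustive runs. A sketch with invented numbers, not the study's data:

```python
import numpy as np

# Four treadmill bouts to exhaustion (synthetic): time and distance covered.
t_exh = np.array([150.0, 240.0, 420.0, 600.0])   # time to exhaustion (s)
dist = 200.0 + 4.2 * t_exh                       # total distance (m)

# Linear fit: slope = critical velocity (m/s), intercept = ARC (m).
cv, arc = np.polyfit(t_exh, dist, 1)
```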
Fast curve fitting using neural networks
NASA Astrophysics Data System (ADS)
Bishop, C. M.; Roach, C. M.
1992-10-01
Neural networks provide a new tool for the fast solution of repetitive nonlinear curve fitting problems. In this article we introduce the concept of a neural network, and we show how such networks can be used for fitting functional forms to experimental data. The neural network algorithm is typically much faster than conventional iterative approaches. In addition, further substantial improvements in speed can be obtained by using special-purpose hardware implementations of the network, thus making the technique suitable for use in fast real-time applications. The basic concepts are illustrated using a simple example from fusion research, involving the determination of spectral line parameters from measurements of B IV impurity radiation in the COMPASS-C tokamak.
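One way to see why network evaluation is fast: once trained, a fit is just a fixed sequence of matrix products. The sketch below uses a random-feature network (fixed random tanh hidden layer, output weights obtained by a single linear least-squares solve) rather than the trained multilayer networks of the article, purely to illustrate network-based function fitting:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-np.pi, np.pi, 200)[:, None]   # inputs, shape (200, 1)
y = np.sin(x).ravel()                          # target curve to fit

# Fixed random hidden layer with tanh activations (no iterative training).
W = rng.normal(0.0, 2.0, (1, 50))
b = rng.normal(0.0, 2.0, 50)
H = np.tanh(x @ W + b)                         # hidden activations (200, 50)

# Output weights from one linear least-squares solve.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
rmse = float(np.sqrt(np.mean((H @ w_out - y) ** 2)))
```

Evaluating the fitted network at a new point costs one matrix-vector product and one tanh pass, which is what makes hardware implementations attractive for real-time use.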
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
Curve Fit Technique for a Smooth Curve Using Gaussian Sections.
1983-08-01
curve-fitting. Furthermore, the algorithm that does the fitting is simple enough to be used on a programmable calculator.
Bounded Population Growth: A Curve Fitting Lesson.
ERIC Educational Resources Information Center
Mathews, John H.
1992-01-01
Presents two mathematical methods for fitting the logistic curve to population data supplied by the U.S. Census Bureau utilizing computer algebra software to carry out the computations and plot graphs. (JKK)
NASA Astrophysics Data System (ADS)
Ogren, Paul; Davis, Brian; Guy, Nick
2001-06-01
A spreadsheet approach is used to fit multilinear functions with three adjustable parameters: ƒ = a1X1(x) + a2X2(x) + a3X3(x). Results are illustrated for three familiar examples: IR analysis of gaseous DCl, the electronic/vibrational spectrum of gaseous I2, and van Deemter plots of chromatographic data. These cases are simple enough for students in upper-level physical or advanced analytical courses to write and modify their own spreadsheets. In addition to the original x, y, and σy values, 12 columns are required: three for Xn(xi) values, six for Xn(xi)Xk(xi) product sums for the curvature matrix [a], and three for yi Xn(xi) sums for (b) in the vector equation (b) = [a](a). The Excel spreadsheet MINVERSE function provides the [e] error matrix from [a]. The [e] elements are then used to determine best-fit parameter values contained in (a). These spreadsheets also use a "dimensionless" or "reduced parameter" approach in calculating parameter weights, uncertainties, and correlations. Students can later enter data sets and fit parameters into a larger spreadsheet that uses Monte Carlo techniques to produce two-dimensional scatter plots. These correspond to Δχ² ellipsoidal cross-sections or projections and provide visual depictions of parameter uncertainties and correlations. The Monte Carlo results can also be used to estimate confidence envelopes for fitting plots.
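The spreadsheet's normal-equation bookkeeping maps directly onto matrix operations: the curvature matrix [a] holds the Xn·Xk product sums, (b) holds the y·Xn sums, and inverting [a] (the MINVERSE step) yields the parameters. A NumPy rendering with a hypothetical van Deemter-style basis (1, x, 1/x) and noiseless synthetic data:

```python
import numpy as np

x = np.linspace(1.0, 10.0, 30)
X = np.column_stack([np.ones_like(x), x, 1.0 / x])  # X1, X2, X3 columns
true_params = np.array([2.0, 0.5, 3.0])
y = X @ true_params                                 # synthetic observations

alpha = X.T @ X          # curvature matrix [a]: Xn(xi)*Xk(xi) product sums
beta = X.T @ y           # (b): yi*Xn(xi) sums
a_fit = np.linalg.inv(alpha) @ beta                 # the MINVERSE step
```

With noiseless data the solve recovers the generating parameters; with real data the diagonal of the inverted curvature matrix additionally supplies the parameter variances the spreadsheet reports.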
GPU accelerated curve fitting with IDL
NASA Astrophysics Data System (ADS)
Galloy, M.
2012-12-01
Curve fitting is a common mathematical calculation done in all scientific areas. The Interactive Data Language (IDL) is also widely used in this community for data analysis and visualization. We are creating a general-purpose, GPU accelerated curve fitting library for use from within IDL. We have developed GPULib, a library of routines in IDL for accelerating common scientific operations including arithmetic, FFTs, interpolation, and others. These routines are accelerated using modern GPUs using NVIDIA's CUDA architecture. We will add curve fitting routines to the GPULib library suite, making curve fitting much faster. In addition, library routines required for efficient curve fitting will also be generally useful to other users of GPULib. In particular, a GPU accelerated LAPACK implementation such as MAGMA is required for the Levenberg-Marquardt curve fitting and is commonly used in many other scientific computations. Furthermore, the ability to evaluate custom expressions at runtime necessary for specifying a function model will be useful for users in all areas.
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Least-squares fitting Gompertz curve
NASA Astrophysics Data System (ADS)
Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf
2004-08-01
In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (pi,ti,yi), i=1,...,m, m≥3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
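In practice, once existence is assured and a good initial approximation is chosen (the two issues the paper addresses), the Gompertz LS problem is routinely handed to a general nonlinear fitter. A sketch using SciPy on synthetic, noiseless, unweighted data (i.e. all pi = 1):

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz curve y(t) = a * exp(-b * exp(-c * t)).
def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

t = np.linspace(0.0, 10.0, 40)
y = gompertz(t, 5.0, 3.0, 0.8)             # synthetic observations

p0 = [float(y.max()), 2.0, 0.5]            # crude but sensible start
popt, _ = curve_fit(gompertz, t, y, p0=p0)
```

Using the plateau of the data as the starting value for a is one simple version of the "good initial approximation" the authors recommend.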
Spectral curve fitting of dielectric constants
NASA Astrophysics Data System (ADS)
Ruzi, M.; Ennis, C.; Robertson, E. G.
2017-01-01
Optical constants are important properties governing the response of a material to incident light. It follows that they are often extracted from spectra measured by absorbance, transmittance or reflectance. One convenient method to obtain optical constants is by curve fitting. Here, model curves should satisfy the Kramers-Kronig relations, and preferably can be expressed in closed form or be easily calculable. In this study we use dielectric constants of three different molecular ices in the infrared region to evaluate four different model curves that are generally used for fitting optical constants: (1) the classical damped harmonic oscillator, (2) the Voigt line shape, (3) Fourier series, and (4) the Triangular basis. Among these, only the classical damped harmonic oscillator model strictly satisfies the Kramers-Kronig relations. If considering the trade-off between accuracy and speed, Fourier series fitting is the best option when spectral bands are broad, while for narrow peaks the classical damped harmonic oscillator and the Triangular basis models are the best choices.
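The first model on the list, the classical damped harmonic (Lorentz) oscillator, has the closed form ε(ω) = ε∞ + ωp²/(ω0² − ω² − iγω), whose real and imaginary parts satisfy the Kramers-Kronig relations by construction. A sketch with illustrative parameters, not fits to the ices in the paper:

```python
import numpy as np

def lorentz_eps(w, eps_inf, wp, w0, gamma):
    """Classical damped harmonic oscillator dielectric function."""
    return eps_inf + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

w = np.linspace(500.0, 1500.0, 2001)            # wavenumber grid (cm^-1)
eps = lorentz_eps(w, 2.1, 300.0, 1000.0, 20.0)  # illustrative parameters
nk = np.sqrt(eps)                               # complex index n + ik
k = nk.imag                                     # absorption index
```

The imaginary part is non-negative across the band and peaks at the oscillator resonance, which is the physical behaviour a Kramers-Kronig-consistent model must reproduce.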
Modeling and Fitting Exoplanet Transit Light Curves
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, G. T.
2013-01-01
We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model is unique in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the
Multivariate curve-fitting in GAUSS
Bunck, C.M.; Pendleton, G.W.
1988-01-01
Multivariate curve-fitting techniques for repeated measures have been developed and an interactive program has been written in GAUSS. The program implements not only the one-factor design described in Morrison (1967) but also includes pairwise comparisons of curves and rates, a two-factor design, and other options. Strategies for selecting the appropriate degree for the polynomial are provided. The methods and program are illustrated with data from studies of the effects of environmental contaminants on ducklings, nesting kestrels and quail.
Focusing of light through turbid media by curve fitting optimization
NASA Astrophysics Data System (ADS)
Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi
2016-12-01
The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
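The three-measurement claim follows if each segment's feedback intensity varies as I(φ) = A + B·cos(φ − φ0): three probe phases determine the three unknowns, and φ0 is the sought optimal phase. A sketch with synthetic values (the published CFA details may differ):

```python
import numpy as np

A_true, B_true, phi0_true = 1.0, 0.6, 1.2        # synthetic ground truth
probe = np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
I = A_true + B_true * np.cos(probe - phi0_true)  # three feedback readings

# I = A + (B cos phi0) cos(phi) + (B sin phi0) sin(phi): a linear 3x3 system.
M = np.column_stack([np.ones(3), np.cos(probe), np.sin(probe)])
A_fit, c, s = np.linalg.solve(M, I)
phi0_fit = float(np.arctan2(s, c))               # recovered optimal phase
B_fit = float(np.hypot(s, c))
```

Because the fit is exact for a cosine response, the optimal phase is recovered from just three measurements, which is the source of the speed-up over stepping through many phase values.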
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real-gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracy and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10⁻⁷ to 10³ amagats.
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-05-01
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) to model lactation curves from two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making in a dairy farm. Knowledge of the model of milk production progress along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made for a group of high-production, high-reproduction dairy farms, in first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remaining ones third lactations (19,382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest the selection of MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect was considered, which is related to the magnitude of production. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
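The Wood model referred to above is y(t) = a·t^b·e^(−ct), which log-linearises and can therefore be fitted with ordinary least squares at the population-mean level (no cow random effect). A sketch on synthetic, noiseless yields; the paper's actual fits use NLMIXED-style mixed-model estimation:

```python
import numpy as np

t = np.arange(5.0, 305.0, 10.0)                 # days in milk
a, b, c = 15.0, 0.25, 0.004                     # synthetic Wood parameters
y = a * t**b * np.exp(-c * t)                   # synthetic daily yields (kg)

# log y = log a + b*log t - c*t: linear in (log a, b, c).
X = np.column_stack([np.ones_like(t), np.log(t), -t])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat, b_hat, c_hat = float(np.exp(coef[0])), float(coef[1]), float(coef[2])

days_to_peak = b_hat / c_hat                    # peak yield occurs at t = b/c
```

The closed-form production indicators (here, days in milk to peak = b/c) are exactly the quantities the paper compares across models.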
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make-up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used and the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
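As a concrete instance of the kind of derived metric discussed: a band centre can be estimated by fitting a parabola to the channels around a reflectance minimum and reporting its vertex. The data below are synthetic, and this is one common recipe rather than the paper's specific Excel formulas:

```python
import numpy as np

wav = np.linspace(900.0, 1100.0, 81)            # wavelength grid (nm)
refl = 0.5 + 2e-6 * (wav - 1012.0)**2           # synthetic absorption band

i = int(np.argmin(refl))                        # channel of the minimum
sel = slice(i - 5, i + 6)                       # 11 channels around it
c2, c1, c0 = np.polyfit(wav[sel], refl[sel], 2) # local parabola
band_centre = -c1 / (2.0 * c2)                  # vertex = band centre (nm)
```

Reporting the window width, polynomial degree, and continuum treatment alongside the number is exactly the documentation practice the authors argue for.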
Simplified curve fits for the transport properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.
1987-01-01
New, improved curve fits for the transport properties of equilibrium air have been developed. The curve fits are for viscosity and Prandtl number as functions of temperature and density, and viscosity and thermal conductivity as functions of internal energy and density. The curve fits were constructed using Grabau-type transition functions to model the transport properties of Peng and Pindroh. The resulting curve fits are sufficiently accurate and self-contained that they can be readily incorporated into new or existing computational fluid dynamics codes. The ranges of validity of the new curve fits are temperatures up to 15,000 K and densities from 10⁻⁵ to 10 amagats (ρ/ρ₀).
NASA Astrophysics Data System (ADS)
Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng
2015-02-01
Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step using the curve fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used to generate energy-mapping curves from the average CT values and linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by ANN and by bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms were 5.46% and 1.28% based on ANN, and 19.21% and 1.87% based on bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve were 0.91%, 0.70% and 3.70%, and those of the bilinear transformation were 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve achieves better performance than bilinear transformation. The semi-quantitative analysis of rat PET images showed that the ANN curve reduced the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue. In contrast, the inaccuracy remained 6.47% and 11.51% in brain and heart tissue when the bilinear transformation was used.
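For reference, the bilinear transformation that the ANN mapping is compared against can be sketched in a few lines. This is a generic CTAC-style bilinear mapping with illustrative coefficients (the water attenuation value and bone slope below are common textbook-style numbers, not the paper's values):

```python
import numpy as np

def bilinear_ct_to_mu511(hu):
    """Bilinear CT-to-511 keV attenuation mapping (illustrative sketch):
    one slope below 0 HU (air-to-water range), a shallower one above
    (water-to-bone range), continuous at 0 HU."""
    hu = np.asarray(hu, dtype=float)
    mu_water = 0.096                      # cm^-1 for water at 511 keV
    lo = mu_water * (1.0 + hu / 1000.0)   # scales linearly down to 0 at -1000 HU
    hi = mu_water + hu * 5.64e-5          # illustrative bone-range slope
    return np.where(hu < 0.0, lo, hi)

mu_soft = float(bilinear_ct_to_mu511(0))      # water-equivalent tissue
mu_bone = float(bilinear_ct_to_mu511(1000))   # dense bone, higher attenuation
```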
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to prenatal growth and shown to be appropriate only for about 10 days prior to birth.
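Richards' curve can be fitted directly with a general nonlinear least-squares routine, estimating the shape parameter from the data as the abstract describes. The sketch below uses one common parameterisation of the Richards function and synthetic data (parameter values and noise level are illustrative, not the fox-pup measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, nu):
    """Richards (1959) growth curve; the shape parameter nu makes the
    sigmoid flexible (nu = 1 recovers the ordinary logistic curve)."""
    return A * (1.0 + nu * np.exp(-k * (t - t0))) ** (-1.0 / nu)

# synthetic growth data from known parameters plus a little noise
rng = np.random.default_rng(0)
t = np.linspace(0, 80, 40)
y = richards(t, 120.0, 0.08, 20.0, 0.5) + rng.normal(0, 0.2, t.size)

# estimate all four parameters, including the shape, from the data
popt, pcov = curve_fit(richards, t, y, p0=[100.0, 0.1, 15.0, 1.0], maxfev=20000)
A_hat, k_hat, t0_hat, nu_hat = popt
```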
[Comparison among various software for LMS growth curve fitting methods].
Han, Lin; Wu, Wenhong; Wei, Qiuxia
2015-03-01
To explore methods for fitting the skewness-median-coefficient of variation (LMS) growth curve in different software, and to identify the best statistical tool for grass-roots child and adolescent health staff. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical packages SAS, R, STATA and SPSS were used to fit the LMS growth curve, and the results were evaluated on user convenience, ease of learning, user interface, display of results, software updates and maintenance, and so on. The growth curve fitting results were identical across packages, and each package had its own advantages and disadvantages. Considering all evaluation aspects, R excelled the others in LMS growth curve fitting and has the advantage over the other packages for grass-roots child and adolescent health staff.
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (rho/rho sub 0).
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder (E-mail: balbir@petronas.com.my)
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R{sup 2}. The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
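A two-term Gaussian fit of the kind the paper selects can be reproduced with standard nonlinear least squares; the profile below is a synthetic daily-irradiance-like curve scored with R², not the UTP measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model: a1*exp(-((x-b1)/c1)^2) + a2*exp(-((x-b2)/c2)^2)."""
    return a1 * np.exp(-((x - b1) / c1) ** 2) + a2 * np.exp(-((x - b2) / c2) ** 2)

# hypothetical irradiance-vs-hour profile (illustrative numbers)
x = np.linspace(7, 19, 60)
y = gauss2(x, 800.0, 12.0, 2.5, 200.0, 15.0, 1.5)

popt, _ = curve_fit(gauss2, x, y, p0=[700, 12, 2, 150, 15, 2])
yhat = gauss2(x, *popt)

# goodness of fit: coefficient of determination R^2
ss_res = np.sum((y - yhat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r2 = 1.0 - ss_res / ss_tot
```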
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae(exp Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax(exp B) + C and the general geometric growth equation y = Ak(exp Bt) + C.
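The core idea, replacing iteration with differences of evenly spaced samples, can be sketched as follows. This is a simplified illustration of a non-iterative exponential fit, not the exact NASA algorithm:

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C on evenly spaced t.
    First differences eliminate C; the ratio of successive differences
    isolates B; A and C then follow from a simple linear fit."""
    dt = t[1] - t[0]
    d = np.diff(y)                     # C cancels in the differences
    ratio = d[1:] / d[:-1]             # each ratio equals exp(B*dt)
    B = np.log(np.mean(ratio)) / dt
    u = np.exp(B * t)                  # with B fixed, y = A*u + C is linear in u
    A, C = np.polyfit(u, y, 1)
    return A, B, C

t = np.linspace(0, 2, 21)
y = 3.0 * np.exp(1.5 * t) + 2.0        # noise-free synthetic data
A, B, C = fit_exponential(t, y)
```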
Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm.
Jang, Daeho; Chae, Geunhyoung; Shin, Sehyun
2015-09-30
The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present curve-fitting sigmoid-asymmetric equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Curve fitting for RHB Islamic Bank annual net profit
NASA Astrophysics Data System (ADS)
Nadarajan, Dineswary; Noor, Noor Fadiya Mohd
2015-05-01
The RHB Islamic Bank net profit data were obtained for 2004 to 2012. Curve fitting is performed treating the data either as exact or as experimental data requiring smoothing. Higher-order Lagrange polynomial and cubic spline curve fitting procedures are constructed using Maple software. A normality test is performed to check data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level of ANOVA. Residual error and absolute relative true error are calculated and compared. The optimal model based on the minimum average error is proposed.
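The cubic-spline part of such a procedure is straightforward to reproduce; the sketch below uses scipy with illustrative annual profit figures (not the actual RHB data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical annual net profit series (illustrative numbers)
years = np.arange(2004, 2013)
profit = np.array([35.0, 42.0, 51.0, 60.0, 58.0, 66.0, 74.0, 80.0, 91.0])

# cubic spline treats the data as exact: it interpolates every knot
cs = CubicSpline(years, profit)
mid_year_estimate = float(cs(2008.5))   # smooth estimate between observations
```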
Mössbauer spectral curve fitting combining fundamentally different techniques
NASA Astrophysics Data System (ADS)
Susanto, Ferry; de Souza, Paulo
2016-10-01
We propose the use of fundamentally distinctive techniques to solve the problem of curve fitting a Mössbauer spectrum. The techniques we investigated are: evolutionary algorithm, basin hopping, and hill climbing. These techniques were applied in isolation and combined to fit different shapes of Mössbauer spectra. The results indicate that complex Mössbauer spectra can be automatically curve fitted using minimum user input, and combination of these techniques achieved the best performance (lowest statistical error). The software and sample of Mössbauer spectra have been made available through a link at the reference.
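Basin hopping, one of the three techniques the paper combines, is available off the shelf in scipy; the sketch below fits a toy single-line spectrum (a Lorentzian transmission dip with made-up parameters, far simpler than a real Mössbauer sextet):

```python
import numpy as np
from scipy.optimize import basinhopping

def lorentzian(v, depth, v0, w):
    """Transmission spectrum with a single Lorentzian absorption dip."""
    return 1.0 - depth * w**2 / ((v - v0) ** 2 + w**2)

v = np.linspace(-4, 4, 200)                    # velocity axis, mm/s
spectrum = lorentzian(v, 0.3, 0.5, 0.4)        # synthetic noise-free "data"

def sse(p):
    """Statistical error to minimise: sum of squared residuals."""
    return np.sum((spectrum - lorentzian(v, *p)) ** 2)

# basin hopping = repeated local minimisation from perturbed starts
result = basinhopping(sse, x0=[0.2, 0.0, 0.5], niter=25)
depth, v0, w = result.x
```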
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
Viscosity Coefficient Curve Fits for Ionized Gas Species
NASA Technical Reports Server (NTRS)
Palmer, Grant; Arnold, James O. (Technical Monitor)
2001-01-01
Viscosity coefficient curve fits for neutral gas species are available from many sources. Many do a good job of reproducing experimental and computational chemistry data. The curve fits are usually expressed as a function of temperature only. This is consistent with the governing equations used to derive an expression for the neutral species viscosity coefficient. Ionized species pose a more complicated problem. They are subject to electrostatic as well as intermolecular forces. The electrostatic forces are affected by a shielding phenomenon where electrons shield the electrostatic forces of positively charged ions beyond a certain distance. The viscosity coefficient for an ionized gas species is a function of both temperature and local electron number density. Currently available curve fits for ionized gas species, such as those presented by Gupta/Yos, are functions of temperature only; they were constructed by assuming a fixed electron number density, and the assumed value was unrealistically high. The purpose of this paper is two-fold. First, the proper expression for determining the viscosity coefficient of an ionized species as a function of both temperature and electron number density is presented. Then curve fit coefficients are developed using the more realistic assumption of an equilibrium electron number density. The results are compared against previous curve fits and against highly accurate computational chemistry data.
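For context, a typical temperature-only curve fit for a neutral species is the Sutherland law; a minimal sketch with the standard air constants (this illustrates the background the paper contrasts against, not the ionized-species fit it develops):

```python
import numpy as np

def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Sutherland-law viscosity curve fit for a neutral gas, here with
    the commonly tabulated constants for air (Pa*s, K). Temperature is
    the only independent variable -- exactly the limitation discussed
    above for ionized species."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

mu_cold = sutherland_viscosity(273.15)
mu_hot = sutherland_viscosity(1000.0)   # viscosity rises with temperature
```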
Jurado, J M; Alcázar, A; Muñiz-Valencia, R; Ceballos-Magaña, S G; Raposo, F
2017-09-01
Since linear calibration is mostly preferred for analytical determinations, linearity in the calibration range is an important performance characteristic of any instrumental analytical method. Linearity can be proved by applying several graphical and numerical approaches. The principal graphical criteria are visual inspection of the calibration plot, the residuals plot, and the response factors plot, also called the sensitivity or linearity plot. All of them must include confidence limits in order to visualize deviations from linearity. In this work, the graphical representation of percent relative errors (%RE) of back-calculated concentrations against the concentration of the calibration standards is proposed as a linearity criterion. This graph includes a confidence interval based on the expected recovery at each concentration level according to the AOAC approach. To illustrate it, four calibration examples covering different analytical techniques and calibration situations were studied. The proposed %RE graph was useful in all examples, helping to highlight problems related to non-linear behavior such as points with high leverage and deviations from linearity at the extremes of the calibration range. In this way, a numerical decision limit which takes into account the concentration of the calibration standards can easily be included as a linearity criterion in the form %RE_Th = 2·C^(-0.11). Accordingly, the %RE parameter is suitable for decision-making in linearity assessment according to the fitness-for-purpose approach. Copyright © 2017 Elsevier B.V. All rights reserved.
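The proposed criterion is easy to compute: back-calculate each standard's concentration from the fitted line, take percent relative errors, and compare them against the decision limit %RE_Th = 2·C^(-0.11). A sketch with synthetic calibration data (not the paper's examples):

```python
import numpy as np

# synthetic straight-line calibration: signal = 2*C + 0.5 plus small errors
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
signal = 2.0 * conc + 0.5 + np.array([0.02, -0.03, 0.05, -0.04, 0.06, -0.10])

slope, intercept = np.polyfit(conc, signal, 1)
back = (signal - intercept) / slope          # back-calculated concentrations
re_pct = 100.0 * (back - conc) / conc        # percent relative errors

threshold = 2.0 * conc ** (-0.11)            # %RE_Th = 2*C^(-0.11) decision limit
within = np.abs(re_pct) <= threshold         # points passing the linearity check
```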
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution and is a tri-diagonal matrix. The closed form and tri-diagonal nature allow a simpler expression of the objective function χ²_n. Minimization of this simpler expression provides the optimal parameters for the fitted curve.
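The simplification can be verified numerically: for a symmetric tri-diagonal W⁻¹, the quadratic form r^T W⁻¹ r reduces to one pass over the diagonal and one over the off-diagonal. A sketch with a random tri-diagonal matrix (illustrative, not the LANL covariance itself):

```python
import numpy as np

def chi2_tridiagonal(r, winv):
    """chi^2 = r^T W^{-1} r for a symmetric tri-diagonal W^{-1},
    computed in O(n) instead of a dense matrix product."""
    main = np.diag(winv)          # diagonal entries
    off = np.diag(winv, k=1)      # superdiagonal (= subdiagonal by symmetry)
    return float(np.sum(main * r**2) + 2.0 * np.sum(off * r[:-1] * r[1:]))

# verify against the dense quadratic form
rng = np.random.default_rng(0)
n = 6
winv = np.diag(rng.uniform(1.0, 2.0, n))
off = rng.uniform(-0.3, 0.3, n - 1)
winv += np.diag(off, 1) + np.diag(off, -1)
r = rng.normal(size=n)

fast = chi2_tridiagonal(r, winv)
dense = float(r @ winv @ r)
```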
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
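The localized straight-line fit for Isc that the standards specify amounts to a linear regression on I-V points near V = 0, with Isc read off as the intercept. The data points below are made up for illustration:

```python
import numpy as np

# hypothetical I-V points near short circuit (V in volts, I in amps)
V = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
I = np.array([5.001, 4.999, 4.996, 4.995, 4.992])

# straight-line fit I = a*V + b; Isc is the extrapolated current at V = 0
(a, b), cov = np.polyfit(V, I, 1, cov=True)
Isc = b
Isc_sigma = float(np.sqrt(cov[1, 1]))   # fit uncertainty of the intercept
```

Note that `cov` quantifies only the regression uncertainty; as the abstract stresses, it misses model discrepancy if the chosen window is not truly linear.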
[Curve-fit with hybrid logistic function for intracellular calcium transient].
Mizuno, Ju; Morita, Shigeho; Araki, Junichi; Otsuji, Mikiya; Hanaoka, Kazuo; Kurihara, Satoshi
2009-01-01
As the left ventricular (LV) pressure curve and myocardial tension curve are composed of contraction and relaxation processes, we have found that a hybrid logistic (HL) function, calculated as the difference between two logistic functions, curve-fits the isovolumic LV pressure curve and the isometric twitch tension curve better than the conventional polynomial exponential and sinusoidal functions. Increases and decreases in intracellular Ca2+ concentration regulate myocardial contraction and relaxation. Recently, we reported that intracellular Ca2+ transient (CaT) curves measured using the calcium-sensitive bioluminescent protein aequorin were better curve-fitted by the HL function than by the polynomial exponential function in isolated rabbit RV and mouse LV papillary muscles. We speculate that the first logistic component of the HL fit represents the concentration of Ca2+ inflow into the cytoplasmic space, the concentration of Ca2+ released from the sarcoplasmic reticulum (SR), the concentration of Ca2+ binding to troponin C (TnC), and the attached number of cross-bridges (CB), together with their time courses, and that the second logistic component represents the concentration of Ca2+ sequestered into the SR, the concentration of Ca2+ removed from the cytoplasmic space, the concentration of Ca2+ released from TnC, and the detached number of CB, together with their time courses. This HL approach to the CaT curve may provide a more useful model for investigating Ca2+ handling, Ca2+-TnC interaction, and CB cycling.
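A hybrid logistic transient is simply the difference of two logistic functions with different rates and timings, producing a rise-then-decay shape; the parameter values below are illustrative, not fitted aequorin data:

```python
import numpy as np

def logistic(t, amp, k, t0):
    """Single logistic step of amplitude amp, rate k, half-time t0."""
    return amp / (1.0 + np.exp(-k * (t - t0)))

def hybrid_logistic(t, a1, k1, t1, a2, k2, t2):
    """Hybrid logistic: difference of two logistic functions -- the first
    component tracks the rise, the second the slower decay."""
    return logistic(t, a1, k1, t1) - logistic(t, a2, k2, t2)

t = np.linspace(0, 400, 401)   # time, ms
ca = hybrid_logistic(t, 1.0, 0.08, 50.0, 1.0, 0.02, 120.0)
peak_t = float(t[np.argmax(ca)])   # transient rises, peaks, then relaxes
```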
Nonlinear Least Squares Curve Fitting with Microsoft Excel Solver
NASA Astrophysics Data System (ADS)
Harris, Daniel C.
1998-01-01
"Solver" is a powerful tool in the Microsoft Excel spreadsheet that provides a simple means of fitting experimental data to nonlinear functions. The procedure is so easy to use and its mode of operation is so obvious that it is excellent for students to learn the underlying principle of lease squares curve fitting. This article introduces the method of fitting nonlinear functions with Solver and extends the treatment to weighted least squares and to the estimation of uncertainties in the least-squares parameters.
How Graphing Calculators Find Curves of Best Fit
ERIC Educational Resources Information Center
Shore, Mark; Shore, JoAnna; Boggs, Stacey
2004-01-01
For over a decade mathematics instructors have been using graphing calculators in courses ranging from developmental mathematics (Beginning and Intermediate Algebra) to Calculus and Statistics. One of the key functions that make them so powerful in the teaching and learning process is their ability to find curves of best fit. Instructors may use…
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
BGFit: management and automated fitting of biological growth curves
2013-01-01
Background Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. Due to the increase in experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. Results BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing users to efficiently manage the results in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and users can easily add new models, thus expanding the current set. Conclusions BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing users to compare and evaluate different data modeling techniques. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, being fully scalable to a high number of projects, data and model complexity. PMID:24067087
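Fitting one of the growth models BGFit implements, e.g. Gompertz in the common Zwietering parameterisation, takes only a few lines outside any platform; the data below are synthetic with known parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Gompertz growth model, Zwietering parameterisation:
    A = asymptote, mu = maximum specific growth rate, lam = lag time."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# synthetic growth curve from known parameters (A=2, mu=0.4, lam=4)
t = np.linspace(0, 24, 25)
y = gompertz(t, 2.0, 0.4, 4.0)

popt, _ = curve_fit(gompertz, t, y, p0=[1.5, 0.3, 3.0])
```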
Non-linear curve fitting using Microsoft Excel solver.
Walsh, S; Diamond, D
1995-04-01
Solver, an analysis tool incorporated into Microsoft Excel V 5.0 for Windows, has been evaluated for solving non-linear equations. Test and experimental data sets have been processed, and the results suggest that Solver can be successfully used for modelling data obtained in many analytical situations (e.g. chromatography and FIA peaks, fluorescence decays and ISE response characteristics). The relatively simple user interface, and the fact that Excel is commonly bundled free with new PCs, make it an ideal tool for those wishing to experiment with solving non-linear equations without having to purchase and learn a completely new package. The dynamic display of the iterative search process enables the user to monitor the location of the optimum solution by the search algorithm. This, together with the almost universal availability of Excel, makes Solver an ideal vehicle for teaching the principles of iterative non-linear curve fitting techniques. In addition, complete control of the modelling process lies with the user, who must present the raw data and enter the equation of the model, in contrast to many commercial packages bundled with instruments which perform these operations with a 'black-box' approach.
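Solver's strategy, minimizing the sum of squared errors directly with a general-purpose optimizer, can be reproduced outside Excel; here for a fluorescence-decay-style model with synthetic data (illustrative, not the paper's data sets):

```python
import numpy as np
from scipy.optimize import minimize

# first-order fluorescence decay, y = A*exp(-t/tau), synthetic and noise-free
t = np.linspace(0, 10, 50)
y = 100.0 * np.exp(-t / 2.5)

def sse(p):
    """Objective cell a spreadsheet user would minimise: sum of squared errors."""
    A, tau = p
    return np.sum((y - A * np.exp(-t / tau)) ** 2)

# derivative-free direct search from a rough initial guess
res = minimize(sse, x0=[80.0, 1.0], method="Nelder-Mead")
A_fit, tau_fit = res.x
```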
Polynomial and catenary curve fits to human dental arches.
Pepe, S H
1975-01-01
Polynomial and catenary equations were fit by least-square-error methods to the dentitions of seven children with "normal" occlusion. Mean and mean square error were then used to analyze the accuracy of the curve fits and the asymmetries of the arches. A lack of congruency of the "lines of occlusion" common to the maxilla and mandible suggests that the defining anatomic landmarks are inaccurate. These analyses show that the coefficients of sixth-degree polynomial equations appear to have potential as clinical indicators of arch form and, perhaps, malocclusion.
Appropriate calibration curve fitting in ligand binding assays.
Findlay, John W A; Dillard, Robert F
2007-06-29
Calibration curves for ligand binding assays are generally characterized by a nonlinear relationship between the mean response and the analyte concentration. Typically, the response exhibits a sigmoidal relationship with concentration. The currently accepted reference model for these calibration curves is the 4-parameter logistic (4-PL) model, which optimizes accuracy and precision over the maximum usable calibration range. Incorporation of weighting into the model requires additional effort but generally results in improved calibration curve performance. For calibration curves with some asymmetry, introduction of a fifth parameter (5-PL) may further improve the goodness of fit of the experimental data to the algorithm. Alternative models should be used with caution and with knowledge of the accuracy and precision performance of the model across the entire calibration range, but particularly at upper and lower analyte concentration areas, where the 4- and 5-PL algorithms generally outperform alternative models. Several assay design parameters, such as placement of calibrator concentrations across the selected range and assay layout on multiwell plates, should be considered, to enable optimal application of the 4- or 5-PL model. The fit of the experimental data to the model should be evaluated by assessment of agreement of nominal and model-predicted data for calibrators.
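A minimal 4-PL fit and concentration back-calculation can be sketched as follows (synthetic calibrator responses with made-up parameters, not assay data):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4-parameter logistic: a = response at zero concentration,
    d = response at infinite concentration, c = inflection point (EC50),
    b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# synthetic calibrators across the sigmoidal range
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
resp = four_pl(conc, 0.05, 1.2, 12.0, 2.0)

popt, _ = curve_fit(four_pl, conc, resp, p0=[0.1, 1.0, 10.0, 2.0])
a, b, c, d = popt

# back-calculate the concentration giving the mid-range response:
# inverting the 4-PL gives x = c * ((a-d)/(y-d) - 1)^(1/b)
y_mid = 0.5 * (a + d)
x_back = c * ((a - d) / (y_mid - d) - 1.0) ** (1.0 / b)
```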
High-order wide-band frequency domain identification using composite curve fitting
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A method is presented for curve fitting nonparametric frequency domain data so as to identify a parametric model composed of two models in parallel, where each model has dynamics in a specified portion of the frequency band. This decomposition overcomes the problem of numerical sensitivity, since lower-order polynomials can be used compared to existing methods which estimate the model as a single entity. Consequently, composite curve fitting is useful for frequency domain identification of high-order systems and/or systems whose dynamics are spread over a large bandwidth. The approach can be extended to identify an arbitrary number of parallel subsystems in specified frequency regimes.
An automated fitting procedure and software for dose-response curves with multiphasic features
Veroli, Giovanni Y. Di; Fornari, Chiara; Goldlust, Ian; Mills, Graham; Koh, Siang Boon; Bramhall, Jo L; Richards, Frances M.; Jodrell, Duncan I.
2015-01-01
In cancer pharmacology (and many other areas), most dose-response curves are satisfactorily described by a classical Hill equation (i.e. a 4-parameter logistic). Nevertheless, there are instances where the marked presence of more than one point of inflection, or the presence of combined agonist and antagonist effects, prevents straightforward modelling of the data via a standard Hill equation. Here we propose a modified model and automated fitting procedure to describe dose-response curves with multiphasic features. The resulting general model enables interpreting each phase of the dose-response as an independent dose-dependent process. We developed an algorithm which automatically generates and ranks dose-response models with varying degrees of multiphasic features. The algorithm was implemented in the new, freely available Dr Fit software (sourceforge.net/projects/drfit/). We show how our approach is successful in describing dose-response curves with multiphasic features. Additionally, we analysed a large cancer cell viability screen involving 11650 dose-response curves. Based on our algorithm, we found that 28% of cases were better described by a multiphasic model than by the Hill model. We thus provide a robust approach to fit dose-response curves with various degrees of complexity, which, together with the provided software implementation, should enable a wide audience to easily process their own data. PMID:26424192
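The general model's idea, each phase treated as an independent dose-dependent process and summed, can be sketched with a generic sum of Hill-type phases (the exact Dr Fit model may differ; the phase parameters here are illustrative):

```python
import numpy as np

def multiphasic(x, phases):
    """Sum of independent Hill-type phases; each phase is a tuple
    (effect, ec50, hill). One phase recovers the classical Hill curve;
    two or more phases give multiple inflection points."""
    y = np.zeros_like(x, dtype=float)
    for effect, ec50, hill in phases:
        y += effect / (1.0 + (ec50 / x) ** hill)
    return y

x = np.logspace(-3, 2, 100)   # dose axis
# biphasic response: a partial effect at low dose, the remainder at high dose
y = multiphasic(x, [(0.4, 0.05, 2.0), (0.6, 5.0, 2.0)])
```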
The fitting of radial velocity curves using Adaptive Simulated Annealing
NASA Astrophysics Data System (ADS)
Iglesias-Marzoa, R.; López-Morales, M.; Arévalo Morales, M. J.
2015-05-01
We present a new code for fitting radial velocities of stellar binaries and exoplanets using an Adaptive Simulated Annealing (ASA) global minimisation method. ASA belongs to the family of Monte Carlo methods and its main advantages are that it only needs evaluations of the objective function, it does not rely on derivatives, and the parameter space can be periodically redefined and rescaled for individual parameters. ASA is easily scalable since the physics is concentrated in only one function and can be modified to account for more complex models. Our ASA code minimises the χ^2 function in the multidimensional parameter space to obtain the full set of parameters (P, T_p, e, ω, γ, K_1, K_2) of the Keplerian radial velocity curves which best represent the observations. As a comparison, we checked our results against the published solutions for several binary stars and exoplanets with available radial velocity data. We achieve good agreement within the limits imposed by the uncertainties.
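A similar global χ² fit can be sketched with scipy's dual_annealing (a related simulated-annealing variant, not the authors' ASA code), here for a circular orbit so the velocity curve reduces to a pure sinusoid with period P, reference epoch T0, systemic velocity γ and semi-amplitude K:

```python
import numpy as np
from scipy.optimize import dual_annealing

def rv(t, P, T0, gamma, K):
    """Circular-orbit radial velocity curve (eccentricity e = 0)."""
    return gamma + K * np.sin(2.0 * np.pi * (t - T0) / P)

# synthetic observations with known parameters and small noise
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 30.0, 40))
v = rv(t, 7.5, 1.0, -3.0, 12.0) + rng.normal(0.0, 0.3, t.size)

def chi2(p):
    """Objective function: only evaluations are needed, no derivatives."""
    return np.sum((v - rv(t, *p)) ** 2)

# bounded parameter space, searched globally
bounds = [(5.0, 10.0), (0.0, 10.0), (-10.0, 10.0), (1.0, 20.0)]
res = dual_annealing(chi2, bounds, maxiter=500, seed=7)
P_fit, T0_fit, gamma_fit, K_fit = res.x
```

T0 is only defined modulo P, so the fit may land on an equivalent epoch; the period and amplitude are the well-constrained quantities here.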
Curve fitting using logarithmic function for sea bed logging data
NASA Astrophysics Data System (ADS)
Daud, Hanita; Razali, Radzuan; Zaki, M. Ridhwan O.; Shafie, Afza
2014-10-01
The aim of this research work is to conduct curve fitting using mathematical equations that relate the location of the hydrocarbon (HC) at different depths to different frequencies. COMSOL Multiphysics software was used to generate models of the seabed logging technique consisting of air, sea water, sediment and HC layers. Seabed logging (SBL) is a technique for finding resistive layers under the seabed by transmitting low-frequency EM waves through sea water and sediment. As HC is known to have high resistivity, about 30-500 Ωm, the EM waves are guided and reflected back to receivers placed on the seafloor. In SBL, low frequencies are used to obtain greater wavelengths, which allow the EM waves to penetrate to longer distances; each frequency used has a different skin depth. The frequencies used in this project were 0.5 Hz, 0.25 Hz, 0.125 Hz and 0.0625 Hz, and the depth of the HC was varied from 1000 m to 3000 m in increments of 250 m. Data generated from the COMSOL simulations were extracted for set-ups with and without HC; trend lines were developed, and R2 was calculated for each equation and curve. The calculated R2 values were compared between data with and without HC at each depth, and the fits were found to be very close for deeper HC depths. This indicates that as the depth of the HC increases, it becomes difficult to distinguish data with and without HC present; perhaps a new technique can be explored.
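The trend-line-plus-R2 workflow described above can be sketched as follows; the logarithmic model y = a·ln(x) + b and the synthetic signal values are illustrative assumptions, not the paper's COMSOL data.

```python
import numpy as np

def fit_log_trend(x, y):
    """Least-squares fit of y = a*ln(x) + b; returns (a, b, R^2)."""
    a, b = np.polyfit(np.log(x), y, 1)
    predicted = a * np.log(x) + b
    ss_res = np.sum((y - predicted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical response versus HC depth (1000 m to 3000 m in 250 m steps)
depth = np.arange(1000.0, 3250.0, 250.0)
signal = 2.0 * np.log(depth) - 5.0        # exact logarithmic trend, for illustration
a, b, r2 = fit_log_trend(depth, signal)
```

Comparing the R2 of such fits with and without the HC layer present is the discrimination test the abstract describes.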
Dose-response curve estimation: a semiparametric mixture approach.
Yuan, Ying; Yin, Guosheng
2011-12-01
In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples.
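A minimal sketch of the mixture idea, assuming a linear parametric model, a Nadaraya-Watson nonparametric smoother, and a simple inverse-residual-sum-of-squares weighting; the paper's actual weighting and adaptive scheme are more refined.

```python
import numpy as np

def kernel_smooth(x, y, h):
    """Nadaraya-Watson estimate with a Gaussian kernel (nonparametric part)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, x.size)   # linear model is correct here

slope, intercept = np.polyfit(x, y, 1)
parametric = intercept + slope * x                  # parametric curve estimate
nonparametric = kernel_smooth(x, y, h=0.05)         # nonparametric curve estimate

# Assumed weighting: the better-fitting estimate (smaller RSS) gets more weight
rss_p = np.sum((y - parametric) ** 2)
rss_n = np.sum((y - nonparametric) ** 2)
weight = (1.0 / rss_p) / (1.0 / rss_p + 1.0 / rss_n)
mixture = weight * parametric + (1.0 - weight) * nonparametric
```

When the parametric assumption holds, as here, the mixture tracks the efficient parametric estimate; under misspecification the weight shifts toward the nonparametric one.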
Dimensionality reduction of hyperspectral data: band selection using curve fitting
NASA Astrophysics Data System (ADS)
Pal, Mahendra K.; Porwal, Alok
2016-04-01
Hyperspectral sensors offer narrow spectral bandwidths, facilitating better discrimination of various ground materials. However, the high spectral resolution of these sensors results in larger data volumes and thus poses computational challenges. The increased computational complexity limits the use of hyperspectral data where applications demand moderate accuracy but economy of processing and execution time. The high dimensionality of the feature space also adversely affects classification accuracy when the number of training samples is limited - a consequence of the Hughes effect. A reduction in the number of dimensions mitigates the Hughes effect, thus improving classification accuracy. Dimensionality reduction can be accomplished by: (i) feature selection, that is, selection of a sub-optimal subset of the original set of features, and (ii) feature extraction, that is, projection of the original feature space into a lower-dimensional subspace that preserves most of the information. In this contribution, we propose a novel method of feature selection that identifies and selects the optimal bands based on spectral decorrelation using a local curve fitting technique. The technique is implemented on Hyperion data of a study area from Western India. The results show that the proposed algorithm is efficient and effective in preserving the useful original information for better classification with reduced data size and dimension.
NASA Astrophysics Data System (ADS)
Liu, Chun-Hung; Ng, Hoi-Tou; Ng, Philip C. W.; Tsai, Kuen-Yu; Lin, Shy-Jay; Chen, Jeng-Homg
2008-11-01
Accelerating voltages as low as 5 kV for operation of electron-beam micro-columns, together with solutions to the throughput problem, are being considered for high-throughput direct-write lithography at the 22-nm half-pitch node and beyond. The development of efficient proximity effect correction (PEC) techniques at low voltage is essential to the overall technology. Realizing this approach requires a thorough understanding of electron scattering in solids, as well as precise data for fitting the energy intensity distribution in the resist. Although electron scattering has been intensively studied, we found from simulation results that the conventional gradient-based curve-fitting algorithms, merit functions, and performance index (PI) of the quality of the fit did not constitute a well-posed procedure. Therefore, we proposed a new fitting procedure adopting a direct-search fitting algorithm with a novel merit function. This procedure effectively mitigates the difficulties of conventional gradient-based curve-fitting algorithms: it is less sensitive to the choice of trial parameters, avoids numerical problems, and reduces fitting errors. We also proposed a new PI that describes the quality of the fit better than the conventional chi-square PI. An interesting result from applying the proposed procedure is that the absorbed electron energy density at 5 keV cannot be well represented by conventional multi-Gaussian models. Preliminary simulation shows that a combination of a single Gaussian and double exponential functions can better represent low-voltage electron scattering.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
NASA Astrophysics Data System (ADS)
Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei
2015-04-01
Anodic TiO2 nanotubes have been studied extensively for many years. However, the growth kinetics still remains unclear. A systematic study of the current transient under constant anodizing voltage has not been reported in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, theoretical expressions for the time-dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic currents and temperature, are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, this is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, the ionic and electronic currents both increase, and that temperature has a larger effect on the electronic current than on the ionic current. These results can promote research on the kinetics from a qualitative to a quantitative level.
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
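The Catmull-Rom segment equations the abstract refers to reduce to a cubic polynomial in the parameter t with constant coefficients; a direct transcription of the standard form (tension 0.5), where the segment interpolates between its two middle control points:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

By construction the curve passes through p1 at t = 0 and p2 at t = 1, with tangents set by the neighbouring points, which is why a chain of such segments interpolates every interior control point smoothly.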
The neural network approach to parton fitting
Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea
2005-10-06
We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits.
Curve fitting of aeroelastic transient response data with exponential functions
NASA Technical Reports Server (NTRS)
Bennett, R. M.; Desmarais, R. N.
1976-01-01
The extraction of frequency, damping, amplitude, and phase information from unforced transient response data is considered. These quantities are obtained from the parameters determined by fitting the digitized time-history data in a least-squares sense with complex exponential functions. The highlights of the method are described, and the results of several test cases are presented. The effects of noise are considered both by using analytical examples with random noise and by estimating the standard deviation of the parameters from maximum-likelihood theory.
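A sketch of the core idea: fitting a single decaying sinusoid to a noise-free transient by least squares to recover amplitude, damping, frequency and phase. The NASA program fitted sums of complex exponentials to digitized time histories, so this single-mode SciPy version is only an illustration with invented values.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, amp, decay, freq, phase):
    """One mode of a transient: amp * exp(-decay*t) * cos(2*pi*freq*t + phase)."""
    return amp * np.exp(-decay * t) * np.cos(2.0 * np.pi * freq * t + phase)

t = np.linspace(0.0, 2.0, 400)
record = damped_cosine(t, 1.5, 0.8, 3.0, 0.4)   # noise-free single-mode transient

popt, _ = curve_fit(damped_cosine, t, record, p0=[1.0, 0.5, 2.95, 0.0])
amp, decay, freq, phase = popt
```

With noisy data, the covariance output of the fit gives the parameter standard deviations that the abstract discusses via maximum-likelihood theory.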
Ding, Tao; Li, Cheng; Huang, Can; ...
2017-01-09
Here, in order to solve the reactive power optimization problem for joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master-slave structure, and improves on traditional centralized modeling methods by alleviating the big-data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. Using the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model that minimize the sum of squares of the residuals. The first optimization engine implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, the Design Optimization Tool, which also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL model through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs that are similar to those of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
Xu, Weiyin; Chen, Kejia; Liang, Dayang; Chew, Wee
2009-04-01
A soft-modeling multivariate numerical approach that combines self-modeling curve resolution (SMCR) and mixed Lorentzian-Gaussian curve fitting was successfully implemented for the first time to elucidate spatially and spectroscopically resolved spectral information from infrared imaging data of oral mucosa cells. A novel variant form of the robust band-target entropy minimization (BTEM) SMCR technique, coined as hierarchical BTEM (hBTEM), was introduced to first cluster similar cellular infrared spectra using the unsupervised hierarchical leader-follower cluster analysis (LFCA) and subsequently apply BTEM to clustered subsets of data to reconstruct three protein secondary structure (PSS) pure component spectra-alpha-helix, beta-sheet, and ambiguous structures-that associate with spatially differentiated regions of the cell infrared image. The Pearson VII curve-fitting procedure, which approximates a mixed Lorentzian-Gaussian model for spectral band shape, was used to optimally curve fit the resolved amide I and II bands of various hBTEM reconstructed PSS pure component spectra. The optimized Pearson VII band-shape parameters and peak center positions serve as means to characterize amide bands of PSS spectra found in various cell locations and for approximating their actual amide I/II intensity ratios. The new hBTEM methodology can also be potentially applied to vibrational spectroscopic datasets with dynamic or spatial variations arising from chemical reactions, physical perturbations, pathological states, and the like.
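The Pearson VII band shape used for the amide-band fitting has a standard closed form that interpolates between Lorentzian and Gaussian limits; the sketch below fits it to a synthetic band with SciPy, with all band parameters invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(x, amp, center, width, m):
    """Pearson VII band shape: Lorentzian as m -> 1, Gaussian as m -> infinity."""
    return amp / (1.0 + ((x - center) / width) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** m

wavenumber = np.linspace(1500.0, 1800.0, 300)           # amide I/II region, cm^-1
band = pearson_vii(wavenumber, 1.0, 1655.0, 20.0, 1.5)  # synthetic amide I band

popt, _ = curve_fit(pearson_vii, wavenumber, band, p0=[0.8, 1650.0, 25.0, 2.0])
```

The fitted peak centers and shape parameters are the quantities the paper uses to characterize the resolved secondary-structure spectra.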
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
A flight evaluation of curved landing approaches
NASA Technical Reports Server (NTRS)
Gee, S. W.; Barber, M. R.; Mcmurtry, T. C.
1972-01-01
The development of STOL technology for application to operational short-haul aircraft is accompanied by the requirement for solving problems in many areas. One of the most obvious problems is STOL aircraft operations in the terminal area. The increased number of terminal operations needed for an economically viable STOL system as compared with the current CTOL system and the incompatibility of STOL and CTOL aircraft speeds are positive indicators of an imminent problem. The high cost of aircraft operations, noise pollution, and poor short-haul service are areas that need improvement. A potential solution to some of the operational problems lies in the capability of making curved landing approaches under both visual and instrument flight conditions.
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
A Grid Algorithm for High Throughput Fitting of Dose-Response Curve Data
Wang, Yuhong; Jadhav, Ajit; Southal, Noel; Huang, Ruili; Nguyen, Dac-Trung
2010-01-01
We describe a novel algorithm, the Grid algorithm, and the corresponding computer program for high throughput fitting of dose-response curves that are described by the four-parameter symmetric logistic dose-response model. The Grid algorithm searches through all points in a grid of four dimensions (parameters) and finds the optimum one that corresponds to the best fit. Using simulated dose-response curves, we examined the Grid program’s performance in reproducing the actual values that were used to generate the simulated data and compared it with the DRC package for the language and environment R and the XLfit add-in for Microsoft Excel. The Grid program was robust and consistently recovered the actual values for both complete and partial curves with or without noise. Both DRC and XLfit performed well on data without noise, but they were sensitive to noise and their performance degraded rapidly with increasing noise. The Grid program is automated and scalable to millions of dose-response curves, and it is able to process 100,000 dose-response curves from high throughput screening experiments per CPU hour. The Grid program has the potential of greatly increasing the productivity of large-scale dose-response data analysis and early drug discovery processes, and it is also applicable to many other curve fitting problems in chemical, biological, and medical sciences. PMID:21331310
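The essence of the Grid algorithm, an exhaustive search of a four-dimensional parameter grid for the least-squares optimum, can be sketched as follows; the grid ranges and the simulated curve are invented, and the production program is far more elaborate.

```python
import numpy as np
from itertools import product

def logistic4(x, bottom, top, log_ec50, hill):
    """Four-parameter symmetric logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - np.log10(x)) * hill))

doses = np.logspace(-8, -3, 8)
observed = logistic4(doses, 0.0, 100.0, -5.5, 1.0)   # noise-free simulated curve

# Exhaustive search over a coarse 4-D parameter grid for the least-squares optimum
grid = product(np.linspace(-10, 10, 5),     # bottom
               np.linspace(80, 120, 5),     # top
               np.linspace(-7, -4, 13),     # log EC50
               np.linspace(0.5, 2.0, 7))    # Hill slope
best_params, best_sse = None, np.inf
for params in grid:
    sse = np.sum((observed - logistic4(doses, *params)) ** 2)
    if sse < best_sse:
        best_params, best_sse = params, sse
```

Because the search never takes gradient steps, it cannot diverge on noisy or partial curves, which is the robustness property the abstract reports.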
XECT--a least squares curve fitting program for generalized radiotracer clearance model.
Szczesny, S; Turczyński, B
1991-01-01
The program uses the joint Monte Carlo-Simplex algorithm for fitting the generalized, non-monoexponential model of externally detected decay of radiotracer activity in the tissue. The optimal values of the model parameters (together with the rate of the blood flow) are calculated. A table and plot of the experimental points and the fitted curve are generated. The program was written in Borland's Turbo Pascal 5.5 for the IBM PC XT/AT and compatible microcomputers.
Note: curve fit models for atomic force microscopy cantilever calibration in water.
Kennedy, Scott J; Cole, Daniel G; Clark, Robert L
2011-11-01
Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can vary by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%. © 2011 American Institute of Physics
An Algorithm for Obtaining Reliable Priors for Constrained-Curve Fits
Terrence Draper; Shao-Jing Dong; Ivan Horvath; Frank Lee; Nilmani Mathur; Jianbo Zhang
2004-03-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-chi-square fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass, where a priori values are not otherwise known. We illustrate the efficacy of the method with data from overlap fermions on a quenched 16³ × 28 lattice with spatial size La = 3.2 fm and pion mass as low as ≈180 MeV.
Baushke, Samuel W; Stedtfeld, Robert D; Tourlousse, Dieter M; Ahmad, Farhan; Wick, Lukas M; Gulari, Erdogan; Tiedje, James M; Hashsham, Syed A
2012-01-01
Non-equilibrium dissociation curves (NEDCs) have the potential to identify non-specific hybridizations on high throughput, diagnostic microarrays. We report a simple method for identification of non-specific signals by using a new parameter that does not rely on comparison of perfect match and mismatch dissociations. The parameter is the ratio of specific dissociation temperature (Td-w) to theoretical melting temperature (Tm) and can be obtained by automated fitting of a four-parameter, sigmoid, empirical equation to the thousands of curves generated in a typical experiment. The curves fit perfect match NEDCs from an initial experiment with an R2 of 0.998±0.006 and root mean square of 108±91 fluorescent units. Receiver operating characteristic curve analysis showed low temperature hybridization signals (20–48 °C) to be as effective as area under the curve as primary data filters. Evaluation of three datasets that target 16S rRNA and functional genes with varying degrees of target sequence similarity showed that filtering out hybridizations with Td-w/Tm < 0.78 greatly reduced false positive results. In conclusion, Td-w/Tm successfully screened many non-specific hybridizations that could not be identified using single temperature signal intensities alone, while the empirical modeling allowed a simplified approach to the high throughput analysis of thousands of NEDCs. PMID:22537822
Thermal performance curves of Paramecium caudatum: a model selection approach.
Krenek, Sascha; Berendonk, Thomas U; Petzoldt, Thomas
2011-05-01
The ongoing climate change has motivated numerous studies investigating the temperature response of various organisms, especially that of ectotherms. To correctly describe the thermal performance of these organisms, functions are needed which fit the complete optimum curve sufficiently well. Surprisingly, model comparisons for the temperature dependence of population growth rates of an important ectothermic group, the protozoa, are still missing. In this study, temperature reaction norms of natural isolates of the freshwater protist Paramecium caudatum were investigated, considering nearly the entire temperature range. These reaction norms were used to estimate thermal performance curves by applying a set of commonly used model functions. An information theory approach was used to compare models and to identify the best ones for describing these data. Our results indicate that the models which can describe negative growth at the high- and low-temperature branches of an optimum curve are preferable. This is a prerequisite for accurately calculating the critical upper and lower thermal limits. While we detected a temperature optimum of around 29 °C for all investigated clonal strains, the critical thermal limits were considerably different between individual clones. Here, the tropical clone showed the narrowest thermal tolerance, with a shift of its critical thermal limits to higher temperatures.
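An information-theoretic model comparison of the kind described can be sketched with two candidate performance curves, a symmetric parabola and an asymmetric Brière-type function, ranked by AIC; the growth-rate data below are invented to show the typical steep high-temperature decline, and neither function is necessarily among the paper's candidate set.

```python
import numpy as np
from scipy.optimize import curve_fit

def quadratic(T, a, T_opt, r_max):
    """Symmetric parabola: forced to decline equally fast on both sides."""
    return r_max - a * (T - T_opt) ** 2

def briere(T, a, T_min, T_max):
    """Briere-type curve: asymmetric, with zeros at both critical thermal limits."""
    return a * T * (T - T_min) * np.sqrt(np.clip(T_max - T, 0.0, None))

# Hypothetical growth-rate data with a steep decline above the optimum
T = np.array([5.0, 10, 15, 20, 25, 28, 30, 32, 34])
rate = np.array([0.05, 0.3, 0.7, 1.1, 1.5, 1.6, 1.5, 0.9, 0.1])

def aic(residuals, n_params):
    n = residuals.size
    return n * np.log(np.sum(residuals ** 2) / n) + 2 * n_params

p_quad, _ = curve_fit(quadratic, T, rate, p0=[0.005, 27.0, 1.5], maxfev=20000)
p_brie, _ = curve_fit(briere, T, rate, p0=[1e-4, 0.0, 36.0], maxfev=20000)
aic_quad = aic(rate - quadratic(T, *p_quad), 3)
aic_brie = aic(rate - briere(T, *p_brie), 3)
```

The asymmetric model, which can represent the abrupt high-temperature collapse, should obtain the lower AIC here, mirroring the paper's preference for functions that capture both thermal limits.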
ERIC Educational Resources Information Center
Hurtz, Gregory M.; Jones, J. Patrick
2009-01-01
Standard setting methods such as the Angoff method rely on judgments of item characteristics; item response theory empirically estimates item characteristics and displays them in item characteristic curves (ICCs). This study evaluated several indexes of rater fit to ICCs as a method for judging rater accuracy in their estimates of expected item…
Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis
ERIC Educational Resources Information Center
Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.
2010-01-01
The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo, Urbano
2000-01-01
Describes a program for computing different person-fit measures under different parametric item response models for binary items. The indexes can be computed for the Rasch model and the two- and three-parameter logistic models. The program can plot person response curves to allow the researchers to investigate the nonfitting response behavior of…
Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.
Mohammed, Goran Abdulrahman; Hou, Ming
2016-03-01
The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model along with several other existing models using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit the simulated and experimental data reasonably well, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function outperform the other functions in terms of the statistical scores root mean squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6% for simulated data and 0.2-5% for experimental data). The proposed asymmetric Gaussian model, and the method of parametrization of this and the other force-length models mentioned above, can be used in studies of the active force-length relationships of skeletal muscles that generate forces to cause movements of human and animal bodies.
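One plausible form of an asymmetric Gaussian force-length model uses separate widths for the ascending and descending limbs; the paper's exact parametrization may differ, and the data below are synthetic. A least-squares fit with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_gaussian(l, f_max, l_opt, w_asc, w_desc):
    """Asymmetric Gaussian: separate widths below/above the optimal length."""
    w = np.where(l < l_opt, w_asc, w_desc)
    return f_max * np.exp(-((l - l_opt) ** 2) / (2.0 * w ** 2))

lengths = np.linspace(0.6, 1.6, 40)                      # normalized muscle length
force = asym_gaussian(lengths, 1.0, 1.05, 0.15, 0.25)    # synthetic force-length data

popt, _ = curve_fit(asym_gaussian, lengths, force,
                    p0=[0.9, 1.0, 0.2, 0.2], maxfev=20000)
f_max, l_opt, w_asc, w_desc = popt
```

The two widths let the descending limb fall off more gently than the ascending limb rises, which a symmetric Gaussian cannot represent.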
Dong, Lixin; He, Lian; Lin, Yu; Shang, Yu; Yu, Guoqiang
2015-01-01
Near-infrared diffuse correlation spectroscopy (DCS) has recently been employed for noninvasive acquisition of blood flow information in deep tissues. Based on the established correlation diffusion equation, the light intensity autocorrelation function detected by DCS is determined by a blood flow index αDB, tissue absorption coefficient μa, reduced scattering coefficient μs′, and a coherence factor β. The present study is designed to investigate the possibility of extracting multiple parameters such as μa, μs′, β, and αDB by fitting one single autocorrelation function curve, and to evaluate the performance of different fitting methods. For this purpose, computer simulations, tissue-like phantom experiments and in-vivo tissue measurements were utilized. The results suggest that it is impractical to simultaneously fit αDB and μa or αDB and μs′ from one single autocorrelation function curve due to the large crosstalk between these paired parameters. However, simultaneously fitting β and αDB is feasible and generates more accurate estimates with smaller standard deviations compared to the conventional two-step fitting method (i.e., first calculating β and then fitting αDB). The outcomes from this study provide crucial guidance for DCS data analysis. PMID:23193446
Rational quadratic Bézier curve fitting by simulated annealing technique
NASA Astrophysics Data System (ADS)
Mohamed, Najihah; Abd Majid, Ahmad; Mt Piah, Abd Rahni
2013-04-01
A metaheuristic approximation algorithm, simulated annealing, is implemented to obtain the best rational quadratic Bézier curve from a given set of data points. The technique minimizes the sum of squared errors to improve the position of the middle control point and the value of the weight. As a result, the best-fitted rational quadratic Bézier curve, together with the mathematical function that represents all the given data points, is obtained. Numerical and graphical examples are presented to demonstrate the effectiveness and robustness of the proposed method.
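A minimal sketch of this approach, assuming fixed end control points and a uniform parameterization of the data (both simplifications are assumptions, not stated in the abstract):

```python
import numpy as np

def rq_bezier(t, p0, p1, p2, w):
    """Rational quadratic Bezier curve with middle-control-point weight w."""
    b0, b1, b2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2
    den = b0 + w * b1 + b2
    num = np.outer(b0, p0) + w * np.outer(b1, p1) + np.outer(b2, p2)
    return num / den[:, None]

def sse(p1, w, t, data, p0, p2):
    """Sum of squared errors between the curve and the data points."""
    return float(np.sum((rq_bezier(t, p0, p1, p2, w) - data) ** 2))

rng = np.random.default_rng(1)
p0, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])    # fixed end points
t = np.linspace(0.0, 1.0, 25)                          # uniform parameterization
data = rq_bezier(t, p0, np.array([0.5, 1.0]), p2, 2.0) # target data points

# Simulated annealing over the middle control point and the weight.
state = (np.array([0.0, 0.0]), 1.0)
e = sse(*state, t, data, p0, p2)
best, best_e = state, e
temp = 1.0
for _ in range(4000):
    p1_new = state[0] + rng.normal(0.0, 0.1, 2)
    w_new = max(state[1] + rng.normal(0.0, 0.1), 0.05)  # keep weight positive
    e_new = sse(p1_new, w_new, t, data, p0, p2)
    if e_new < e or rng.random() < np.exp((e - e_new) / temp):
        state, e = (p1_new, w_new), e_new
        if e < best_e:
            best, best_e = state, e
    temp *= 0.999  # geometric cooling schedule
print(best, best_e)
```

The Metropolis acceptance rule lets the search escape local minima early on, while the cooling schedule makes it increasingly greedy.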
STRITERFIT, a least-squares pharmacokinetic curve-fitting package using a programmable calculator.
Thornhill, D P; Schwerzel, E
1985-05-01
A program is described that permits iterative least-squares nonlinear regression fitting of polyexponential curves using the Hewlett Packard HP 41 CV programmable calculator. The program enables the analysis of pharmacokinetic drug level profiles with a high degree of precision. Up to 15 data pairs can be used, and initial estimates of curve parameters are obtained with a stripping procedure. Up to four exponential terms can be accommodated by the program, and there is the option of weighting data according to their reciprocals. Initial slopes cannot be forced through zero. The program may be interrupted at any time in order to examine convergence.
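The general procedure, stripping for initial estimates followed by iterative least-squares refinement, can be sketched in Python for a biexponential profile; the data and constants are illustrative, not from STRITERFIT:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, alpha, b, beta):
    """Biexponential drug-level profile C(t) = A*exp(-alpha*t) + B*exp(-beta*t)."""
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

t = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0,
              6.0, 8.0, 10.0, 12.0, 16.0, 20.0, 24.0])
c = biexp(t, 8.0, 1.2, 2.0, 0.1)  # noiseless example profile

# Stripping: fit the terminal (slow) phase on a log scale, subtract it,
# then fit the remaining fast phase to get initial estimates.
tail = t >= 8.0
slope_b, icept_b = np.polyfit(t[tail], np.log(c[tail]), 1)
b0, beta0 = np.exp(icept_b), -slope_b
resid = c - b0 * np.exp(-beta0 * t)
head = resid > 0.05
slope_a, icept_a = np.polyfit(t[head], np.log(resid[head]), 1)
p0 = [np.exp(icept_a), -slope_a, b0, beta0]

popt, _ = curve_fit(biexp, t, c, p0=p0)  # nonlinear least-squares refinement
print(popt)
```

On clean data the refinement recovers the generating parameters essentially exactly; the stripping step only has to get the starting values into the right basin.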
Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers
NASA Astrophysics Data System (ADS)
Tananaev, N. I.
2015-03-01
Published suspended sediment data for Arctic rivers are scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Because of the biased sampling strategy, the raw datasets do not exhibit a log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as the optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.
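A sketch of fitting a power-law rating curve with the Levenberg-Marquardt algorithm (synthetic data; the coefficients are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(q, a, b):
    """Power-law sediment rating curve Qs = a * Q**b."""
    return a * q ** b

rng = np.random.default_rng(2)
q = np.linspace(50.0, 2000.0, 60)  # discharge
qs = rating_curve(q, 0.003, 1.6) * rng.lognormal(0.0, 0.1, q.size)

# Direct non-linear fit in linear space; scipy's "lm" method is
# Levenberg-Marquardt, avoiding the retransformation bias a log-log
# linear fit would suffer on non-log-normal data.
popt, _ = curve_fit(rating_curve, q, qs, p0=[0.01, 1.0],
                    method="lm", maxfev=20000)
print(popt)
```

The exponent is recovered close to its generating value despite the multiplicative noise.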
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented along with the limited number of changes that are required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
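The core of the method, retaining only the first-order term of the model expansion so that each iteration reduces to a set of simultaneous linear equations solved by matrix algebra, can be sketched as follows (the exponential model is an assumed example, not the report's):

```python
import numpy as np

def model(x, a):
    """Assumed example model: a[0] * exp(-a[1] * x)."""
    return a[0] * np.exp(-a[1] * x)

def jacobian(x, a):
    """Analytic derivatives of the model with respect to each parameter."""
    e = np.exp(-a[1] * x)
    return np.column_stack([e, -a[0] * x * e])

x = np.linspace(0.0, 4.0, 30)
y = model(x, np.array([2.0, 0.7]))  # noiseless target data
a = np.array([1.5, 0.5])            # initial parameter estimates

# Quadratic expansion of chi-square: the linearized normal equations
#   (J^T J) delta = J^T r   are solved at each iteration.
for _ in range(50):
    r = y - model(x, a)
    J = jacobian(x, a)
    delta = np.linalg.solve(J.T @ J, J.T @ r)
    a = a + delta
    if np.max(np.abs(delta)) < 1e-12:
        break
print(a)
```

Each pass solves the linear system for the parameter increment and stops once the increments become negligible.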
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved using Matlab. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is ideal.
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
Pearson's system of frequency curves is generated by solutions to the differential equation y' = (x + a)y / (b0 + b1*x + b2*x^2), where a and the b's are constants. The technique creates a system of nonlinear equations from the method of moments and uses Powell's method to find the H-function which gives the best fit to a set of data.
A numerical method for biphasic curve fitting with a programmable calculator.
Ristanović, D; Ristanović, D; Malesević, J; Milutinović, B
1982-01-01
Elimination kinetics of bromsulphalein (BSP) after a single injection into the circulation of rats were examined by means of a four-compartment model. BSP plasma concentrations were measured colorimetrically. A program written for the Texas Instruments TI-59 programmable calculator is presented, which will calculate the fractional blood clearance of BSP using an iteration procedure. A simple method of fitting biphasic decay curves to experimental data is also proposed.
Liao, Fei; Tian, Kao-Cong; Yang, Xiao; Zhou, Qi-Xin; Zeng, Zhao-Chun; Zuo, Yu-Ping
2003-03-01
The reliability of kinetic substrate quantification by nonlinear fitting of the enzyme reaction curve to the integrated Michaelis-Menten equation was investigated by both simulation and preliminary experimentation. For simulation, the product absorptivity epsilon was 3.00 mmol(-1) L cm(-1) and K(m) was 0.10 mmol L(-1), and a uniform absorbance error sigma was randomly inserted into the error-free reaction curve of product absorbance A(i) versus reaction time t(i) calculated according to the integrated Michaelis-Menten equation. The experimental reaction curve of arylesterase acting on phenyl acetate was monitored by phenol absorbance at 270 nm. The maximal product absorbance A(m) was predicted by nonlinear fitting of the reaction curve to Eq. (1) with K(m) as a constant. There was a unique A(m) for the best fit of each of the simulated and experimental reaction curves. Neither the error in reaction origin nor the variation of enzyme activity changed the background-corrected value of A(m), but the range of data under analysis, the background absorbance, and the absorbance error sigma all had an effect. By simulation, A(m) from 0.150 to 3.600 was predicted with reliability and linear response to substrate concentration when there was 80% consumption of substrate at a sigma of 0.001. Restriction of absorbance to 0.700 enabled A(m) up to 1.800 to be predicted at a sigma of 0.001. The detection limit reached an A(m) of 0.090 at a sigma of 0.001. By experimentation, the reproducibility was 4.6% at a substrate concentration twice the K(m), and A(m) responded linearly to phenyl acetate with a consistent absorptivity for phenol and an upper limit about twice the maximum experimental absorbance. These results supported the reliability of this new kinetic method for enzymatic analysis with enhanced upper limit and precision.
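A sketch of the fitting step, assuming reaction time is expressed as a function of product absorbance via the integrated Michaelis-Menten equation with K(m) held constant (all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

EPS, KM = 3.00, 0.10  # product absorptivity (mmol^-1 L cm^-1) and Km (mmol/L)

def t_of_a(a, a_m, v_max):
    """Integrated Michaelis-Menten equation rearranged as reaction time
    versus product absorbance A, with Km held constant."""
    return (a / EPS + KM * np.log(a_m / (a_m - a))) / v_max

a_m_true, v_max_true = 0.90, 0.02    # maximal absorbance; rate in mmol/(L s)
a = np.linspace(0.05, 0.72, 30)      # data up to 80% substrate consumption
t = t_of_a(a, a_m_true, v_max_true)  # noiseless reaction curve (A_i, t_i)

# Bounds keep a_m above the largest measured absorbance during the search.
popt, _ = curve_fit(t_of_a, a, t, p0=[1.2, 0.01],
                    bounds=([0.73, 1e-4], [5.0, 1.0]))
print(popt)
```

On clean data both A(m) and the rate are recovered, illustrating why a unique best-fitting A(m) exists when enough of the curve is sampled.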
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing of large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second-order polynomial curve fit is required to obtain the residual flexibility term. A weighting function for the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
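The weighting idea, down-weighting ragged regions according to the variance of neighboring points before a second-order polynomial fit, might be sketched as follows (synthetic residual function; the window size and weight form are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.linspace(1.0, 50.0, 200)  # frequency axis
smooth = 1e-6 + 4e-10 * f ** 2   # flat line with slight upward curvature
ragged = smooth * (1.0 + rng.normal(0.0, 0.2, f.size) * (f > 30.0))

# Weight each point by the inverse local standard deviation of its
# neighbours, so ragged regions count less in the polynomial fit.
win = 5
local_var = np.array([np.var(ragged[max(0, i - win):i + win + 1])
                      for i in range(f.size)])
w = 1.0 / np.sqrt(local_var + 1e-30)  # np.polyfit expects 1/sigma weights

coeffs = np.polyfit(f, ragged, 2, w=w)
residual_flexibility = np.polyval(coeffs, 0.0)  # intercept of the weighted fit
print(coeffs, residual_flexibility)
```

The quiet low-frequency region dominates the fit, so the extrapolated intercept (the residual flexibility term here) stays close to the underlying smooth curve.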
Estimating equilibrium formation temperature by the curve fitting method and its problems
Kenso Takai; Masami Hyodo; Shinji Takasugi
1994-01-20
Determination of the true formation temperature from measured bottom-hole temperatures is important for geothermal reservoir evaluation after completion of well drilling. For estimation of the equilibrium formation temperature, we studied a non-linear least-squares fitting method adopting the Middleton model (Chiba et al., 1988). This method proved applicable as a simple and relatively reliable way of estimating the equilibrium formation temperature after drilling. As a next step, we are studying the estimation of equilibrium formation temperature from bottom-hole temperature data measured by MWD (measurement-while-drilling) systems. In this study, we have evaluated the applicability of the non-linear least-squares curve-fitting method and of the numerical simulator GEOTEMP2 for estimating the equilibrium formation temperature while drilling.
Ying Chen; Shao-Jing Dong; Terrence Draper; Ivan Horvath; Keh-Fei Liu; Nilmani Mathur; Sonali Tamhankar; Cidambi Srinivasan; Frank X. Lee; Jianbo Zhang
2004-05-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-χ² fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass where a priori values are not otherwise known. Lessons learned (including caveats limiting the scope of the method) from studying artificial data are presented. As an illustration, from local-local two-point correlation functions, we obtain masses and spectral weights for ground and first-excited states of the pion, give preliminary fits for the a₀ where ghost states (a quenched artifact) must be dealt with, and elaborate on the details of fits of the Roper resonance and S₁₁(N^(1/2-)) previously presented elsewhere. The data are from overlap fermions on a quenched 16³ × 28 lattice with spatial size La = 3.2 fm and pion mass as low as ≈180 MeV.
Measurement of focused ultrasonic fields based on colour edge detection and curve fitting
NASA Astrophysics Data System (ADS)
Zhu, H.; Chang, S.; Yang, P.; He, L.
2016-03-01
This paper first establishes a measurement system consisting of a scanning device and an optical fiber hydrophone, and then proposes parameter measurement of the focused transducer based on edge detection of the visualized acoustic data and curve fitting. The measurement system consists of a water tank with a wedge absorber, stepper motor drivers, a system controller, a focused transducer, an optical fiber hydrophone and data processing software. On the basis of the visualized processing of the original scanned data, the -3 dB beam width of the focused transducer is calculated using edge detection on the visualized acoustic image and a circle-fitting method that minimizes the algebraic distance. Experiments on the visualized ultrasound data are implemented to verify the feasibility of the proposed method. The data obtained from the scanning device are utilized to reconstruct acoustic fields, and it is found that the -3 dB beam width of the focused transducer can be predicted accurately.
Combined use of Tikhonov deconvolution and curve fitting for spectrogram interpretation
Morawski, R.Z.; Miekina, A.; Barwicz, A.
1996-12-31
The problem of numerical correction of spectrograms is addressed. A new method of correction is developed which consists of sequential use of the Tikhonov deconvolution algorithm, for estimating the positions of spectral peaks, and a curve-fitting algorithm, for estimating their magnitudes. The metrological and numerical properties of the proposed method for spectrogram interpretation are assessed by means of spectrometry-based criteria, using synthetic and real-world spectrograms. Conclusions are drawn concerning computational complexity and accuracy of the proposed method and its metrological applicability. 22 refs., 3 figs., 1 tab.
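A toy version of the two-stage idea, Tikhonov deconvolution to locate peak positions followed by curve fitting for their magnitudes, under assumed Gaussian shapes for both the peaks and the apparatus function:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, s):
    return a * np.exp(-((x - mu) ** 2) / (2.0 * s ** 2))

x = np.linspace(0.0, 10.0, 400)
truth = gauss(x, 1.0, 4.0, 0.15) + gauss(x, 0.6, 4.8, 0.15)  # two close peaks

# Apparatus function: Gaussian blur applied as a matrix (rows normalized).
h = gauss(x[:, None] - x[None, :], 1.0, 0.0, 0.4)
h /= h.sum(axis=1, keepdims=True)
spec = h @ truth  # observed spectrogram

# Stage 1: Tikhonov deconvolution estimates the peak positions.
lam = 1e-3
est = np.linalg.solve(h.T @ h + lam * np.eye(x.size), h.T @ spec)
idx = np.where((est > np.roll(est, 1)) & (est > np.roll(est, -1)))[0]
idx = idx[np.argsort(est[idx])][-2:]  # keep the two strongest local maxima
peaks = np.sort(x[idx])

# Stage 2: with positions fixed, curve fitting estimates the magnitudes.
def blurred_pair(x_, a1, a2, s):
    return h @ (gauss(x, a1, peaks[0], s) + gauss(x, a2, peaks[1], s))

popt, _ = curve_fit(blurred_pair, x, spec, p0=[0.5, 0.5, 0.2])
print(peaks, popt[:2])
```

Deconvolution alone gives noisy magnitudes but good positions; the subsequent fit against the original (unsharpened) spectrogram recovers the magnitudes.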
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
A novel curve fitting method for AV optimisation of biventricular pacemakers.
Dehbi, Hakim-Moulay; Jones, Siana; Sohaib, S M Afzal; Finegold, Judith A; Siggers, Jennifer H; Stegemann, Berthold; Whinnett, Zachary I; Francis, Darrel P
2015-09-01
In this study, we designed and tested a new algorithm, which we call the 'restricted parabola', to identify the optimum atrioventricular (AV) delay in patients with biventricular pacemakers. This algorithm automatically restricts the hemodynamic data used for curve fitting to the parabolic zone in order to avoid inadvertently selecting an AV optimum that is too long. We used R, a programming language and software environment for statistical computing, to create an algorithm which applies multiple different cut-offs to partition curve fitting of a dataset into a parabolic and a plateau region and then selects the best cut-off using a least squares method. In 82 patients, AV delay was adjusted and beat-to-beat systolic blood pressure (SBP) was measured non-invasively using our multiple-repetition protocol. The novel algorithm was compared to fitting a parabola across the whole dataset to identify how many patients had a plateau region, and whether a higher hemodynamic response was achieved with one method. In 9/82 patients, the restricted parabola algorithm detected that the pattern was not parabolic at longer AV delays. For these patients, the optimal AV delay predicted by the restricted parabola algorithm increased SBP by 1.36 mmHg above that predicted by the conventional parabolic algorithm (95% confidence interval: 0.65 to 2.07 mmHg, p-value = 0.002). AV optima selected using our novel restricted parabola algorithm give a greater improvement in acute hemodynamics than fitting a parabola across all tested AV delays. Such an algorithm may assist the development of automated methods for biventricular pacemaker optimisation.
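The cut-off search can be sketched as follows (synthetic SBP data; the exact partitioning and plateau model used in the paper may differ):

```python
import numpy as np

def fit_restricted_parabola(av, sbp):
    """Fit a parabola up to a cut-off and a flat plateau beyond it,
    choosing the cut-off that minimizes total squared error."""
    best = None
    for k in range(4, av.size + 1):  # at least 4 points for the parabola
        coeffs = np.polyfit(av[:k], sbp[:k], 2)
        pred = np.polyval(coeffs, av[:k])
        plateau = sbp[k:].mean() if k < av.size else 0.0
        err = np.sum((sbp[:k] - pred) ** 2) + np.sum((sbp[k:] - plateau) ** 2)
        if best is None or err < best[0]:
            best = (err, coeffs, k)
    _, coeffs, k = best
    av_opt = -coeffs[1] / (2 * coeffs[0])  # vertex of the fitted parabola
    return av_opt, k

av = np.arange(40, 321, 40, dtype=float)  # tested AV delays, ms
sbp = 120 + np.where(av <= 200, -0.001 * (av - 140) ** 2, -0.0036)
av_opt, cutoff = fit_restricted_parabola(av, sbp)
print(av_opt, cutoff)
```

Because the long-delay points form a plateau, fitting the parabola only to the restricted zone places the vertex at the true optimum instead of dragging it toward longer delays.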
Curve fitting toxicity test data: Which comes first, the dose response or the model?
Gully, J.; Baird, R.; Bottomley, J.
1995-12-31
The probit model frequently does not fit the concentration-response curve of NPDES toxicity test data and non-parametric models must be used instead. The non-parametric models, trimmed Spearman-Karber, ICp, and linear interpolation, all require a monotonic concentration-response. Any deviation from a monotonic response is smoothed to obtain the desired concentration-response characteristics. Inaccurate point estimates may result from such procedures and can contribute to imprecision in replicate tests. The following study analyzed reference toxicant and effluent data from giant kelp (Macrocystis pyrifera), purple sea urchin (Strongylocentrotus purpuratus), red abalone (Haliotis rufescens), and fathead minnow (Pimephales promelas) bioassays using commercially available curve fitting software. The purpose was to search for alternative parametric models which would reduce the use of non-parametric models for point estimate analysis of toxicity data. Two non-linear models, power and logistic dose-response, were selected as possible alternatives to the probit model based upon their toxicological plausibility and ability to model most data sets examined. Unlike non-parametric procedures, these and all parametric models can be statistically evaluated for fit and significance. The use of the power or logistic dose-response models increased the percentage of parametric model fits for each protocol and toxicant combination examined. The precision of the selected non-linear models was also compared with the EPA-recommended point estimation models at several effect levels. In general, the precision of the alternative models was equal to or better than the traditional methods. Finally, use of the alternative models usually produced more plausible point estimates in data sets where the effects of smoothing and non-parametric modeling made the point estimate results suspect.
NASA Astrophysics Data System (ADS)
Pineda, Juan C. B.; Hayward, Christopher C.; Springel, Volker; Mendes de Oliveira, Claudia
2017-04-01
We study the role of systematic effects in observational studies of the cusp-core problem under the minimum disc approximation using a suite of high-resolution (25-pc softening length) hydrodynamical simulations of dwarf galaxies. We mimic realistic kinematic observations and fit the mock rotation curves with two analytic models commonly used to differentiate cores from cusps in the dark matter distribution. We find that the cored pseudo-isothermal sphere (ISO) model is strongly favoured by the reduced χ²ν of the fits in spite of the fact that our simulations contain cuspy Navarro-Frenk-White (NFW) profiles. We show that even idealized measurements of the gas circular motions can lead to the incorrect answer if velocity underestimates induced by pressure support, with a typical size of order ∼5 km s⁻¹ in the central kiloparsec, are neglected. Increasing the spatial resolution of the mock observations leads to more misleading results because the inner region, where the effect of pressure support is most significant, is better sampled. Fits to observations with a spatial resolution of 100 pc (2 arcsec at 10 Mpc) favour the ISO model in 78-90 per cent of the cases, while at 800-pc resolution, 41-77 per cent of the galaxies indicate the fictitious presence of a dark matter core. The coefficients of our best-fitting models agree well with those reported in observational studies; therefore, we conclude that NFW haloes cannot be ruled out reliably from this type of analysis.
Another Approach to Titration Curves: Which is the Dependent Variable?
ERIC Educational Resources Information Center
Willis, Christopher J.
1981-01-01
Describes an approach to titration curves which eliminates the choice of approximations and the algebraic complexity of accurately solving for hydrogen ion concentration. This technique regards moles of titrant as the dependent variable. (CS)
A Healthy Approach to Fitness Center Security.
ERIC Educational Resources Information Center
Sturgeon, Julie
2000-01-01
Examines techniques for keeping college fitness centers secure while maintaining an inviting atmosphere. Building access control, preventing locker room theft, and suppressing causes for physical violence are discussed. (GR)
Merlos Rodrigo, Miguel Angel; Molina-López, Jorge; Jimenez Jimenez, Ana Maria; Planells Del Pozo, Elena; Adam, Pavlina; Eckschlager, Tomas; Zitka, Ondrej; Richtera, Lukas; Adam, Vojtech
2017-01-01
The translation of metallothioneins (MTs) is one of the defense strategies by which organisms protect themselves from metal-induced toxicity. MTs belong to a family of proteins comprising MT-1, MT-2, MT-3, and MT-4 classes, with multiple isoforms within each class. The main aim of this study was to determine the behavior of MT in dependence on various externally modelled environments, using electrochemistry. In our study, the mass distribution of MTs was characterized using MALDI-TOF. After that, adsorptive transfer stripping technique with differential pulse voltammetry was selected for optimization of electrochemical detection of MTs with regard to accumulation time and pH effects. Our results show that utilization of 0.5 M NaCl, pH 6.4, as the supporting electrolyte provides a highly complicated fingerprint, showing a number of non-resolved voltammograms. Hence, we further resolved the voltammograms exhibiting the broad and overlapping signals using curve fitting. The separated signals were assigned to the electrochemical responses of several MT complexes with zinc(II), cadmium(II), and copper(II), respectively. Our results show that electrochemistry could serve as a great tool for metalloproteomic applications to determine the ratio of metal ion bonds within the target protein structure, however, it provides highly complicated signals, which require further resolution using a proper statistical method, such as curve fitting. PMID:28287470
Calculations and curve fits of thermodynamic and transport properties for equilibrium air to 30000 K
NASA Technical Reports Server (NTRS)
Gupta, Roop N.; Lee, Kam-Pui; Thompson, Richard A.; Yos, Jerrold M.
1991-01-01
A self-consistent set of equilibrium air values was computed for enthalpy, total specific heat at constant pressure, compressibility factor, viscosity, total thermal conductivity, and total Prandtl number from 500 to 30,000 K over a pressure range of 10^-4 atm to 10^2 atm. The mixture values are calculated from the transport and thermodynamic properties of the individual species provided in a recent study by the authors. The concentrations of the individual species, required in the mixture relations, are obtained from a free energy minimization calculation procedure. Present calculations are based on an 11-species air model. For pressures less than 10^-2 atm and temperatures of about 15,000 K and greater, the concentrations of N++ and O++ become important, and consequently, they are included in the calculations determining the various properties. The computed properties are curve fitted as a function of temperature at a constant value of pressure. These curve fits reproduce the computed values within 5 percent for the entire temperature range considered here at specific pressures and provide an efficient means for computing the flowfield properties of equilibrium air, provided the elemental composition remains constant at 0.24 for oxygen and 0.76 for nitrogen by mass.
An iterative curve fitting method for accurate calculation of quality factors in resonators.
Naeli, Kianoush; Brand, Oliver
2009-04-01
A new method for eliminating the noise effect in interpreting the measured magnitude transfer characteristic of a resonator, in particular in extracting the Q-factor, is proposed and successfully tested. In this method the noise contribution to the measured power spectral density of the resonator is iteratively excluded through a sequence of least-squares curve fits. The advantage of the presented method becomes more tangible when the signal-to-noise power ratio (SNR) is close to unity. A set of experiments for a resonant cantilever vibrating at different amplitudes has shown that when the SNR is less than 10, the Q-factors extracted by conventional methods, i.e., the 3 dB bandwidth method and a single least-squares curve fit, exhibit significant deviations from the actual Q-factor, while the result of the proposed iterative method remains within a 5% margin of error even for an SNR of unity. This method is especially useful when no specific data is available about the measurement noise, except the assumption that the noise spectral density is constant over the measured bandwidth.
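A sketch of the iterative noise exclusion, assuming a white (constant) noise floor added to a Lorentzian magnitude-squared response; the resonator model and constants are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, a, f0, q):
    """Magnitude-squared resonator response without any noise term."""
    return a / ((f ** 2 - f0 ** 2) ** 2 + (f0 * f / q) ** 2)

f = np.linspace(9.0, 11.0, 400)
true_q, noise_floor = 50.0, 3e-3  # SNR near unity far from resonance
psd = lorentzian(f, 1.0, 10.0, true_q) + noise_floor

# Iteratively exclude the unknown noise contribution: fit the resonance,
# estimate the floor from the off-resonance residual, subtract, refit.
floor = 0.0
for _ in range(10):
    popt, _ = curve_fit(lorentzian, f, psd - floor, p0=[1.0, 10.0, 30.0])
    off = np.abs(f - popt[1]) > 0.5
    floor = np.mean(psd[off] - lorentzian(f[off], *popt))
print(popt[2], floor)
```

A single fit over the raw spectrum would absorb the floor into the Lorentzian and bias the Q-factor; the iteration converges to the floor level and an unbiased Q.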
Bayesian fitting of a logistic dose-response curve with numerically derived priors.
Huson, L W; Kinnersley, N
2009-01-01
In this report we describe the Bayesian analysis of a logistic dose-response curve in a Phase I study, and we present two simple and intuitive numerical approaches to construction of prior probability distributions for the model parameters. We combine these priors with the expert prior opinion and compare the results of the analyses with those obtained from the use of alternative prior formulations.
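One simple numerical route is a grid evaluation of the posterior with Gaussian priors; the prior moments, data, and grids below are assumed for illustration (the report's numerically derived priors were constructed differently):

```python
import numpy as np

# Dose-response data: dose, subjects per group, responders (illustrative).
dose = np.array([1.0, 2.0, 4.0, 8.0])
n = np.array([6, 6, 6, 6])
r = np.array([0, 1, 3, 5])

def loglik(a, b):
    """Binomial log-likelihood of a logistic curve in log-dose."""
    p = 1.0 / (1.0 + np.exp(-(a + b * np.log(dose))))
    return np.sum(r * np.log(p) + (n - r) * np.log(1.0 - p))

# Gaussian priors whose moments stand in for numerically derived ones.
a_grid = np.linspace(-8.0, 4.0, 241)
b_grid = np.linspace(0.0, 6.0, 241)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
log_prior = -((A + 3.0) ** 2) / (2 * 2.0 ** 2) - ((B - 1.5) ** 2) / (2 * 1.0 ** 2)
log_post = log_prior + np.vectorize(loglik)(A, B)
post = np.exp(log_post - log_post.max())
post /= post.sum()

a_mean = float(np.sum(post * A))
b_mean = float(np.sum(post * B))
ed50 = np.exp(-a_mean / b_mean)  # plug-in dose at 50% response
print(a_mean, b_mean, ed50)
```

The grid approach avoids MCMC entirely for a two-parameter model, making the influence of the prior on quantities such as the ED50 easy to inspect.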
NASA Astrophysics Data System (ADS)
Fu, W.; Gu, L.; Hoffman, F. M.
2013-12-01
The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely the net assimilation of CO2 against intercellular CO2 concentration (A-Ci) curves, made at saturating light conditions. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). In order to reliably estimate photosynthetic parameters, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural over-parameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately. Using subsaturating light intensities allows more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve fitting methods. Some fitted parameters like the photocompensation point and day respiration impose a significant limitation on modeling leaf CO2 exchange. Fitting multiple A-Ci curves can also improve on the so-called Laisk (1977) method, which was shown by recent publications to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions to constrain measured A-Ci points to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of resistance of chloroplast and cell wall-plasmalemma and explore the effect of variable mesophyll conductance. The variable mesophyll conductance
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
NASA Astrophysics Data System (ADS)
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
High-resolution fiber optic temperature sensors using nonlinear spectral curve fitting technique.
Su, Z H; Gan, J; Yu, Q K; Zhang, Q H; Liu, Z H; Bao, J M
2013-04-01
A generic new data processing method is developed to accurately calculate the absolute optical path difference of a low-finesse Fabry-Perot cavity from its broadband interference fringes. The method combines Fast Fourier Transformation with nonlinear curve fitting of the entire spectrum. Modular functions of LabVIEW are employed for fast implementation of the data processing algorithm. The advantages of this technique are demonstrated through high performance fiber optic temperature sensors consisting of an infrared superluminescent diode and an infrared spectrometer. A high resolution of 0.01 °C is achieved over a large dynamic range from room temperature to 800 °C, limited only by the silica fiber used for the sensor.
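The two-step idea, an FFT for a coarse optical path difference (OPD) estimate followed by a nonlinear fit of the entire spectrum, can be sketched with simulated fringes (all values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def fringes(k, i0, v, opd, phi):
    """Low-finesse Fabry-Perot fringe pattern versus wavenumber k = 1/lambda."""
    return i0 * (1.0 + v * np.cos(2.0 * np.pi * opd * k + phi))

k = np.linspace(1.0 / 0.9, 1.0 / 0.8, 2048)  # 800-900 nm band, k in 1/um
spec = fringes(k, 1.0, 0.4, 120.0, 0.3)      # true OPD = 120 um

# Step 1: a zero-padded FFT of the fringes gives a coarse OPD estimate.
n = 1 << 18
amp = np.abs(np.fft.rfft(spec - spec.mean(), n=n))
freqs = np.fft.rfftfreq(n, d=k[1] - k[0])    # the "frequency" axis is OPD in um
opd0 = freqs[np.argmax(amp)]

# Step 2: nonlinear fit of the entire spectrum refines the estimate.
popt, _ = curve_fit(fringes, k, spec, p0=[1.0, 0.3, opd0, 0.0])
print(opd0, popt[2])
```

The zero-padding keeps the FFT estimate within a fraction of a fringe of the true OPD, so the subsequent least-squares fit lands in the correct basin and refines it by orders of magnitude.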
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRF's) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRF's are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. Data acquisition and FFT processing, curve fitting, and modal tuning phases are described and examples are given to illustrate and evaluate them.
New Horizons approach photometry of Pluto and Charon: light curves and Solar phase curves
NASA Astrophysics Data System (ADS)
Zangari, A. M.; Buie, M. W.; Buratti, B. J.; Verbiscer, A.; Howett, C.; Weaver, H. A., Jr.; Olkin, C.; Ennico Smith, K.; Young, L. A.; Stern, S. A.
2015-12-01
While the most captivating images of Pluto and Charon were shot by NASA's New Horizons probe on July 14, 2015, the spacecraft also imaged Pluto with its LOng Range Reconnaissance Imager ("LORRI") during its Annual Checkouts and Approach Phases, with campaigns in July 2013, July 2014, January 2015, March 2015, April 2015, May 2015 and June 2015. All but the first campaign provided full coverage of Pluto's 6.4-day rotation. Even though many of these images were taken when surface features on Pluto and Charon were unresolved, these data provide a unique opportunity to study Pluto over a timescale of several months. Earth-based data from an entire apparition must be combined to create a single light curve, as Pluto is never otherwise continuously available for observing due to daylight, weather and scheduling. From the spacecraft, Pluto's sub-observer latitude remained constant to within 0.05 degrees of 43.15 degrees, comparable to a week's worth of change as seen from Earth near opposition. During the July 2013 to June 2015 period, Pluto's solar phase angle increased from 11 degrees to 15 degrees, a small range, but large compared to the 2-degree maximum attainable from Earth. The slope of the solar phase curve hints at properties such as surface roughness. Using PSF photometry that takes into account the ever-increasing sizes of Pluto and Charon as seen from New Horizons, as well as surface features discovered at closest approach, we present rotational light curves and solar phase curves of Pluto and Charon. We will connect these observations to previous measurements of the system from Earth.
Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C
2017-08-01
Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017-The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys' and girls' curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over
Driver performance approaching and departing curves: driving simulator study.
Bella, Francesco
2014-01-01
This article reports the outcomes of a driving simulator study on driver performance approaching and departing curves. The study aimed to analyze driver performance in the tangent-curve-tangent transition and verify the assumption of constant speed in a curve that is commonly used in the operating speed profiles; test whether the 85th percentile of acceleration and deceleration rates experienced by drivers and the deceleration and acceleration rates obtained from the operating speeds are equivalent; find the explanatory variables associated with the 85th percentile of deceleration and acceleration rates experienced by drivers when approaching and departing horizontal curves and provide predicting models of these rates. A driving simulator study was carried out. Drivers' speeds were recorded on 26 configurations of the tangent-curve-tangent transition of 3 2-lane rural roads implemented in the CRISS (Inter-University Research Center for Road Safety) driving simulator; 856 speed profiles were analyzed. The main results were the following: The simplified assumption of the current operating speed profiles that the speed on the circular curve is constant and equal to that at midpoint can be considered admissible; The 85th percentile of deceleration and acceleration rates exhibited by each driver best reflects the actual driving performance in the tangent-curve-tangent transition; Two models that predict the expected 85th percentile of the deceleration and acceleration rates experienced by drivers were developed. The findings of the study can be used in drawing operating speed profiles that best reflect the actual driving performance and allow a more effective safety evaluation of 2-lane rural roads.
Computational tools for fitting the Hill equation to dose-response curves.
Gadagkar, Sudhindra R; Call, Gerald B
2015-01-01
Many biological response curves commonly assume a sigmoidal shape that can be approximated well by means of the 4-parameter nonlinear logistic equation, also called the Hill equation. However, estimation of the Hill equation parameters requires access to commercial software or the ability to write computer code. Here we present two user-friendly and freely available computer programs to fit the Hill equation - a Solver-based Microsoft Excel template and a stand-alone GUI-based "point and click" program, called HEPB. Both computer programs use an iterative method to estimate two of the Hill equation parameters (EC50 and the Hill slope), while constraining the values of the other two parameters (the minimum and maximum asymptotes of the response variable) to fit the Hill equation to the data. In addition, HEPB draws the prediction band at a user-defined confidence level, and determines the EC50 value for each of the limits of this band to give boundary values that help objectively delineate sensitive, normal and resistant responses to the drug being tested. Both programs were tested by analyzing twelve datasets that varied widely in data values, sample size and slope, and were found to yield estimates of the Hill equation parameters that were essentially identical to those provided by commercial software such as GraphPad Prism and by nls, the nonlinear least-squares function in the programming language R. The Excel template provides a means to estimate the parameters of the Hill equation and plot the regression line in a familiar Microsoft Office environment. HEPB, in addition to providing the above results, also computes the prediction band for the data at a user-defined level of confidence, and determines objective cut-off values to distinguish among response types (sensitive, normal and resistant). Furthermore, HEPB also has
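The constrained fit the abstract describes can be sketched in a few lines. The following is a minimal illustration with synthetic, noise-free dose-response data (all parameter values are hypothetical), holding the two asymptotes fixed, as HEPB does, and fitting only EC50 and the Hill slope:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, bottom, top, ec50, n):
    """4-parameter Hill (logistic) equation."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** n)

# Synthetic dose-response data from known (hypothetical) parameters
doses = np.logspace(-2, 2, 20)
resp = hill(doses, bottom=5.0, top=100.0, ec50=1.5, n=1.2)

# Constrain the asymptotes and estimate only EC50 and the Hill slope
def constrained(x, ec50, n):
    return hill(x, 5.0, 100.0, ec50, n)

(ec50_hat, n_hat), _ = curve_fit(constrained, doses, resp, p0=[1.0, 1.0])
print(round(ec50_hat, 3), round(n_hat, 3))
```

With noise-free data the fit recovers the generating EC50 and slope; real dose-response data would of course carry noise, which is where the prediction band becomes useful.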
Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping
2003-05-01
In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. As a result, the semi-quantitative analysis gives a mean gray to white matter CBF ratio of 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and short hemodynamic measurement time.
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
NASA Astrophysics Data System (ADS)
Zheng, WeiKang; Filippenko, Alexei V.
2017-03-01
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
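The qualitative behavior the abstract describes can be illustrated with a generic smoothly broken power law; the paper derives a specific variant from the expanding-fireball assumption, so the functional form and every parameter value below are illustrative stand-ins, not the authors' equation:

```python
import numpy as np

def broken_power_law(t, A, tb, a1, a2, s=10.0):
    """Smoothly broken power law: log-log slope a1 for t << tb,
    slope a2 for t >> tb; s controls the sharpness of the break."""
    x = t / tb
    return A * x**a1 * (1.0 + x**((a1 - a2) * s)) ** (-1.0 / s)

t = np.logspace(-2, 2, 5)
f = broken_power_law(t, A=1.0, tb=1.0, a1=2.0, a2=-2.0)
# Check the asymptotic log-log slopes numerically
early = (np.log(f[1]) - np.log(f[0])) / (np.log(t[1]) - np.log(t[0]))
late = (np.log(f[-1]) - np.log(f[-2])) / (np.log(t[-1]) - np.log(t[-2]))
print(round(early, 2), round(late, 2))
```

The early-time slope of 2 mirrors the fireball expectation that flux rises roughly as the square of time after first light; the post-break slope here is arbitrary.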
The family approach to assessing fit in Rasch measurement.
Smith, Richard M; Plackner, Christie
2009-01-01
There has been a renewed interest in comparing the usefulness of a variety of model and non-model based fit statistics to detect measurement disturbances. Most of the recent studies compare the results of individual statistics trying to find the single best statistic. Unfortunately, the nature of measurement disturbances is such that they are quite varied in how they manifest themselves in the data. That is to say, there is not a single fit statistic that is optimal for detecting every type of measurement disturbance. Because of this, it is necessary to use a family of fit statistics designed to detect the most important measurement disturbances when checking the fit of data to the appropriate Rasch model. The early Rasch fit statistics (Wright and Panchapakesan, 1969) were based on the Pearsonian chi square. The ability to recombine the NxL chi squares into a variety of different fit statistics, each looking at specific threats to the measurement process, is critical to this family approach to assessing fit. Calibration programs, such as WINSTEPS and FACETS, that use only one type of fit statistic to assess the fit of the data to the model seriously underestimate the presence of measurement disturbances in the data. This is due primarily to the fact that the total fit statistics (INFIT and OUTFIT), used exclusively in these programs, are relatively insensitive to systematic threats to unidimensionality. This paper, which focuses on the Rasch model and the Pearsonian chi-square approach to assessing fit, will review the different types of measurement disturbances and their underlying causes, and identify the types of fit statistics that must be used to detect these disturbances with maximum efficiency.
Modified Sediment Rating Curve Approach for Supply-dependent Conditions
NASA Astrophysics Data System (ADS)
Wright, S. A.; Topping, D. J.; Rubin, D. M.; Melis, T. S.
2007-12-01
Reliable predictions of sediment transport and river morphology in response to driving forces, such as anthropogenic influences, are necessary for river engineering and management. Because engineering and management questions span a wide range of space and time scales, a broad spectrum of modeling approaches has been developed, ranging from sediment transport rating curves to complex three-dimensional, multiple grain-size morphodynamic models. Sediment transport rating curves assume a singular relation between sediment concentration and flow. This approach is attractive for evaluating long-term sediment budgets resulting from changes in flow regimes because it is simple to implement, computationally efficient, and the empirical parameters can be estimated from quantities that are commonly measured in the field (sediment concentration and flow). However, the assumption of a singular relation between sediment concentration and flow contains the following implicit assumptions: 1) that sediment transport is in equilibrium with sediment supply such that the grain-size distribution of the bed sediment is not changing, and 2) that the relation between flow and bed shear stress is constant. These assumptions present limitations that have led to the development of more complex numerical models of flow and morphodynamics. These models rely on momentum and mass conservation for water and sediment and thus have general applicability; however, this comes at a cost in terms of computations as well as the amount of data required for model set-up and testing. We present a hybrid approach that combines aspects of the standard sediment rating curve method and the more complex morphodynamic models. Our approach employs the idea of a shifting rating curve, whereby the relation between sediment concentration and flow changes as a function of the sediment budget in the reach. We have applied this alternative approach to the Colorado River below Glen Canyon Dam. This reach is
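A standard sediment rating curve is a power law C = aQ^b fitted in log space, which is the baseline the abstract's hybrid method builds on (the shifting of the coefficient with the reach's sediment budget is not modeled here; the data below are synthetic):

```python
import numpy as np

# Synthetic discharge (Q) and concentration (C) following C = a * Q**b
Q = np.linspace(100.0, 1000.0, 50)
a_true, b_true = 1e-4, 1.8
C = a_true * Q**b_true

# Standard rating curve: linear least squares on log C = log a + b log Q
b_hat, loga_hat = np.polyfit(np.log(Q), np.log(C), 1)
a_hat = float(np.exp(loga_hat))
print(round(a_hat, 6), round(b_hat, 3))
```

In the shifting-rating-curve idea, the fitted coefficient a would be updated as the reach's sand budget evolves rather than held singular, relaxing the equilibrium-supply assumption the abstract criticizes.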
Open Versus Closed Hearing-Aid Fittings: A Literature Review of Both Fitting Approaches.
Winkler, Alexandra; Latzel, Matthias; Holube, Inga
2016-02-15
One of the main issues in hearing-aid fittings is the abnormal perception of the user's own voice as too loud, "boomy," or "hollow." This phenomenon, known as the occlusion effect, can be reduced by large vents in the earmolds or by open-fit hearing aids. This review provides an overview of publications related to open and closed hearing-aid fittings. First, the occlusion effect and its consequences for perception while using hearing aids are described. Then, the advantages and disadvantages of open compared with closed fittings and their impact on the fitting process are addressed. The advantages include less occlusion, improved own-voice perception and sound quality, and increased localization performance. The disadvantages associated with open-fit hearing aids include reduced benefits of directional microphones and noise reduction, as well as less compression and less available gain before feedback. The final part of this review addresses the need for new approaches to combine the advantages of open and closed hearing-aid fittings. © The Author(s) 2016.
Wear, Keith A
2013-04-01
The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve-fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.
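The core of a least-squares Prony step, recovering exponential poles via linear prediction, can be sketched on a noise-free toy signal. Two real decaying modes stand in for the overlapping fast and slow waves; the actual MLSP method handles complex modes, noise, and couples this step with curve fitting:

```python
import numpy as np

# Toy signal: sum of two decaying exponentials with known poles
z1, z2 = 0.9, 0.6                      # true poles (decay per sample)
n = np.arange(40)
x = 2.0 * z1**n + 1.0 * z2**n

# Linear prediction: x[k] = a1*x[k-1] + a2*x[k-2], solved in least squares
A = np.column_stack([x[1:-1], x[:-2]])
a = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Poles are the roots of z^2 - a1*z - a2
poles = np.roots([1.0, -a[0], -a[1]])
p = sorted(float(v) for v in poles.real)
print(round(p[0], 3), round(p[1], 3))
```

The linear-prediction system is what makes Prony-type methods fast compared with direct nonlinear optimization, which is why a curve-fitting refinement stage can afford to follow it.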
Burchardt, Malte; Träuble, Markus; Wittstock, Gunther
2009-06-15
The formalism for simulating scanning electrochemical microscopy (SECM) experiments by boundary element methods in three space coordinates has been extended to allow consideration of nonlinear boundary conditions. This is achieved by iteratively refining the boundary conditions that are encoded in a boundary condition matrix. As an example, the simulations are compared to experimental approach curves in the SECM feedback mode toward samples modified with glucose oxidase (GOx). The GOx layer was prepared by the layer-by-layer assembly of polyelectrolytes using glucose oxidase as one of the polyelectrolytes. The comparison of the simulated and experimental curves showed that under a wide range of experimentally accessible conditions approximations of the kinetics at the sample by first order models yield misleading results. The approach curves also differ qualitatively from curves calculated with first order models. As a consequence, this may lead to severe deviations when such curves are fitted to first order kinetic models. The use of linear approximations to describe the enzymatic reaction in SECM feedback experiments is justified only if the ratio of the mediator and Michaelis-Menten constant is equal to or smaller than 0.1 (deviation less than 10%).
A curve evolution approach to object-based tomographic reconstruction.
Feng, Haihua; Karl, William Clem; Castañon, David A
2003-01-01
In this paper, we develop a new approach to tomographic reconstruction problems based on geometric curve evolution techniques. We use a small set of texture coefficients to represent the object and background inhomogeneities and a contour to represent the boundary of multiple connected or unconnected objects. Instead of reconstructing pixel values on a fixed rectangular grid, we then find a reconstruction by jointly estimating these unknown contours and texture coefficients of the object and background. By designing a new "tomographic flow", the resulting problem is recast into a curve evolution problem and an efficient algorithm based on level set techniques is developed. The performance of the curve evolution method is demonstrated using examples with noisy limited-view Radon transformed data and noisy ground-penetrating radar data. The reconstruction results and computational cost are compared with those of conventional, pixel-based regularization methods. The results indicate that the curve evolution methods achieve improved shape reconstruction and have potential computation and memory advantages over conventional regularized inversion methods.
A simple method for accurate liver volume estimation by use of curve-fitting: a pilot study.
Aoyama, Masahito; Nakayama, Yoshiharu; Awai, Kazuo; Inomata, Yukihiro; Yamashita, Yasuyuki
2013-01-01
In this paper, we describe the effectiveness of our curve-fitting method by comparing liver volumes estimated by our new technique to volumes obtained with the standard manual contour-tracing method. Hepatic parenchymal-phase images of 13 patients were obtained with multi-detector CT scanners after intravenous bolus administration of 120-150 mL of contrast material (300 mgI/mL). The liver contours of all sections were traced manually by an abdominal radiologist, and the liver volume was computed by summing the volumes inside the contours. The slice range between the first and last section was then divided into 100 equal parts, and each volume was re-sampled by use of linear interpolation. We generated 13 model profile curves by averaging 12 cases, leaving out one case at a time, and we estimated the profile curve for each patient by fitting the volume values at 4 points using a scale and translation transform. Finally, we determined the liver volume by integrating the sampling points of the profile curve. We used Bland-Altman analysis to evaluate the agreement between the volumes estimated with our curve-fitting method and the volumes measured by the manual contour-tracing method. The correlation between the volume measured by manual tracing and that estimated with our curve-fitting method was relatively high (r = 0.98; slope 0.97; p < 0.001). The mean difference between manual tracing and our method was -22.9 cm(3) (SD of the difference, 46.2 cm(3)). Our volume-estimating technique, which requires the tracing of only 4 images, exhibited a relatively high linear correlation with the manual tracing technique.
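The profile-curve idea, fitting a reference per-slice volume profile to a new case from only four sampled slices and then integrating, can be sketched as follows. The Gaussian-shaped reference profile and the amplitude-scale-plus-offset transform are illustrative stand-ins, not the authors' actual model curves or transform:

```python
import numpy as np
from scipy.optimize import curve_fit

u = np.linspace(0.0, 1.0, 101)              # normalized slice position (100 parts)
model = np.exp(-((u - 0.5) / 0.22) ** 2)    # stand-in "model profile curve"

def transformed(x, scale, shift):
    """Model profile under a simple scale-and-shift transform."""
    return scale * np.interp(x, u, model) + shift

# Synthetic "patient" = scaled/shifted model; fit using only 4 slice positions
xs = np.array([0.2, 0.4, 0.6, 0.8])
popt, _ = curve_fit(transformed, xs, transformed(xs, 80.0, 2.0), p0=[1.0, 0.0])

# Volume = integral of the fitted profile (trapezoidal rule)
fitted = transformed(u, *popt)
volume = float(np.sum((fitted[1:] + fitted[:-1]) * 0.5 * np.diff(u)))
print(round(popt[0], 2), round(popt[1], 2))
```

Because only two transform parameters are estimated, four sampled slices already over-determine the fit, which is what makes the four-image workflow feasible.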
ERIC Educational Resources Information Center
Winsberg, Suzanne; And Others
In most item response theory models a particular mathematical form is assumed for all item characteristic curves, e.g., a logistic function. It could be desirable, however, to estimate the shape of the item characteristic curves without prior restrictive assumptions about their mathematical form. We have developed a practical method of estimating…
A relativistic axisymmetric approach to the galactic rotation curves problem
NASA Astrophysics Data System (ADS)
Herrera-Aguilar, A.; Nucamendi, U.
2014-11-01
It is known that galactic potentials can be kinematically linked to the observed red/blue shifts of the corresponding galactic rotation curves under a minimal set of assumptions (see [1] and [2] for details): i) that emitted photons come to us from stable timelike circular geodesic orbits of stars in a static spherically symmetric gravitational field, and ii) that these photons propagate to us along null geodesics. This relation can be established without appealing at all to a concrete theory of gravitational interaction. This kinematical spherically symmetric approach to the galactic rotation curves problem can be generalized to the stationary axisymmetric realm, which is precisely the symmetry that spiral galaxies possess [3]. Here we review the relativistic results obtained in the latter work. Namely, by making use of the most general stationary axisymmetric metric, we consider stable circular orbits of stars that emit signals which travel to a distant observer along null geodesics and express the galactic red/blue shifts in terms of three arbitrary metric functions, clarifying the contribution of the rotation as well as the dragging of the gravitational field. This stationary axisymmetric approach distinguishes between red and blue shifts emitted by circularly orbiting receding and approaching stars, respectively, even when they are considered with respect to the center of a spiral galaxy, indicating the need for precise measurements in order to confront predictions with observations. We also point out the difficulties one encounters when attempting to determine the metric functions from observations and list some potential strategies to overcome them.
Multiple curved descending approaches and the air traffic control problem
NASA Technical Reports Server (NTRS)
Hart, S. G.; Mcpherson, D.; Kreifeldt, J.; Wemple, T. E.
1977-01-01
A terminal area air traffic control simulation was designed to study ways of accommodating increased air traffic density. The concepts that were investigated assumed the availability of the microwave landing system and data link and included: (1) multiple curved descending final approaches; (2) parallel runways certified for independent and simultaneous operation under IFR conditions; (3) closer spacing between successive aircraft; and (4) a distributed management system between the air and ground. Three groups, each consisting of three pilots and two air traffic controllers, flew a combined total of 350 approaches. Piloted simulators were supplied with computer-generated traffic situation displays and flight instruments. The controllers were supplied with a terminal area map and digital status information. Pilots and controllers reported that the distributed management procedure was somewhat safer and more orderly than the centralized management procedure. Flying precision increased as the amount of turn required to intersect the outer marker decreased. Pilots reported that they preferred the alternative of multiple curved descending approaches with wider spacing between aircraft to closer spacing on single, straight-in finals, while controllers preferred the latter option. Both pilots and controllers felt that parallel runways are an acceptable way to accommodate increased traffic density safely and expeditiously.
Reilly, Cavan; Price, Phillip; Gelman, Andrew; Sandgathe, Scott A
2004-12-01
Conventional measures of model fit for indexed data (e.g., time series or spatial data) summarize errors in y, for instance by integrating (or summing) the squared difference between predicted and measured values over a range of x. We propose an approach which recognizes that errors can occur in the x-direction as well. Instead of just measuring the difference between the predictions and observations at each site (or time), we first "deform" the predictions, stretching or compressing along the x-direction or directions, so as to improve the agreement between the observations and the deformed predictions. Error is then summarized by (a) the amount of deformation in x, and (b) the remaining difference in y between the data and the deformed predictions (i.e., the residual error in y after the deformation). A parameter, lambda, controls the tradeoff between (a) and (b), so that as lambda-->infinity no deformation is allowed, whereas for lambda=0 the deformation minimizes the errors in y. In some applications, the deformation itself is of interest because it characterizes the (temporal or spatial) structure of the errors. The optimal deformation can be computed by solving a system of nonlinear partial differential equations, or, for a unidimensional index, by using a dynamic programming algorithm. We illustrate the procedure with examples from nonlinear time series and fluid dynamics.
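The lambda tradeoff can be illustrated with a deliberately crude, per-point index-shift "deformation" of the predictions; the paper's method couples shifts smoothly via partial differential equations or dynamic programming, so everything below is a toy, not their algorithm:

```python
import numpy as np

def deform_fit(pred, obs, lam):
    """Shift each prediction index by up to 3 samples, trading off squared
    shift (x-deformation) against residual y error; no smoothness coupling."""
    n = len(obs)
    shifts = list(range(-3, 4))
    total = 0.0
    for i in range(n):
        costs = [(obs[i] - pred[min(max(i + s, 0), n - 1)]) ** 2 + lam * s**2
                 for s in shifts]
        total += min(costs)
    return total

t = np.linspace(0.0, 6.0, 60)
pred = np.sin(t)
obs = np.sin(t + 0.3)                    # observations = predictions shifted in x
err_rigid = deform_fit(pred, obs, lam=1e9)   # lam -> infinity: no deformation
err_flex = deform_fit(pred, obs, lam=0.0)    # lam = 0: free deformation
print(err_flex < err_rigid)
```

Because the error here is purely an x-shift, allowing deformation (small lambda) absorbs almost all of it, whereas the rigid fit charges it entirely to the y-residual, which is exactly the distinction the proposed summary makes.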
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there is demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting curves of any form with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth ones to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
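For context, the conventional baseline such knot-placement strategies compete with is scipy's FITPACK route, which places knots automatically under a residual budget s; the paper's bisect-then-optimize method is not available in scipy, so this only shows the standard least-squares B-spline fit:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Smooth sampled data (a discontinuous curve would defeat this baseline,
# which is the gap the paper's knot-optimization method targets)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)

# Cubic least-squares B-spline; FITPACK chooses knots so that the sum of
# squared residuals stays below s
tck = splrep(x, y, k=3, s=1e-6)
resid = float(np.max(np.abs(splev(x, tck) - y)))
print(resid < 1e-3)
```

The smoothing parameter s only bounds the total residual; it gives no direct control over knot count or continuity levels at individual knots, which is precisely what the two-step method optimizes.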
ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro
2014-01-01
We propose an approach for estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on these equations, a parameter denoting the birthday information was added to model the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of predicted error difference and the Akaike Information Criterion showed that the individual growth curves using birthday information fit the body weight and withers height data better than those not using it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds in compensatory growth periods. PMID:25013356
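As a toy version of growth-curve fitting, a plain logistic curve can be fitted to synthetic weight-age data; the paper's equations additionally include a seasonal compensatory-growth term whose phase shifts with each horse's birthday, and that term, like every number below, is omitted or hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, W, k, t0):
    """Logistic growth: asymptotic weight W, rate k, inflection age t0."""
    return W / (1.0 + np.exp(-k * (t - t0)))

# Synthetic body-weight data (age in months, weight in kg; hypothetical)
age = np.linspace(0.0, 30.0, 40)
w = logistic(age, 470.0, 0.25, 6.0)

(W, k, t0), _ = curve_fit(logistic, age, w, p0=[400.0, 0.1, 5.0])
print(round(W), round(t0, 1))
```

Per-individual curves like the paper's would replace the single parameter set here with one adjusted per horse via the birthday-shift parameter.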
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that it has already seen limited use with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved with ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic splines and least squares polynomial fits up to seventh order.
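A least-squares polynomial fair-through of the kind ADAIS offers (up to seventh order) can be sketched as follows; the angle-of-attack data are synthetic stand-ins for wind tunnel output, not ADAIS data:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = np.linspace(-4.0, 12.0, 30)            # angle of attack, deg (synthetic)
cl = 0.1 * alpha + 0.002 * alpha**2 + rng.normal(0.0, 0.01, alpha.size)

# Least-squares polynomial fair-through, up to seventh order as in ADAIS
coeffs = np.polynomial.polynomial.polyfit(alpha, cl, 7)
faired = np.polynomial.polynomial.polyval(alpha, coeffs)
rms = float(np.sqrt(np.mean((faired - cl) ** 2)))
```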
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outermagnetospheric emission models assuming the retarded vacuum dipole magnetic field using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sager, D.
1973-01-01
Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve flying performance; and that there is no performance difference between 60 deg and 90 deg turns. A tradeoff of curved path parameters and a paper analysis of wind compensation were also made.
Fitness for duty: a no-nonsense approach
Dew, S.M.; Hill, A.O.
1987-01-01
In formulating the fitness-for-duty program at Houston Lighting and Power (HL&P), the project and plant staffs followed program guidelines developed by the Edison Electric Institute and considered the performance criteria for fitness-for-duty programs developed by the Institute of Nuclear Power Operations. The staff visited utilities involved in fitness-for-duty implementation to review the problems and successes those utilities had experienced. On November 1, 1985, the nuclear group vice-president instituted the South Texas Project Fitness-for-Duty Policy, to become effective on January 1, 1986. It was important to implement the program at that time, as the project moved into the final stages of construction and preparation for plant operations. The South Texas Project has made a firm commitment to the industry with its fitness-for-duty program. The no-nonsense approach to illegal drug and alcohol use makes it possible to assure a high level of employee health, productivity, and safety in a drug- and alcohol-free environment. The cost of the fitness-for-duty program is minimal when compared with the increase in productivity and the heightened confidence of the US Nuclear Regulatory Commission in workers since implementation of the program.
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Mugge, P. H.
1974-01-01
Simplified curve fits for the thermodynamic properties of equilibrium air were devised for use in either time-dependent or shock-capturing computational methods. For the time-dependent method, curve fits were developed for p = p(e, rho), a = a(e, rho), and T = T(e, rho). For the shock-capturing method, curve fits were developed for h = h(p, rho) and T = T(p, rho). The ranges of validity for these curve fits are temperatures up to 25,000 K and densities from 10^-7 to 10^3 amagats. These approximate curve fits are considered particularly useful when employed on advanced computers such as the Burroughs ILLIAC 4 or the CDC STAR.
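The kind of bivariate fit described above, e.g. T = T(e, rho), can be illustrated with a small least-squares surface fit. The functional form and data below are invented for illustration only; they are not the report's actual equilibrium-air fits:

```python
import numpy as np

# Synthetic stand-in for a tabulated property T = T(e, rho); the bilinear
# form in (e, log rho) is invented for illustration
rng = np.random.default_rng(2)
e = rng.uniform(1.0, 2.0, 300)                 # specific internal energy (arb.)
rho = rng.uniform(0.1, 1.0, 300)               # density (arb.)
T = 2.0 + 1.5 * e - 0.8 * np.log(rho) + 0.05 * e * np.log(rho)

# Ordinary least-squares fit of the surface coefficients
A = np.column_stack([np.ones_like(e), e, np.log(rho), e * np.log(rho)])
coeffs, *_ = np.linalg.lstsq(A, T, rcond=None)
max_err = float(np.max(np.abs(A @ coeffs - T)))
```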
Reliability of temperature determination from curve-fitting in multi-wavelength pyrometry
Ni, P. A.; More, R. M.; Bieniosek, F. M.
2013-08-04
This paper examines the reliability of a widely used method for temperature determination by multi-wavelength pyrometry. In recent WDM experiments with ion-beam-heated metal foils, we found that the statistical quality of the fit to the measured data is not necessarily a measure of the accuracy of the inferred temperature. We found a specific example where a second-best fit leads to a more realistic temperature value. The underlying physics issue is the wavelength-dependent emissivity of the hot surface. We discuss improvements to the multi-frequency pyrometry technique that will give a more reliable determination of the temperature from emission data.
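The basic fitting problem the paper examines can be sketched as a grey-body fit of Planck's law to multi-wavelength radiance, assuming a wavelength-independent emissivity (precisely the assumption the paper questions). The temperature, wavelengths, and emissivity below are illustrative, not experimental values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Planck grey-body radiance; c1 = 2*h*c**2, c2 = h*c/k_B (SI units)
c1, c2 = 1.191e-16, 1.4388e-2

def radiance(lam, T, eps):
    return eps * c1 / lam**5 / np.expm1(c2 / (lam * T))

lam = np.linspace(500e-9, 900e-9, 12)          # wavelengths, m
data = radiance(lam, 5000.0, 0.8)              # synthetic "measured" spectrum

popt, _ = curve_fit(radiance, lam, data, p0=[4000.0, 1.0])
T_fit, eps_fit = popt
```

With noise-free data and a constant emissivity the fit recovers T exactly; the paper's point is that a wavelength-dependent emissivity breaks this, so a statistically better fit can still give a worse temperature.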
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
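Backward elimination of knots, the core of the knot-selection program described above, can be sketched as follows. This is a simplified Python analogue of the FORTRAN programs, with an ad hoc stopping rule that is an assumption, not the report's statistical criterion:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 120)
y = np.sin(2 * np.pi * x) + 0.02 * rng.normal(size=x.size)

def sse(knots):
    """Sum of squared residuals for a cubic LS spline with these interior knots."""
    s = LSQUnivariateSpline(x, y, knots, k=3)
    return float(np.sum((s(x) - y) ** 2))

# Backward elimination: repeatedly drop the knot whose removal
# increases the residual sum of squares the least
knots = list(np.linspace(0.1, 0.9, 9))
while len(knots) > 3:
    candidates = [sse(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
    best = int(np.argmin(candidates))
    if candidates[best] > 2.0 * sse(knots):  # ad hoc stopping rule (assumption)
        break
    knots.pop(best)
```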
Simulator evaluation of manually flown curved MLS approaches. [Microwave Landing System
NASA Technical Reports Server (NTRS)
Sager, D.
1974-01-01
Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not seriously affect curve flying performance; and that there is no major performance difference between 60 and 90 deg turns.
Z-analysis: a new approach to analyze stimulation curves with intrinsic basal stimulation.
Hedlund, Peter B; von Euler, Gabriel
2005-07-01
In the study of receptor biology it is of considerable importance to describe the stimulatory properties of an agonist according to mathematically defined models. However, the presently used models are insufficient if the experimental preparation contains an intrinsic basal stimulation. We have developed a novel approach, tentatively named Z-analysis. In this approach, the concentration of endogenous agonist is calculated by extending the stimulation curve to zero effect. The concentration of endogenous agonist is then combined with the concentration of added agonist to estimate the true EC(50) value. We developed a new model, the Z-model, specifically for this purpose, but in addition we describe how Z-analysis can be applied to the traditional E(0)-model. The models were applied to computer-generated curves with different Hill coefficients, using iterative curve-fitting procedures. In addition to applying the models to ideal cases, we also used Monte Carlo-simulated data. Specific transformations were used to enable comparisons between parameters determined from these models. Both models were able to provide estimates of all eight parameters analyzed, on both ideal and Monte Carlo-simulated data. The Z-model was found to provide better estimates of the concentration of endogenous agonist, the EC(50) values, and the Hill coefficient in curves with Hill coefficients deviating from one. In conclusion, Z-analysis is suitable both for determining the concentration of endogenous agonists and for determining true EC(50) values. We found several advantages of the Z-model over the traditional E(0)-model for the analysis of stimulation curves that contain intrinsic basal stimulation.
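The central idea, letting an unknown endogenous agonist concentration shift the added-agonist axis, can be sketched with a simple one-site model. This is a hedged illustration with synthetic data, not the authors' exact Z-model or E(0)-model:

```python
import numpy as np
from scipy.optimize import curve_fit

# One-site stimulation model: the endogenous concentration c0 adds to
# every applied dose (concentrations in nM; all values synthetic)
def stim(c_added, emax, ec50, c0):
    c = c_added + c0
    return emax * c / (ec50 + c)

c_added = np.logspace(0, 4, 15)                 # added agonist, nM
y = stim(c_added, 100.0, 100.0, 20.0)           # "measured" effect, noise-free

popt, _ = curve_fit(stim, c_added, y, p0=[80.0, 30.0, 5.0])
emax_fit, ec50_fit, c0_fit = popt
```

Because the response at zero added agonist is non-zero (basal stimulation), c0 is identifiable from the low end of the curve, which is what allows the "true" EC50 to be separated from the apparent one.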
Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M
2012-03-01
With increasing popularity, growth curve modeling is more and more often considered the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of the Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
Clarke, F H; Cahoon, N M
1987-08-01
A convenient procedure has been developed for the determination of partition and distribution coefficients. The method involves the potentiometric titration of the compound, first in water and then in a rapidly stirred mixture of water and octanol. An automatic titrator is used, and the data is collected and analyzed by curve fitting on a microcomputer with 64 K of memory. The method is rapid and accurate for compounds with pKa values between 4 and 10. Partition coefficients can be measured for monoprotic and diprotic acids and bases. The partition coefficients of the neutral compound and its ion(s) can be determined by varying the ratio of octanol to water. Distribution coefficients calculated over a wide range of pH values are presented graphically as "distribution profiles". It is shown that subtraction of the titration curve of solvent alone from that of the compound in the solvent offers advantages for pKa determination by curve fitting for compounds of low aqueous solubility.
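The "distribution profile" mentioned above follows from the standard relation between log D, log P, and pKa for a monoprotic acid, neglecting partitioning of the ionized form into octanol; a minimal sketch with illustrative values:

```python
import numpy as np

# Distribution coefficient of a monoprotic acid as a function of pH,
# neglecting partitioning of the ionized form: D = P / (1 + 10**(pH - pKa))
logP, pKa = 3.0, 5.0                 # illustrative values, not measured data
pH = np.linspace(2.0, 10.0, 81)
logD = logP - np.log10(1 + 10 ** (pH - pKa))
```

Well below the pKa the compound is fully neutral and log D approaches log P; at pH = pKa it has dropped by exactly log10(2).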
Montgomery, M. H.; Winget, D. E.; Provencal, J. L.; Thompson, S. E.; Kanaan, A.; Mukadam, Anjum S.; Dalessio, J.; Shipman, H. L.; Kepler, S. O.; Koester, D.
2010-06-10
Convective driving, the mechanism originally proposed by Brickhill for pulsating white dwarf stars, has gained general acceptance as the generic linear instability mechanism in DAV and DBV white dwarfs. This physical mechanism naturally leads to a nonlinear formulation, reproducing the observed light curves of many pulsating white dwarfs. This numerical model can also provide information on the average depth of a star's convection zone and the inclination angle of its pulsation axis. In this paper, we give two sets of results of nonlinear light curve fits to data on the DBV GD 358. Our first fit is based on data gathered in 2006 by the Whole Earth Telescope; this data set was multiperiodic containing at least 12 individual modes. Our second fit utilizes data obtained in 1996, when GD 358 underwent a dramatic change in excited frequencies accompanied by a rapid increase in fractional amplitude; during this event it was essentially monoperiodic. We argue that GD 358's convection zone was much thinner in 1996 than in 2006, and we interpret this as a result of a short-lived increase in its surface temperature. In addition, we find strong evidence of oblique pulsation using two sets of evenly split triplets in the 2006 data. This marks the first time that oblique pulsation has been identified in a variable white dwarf star.
Ferreira, Abílio G T; Henrique, Douglas S; Vieira, Ricardo A M; Maeda, Emilyn M; Valotto, Altair A
2015-03-01
The objective of this study was to evaluate four mathematical models with regard to their fit to lactation curves of Holstein cows from herds raised in the southwestern region of the state of Paraná, Brazil. Initially, 42,281 milk production records from 2005 to 2011 were obtained from the "Associação Paranaense de Criadores de Bovinos da Raça Holandesa (APCBRH)". Data lacking drying-off dates and total milk production at 305 days of lactation were excluded, leaving 15,142 records corresponding to 2,441 Holstein cows. Data were sorted according to parity order (ranging from one to six), and within each parity order the animals were divided into quartiles (Q25%, Q50%, Q75% and Q100%) of 305-day lactation yield. Within each parity order and quartile, four mathematical models were adjusted, two of which were predominantly empirical (Brody and Wood) whereas the other two had more mechanistic characteristics (the Dijkstra and Pollott models). The quality of fit was evaluated by the corrected Akaike information criterion. The Wood model showed the best fit in almost all evaluated situations and may therefore be considered the most suitable model to describe, at least empirically, the lactation curves of Holstein cows raised in southwestern Paraná.
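The Wood model singled out above, y(t) = a·t^b·e^(−ct), can be fit as follows; the lactation data are synthetic stand-ins, not the APCBRH records:

```python
import numpy as np
from scipy.optimize import curve_fit

# Wood's incomplete-gamma lactation curve: y(t) = a * t**b * exp(-c*t)
def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

t = np.arange(5.0, 306.0, 10.0)                 # days in milk
y = wood(t, 15.0, 0.25, 0.003)                  # synthetic daily yield, kg

popt, _ = curve_fit(wood, t, y, p0=[10.0, 0.2, 0.002])
a_fit, b_fit, c_fit = popt
peak_day = b_fit / c_fit                        # yield peaks at t = b/c
```

The fitted parameters have direct herd-management readings: b controls the rise to peak, c the rate of decline, and b/c the day of peak yield.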
NASA Astrophysics Data System (ADS)
Milani, G.; Milani, F.
A GUI software package (GURU) for fitting experimental rheometer curves of Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded into GURU from an Excel spreadsheet containing the output of the experimental machine (a moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed-form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from the computations. Three kinetic constants must be determined so as to minimize the absolute error between the normalized experimental data and the numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. By contrast, GURU works interactively through a Graphical User Interface (GUI) to minimize the error, allowing interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders assigns a value to each kinetic constant and gives a visual comparison between the numerical and experimental curves. Users thus find optimal values of the constants by a classic trial-and-error strategy. An experimental case of technical relevance is shown as a benchmark.
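GURU's slider-based calibration amounts to scanning kinetic constants against the normalized curve. As a hedged stand-in, the sketch below uses a one-constant first-order cure model (an assumption for illustration; the actual Han-scheme closed form has three constants and an induction period) and scans the constant over a grid, mimicking the slider search:

```python
import numpy as np

t = np.linspace(0.0, 30.0, 61)                 # cure time, min
data = 1.0 - np.exp(-0.23 * t)                 # synthetic normalized torque rise

# Grid scan of the rate constant, mimicking slider-style calibration:
# pick the k whose predicted curve has the smallest maximum absolute error
ks = np.linspace(0.05, 0.5, 451)
errors = [float(np.max(np.abs((1.0 - np.exp(-k * t)) - data))) for k in ks]
k_best = float(ks[int(np.argmin(errors))])
```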
Binary 3D image interpolation algorithm based on global information and adaptive curve fitting
NASA Astrophysics Data System (ADS)
Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng
2013-08-01
Interpolation is a necessary processing step in 3-D reconstruction because of non-uniform resolution. Conventional interpolation methods simply use two slices to obtain the missing slices between them. When a key slice is missing, those methods may fail to recover it using only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation curve can hardly yield a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm. The proposed algorithm takes advantage of global information. It chooses the best curve adaptively from many candidate curves based on the complexity of the surface of the 3D object. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.
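A building block for boundary-driven interpolation of a binary volume is extracting the boundary response of each slice; a minimal Laplacian-of-Gaussian sketch on a toy binary image (the object, sigma, and threshold are assumptions, and the curve-selection step itself is not shown):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic binary "slice": a filled square on a 64x64 image
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0

# Boundary response via Laplacian of Gaussian (LoG); sigma and the
# detection threshold are hand-picked for this toy image
log_resp = gaussian_laplace(img, sigma=2.0)
boundary = np.abs(log_resp) > 0.02
```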
Learning Curves: Making Quality Online Health Information Available at a Fitness Center.
Dobbins, Montie T; Tarver, Talicia; Adams, Mararia; Jones, Dixie A
2012-01-01
Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center - Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges.
The Carnegie Supernova Project: Light-curve Fitting with SNooPy
NASA Astrophysics Data System (ADS)
Burns, Christopher R.; Stritzinger, Maximilian; Phillips, M. M.; Kattner, ShiAnne; Persson, S. E.; Madore, Barry F.; Freedman, Wendy L.; Boldt, Luis; Campillay, Abdo; Contreras, Carlos; Folatelli, Gaston; Gonzalez, Sergio; Krzeminski, Wojtek; Morrell, Nidia; Salgado, Francisco; Suntzeff, Nicholas B.
2011-01-01
In providing an independent measure of the expansion history of the universe, the Carnegie Supernova Project (CSP) has observed 71 high-z Type Ia supernovae (SNe Ia) in the near-infrared bands Y and J. These can be used to construct rest-frame i-band light curves which, when compared to a low-z sample, yield distance moduli that are less sensitive to extinction and/or decline-rate corrections than in the optical. However, working with NIR observed and i-band rest-frame photometry presents unique challenges and has necessitated the development of a new set of observational tools in order to reduce and analyze both the low-z and high-z CSP sample. We present in this paper the methods used to generate uBVgriYJH light-curve templates based on a sample of 24 high-quality low-z CSP SNe. We also present two methods for determining the distances to the hosts of SN Ia events. A larger sample of 30 low-z SNe Ia in the Hubble flow is used to calibrate these methods. We then apply the method and derive distances to seven galaxies that are so nearby that their motions are not dominated by the Hubble flow.
Algorithms for l2 and l-infinity transfer function curve fitting
NASA Technical Reports Server (NTRS)
Spanos, John T.
1991-01-01
In this paper, algorithms for fitting transfer functions to frequency response data are developed. Given a complex vector representing the measured frequency response of a physical system, a transfer function of specified order is determined that minimizes either of two criteria: (1) the sum of the magnitude-squared frequency response errors, or (2) the magnitude of the maximum error. Both criteria are nonlinear in the coefficients of the unknown transfer function, and iterative minimization algorithms are proposed. A numerical example demonstrates the effectiveness of the proposed algorithms.
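A common starting point for such l2 fits is Levy's linearized least-squares formulation, which multiplies through by the denominator polynomial so the rational fit becomes a single linear solve. The sketch below fits a second-order system to noise-free synthetic data; it illustrates the linearization only, not the paper's iterative algorithms:

```python
import numpy as np

w = np.linspace(0.1, 10.0, 100)
s = 1j * w
H = 1.0 / (s**2 + 0.5 * s + 1.0)               # synthetic measured response

# Model b0 / (s^2 + a1*s + a0); Levy linearization:
#   b0 - a1*(H*s) - a0*H = H*s^2, linear in [b0, a1, a0]
M = np.column_stack([np.ones_like(s), -H * s, -H])
theta, *_ = np.linalg.lstsq(M, H * s**2, rcond=None)
b0, a1, a0 = theta.real
```

The linearized criterion weights errors by |A(jw)|, which biases the fit at high frequency; iterative reweighting (as in the l2 algorithms above) removes that bias.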
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
U-Shaped Curves in Development: A PDP Approach
ERIC Educational Resources Information Center
Rogers, Timothy T.; Rakison, David H.; McClelland, James L.
2004-01-01
As the articles in this issue attest, U-shaped curves in development have stimulated a wide spectrum of research across disparate task domains and age groups and have provoked a variety of ideas about their origins and theoretical significance. In the authors' view, the ubiquity of the general pattern suggests that U-shaped curves can arise from…
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curves using a nonlinear regression method based on the Levenberg-Marquardt algorithm. Statistical treatment of the titration curve data showed that the best fit was obtained by assuming the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool for characterizing the complex acid-base properties of these interesting and intriguing nanoparticles.
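A Levenberg-Marquardt fit of a multi-group ionization model, in the spirit of the titration-curve fitting described above, can be sketched with two acid groups (the paper resolves five); the data, fractions, and starting values are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

# Fraction of acid groups deprotonated for a mix of two monoprotic acids
def alpha(pH, f1, pka1, pka2):
    return f1 / (1 + 10 ** (pka1 - pH)) + (1 - f1) / (1 + 10 ** (pka2 - pH))

pH = np.linspace(2.0, 11.0, 46)
data = alpha(pH, 0.6, 4.5, 9.0)                 # synthetic titration-derived curve

# Levenberg-Marquardt via method='lm' (unbounded, residual count >= parameters)
fit = least_squares(lambda p: alpha(pH, *p) - data,
                    x0=[0.5, 4.0, 8.5], method='lm')
f1_fit, pka1_fit, pka2_fit = fit.x
```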
Flickner, M; Hafner, J; Rodriguez, E J; Sanz, J C
1996-01-01
Presents a new covariant basis, dubbed the quasi-orthogonal Q-spline basis, for the space of n-degree periodic uniform splines with k knots. This basis is obtained analogously to the B-spline basis by scaling and periodically translating a single spline function of bounded support. The construction hinges on an important theorem involving the asymptotic behavior (in the dimension) of the inverse of banded Toeplitz matrices. The authors show that the Gram matrix for this basis is nearly diagonal, hence, the name "quasi-orthogonal". The new basis is applied to the problem of approximating closed digital curves in 2D images by least-squares fitting. Since the new spline basis is almost orthogonal, the least-squares solution can be approximated by decimating a convolution between a resolution-dependent kernel and the given data. The approximating curve is expressed as a linear combination of the new spline functions and new "control points". Another convolution maps these control points to the classical B-spline control points. A generalization of the result has relevance to the solution of regularized fitting problems.
Computer user's manual for a generalized curve fit and plotting program
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.
1973-01-01
A FORTRAN-coded program has been developed for generating plotted output graphs on 8-1/2 by 11-inch paper. The program is designed to be used by engineers, scientists, and non-programming personnel on any IBM 1130 system that includes a 1627 plotter. The program provides a fast and efficient method of displaying plotted data without additional programming. Various output options are available to the program user for displaying data in four different types of formatted plots, including discrete, linear, continuous, and histogram graphical outputs. The manual contains information about the use and operation of the program. A mathematical description of the least-squares goodness-of-fit test is presented. A program listing is also included.
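The least-squares goodness-of-fit idea the manual documents can be illustrated with a coefficient-of-determination check on a straight-line fit (synthetic data; a modern sketch, not the manual's FORTRAN implementation):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)   # synthetic noisy line

coeffs = np.polyfit(x, y, 1)                       # least-squares line
resid = y - np.polyval(coeffs, x)
# R^2: fraction of variance explained by the fit
r_squared = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
```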
NASA Astrophysics Data System (ADS)
Navascues, M. A.; Sebastian, M. V.
Fractal interpolants of Barnsley are defined for any continuous function defined on a real compact interval. The uniform distance between the function and its approximant is bounded in terms of the vertical scale factors. As a general result, the density of the affine fractal interpolation functions of Barnsley in the space of continuous functions in a compact interval is proved. A method of data fitting by means of fractal interpolation functions is proposed. The procedure is applied to the quantification of cognitive brain processes. In particular, the increase in the complexity of the electroencephalographic signal produced by the execution of a test of visual attention is studied. The experiment was performed on two types of children: a healthy control group and a set of children diagnosed with an attention deficit disorder.
Lin, Shan-Yang; Hsu, Cheng-Hung; Sheu, Ming-Thau
2010-11-02
The formation steps of the inclusion complex formed by co-grinding loratadine (LOR) and hydroxypropyl-beta-cyclodextrin (HP-beta-CD) at a molar ratio of 1:1 or 1:2 were quantitatively investigated by Fourier transform infrared (FTIR) spectroscopy with curve-fitting analysis and differential scanning calorimetry (DSC). The phase solubility and the co-evaporated solid products of the LOR/HP-beta-CD mixture were also examined. The results indicate that the aqueous solubility of LOR increased linearly with the HP-beta-CD concentration, so the phase solubility diagram was classified as A(L) type. The high apparent stability constant (2.22 x 10(4) M(-1)) reveals that the inclusion complex formed between LOR and HP-beta-CD was quite stable. The endothermic peak at 134.6 degrees C corresponding to the melting point of LOR gradually disappeared from the DSC curves of LOR/HP-beta-CD coground mixtures with increasing cogrinding time, as it did for the co-evaporated solid products. The disappearance of this endothermic peak from the LOR/HP-beta-CD coground mixture or the co-evaporated solid products was due to inclusion complex formation between LOR and HP-beta-CD after the cogrinding process or evaporation. Moreover, IR peaks related to LOR in the inclusion complex at 1676 cm(-1), down-shifted from 1703 cm(-1) (C=O stretching), and at 1235 cm(-1), up-shifted from 1227 cm(-1) (C-O stretching), were observed with increasing cogrinding time, while the peak at 1646 cm(-1) due to O-H stretching of HP-beta-CD shifted to 1640 cm(-1). The IR spectrum of the 15 min-coground mixture was the same as that of the co-evaporated solid product, strongly indicating that the grinding process could cause inclusion complex formation between LOR and HP-beta-CD. Three components (1700, 1676, and 1640 cm(-1)) and their compositions were obtained in the 1740-1600 cm(-1) region of the FTIR spectra for the LOR/HP-beta-CD coground mixture and the co
Wang, Yi-Bing; Chen, Zhi-Cheng; Wu, Wei-Hong; Kong, De-Xin; Huang, Rong-Shao; Liu, Jun-Xian; Huang, Shu-Shi
2010-04-01
Fuzzy cluster and curve-fitting methods combined with FTIR were used to determine the origins of Herba Abri cantoniensis and Herba Abri mollis. The spectra of Herba Abri cantoniensis and Herba Abri mollis are similar, both with typical spectral shapes. The spectra can be divided into 3 parts: the 1st is 3 500-2 800 cm(-1), containing stretching bands of -OH, N-H, and CH2; the 2nd is 1 800-800 cm(-1), containing stretching bands of the ester carbonyl group and indican C-O(H), and vibrational bands of C=C and the benzene ring; the 3rd is 800-400 cm(-1), containing skeletal and scissoring vibrations of the molecule. The recorded FTIR spectral data were processed sequentially by 9-point smoothing, 1st derivative, SNV and fuzzy cluster analysis. The fuzzy cluster analysis was carried out on similarity or dissimilarity matrices computed with the Manhattan and Euclidean distances. The results indicated that the optimal combination was the Manhattan distance with the dissimilarity matrix: the 5 origins of Herba Abri cantoniensis were perfectly discriminated, but 2 origins of Herba Abri mollis were mixed and only partially identified from the other 3 origins. The characteristic bands at 1 034 cm(-1) of the average 1-D spectra of Herba Abri cantoniensis and Herba Abri mollis were therefore fitted, combined with the 2nd derivative, to further distinguish their spectral characteristics. The curve-fitting results showed that the bands of wild Herba Abri cantoniensis and those of the other origin were decomposed into 11 and 9 component bands respectively, while the bands of Shanglin and the other origins of Herba Abri mollis were decomposed into 9 and 8 component bands, and the locations and normalized densities of these component bands differed. From this, together with the results of the fuzzy cluster analysis, it is concluded that the combination of the two methods can effectively identify the origins of Herba Abri cantoniensis and Herba Abri mollis.
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation; therefore, conventional approaches to identifying neuron structure from EM images are not successful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we use the Laplacian of Gaussian (LoG) function to obtain structure boundaries; finally, we assemble the detected boundaries with a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements for structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation, so conventional approaches to identifying neuron structures in EM images are unsuccessful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) to obtain structure boundaries; finally, we assemble the detected boundaries with a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, where its decisions are supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements during structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging
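The multi-scale LoG detection and fusion step described above can be sketched roughly as follows. This is an illustrative approximation, not the authors' implementation: it substitutes a ladder of LoG widths for an explicit Gaussian pyramid, fuses responses by a per-pixel maximum, and the function name, scales, and mean-based threshold are all invented for the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_log_boundaries(image, sigmas=(1.0, 2.0, 4.0)):
    """Detect structure boundaries at several scales with the Laplacian
    of Gaussian and fuse the responses by a per-pixel maximum."""
    responses = []
    for sigma in sigmas:
        # Negate the LoG so boundaries of bright structures respond positively.
        r = -gaussian_laplace(image.astype(float), sigma=sigma)
        # Rescale each scale's response to [0, 1] so scales are comparable.
        r = (r - r.min()) / (np.ptp(r) + 1e-12)
        responses.append(r)
    fused = np.max(responses, axis=0)   # max-fusion across scales
    return fused > fused.mean()         # crude placeholder threshold
```

A real pipeline would replace the final threshold with the paper's fusion and gap-amendment stages; the sketch only shows the pyramid-of-scales idea.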
Mathematical Modeling of Allelopathy. III. A Model for Curve-Fitting Allelochemical Dose Responses
Liu, De Li; An, Min; Johnson, Ian R.; Lovett, John V.
2003-01-01
Bioassay techniques are often used to study the effects of allelochemicals on plant processes, and it is generally observed that the processes are stimulated at low allelochemical concentrations and inhibited as the concentrations increase. A simple empirical model is presented to analyze this type of response. The stimulation-inhibition properties of allelochemical-dose responses can be described by the parameters in the model. The indices, p% reductions, are calculated to assess the allelochemical effects. The model is compared with experimental data for the response of lettuce seedling growth to Centaurepensin, the olfactory response of weevil larvae to α-terpineol, and the responses of annual ryegrass (Lolium multiflorum Lam.), creeping red fescue (Festuca rubra L., cv. Ensylva), Kentucky bluegrass (Poa pratensis L., cv. Kenblue), perennial ryegrass (L. perenne L., cv. Manhattan), and Rebel tall fescue (F. arundinacea Schreb) seedling growth to leachates of Rebel and Kentucky 31 tall fescue. The results show that the model gives a good description of the observations and can be used to fit a wide range of dose responses. Assessments of the effects of leachates of Rebel and Kentucky 31 tall fescue clearly differentiate the properties of the allelopathic sources and the relative sensitivities of indicators such as the length of root and leaf. PMID:19330111
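The paper's own model is not reproduced here; as a stand-in, the widely used Brain-Cousens hormesis model shows the same stimulation-at-low-dose, inhibition-at-high-dose shape being fitted by ordinary nonlinear least squares. All data and parameter values below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def brain_cousens(x, d, f, e, b):
    """Hormetic dose response: y rises above the control level d for small
    doses (f > 0), then is inhibited as the (x/e)**b term dominates."""
    return (d + f * x) / (1.0 + (x / e) ** b)

# synthetic bioassay: seedling root length vs allelochemical dose
rng = np.random.default_rng(0)
dose = np.linspace(0.01, 10.0, 40)
obs = brain_cousens(dose, 10.0, 4.0, 2.0, 3.0) + rng.normal(0, 0.1, 40)

popt, _ = curve_fit(brain_cousens, dose, obs,
                    p0=[8.0, 1.0, 1.0, 2.0], maxfev=10000)
resid = obs - brain_cousens(dose, *popt)
```

From the fitted curve one can then read off quantities like the p% reduction doses the abstract mentions, by solving brain_cousens(x, *popt) = (1 - p/100) * popt[0] numerically.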
Feature Detection and Curve Fitting Using Fast Walsh Transforms for Shock Tracking: Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2017-01-01
Walsh functions form an orthonormal basis set consisting of square waves. Square waves make the system well suited for detecting and representing functions with discontinuities. Given a uniform distribution of 2^p cells on a one-dimensional element, it has been proven that the inner product of the Walsh Root function for group p with every polynomial of degree ≤ (p - 1) across the element is identically zero. It has also been proven that the magnitude and location of a discontinuous jump, as represented by a Heaviside function, are explicitly identified by its Fast Walsh Transform (FWT) coefficients. These two proofs enable an algorithm that quickly provides a Weighted Least Squares fit to distributions across the element that include a discontinuity. The detection of a discontinuity enables analytic relations to locally describe its evolution and provide increased accuracy. Time accurate examples are provided for advection, Burgers equation, and Riemann problems (diaphragm burst) in closed tubes and de Laval nozzles. New algorithms to detect up to two C0 and/or C1 discontinuities within a single element are developed for application to the Riemann problem, in which a contact discontinuity and shock wave form after the diaphragm bursts.
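A minimal fast Walsh-Hadamard transform (natural/Hadamard ordering, not the sequency ordering some texts use) illustrates the jump-detection property: for a Heaviside step on 2^p cells, only two coefficients are nonzero, and they identify the jump's magnitude and location. The helper name is illustrative.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform, natural (Hadamard) ordering.
    Input length must be a power of two; O(n log n) butterflies."""
    a = np.asarray(a, dtype=float).copy()
    n, h = a.size, 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference of paired entries
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

step = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)  # Heaviside jump
coeffs = fwht(step)   # only coefficients 0 and 4 are nonzero
```

The transform is (up to scaling) its own inverse: applying fwht twice returns n times the input, which is a convenient correctness check.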
A robust polynomial fitting approach for contact angle measurements.
Atefi, Ehsan; Mann, J Adin; Tavana, Hossein
2013-05-14
Polynomial fitting to the drop profile offers an alternative to well-established drop shape techniques for contact angle measurements from sessile drops without a need for liquid physical properties. Here, we evaluate the accuracy of contact angles resulting from fitting polynomials of various orders to drop profiles in a Cartesian coordinate system, over a wide range of contact angles. We develop a differentiator mask to automatically find the range of pixels from a drop profile over which a stable contact angle is obtained. The polynomial order that results in the longest stable regime and returns the lowest standard error and the highest correlation coefficient is selected to determine drop contact angles. We find that, unlike previous reports, a single polynomial order cannot be used to accurately estimate a wide range of contact angles and that a larger order polynomial is needed for drops with larger contact angles. Our method returns contact angles with an accuracy of <0.4° for solid-liquid systems with θ < ~60°. This compares well with the axisymmetric drop shape analysis-profile (ADSA-P) methodology results. Above about 60°, we observe significant deviations from ADSA-P results, most likely because a polynomial cannot trace the profile of drops with close-to-vertical and vertical segments. To overcome this limitation, we implement a new polynomial fitting scheme by transforming drop profiles into a polar coordinate system. This eliminates the well-known problem with high curvature drops and enables estimating contact angles in a wide range with a fourth-order polynomial. We show that this approach returns dynamic contact angles with less than 0.7° error as compared to ADSA-P, for the solid-liquid systems tested. This new approach is a powerful alternative to drop shape techniques for estimating contact angles of drops regardless of drop symmetry and without a need for liquid properties.
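The Cartesian variant of the procedure reduces to: fit a polynomial y(x) to the profile near the contact point and take the tangent's slope there. A sketch on a noise-free synthetic circular-cap drop (the paper's differentiator mask and ADSA-P comparison are not reproduced, and the sampling window is an assumption):

```python
import numpy as np

# synthetic spherical-cap drop: circle of radius R centred at (0, c), c < 0,
# resting on the surface y = 0; true contact angle theta = arccos(-c / R)
R, theta_true = 1.0, np.deg2rad(50.0)
c = -R * np.cos(theta_true)
x_c = np.sqrt(R**2 - c**2)                 # contact-point abscissa

# sample the profile y(x) = c + sqrt(R^2 - x^2) near the contact point
x = np.linspace(0.3 * x_c, x_c, 50)
y = c + np.sqrt(R**2 - x**2)

p = np.polyfit(x, y, 4)                    # fourth-order polynomial fit
slope = np.polyval(np.polyder(p), x_c)     # dy/dx at the contact point
theta_fit = np.arctan(-slope)              # slope is negative at this side
```

For angles approaching 90° the profile becomes vertical and y(x) is no longer single-valued, which is exactly the failure mode the polar-coordinate scheme in the abstract is designed to avoid.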
Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila
2005-10-01
Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is the receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) for parametric ROC curves via the bootstrap. A simple log (or logit) and a more flexible Box-Cox normality transformations were applied to untransformed or transformed data from two clinical studies to predict complications following percutaneous coronary interventions (PCIs) and for image-guided neurosurgical resection results predicted by tumor volume, respectively. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03), respectively. In both studies the p values suggested that transformations were important to consider before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive values of different classifiers for determining the interventional complications following PCIs and resection outcomes in image-guided neurosurgery.
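The flavor of such a bootstrap GOF test can be sketched by comparing a nonparametric (Mann-Whitney) AUC with a parametric binormal AUC; the function names and the simple centred-bootstrap p-value below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

def auc_mw(neg, pos):
    """Nonparametric AUC: Mann-Whitney estimate of P(pos > neg)."""
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (neg.size * pos.size)

def auc_binormal(neg, pos):
    """Parametric AUC assuming Gaussian scores in both classes."""
    return norm.cdf((pos.mean() - neg.mean())
                    / np.sqrt(neg.var(ddof=1) + pos.var(ddof=1)))

def gof_pvalue(neg, pos, n_boot=500, seed=0):
    """Crude GOF check: is the binormal AUC consistent with the
    nonparametric AUC? A small p-value flags binormal misfit."""
    rng = np.random.default_rng(seed)
    d_obs = auc_mw(neg, pos) - auc_binormal(neg, pos)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        nb = rng.choice(neg, size=neg.size, replace=True)
        pb = rng.choice(pos, size=pos.size, replace=True)
        diffs[i] = auc_mw(nb, pb) - auc_binormal(nb, pb)
    # two-sided: how often a centred bootstrap difference is as extreme
    return np.mean(np.abs(diffs - diffs.mean()) >= abs(d_obs))
```

In the spirit of the abstract, one would run this before and after a log, logit, or Box-Cox transformation of the scores and keep the transformation that removes the misfit.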
NASA Astrophysics Data System (ADS)
Thomas, Christian L.
2006-06-01
Analysis and results (Chapters 2-5) of the full 7 year MACHO Project dataset toward the Galactic bulge are presented. A total of 450 high quality, relatively large signal-to-noise ratio events are found, including several events exhibiting exotic effects, and lensing events on possible Sagittarius dwarf galaxy stars. We examine the problem of blending in our sample and conclude that the subset of red clump giants is minimally blended. Using 42 red clump giant events near the Galactic center we calculate the optical depth toward the Galactic bulge to be t = [Special characters omitted.] × 10^-6 at (l, b) = ([Special characters omitted.]) with a gradient of (1.06 ± 0.71) × 10^-6 deg^-1 in latitude, and (0.29 ± 0.43) × 10^-6 deg^-1 in longitude, bringing measurements into consistency with the models for the first time. In Chapter 6 we reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g. Wozniak & Paczynski) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth. In Chapter 7 we present work-in-progress on the possibility of correcting standard candle luminosities for the magnification due to weak lensing. We consider the importance of lenses in different mass ranges and look at the contribution
Xiong, Jiang; Wang, Shen Ming; Zhou, Wei; Wu, Jan Guo
2008-07-01
The maximal strain, stress, elastic modulus, and stress-strain curve fitting of abdominal aortic aneurysms (AAA) and bidirectional nonaneurysmal abdominal aorta (NAA) were measured and analyzed to obtain the ultimate mechanical properties, the closer stress-strain curve fit, and the elastic modulus formula of AAA and NAA. Fourteen human AAA samples were harvested from patients undergoing elective aneurysm repair. Twelve NAA samples, comprising six longitudinal-circumferential pairs of NAA from six cadaveric organ donors, were used as controls. Samples were mounted on a tensile-testing machine and force was applied until breakage occurred. The maximal strain, stress, and elastic modulus were calculated and a stress-strain curve was plotted for each sample. Exponential and second-order polynomial curves were used to fit the stress-strain curve, and the means were estimated by comparing the R² (the coefficient of determination, which represents the strength of a curve fit). Coefficients of elastic modulus were calculated and analyzed, and the incremental tendency of each modulus was evaluated by comparing the difference of coefficients. There was no significant difference in maximal stress among AAA, circumferential aortic aneurysms (CAA), and longitudinal aortic aneurysms (LAA). However, AAA maximal strain was significantly less (P < .01) than that of bidirectional NAA. AAA maximal elastic modulus was significantly greater than that of CAA and LAA (P < .01 and .05, respectively). R² of AAA for the second-order polynomial curve was significantly greater (P < .05) than that for the exponential curve. For the elastic modulus formula from the second-order polynomial curve, E = 2ax + b, the average value of a for the AAA was significantly greater (P < .01) than that for the bidirectional NAA, but there was no significant difference (P > .05) among the three groups for the average value of b. Tensile test measurements can successfully analyze ultimate mechanical
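The second-order polynomial fit and the tangent elastic modulus E = 2ax + b used above are straightforward to reproduce; the strain range and coefficients below are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np

# synthetic stress-strain data following sigma = a*eps^2 + b*eps + c
eps = np.linspace(0.0, 0.30, 50)       # strain (dimensionless)
sigma = 800.0 * eps**2 + 40.0 * eps    # stress, arbitrary units

a, b, c = np.polyfit(eps, sigma, 2)    # second-order polynomial fit

def modulus(strain):
    """Tangent elastic modulus of the fitted quadratic: E = 2a*eps + b."""
    return 2.0 * a * strain + b
```

Comparing this quadratic's R² against an exponential fit of the same data mirrors the model-selection step described in the abstract.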
Approaches to measure the fitness of Burkholderia cepacia complex isolates.
Pope, C F; Gillespie, S H; Moore, J E; McHugh, T D
2010-06-01
Members of the Burkholderia cepacia complex (Bcc) are highly resistant to many antibacterial agents and infection can be difficult to eradicate. A coordinated approach has been used to measure the fitness of Bcc bacteria isolated from cystic fibrosis (CF) patients with chronic Bcc infection using methods relevant to Bcc growth and survival conditions. Significant differences in growth rate were observed among isolates; slower growth rates were associated with isolates that exhibited higher MICs and were resistant to more antimicrobial classes. The nucleotide sequences of the quinolone resistance-determining region of gyrA in the isolates were determined and the ciprofloxacin MIC correlated with amino acid substitutions at codons 83 and 87. Biologically relevant methods for fitness measurement were developed and could be applied to investigate larger numbers of clinical isolates. These methods were determination of planktonic growth rate, biofilm formation, survival in water and survival during drying. We also describe a method to determine mutation rate in Bcc bacteria. Unlike in Pseudomonas aeruginosa where hypermutability has been detected in strains isolated from CF patients, we were unable to demonstrate hypermutability in this panel of Burkholderia cenocepacia and Burkholderia multivorans isolates.
Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.
ERIC Educational Resources Information Center
Buchanan, Trey
Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, it has a fatal defect that it can’t fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
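The contrast the authors draw is easy to demonstrate on synthetic data: a Gaussian kernel density estimate tracks two interior humps, whereas a single Beta density has at most one interior mode and so cannot place both. The sample parameters below are invented.

```python
import numpy as np
from scipy.stats import beta, gaussian_kde

rng = np.random.default_rng(1)
# synthetic bimodal recovery rates: a low- and a high-recovery cluster
rates = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])

# single Beta fit on [0, 1] vs Gaussian kernel density estimate
a_hat, b_hat, _, _ = beta.fit(rates, floc=0, fscale=1)
kde = gaussian_kde(rates)

grid = np.linspace(0.0, 1.0, 201)
beta_pdf = beta.pdf(grid, a_hat, b_hat)   # forced unimodal or U-shaped
kde_pdf = kde(grid)                       # follows both interior humps
```

A chi-square comparison of binned counts against each fitted density, as in the abstract, would then formalize which estimate matches the sample.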
NASA Astrophysics Data System (ADS)
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
NASA Astrophysics Data System (ADS)
Hanafiah, Hazlenah; Jemain, Abdul Aziz
2013-11-01
In recent years, the study of fertility has received considerable attention among researchers abroad, following fears of fertility deterioration driven by rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied in this study as it is an established and widely used model for analysing demographic aspects. A singular value decomposition approach is incorporated with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to measure the goodness of fit of the model. The fertility index is then forecast using a random walk with drift to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable data fitting. In addition, there is an apparent and continuous decline in age-specific fertility curves over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of related facilities and resources.
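A minimal Lee-Carter decomposition via SVD, with the usual identification constraints (b_x sums to one, k_t sums to zero), plus a one-line random-walk-with-drift forecast of the index. All rates below are synthetic, and the helper names are illustrative.

```python
import numpy as np

def lee_carter(log_rates):
    """Lee-Carter: log m(x,t) = a_x + b_x * k_t, fitted by rank-1 SVD.
    Rows are ages, columns are years."""
    a_x = log_rates.mean(axis=1)
    U, s, Vt = np.linalg.svd(log_rates - a_x[:, None], full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()          # normalise so sum(b_x) = 1
    k_t = s[0] * Vt[0] * U[:, 0].sum()     # then sum(k_t) = 0
    return a_x, b_x, k_t

def forecast_k(k_t, horizon):
    """Random walk with drift: extrapolate the fertility index."""
    drift = (k_t[-1] - k_t[0]) / (k_t.size - 1)
    return k_t[-1] + drift * np.arange(1, horizon + 1)

# synthetic age-by-year log fertility rates with exact rank-1 structure
ages, years = 7, 30
b_true = np.linspace(0.05, 0.25, ages); b_true /= b_true.sum()
k_true = np.linspace(4.0, -4.0, years)
log_m = -2.0 + np.outer(b_true, k_true)

a_x, b_x, k_t = lee_carter(log_m)
recon = a_x[:, None] + np.outer(b_x, k_t)
```

In the study's setting, the drift forecast would typically be replaced by a full ARIMA model for k_t; the random walk with drift is the classical Lee-Carter default.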
Beyond the Ubiquitous Relapse Curve: A Data-Informed Approach
Zywiak, William H.; Kenna, George A.; Westerberg, Verner S.
2011-01-01
Relapse to alcohol and other substances has generally been described by curves that resemble one another. However, these curves have been generated from the time to first use after a period of abstinence without regard to the movement of individuals into and out of drug use. Instead of measuring continuous abstinence, we considered post-treatment functioning as a more complicated phenomenon, describing how people move in and out of drinking states on a monthly basis over the course of a year. When we looked at time to first drink we observed the ubiquitous relapse curve. When we classified clients (N = 550) according to drinking state, however, they frequently moved from one state to another, with both the abstinent and very heavy drinking states being rather stable, and light or moderate drinking and heavy drinking being unstable. We found that clients with a family history of alcoholism were less likely to experience these unstable states. When we examined the distribution of cases crossed by the number of times clients switched states we found that a power function explained 83% of that relationship. Some of the remainder of the variance seems to be explained by the stable states of very heavy drinking and abstinence acting as attractors. PMID:21556282
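The power function mentioned above (the count of clients falling off with the number of state switches) can be fitted by ordinary least squares in log-log space. The counts below are invented for illustration, not the study's data.

```python
import numpy as np

# hypothetical counts: number of clients who switched states s times
switches = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
clients = np.array([200, 95, 60, 40, 30, 24, 19, 16], dtype=float)

# power law clients = c * switches**(-k) is linear in log-log space
slope, intercept = np.polyfit(np.log(switches), np.log(clients), 1)
k, c = -slope, np.exp(intercept)

# variance explained (in log space) by the fitted power function
pred = c * switches ** (-k)
ss_res = np.sum((np.log(clients) - np.log(pred)) ** 2)
ss_tot = np.sum((np.log(clients) - np.log(clients).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Systematic positive residuals at the extremes of such a fit would be consistent with the abstract's suggestion that the abstinent and very heavy drinking states act as attractors.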
NASA Astrophysics Data System (ADS)
Liang, Fusheng; Zhao, Ji; Ji, Shijun; Fan, Cheng; Zhang, Bing
2017-06-01
The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector affects the number of knots ultimately necessary. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features of the discrete sampling points, namely the chord length (arc length) and bending degree (curvature). Firstly, the sampling points are fitted into an approximate B-spline curve Gs with a densely uniform knot vector to stand in for the feature description of the sampling points. The feature integral of Gs is built as a monotone increasing function in an analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots ultimately required.
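The feature-integral idea (spend knots where chord length and bending accumulate) can be sketched without the full KAM/IKI machinery. The function, curvature weighting, and discrete feature density below are illustrative assumptions.

```python
import numpy as np

def initial_knots(points, n_knots, w_curv=1.0):
    """Pick indices of initial knot sites so that each span carries an
    equal share of a chord-length + bending-degree feature integral."""
    pts = np.asarray(points, dtype=float)
    d1 = np.gradient(pts, axis=0)
    d2 = np.gradient(d1, axis=0)
    speed = np.hypot(d1[:, 0], d1[:, 1])
    # planar curvature |x'y'' - y'x''| / speed**3
    curv = (np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
            / np.maximum(speed, 1e-12) ** 3)
    feature = speed * (1.0 + w_curv * curv)   # per-sample feature density
    F = np.concatenate([[0.0], np.cumsum(feature[:-1])])
    F /= F[-1]                                # normalised feature integral
    targets = np.linspace(0.0, 1.0, n_knots)
    return np.searchsorted(F, targets, side="left")
```

On a straight segment the curvature term vanishes and the knots fall back to (nearly) uniform arc-length spacing, while on bends the knots cluster, which is the behaviour the abstract describes.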
A Comprehensive Approach for Assessing Person Fit with Test-Retest Data
ERIC Educational Resources Information Center
Ferrando, Pere J.
2014-01-01
Item response theory (IRT) models allow model-data fit to be assessed at the individual level by using person-fit indices. This assessment is also feasible when IRT is used to model test-retest data. However, person-fit developments for this type of modeling are virtually nonexistent. This article proposes a general person-fit approach for…
A new approach to the analysis of Mira light curves
NASA Technical Reports Server (NTRS)
Mennessier, M. O.; Barthes, D.; Mattei, J. A.
1990-01-01
Two different but complementary methods for predicting Mira luminosities are presented. One method is derived from Fourier analysis; it requires performing deconvolution, and its results are uncertain due to the inherent instability of deconvolution problems. The other is a learning method utilizing artificial intelligence techniques, in which a light curve is represented as an ordered sequence of pseudocycles and rules are learned by linking the characteristics of several consecutive pseudocycles to a characteristic of the future cycle. It is observed that agreement between these methods is obtainable when it is possible to eliminate similar false frequencies from the preliminary power spectrum and to improve the degree of confidence in the rules.
Fitting of m*/m with Divergence Curve for He3 Fluid Monolayer using Hole-driven Mott Transition
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tak
2012-02-01
The electron-electron interaction in strongly correlated systems plays an important role in the formation of an energy gap in solids. The breakdown of the energy gap is called the Mott metal-insulator transition (MIT), which differs from the Peierls MIT induced by breakdown of the electron-phonon interaction generated by a change of the periodic lattice. It has been known that correlated systems are inhomogeneous. In particular, the He3 fluid monolayer [1] and La1-xSrxTiO3 [2] are representative strongly correlated systems. Their doping dependence of the effective mass of carriers in the metal, m*/m, indicating the magnitude of correlation (Coulomb interaction) between electrons, shows a divergence behavior. However, this divergence has not yet been fitted by a Mott-transition theory. In the case of He3, regarded as a Fermi system with one positive charge (2 electrons + 3 protons), the interaction between He3 atoms is regarded as the correlation in a strongly correlated system. In this presentation, we introduce a Hole-driven MIT with a divergence near the Mott transition [3] and fit the m*/m curves of the He3 [1] and La1-xSrxTiO3 [2] systems with the Hole-driven MIT, with m*/m = 1/(1-ρ^4), where ρ is the band filling. Moreover, it is shown that the physical meaning of the effective mass with the divergence is percolation, in which m*/m increases with increasing doping concentration, and that the magnitude of m*/m is constant.[4pt] [1] Phys. Rev. Lett. 90, 115301 (2003).[0pt] [2] Phys. Rev. Lett. 70, 2126 (1993).[0pt] [3] Physica C 341-348, 259 (2000); Physica C 460-462, 1076 (2007).
NASA Astrophysics Data System (ADS)
Marconi, M.; Molinaro, R.; Ripepi, V.; Cioni, M.-R. L.; Clementini, G.; Moretti, M. I.; Ragosta, F.; de Grijs, R.; Groenewegen, M. A. T.; Ivanov, V. D.
2017-04-01
We present the results of the χ2 minimization model fitting technique applied to optical and near-infrared photometric and radial velocity data for a sample of nine fundamental and three first overtone classical Cepheids in the Small Magellanic Cloud (SMC). The near-infrared photometry (J and Ks filters) was obtained by the European Southern Observatory (ESO) public survey 'VISTA near-infrared Y, J, Ks survey of the Magellanic Clouds system' (VMC). For each pulsator, isoperiodic model sequences have been computed by adopting a non-linear convective hydrodynamical code in order to reproduce the multifilter light and (when available) radial velocity curve amplitudes and morphological details. The inferred individual distances provide an intrinsic mean value for the SMC distance modulus of 19.01 mag and a standard deviation of 0.08 mag, in agreement with the literature. Moreover, the intrinsic masses and luminosities of the best-fitting models show that all these pulsators are brighter than the canonical evolutionary mass-luminosity relation (MLR), suggesting a significant efficiency of core overshooting and/or mass-loss. Assuming that the inferred deviation from the canonical MLR is only due to mass-loss, we derive the expected distribution of percentage mass-loss as a function of both the pulsation period and the canonical stellar mass. Finally, a good agreement is found between the predicted mean radii and current period-radius (PR) relations in the SMC available in the literature. The results of this investigation support the predictive capabilities of the adopted theoretical scenario and pave the way for the application to other extensive data bases at various chemical compositions, including the VMC Large Magellanic Cloud pulsators and Galactic Cepheids with Gaia parallaxes.
Buttchereit, N; Stamer, E; Junge, W; Thaller, G
2010-04-01
Selection for milk yield increases the metabolic load of dairy cows. The fat:protein ratio of milk (FPR) could serve as a measure of the energy balance status and might be used as a selection criterion to improve metabolic stability. The fit of different fixed and random regression models describing FPR and daily energy balance was tested to establish appropriate models for further genetic analyses. In addition, the relationship between both traits was evaluated for the best fitting model. Data were collected on a dairy research farm running a bull dam performance test. Energy balance was calculated using information on milk yield, feed intake per day, and live weight. Weekly FPR measurements were available. Three data sets were created containing records of 577 primiparous cows with observations from lactation d 11 to 180 as well as records of 613 primiparous cows and 96 multiparous cows with observations from lactation d 11 to 305. Five well-established parametric functions of days in milk (Ali and Schaeffer, Guo and Swalve, Wilmink, Legendre polynomials of third and fourth degree) were chosen for modeling the lactation curves. Evaluation of goodness of fit was based on the corrected Akaike information criterion, the Bayesian information criterion, correlation between the real observation and the estimated value, and on inspection of the residuals plotted against days in milk. The best model was chosen for estimation of correlations between both traits at different lactation stages. Random regression models were superior compared with the fixed regression models. In general, the Ali and Schaeffer function appeared most suitable for modeling both the fixed and the random regression part of the mixed model. The FPR is greatest in the initial lactation period when energy deficit is most pronounced. Energy balance stabilizes at the same point as the decrease in FPR stops. The inverted patterns indicate a causal relationship between the 2 traits. A common pattern was
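Since the Ali and Schaeffer function named above is linear in its coefficients, it can be fitted by ordinary least squares. The sketch below is an illustration only: the synthetic fat:protein-ratio data, the noise level, and the 305-day scaling are assumptions, not the paper's data, and the covariates use one common parameterization of the Ali and Schaeffer (1987) curve.

```python
import numpy as np

def ali_schaeffer_design(t, t_max=305.0):
    """Design matrix for the Ali and Schaeffer lactation curve.
    Covariates: 1, t/t_max, (t/t_max)^2, ln(t_max/t), ln(t_max/t)^2."""
    x = t / t_max
    w = np.log(t_max / t)
    return np.column_stack([np.ones_like(t), x, x**2, w, w**2])

# Synthetic FPR-like trajectory: elevated in early lactation, then flattening.
rng = np.random.default_rng(0)
t = np.arange(11.0, 306.0)                      # lactation d 11 to 305
truth = 1.1 + 0.4 * np.exp(-t / 40.0)           # hypothetical "true" FPR trend
y = truth + rng.normal(0.0, 0.02, t.size)

X = ali_schaeffer_design(t)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # fixed-regression part only
fitted = X @ beta
rmse = np.sqrt(np.mean((fitted - y) ** 2))
```

A random regression model, as used in the study, would additionally give each cow its own random deviation on these same covariates; the fixed-effects fit above is just the population-level curve.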
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore, it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
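The ambiguity the authors describe can be reproduced numerically. The sketch below is not the paper's own calculation: it simulates a single first-order (F1) non-isothermal curve with arbitrary rate parameters and analyses it with a Coats-Redfern-type linearization; the wrong diffusion model (D1) still yields a nearly straight plot but a very different apparent activation energy.

```python
import numpy as np

R_GAS = 8.314                          # J/(mol*K)
E_TRUE, A, BETA = 120e3, 1e9, 10.0     # J/mol, 1/min, K/min (arbitrary choices)

# Simulate a single non-isothermal conversion curve for a first-order (F1) model.
T = np.linspace(500.0, 800.0, 6001)
k = A * np.exp(-E_TRUE / (R_GAS * T))
dT = T[1] - T[0]
integral = np.concatenate(([0.0], np.cumsum((k[1:] + k[:-1]) / 2.0 * dT))) / BETA
alpha = 1.0 - np.exp(-integral)
mask = (alpha > 0.05) & (alpha < 0.95)          # usable conversion range
a, Tm = alpha[mask], T[mask]

def coats_redfern_E(g):
    """Apparent activation energy from the slope of ln(g(alpha)/T^2) vs 1/T."""
    slope, _ = np.polyfit(1.0 / Tm, np.log(g / Tm**2), 1)
    return -slope * R_GAS

E_f1 = coats_redfern_E(-np.log(1.0 - a))   # true kinetic model (F1)
E_d1 = coats_redfern_E(a**2)               # wrong model (D1) also fits a line
```

Both regressions are highly linear on a single curve, yet the apparent activation energies differ by roughly a factor of 1.5 or more, which is exactly why a set of curves at different heating schedules is needed.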
A computational approach to the twin paradox in curved spacetime
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng
2016-09-01
Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2002-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
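A minimal numerical sketch of steps two through four follows. The components below are synthetic stand-ins for IMFs (a full EMD sifting loop is beyond the scope of an abstract): the analytic signal from the Hilbert transform yields an instantaneous frequency, and removing the oscillatory components leaves a residual to which a curve can be fitted.

```python
import numpy as np
from scipy.signal import hilbert

# Two oscillatory components standing in for IMFs, plus a slow trend that
# curve fitting should recover once the oscillations are filtered out.
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
imf1 = 0.5 * np.sin(2 * np.pi * 50 * t)      # fast IMF-like component
imf2 = np.sin(2 * np.pi * 5 * t)             # slower IMF-like component
trend = 2.0 + 3.0 * t                        # underlying trend
signal = imf1 + imf2 + trend

# Step 2: instantaneous frequency of one component via its analytic signal.
analytic = hilbert(imf1)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi * np.diff(t))

# Steps 3-4: filter by removing the oscillatory components, then curve-fit.
filtered = signal - imf1 - imf2
slope, intercept = np.polyfit(t, filtered, 1)
```

Fitting a line to the raw `signal` would be distorted by the 5 Hz component; after filtering, the trend coefficients are recovered essentially exactly.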
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
Larion, Mioara; Miller, Brian G
2010-10-19
Human pancreatic glucokinase is a monomeric enzyme that displays kinetic cooperativity, a feature that facilitates enzyme-mediated regulation of blood glucose levels in the body. Two theoretical models have been proposed to describe the non-Michaelis-Menten behavior of human glucokinase. The mnemonic mechanism postulates the existence of one thermodynamically favored enzyme conformation in the absence of glucose, whereas the ligand-induced slow transition model (LIST) requires a preexisting equilibrium between two enzyme species that interconvert with a rate constant slower than turnover. To investigate whether either of these mechanisms is sufficient to describe glucokinase cooperativity, a transient-state kinetic analysis of glucose binding to the enzyme was undertaken. A complex, time-dependent change in enzyme intrinsic fluorescence was observed upon exposure to glucose, which is best described by an analytical solution comprised of the sum of four exponential terms. Transient-state glucose binding experiments conducted in the presence of increasing glycerol concentrations demonstrate that three of the observed rate constants decrease with increasing viscosity. Global fit analyses of experimental glucose binding curves are consistent with a kinetic model that is an extension of the LIST mechanism with a total of four glucose-bound binary complexes. The kinetic model presented herein suggests that glucokinase samples multiple conformations in the absence of ligand and that this conformational heterogeneity persists even after the enzyme associates with glucose.
Predicting Future Trends in Adult Fitness Using the Delphi Approach.
ERIC Educational Resources Information Center
Murray, William F.; Jarman, Boyd O.
1987-01-01
This study examines the future of adult fitness from the perspective of experts. The Delphi Technique was used as a measurement tool. Findings revealed that the experts relied most on increased awareness of health and fitness among the elderly as a significant predictor variable. (Author/CB)
Integrated healthcare networks' performance: a growth curve modeling approach.
Wan, Thomas T H; Wang, Bill B L
2003-05-01
This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.
NASA Astrophysics Data System (ADS)
Włosińska, M.; Niedzielski, T.; Priede, I. G.; Migoń, P.
2012-04-01
The poster reports ongoing investigations into hypsometric curve modelling and its implications for sea level change. Numerous large-scale geodynamic phenomena, including global tectonics and the related sea level changes, are well described by a hypsometric curve that quantifies how the area of sea floor varies with depth. Although the notion of a hypsometric curve is rather simple, it is difficult to provide a reasonable theoretical model that fits an empirical curve. An analytical equation for a hypsometric curve is well known, but its goodness-of-fit to an empirical one is far from perfect. Such limited accuracy may result either from not entirely adequate theoretical assumptions and concepts behind a theoretical hypsometric curve or from rather poorly modelled global bathymetry. Recent progress in obtaining accurate data on sea floor topography is due to subsea surveying and remote sensing. There are bathymetric datasets, including the General Bathymetric Chart of the Oceans (GEBCO), that provide a global framework for hypsometric curve computation. The recent GEBCO bathymetry - a gridded dataset consisting of a sea floor topography raster with global coverage at a spatial resolution of 30 arc-seconds - can be analysed to verify the depth-area relationship and to re-evaluate classical models of sea level change in geological time. Processing of the geospatial data is feasible with modern tools provided by Geographic Information Systems (GIS) and can be automated with Python, a programming language that allows the user to drive the GIS geoprocessor.
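The empirical depth-area computation itself is straightforward once a bathymetry grid is in hand. The sketch below uses a random grid as a stand-in for GEBCO (a real analysis would read the 30 arc-second raster and weight each cell by its latitude-dependent area):

```python
import numpy as np

# Stand-in bathymetry grid (depths in metres, positive downward). This is
# random illustration data, not GEBCO; real cells would be area-weighted.
rng = np.random.default_rng(1)
depth = rng.uniform(0.0, 6000.0, size=(180, 360))

# Empirical hypsometric curve: cumulative fraction of sea floor shallower
# than each depth level.
levels = np.linspace(0.0, 6000.0, 61)
counts, _ = np.histogram(depth, bins=levels)
cum_fraction = np.cumsum(counts) / depth.size
```

The resulting `(levels[1:], cum_fraction)` pairs are the empirical curve against which an analytical hypsometric model can be compared.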
Testing goodness of fit in regression: a general approach for specified alternatives.
Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J
2012-12-10
When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper allows us to treat different types of lack of fit within a unified general framework and to consider many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest.
MAPCLUS: A Mathematical Programming Approach to Fitting the ADCLUS Model.
ERIC Educational Resources Information Center
Arabie, Phipps
1980-01-01
A new computing algorithm, MAPCLUS (Mathematical Programming Clustering), for fitting the Shephard-Arabie ADCLUS (Additive Clustering) model is presented. Details and benefits of the algorithm are discussed. (Author/JKS)
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
Neitzel, Anne-Christin; Stamer, Eckhard; Junge, Wolfgang; Thaller, Georg
2015-05-01
Laboratory somatic cell count (LSCC) records are usually recorded monthly and provide an important information source for breeding and herd management. Daily milk viscosity detection in composite milking (expressed as drain time) with an automated on-line California Mastitis Test (CMT) could serve immediately as an early predictor of udder diseases and might be used as a selection criterion to improve udder health. The aim of the present study was to clarify the relationship between the well-established LSCS and the new trait, 'drain time', and to estimate their correlations with important production traits. Data were recorded on the dairy research farm Karkendamm in Germany. Viscosity sensors were installed on every fourth milking stall in the rotary parlour to measure daily drain time records. Weekly LSCC and milk composition data were available. Two data sets were created containing records of 187,692 milkings from 320 cows (D1) and 25,887 drain time records from 311 cows (D2). Different fixed effect models, describing the log-transformed drain time (logDT), were fitted to achieve applicable models for further analysis. Lactation curves were modelled with standard parametric functions (Ali and Schaeffer, Legendre polynomials of second and third degree) of days in milk (DIM). Random regression models were further applied to estimate the correlations of cow effects between logDT, LSCS, and further important production traits. LogDT and LSCS were most strongly correlated in mid-lactation (r = 0.78). Correlations between logDT and production traits were low to medium. The highest correlations were reached in late lactation between logDT and milk yield (r = -0.31), between logDT and protein content (r = 0.30), and in early as well as late lactation between logDT and lactose content (r = -0.28). The results of the present study show that drain time could be used as a new trait for daily mastitis control.
Guo, Lianping; Tian, Shulin; Jiang, Jun
2015-03-01
This paper proposes an algorithm to estimate the channel mismatches in a time-interleaved analog-to-digital converter (TIADC) based on fractional delay (FD) and sine curve fitting. One channel is chosen as the reference channel, and FD is applied to its output samples to obtain the ideal, mismatch-free samples of the non-reference channels. Based on the least squares method, sine curves are fitted to the ideal and the actual samples of the non-reference channels, and the mismatch parameters are then estimated by comparing the ideal sine curves with the actual ones. The principle of this algorithm is simple and easily understood. Moreover, its implementation needs no extra circuits, lowering the hardware cost. Simulation results show that the estimation accuracy of this algorithm can be controlled within 2%. Finally, the practicability of this algorithm is verified by measurements of the channel mismatch errors of a two-channel TIADC prototype.
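The core idea, fitting a sine of known frequency by linear least squares and comparing channels, can be sketched as follows (a hypothetical two-channel example with invented gain and phase errors, not the authors' implementation):

```python
import numpy as np

def fit_sine(t, y, freq):
    """Least-squares fit y ~ a*sin + b*cos + c at a known frequency.
    Returns (amplitude, phase, offset); linear in (a, b, c)."""
    M = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(M, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), c

# Hypothetical two-channel TIADC: the non-reference channel carries a 2% gain
# error and a 0.01 rad phase (timing) error relative to the reference channel.
fs, freq, n = 1.0e6, 12_345.0, 4096
t0 = np.arange(0, n, 2) / fs                  # reference channel sample times
t1 = np.arange(1, n, 2) / fs                  # non-reference channel sample times
ref = np.sin(2 * np.pi * freq * t0)
actual = 1.02 * np.sin(2 * np.pi * freq * t1 + 0.01)

amp_ref, ph_ref, _ = fit_sine(t0, ref, freq)
amp_act, ph_act, _ = fit_sine(t1, actual, freq)
gain_mismatch = amp_act / amp_ref - 1.0
phase_mismatch = ph_act - ph_act * 0.0 - ph_ref   # difference of fitted phases
```

Because the model is linear once the frequency is known, the fit is a closed-form least-squares solve rather than an iterative optimization, which matches the paper's emphasis on a simple, low-cost estimator.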
Effectiveness of a teleaudiology approach to hearing aid fitting.
Blamey, Peter J; Blamey, Jeremy K; Saunders, Elaine
2015-12-01
This research was conducted to evaluate the efficacy of an online speech perception test (SPT) for the measurement of hearing and hearing aid fitting in comparison with conventional methods. Phase 1 was performed with 88 people to evaluate the SPT for the detection of significant hearing loss. The SPT had high sensitivity (94%) and high selectivity (98%). In Phase 2, phonetic stimulus-response matrices derived from the SPT results for 408 people were used to calculate "Infograms™." At every frequency, there was a highly significant correlation (p < 0.001) between hearing thresholds derived from the Infogram and conventional audiograms. In Phase 3, initial hearing aid fittings were derived from conventional audiograms and Infograms for two groups of hearing impaired people. Unaided and aided SPTs were used to measure the perceptual benefit of the aids for the two groups. The mean increases between unaided and aided SPT scores were 19.6%, and 22.2% (n = 517, 484; t = 2.2; p < 0.05) for hearing aids fitted using conventional audiograms and Infograms respectively. The research provided evidence that the SPT is a highly effective tool for the detection and measurement of hearing loss and hearing aid fitting. Use of the SPT reduces the costs and increases the effectiveness of hearing aid fitting, thereby enabling a sustainable teleaudiology business model. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. A least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
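The residual-driven choice of curve-fit order described above can be illustrated with synthetic data (the cubic force-coefficient model and noise level below are assumptions, not the study's data):

```python
import numpy as np

# Hypothetical force-coefficient data with a cubic dependence on angle of attack.
rng = np.random.default_rng(2)
alpha = np.linspace(-10.0, 10.0, 50)
y = 0.1 + 0.05 * alpha + 0.002 * alpha**3 + rng.normal(0.0, 0.01, alpha.size)

# Increase the order of the least-squares fit until the residuals stop
# showing systematic structure (judged here by the drop in residual RMS).
rms = {}
for order in (1, 2, 3, 4):
    coeffs = np.polyfit(alpha, y, order)
    resid = y - np.polyval(coeffs, alpha)
    rms[order] = float(np.sqrt(np.mean(resid**2)))
```

The RMS drops sharply once the order reaches the true degree and then plateaus near the noise floor; an analysis of variance on the residuals formalizes the same comparison.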
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2016-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this article we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from two popular econometric approaches:…
Büchi, Dominik L; Ebler, Sabine; Hämmerle, Christoph H F; Sailer, Irena
2014-01-01
To test whether or not different types of CAD/CAM systems, processing zirconia in the densely and in the pre-sintered stage, lead to differences in the accuracy of 4-unit anterior fixed dental prosthesis (FDP) frameworks, and to evaluate the efficiency. 40 curved anterior 4-unit FDP frameworks were manufactured with four different CAD/CAM systems: DCS Precident (DCS) (control group), Cercon (DeguDent) (test group 1), Cerec InLab (Sirona) (test group 2), Kavo Everest (Kavo) (test group 3). The DCS System was chosen as the control group because the zirconia frameworks are processed in its densely sintered stage and there is no shrinkage of the zirconia during the manufacturing process. The initial fit of the frameworks was checked and adjusted to a subjectively similar level of accuracy by one dental technician, and the time taken for this was recorded. After cementation, the frameworks were embedded into resin and the abutment teeth were cut in mesiodistal and orobuccal directions in four specimens. The thickness of the cement gap was measured at 50× (internal adaptation) and 200× (marginal adaptation) magnification. The measurement of the accuracy was performed at four sites. Site 1: marginal adaptation, the marginal opening at the point of closest perpendicular approximation between the die and framework margin. Site 2: Internal adaptation at the chamfer. Site 3: Internal adaptation at the axial wall. Site 4: Internal adaptation in the occlusal area. The data were analyzed descriptively using the ANOVA and Bonferroni/ Dunn tests. The mean marginal adaptation (site 1) of the control group was 107 ± 26 μm; test group 1, 140 ± 26 μm; test group 2, 104 ± 40 μm; and test group 3, 95 ± 31 μm. Test group 1 showed a tendency to exhibit larger marginal gaps than the other groups, however, this difference was only significant when test groups 1 and 3 were compared (P = .0022; Bonferroni/Dunn test). Significantly more time was needed for the adjustment of the
Sakurai-Yageta, Mika; Maruyama, Tomoko; Suzuki, Takashi; Ichikawa, Kazuhisa; Murakami, Yoshinori
2015-01-01
Protein components of cell adhesion machinery show continuous renewal even in the static state of epithelial cells and participate in the formation and maintenance of normal epithelial architecture and tumor suppression. CADM1 is a tumor suppressor belonging to the immunoglobulin superfamily of cell adhesion molecules and forms a cell adhesion complex with an actin-binding protein, 4.1B, and a scaffold protein, MPP3, in the cytoplasm. Here, we investigate dynamic regulation of the CADM1-4.1B-MPP3 complex in mature cell adhesion by fluorescence recovery after photobleaching (FRAP) analysis. Traditional FRAP analyses were performed over relatively short periods of around 10 min. Here, thanks to recent advances in sensitive laser detector systems, we examine FRAP of the CADM1 complex over a longer period of 60 min and analyze the recovery with exponential curve fitting to distinguish fractions with different diffusion constants. This approach reveals that the fluorescence recovery of CADM1 is fitted by a single exponential function with a time constant (τ) of approximately 16 min, whereas 4.1B and MPP3 are fitted by a double exponential function with two τs of approximately 40-60 sec and 16 min. The longer τ is similar to that of CADM1, suggesting that 4.1B and MPP3 have two distinct fractions, one forming a complex with CADM1 and the other present as a free pool. Fluorescence loss in photobleaching analysis supports the presence of a free pool of these proteins near the plasma membrane. Furthermore, double exponential fitting makes it possible to estimate the ratios of 4.1B and MPP3 present as a free pool versus in complex with CADM1 as approximately 3:2 and 3:1, respectively. Our analyses reveal a central role of CADM1 in stabilizing the complex with 4.1B and MPP3 and provide insight into the dynamics of adhesion complex formation.
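The single- versus double-exponential comparison can be sketched with synthetic recovery data (the amplitudes, time constants, and noise below are loosely modeled on the reported values, not the actual measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, tau):
    return a * (1.0 - np.exp(-t / tau))

def double_exp(t, a1, tau1, a2, tau2):
    return (a1 * (1.0 - np.exp(-t / tau1))
            + a2 * (1.0 - np.exp(-t / tau2)))

# Synthetic 60 min recovery with a fast (~50 s) free-pool fraction and a
# slow (~16 min) complexed fraction.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 3600.0, 721)               # seconds, 5 s sampling
y = double_exp(t, 0.6, 50.0, 0.4, 960.0) + rng.normal(0.0, 0.01, t.size)

p1, _ = curve_fit(single_exp, t, y, p0=[1.0, 500.0])
p2, _ = curve_fit(double_exp, t, y, p0=[0.5, 100.0, 0.5, 1000.0])
sse1 = float(np.sum((y - single_exp(t, *p1)) ** 2))
sse2 = float(np.sum((y - double_exp(t, *p2)) ** 2))

# The amplitude recovering with the faster time constant ~ the free pool.
a_fast = p2[0] if p2[1] < p2[3] else p2[2]
fast_fraction = float(a_fast / (p2[0] + p2[2]))
```

The double-exponential fit reduces the residual sum of squares markedly, and the ratio of the fast amplitude to the total plays the role of the free-pool fraction estimated in the study.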
Optimizing a nonlinear mathematical approach for the computerized analysis of mood curves.
Möller, H J; Leitner, M
1987-01-01
A nonlinear mathematical model for computerized description of mood curves is presented. This model reaches a high goodness of fit to the real data. It seems superior to two other models recently proposed. Using this model in a computer program for describing the mood data of a large sample of inpatients, significant and clinically meaningful group differences between the mood curves of schizophrenic, endogenous-depressive, and neurotic-depressive inpatients could be demonstrated. The application of the methodology might be helpful, e.g. in the field of evaluative research.
NASA Technical Reports Server (NTRS)
Knox, Charles E.
1993-01-01
A piloted simulation study was conducted to examine the requirements for using electromechanical flight instrumentation to provide situation information and flight guidance for manually controlled flight along curved precision approach paths to a landing. Six pilots were used as test subjects. The data from these tests indicated that flight director guidance is required for the manually controlled flight of a jet transport airplane on curved approach paths. Acceptable path tracking performance was attained with each of the three situation information algorithms tested. Approach paths with both multiple sequential turns and short final path segments were evaluated. Pilot comments indicated that all the approach paths tested could be used in normal airline operations.
The conical fit approach to modeling ionospheric total electron content
NASA Technical Reports Server (NTRS)
Sparks, L.; Komjathy, A.; Mannucci, A. J.
2002-01-01
The Global Positioning System (GPS) can be used to measure the integrated electron density along raypaths between satellites and receivers. Such measurements may, in turn, be used to construct regional and global maps of the ionospheric total electron content (TEC). Maps are generated by fitting measurements to an assumed ionospheric model.
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics. A
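The unpaired CBM building blocks described above (a standard-normal nondiseased distribution, a two-peak diseased mixture, and the resulting proper ROC curve) can be sketched numerically; the values of μ and α below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.stats import norm

def cbm_roc(mu, alpha, z):
    """CBM operating points: nondiseased ratings ~ N(0,1); diseased ratings
    ~ alpha*N(mu,1) + (1-alpha)*N(0,1) (disease visible with probability alpha)."""
    fpf = norm.sf(z)
    tpf = alpha * norm.sf(z - mu) + (1.0 - alpha) * norm.sf(z)
    return fpf, tpf

z = np.linspace(-6.0, 10.0, 2001)
fpf, tpf = cbm_roc(2.0, 0.7, z)

# Properness: the curve never dips below the chance diagonal for mu > 0.
proper = bool(np.all(tpf >= fpf - 1e-12))

# AUC by trapezoidal integration (fpf decreases as z increases).
auc = float(-np.sum(np.diff(fpf) * (tpf[1:] + tpf[:-1]) / 2.0))
```

For this model the AUC also has the closed form α·Φ(μ/√2) + (1-α)/2, which the numerical integral reproduces; the bivariate extension in the paper correlates two such rating variables per case.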
Reinhardt, Christopher Peter; Germain, Michael J.; Groman, Ernest V.; Mulhern, Jeffrey G.; Kumar, Rajesh; Vaccaro, Dennis E.
2008-01-01
This is the first description of functional immunoassay technology (FIT), which as a diagnostic tool has broad application across the whole spectrum of physiological measurements. In this paper, FIT is used to measure the renal clearance of an ultra low-dose administration of a clinically available contrast reagent for the purpose of obtaining an accurate glomerular filtration rate (GFR) measurement. Biomarker-based GFR estimates offer convenience, but are not accurate and are often misleading. FIT overcomes previous analytic barriers associated with obtaining an accurate GFR measurement. We present the performance characteristics of this diagnostic test and demonstrate the method by directly comparing GFR values obtained by FIT to those obtained by an FDA approved nuclear test in 20 adults. Two subjects were healthy volunteers and the remaining 18 subjects had diagnosed chronic kidney disease, with 12 being kidney transplant recipients. Measured GFR values were calculated by the classic UV/P method and by the blood clearance method. GFR obtained by FIT and the nuclear test correlated closely over a wide range of GFR values (10.9–102.1 ml·min−1·1.73 m−2). The study demonstrates that FIT-GFR provides an accurate and reproducible measurement. This nonradioactive, immunoassay-based approach offers many advantages, chiefly that most laboratories already have the equipment and trained personnel necessary to run an ELISA, and therefore this important diagnostic measurement can more readily be obtained. The FIT-GFR test can be used throughout the pharmaceutical development pipeline: preclinical and clinical trials. PMID:18768587
Reinhardt, Christopher Peter; Germain, Michael J; Groman, Ernest V; Mulhern, Jeffrey G; Kumar, Rajesh; Vaccaro, Dennis E
2008-11-01
This is the first description of functional immunoassay technology (FIT), which as a diagnostic tool has broad application across the whole spectrum of physiological measurements. In this paper, FIT is used to measure the renal clearance of an ultra low-dose administration of a clinically available contrast reagent for the purpose of obtaining an accurate glomerular filtration rate (GFR) measurement. Biomarker-based GFR estimates offer convenience, but are not accurate and are often misleading. FIT overcomes previous analytic barriers associated with obtaining an accurate GFR measurement. We present the performance characteristics of this diagnostic test and demonstrate the method by directly comparing GFR values obtained by FIT to those obtained by an FDA approved nuclear test in 20 adults. Two subjects were healthy volunteers and the remaining 18 subjects had diagnosed chronic kidney disease, with 12 being kidney transplant recipients. Measured GFR values were calculated by the classic UV/P method and by the blood clearance method. GFR obtained by FIT and the nuclear test correlated closely over a wide range of GFR values (10.9–102.1 ml·min−1·1.73 m−2). The study demonstrates that FIT-GFR provides an accurate and reproducible measurement. This nonradioactive, immunoassay-based approach offers many advantages, chiefly that most laboratories already have the equipment and trained personnel necessary to run an ELISA, and therefore this important diagnostic measurement can more readily be obtained. The FIT-GFR test can be used throughout the pharmaceutical development pipeline: preclinical and clinical trials.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
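The least-squares logarithmic fit of a power law described above can be sketched in a few lines of NumPy; the data here are synthetic illustrations, not the flow-field solutions used in the paper.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = c * x**a by linear least squares on log y = log c + a * log x."""
    A = np.vstack([np.ones_like(x), np.log(x)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1]   # (c, a)

# Synthetic data lying exactly on y = 2 * x**1.5
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
c, a = fit_power_law(x, 2.0 * x**1.5)
```

Fitting in log space turns the power law into a straight line, so the constants fall out of an ordinary linear least-squares solve.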
An interactive user-friendly approach to surface-fitting three-dimensional geometries
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1988-01-01
A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
A new approach for magnetic curves in 3D Riemannian manifolds
Bozkurt, Zehra; Gök, Ismail; Yaylı, Yusuf; Ekmekci, F. Nejat
2014-05-15
A magnetic field on a three-dimensional oriented Riemannian manifold is defined by the property that its divergence is zero. Each magnetic field generates a magnetic flow whose trajectories are curves called magnetic curves. In this paper, we give a new variational approach to study the magnetic flow associated with a Killing magnetic field on a three-dimensional oriented Riemannian manifold (M³, g). We then investigate the trajectories of these magnetic fields, called N-magnetic and B-magnetic curves.
Estimating the yield curve of the extended Svensson model using an L-BFGS-B method approach
NASA Astrophysics Data System (ADS)
Muslim, Rosadi, Dedi; Gunardi, Abdurakhman
2015-02-01
A yield curve describes the magnitude of the yield as a function of maturity. To describe this curve, we use the Svensson model. One extension of this model is that of Rezende-Ferreira. The Rezende-Ferreira extension has the weakness that several of its parameters take the same value; with those values the model reduces to the Nelson-Siegel model. In this paper, we propose an expansion of the Svensson model. This model is nonlinear and therefore more difficult to estimate. To overcome this problem, we propose nonlinear least squares with an L-BFGS-B method approach.
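A Svensson-type fit with L-BFGS-B can be sketched with SciPy as below; the parameter values, starting point, and bounds are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def svensson(tau, b0, b1, b2, b3, l1, l2):
    """Svensson yield curve: Nelson-Siegel terms plus a second hump."""
    t1, t2 = tau / l1, tau / l2
    f1 = (1 - np.exp(-t1)) / t1
    return (b0 + b1 * f1 + b2 * (f1 - np.exp(-t1))
            + b3 * ((1 - np.exp(-t2)) / t2 - np.exp(-t2)))

def fit_svensson(tau, y, x0):
    """Nonlinear least squares via L-BFGS-B, keeping the decay parameters positive."""
    sse = lambda p: float(np.sum((svensson(tau, *p) - y) ** 2))
    bounds = [(None, None)] * 4 + [(1e-3, None)] * 2
    return minimize(sse, x0, method="L-BFGS-B", bounds=bounds)

# Illustrative synthetic curve (maturities in years, yields in percent)
tau = np.linspace(0.25, 30.0, 60)
y = svensson(tau, 4.0, -2.0, 1.5, 0.8, 1.5, 9.0)
res = fit_svensson(tau, y, x0=(3.0, -1.0, 1.0, 0.5, 1.5, 9.0))
```

L-BFGS-B is convenient here because it handles the box constraints on the decay parameters while approximating curvature from gradients alone.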
ERIC Educational Resources Information Center
Rousseau, Ronald
1994-01-01
Discussion of informetric distributions shows that generalized Leimkuhler functions give proper fits to a large variety of Bradford curves, including those exhibiting a Groos droop or a rising tail. The Kolmogorov-Smirnov test is used to test goodness of fit, and least-square fits are compared with Egghe's method. (Contains 53 references.) (LRW)
Birchler, W.D.; Schilling, S.A.
2001-02-01
The purpose of this report is to demonstrate that modern computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems can be used in the Department of Energy (DOE) Nuclear Weapons Complex (NWC) to design new and remodel old products, fabricate old and new parts, and reproduce legacy data within the inspection uncertainty limits. In this study, two two-dimensional splines are compared with several modern CAD curve-fitting modeling algorithms. The first curve-fitting algorithm is called the Wilson-Fowler Spline (WFS), and the second is called a parametric cubic spline (PCS). Modern CAD systems usually utilize parametric cubic splines and/or B-splines.
O'Neil, Sean F; Mac, Amy; Rhodes, Gillian; Webster, Michael A
2015-12-01
Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged by arguing that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, then the two analyses both agree in pointing to a pattern of renormalization in age aftereffects. PMID:27551353
Physical fitness: An operator's approach to coping with shift work
Hanks, D.H.
1989-01-01
There is a strong correlation between a shift worker's ability to remain alert and the physical fitness of the individual. Alertness is a key element of a nuclear plant operator's ability to effectively monitor and control plant status. The constant changes in one's metabolism caused by the rotation of work (and sleep) hours can be devastating to his or her health. Many workers with longevity in the field, however, have found it beneficial to maintain some sort of workout or sport activity, feeling that this activity offsets the physical burden of backshift. The author's experience working shifts for 10 years and his reported increase in alertness through exercise and diet manipulation are described in this paper.
Palumbo, Letizia; Ruta, Nicole; Bertamini, Marco
2015-01-01
Most people prefer smoothly curved shapes over more angular shapes. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task we tested with a stick figure (i.e., the manikin) approach and avoidance reactions to curved and angular polygons. We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference. PMID:26460610
Prediction and quickening in perspective flight displays for curved landing approaches
NASA Technical Reports Server (NTRS)
Jensen, R. S.
1981-01-01
In an empirical test of various prediction and quickening display algorithms, 18 professional pilot-subjects made four curved-path landing approaches in a GAT-2 simulator using each of 18 dynamically different display configurations in a within-subject design. Results indicate that second- and third-order predictor displays provide the best lateral performance. Intermediate levels of prediction and quickening provide best vertical control. Prediction quickening algorithms of increasing computational order significantly reduce aileron, rudder, and elevator control responses, reflecting successive reductions in cockpit work load. Whereas conventional crosspointer displays are not adequate for curved landing approaches, perspective displays with predictors and some vertical dimension quickening are highly effective.
An Intuitive Approach to Geometric Continuity for Parametric Curves and Surfaces (Extended Abstract)
NASA Technical Reports Server (NTRS)
Derose, T. D.; Barsky, B. A.
1985-01-01
The notion of geometric continuity is extended to an arbitrary order for curves and surfaces, and an intuitive development of the constraint equations necessary for it is presented. The constraints result from a direct application of the univariate chain rule for curves and the bivariate chain rule for surfaces. The constraints provide for the introduction of quantities known as shape parameters. The approach taken is important for several reasons: First, it generalizes geometric continuity to arbitrary order for both curves and surfaces. Second, it shows the fundamental connection between geometric continuity of curves and geometric continuity of surfaces. Third, due to the chain rule derivation, constraints of any order can be determined more easily than with derivations based exclusively on geometric measures.
A Global Fitting Approach For Doppler Broadening Thermometry
NASA Astrophysics Data System (ADS)
Amodio, Pasquale; Moretti, Luigi; De Vizia, Maria Domenica; Gianfrani, Livio
2014-06-01
Very recently, a spectroscopic determination of the Boltzmann constant, kB, has been performed at the Second University of Naples by means of a rather sophisticated implementation of Doppler Broadening Thermometry (DBT) [1]. Performed on an 18O-enriched water sample, at a wavelength of 1.39 µm, the experiment has provided a value for kB with a combined uncertainty of 24 parts in 10⁶, which is the best result obtained so far by using an optical method. In the spectral analysis procedure, the partially correlated speed-dependent hard-collision (pC-SDHC) model was adopted. The uncertainty budget has clearly revealed that the major contributions come from the statistical uncertainty (type A) and from the uncertainty associated with the line-shape model (type B) [2]. In the present work, we present the first results of a theoretical and numerical study aimed at reducing these uncertainty components. It is well known that molecular line shapes exhibit clear deviations from the time-honoured Voigt profile. Even in the case of a well isolated spectral line, under the influence of binary collisions in the Doppler regime, the shape can be quite complicated by the joint occurrence of velocity-change collisions and speed-dependent effects. The partially correlated speed-dependent Keilson-Storer profile (pC-SDKS) has recently been proposed as a very realistic model, capable of reproducing very accurately the absorption spectra of self-colliding water molecules in the near infrared [3]. Unfortunately, the model is so complex that it cannot be implemented into a fitting routine for the analysis of experimental spectra. Therefore, we have developed a MATLAB code to simulate a variety of H2(18)O spectra in thermodynamic conditions identical to those of our DBT experiment, using the pC-SDKS model. The numerical calculations to determine such a profile have a very large computational cost, resulting from a very sophisticated iterative procedure. Hence, the numerically simulated spectra
Cobaleda, C; García-Sastre, A; Villar, E
1994-01-01
The kinetics of fusion between Newcastle disease virus and erythrocyte ghosts has been investigated with the octadecyl Rhodamine B chloride assay [Hoekstra, De Boer, Klappe, and Wilschut (1984) Biochemistry 23, 5675-5681], and the data from the dequenching curves were fitted by non-linear regression to currently used kinetic models. We used direct computer-assisted fitting of the dequenching curves to the mathematical equations. Discrimination between models was performed by statistical analysis of the different fits. The experimental data fit the exponential model previously published [Nir, Klappe, and Hoekstra (1986) Biochemistry 25, 2155-2161], but we describe for the first time that the best fit was achieved for the sum of two exponential terms: A1[1-exp(-k1t)]+A2[1-exp(-k2t)]. The first exponential term represents a fast reaction and the second a slow dequenching reaction. These findings reveal the existence of two independent, but simultaneous, processes during the fusion assay. In order to challenge the model and to understand the meaning of both equations, fusion experiments were carried out under different conditions well known to affect viral fusion (changes in pH, temperature and ghost concentration, and the presence of disulphide-reducing agents or inhibitors of viral neuraminidase activity), and the same computer fitting scheme was followed. The first exponential term represents the viral protein-dependent fusion process itself, because it is affected by the assay conditions. The second exponential term accounts for a nonspecific reaction, because it is completely independent of the assay conditions and hence of the viral proteins. An interpretation of this second process is discussed in terms of probe transfer between vesicles. PMID:8002938
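A two-term fit of the form A1[1-exp(-k1t)]+A2[1-exp(-k2t)] can be obtained by nonlinear regression as in the study; this sketch uses SciPy's curve_fit on synthetic dequenching data with assumed parameter values, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_phase(t, A1, k1, A2, k2):
    """Fast (specific fusion) plus slow (nonspecific transfer) dequenching."""
    return A1 * (1 - np.exp(-k1 * t)) + A2 * (1 - np.exp(-k2 * t))

t = np.linspace(0.0, 60.0, 121)            # time (arbitrary units)
y = two_phase(t, 0.6, 0.5, 0.3, 0.02)      # assumed "true" parameters
popt, _ = curve_fit(two_phase, t, y, p0=(0.5, 0.3, 0.5, 0.05))
```

With the fast and slow rate constants well separated, as here, the two terms are identifiable and the fit recovers both processes.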
Mougabure-Cueto, G; Sfara, V
2016-04-25
Dose-response relations can be obtained from systems at any structural level of biological matter, from the molecular to the organismic level. There are two types of approaches for analyzing dose-response curves: a deterministic approach, based on the law of mass action, and a statistical approach, based on the assumed probabilities distribution of phenotypic characters. Models based on the law of mass action have been proposed to analyze dose-response relations across the entire range of biological systems. The purpose of this paper is to discuss the principles that determine the dose-response relations. Dose-response curves of simple systems are the result of chemical interactions between reacting molecules, and therefore are supported by the law of mass action. In consequence, the shape of these curves is perfectly sustained by physicochemical features. However, dose-response curves of bioassays with quantal response are not explained by the simple collision of molecules but by phenotypic variations among individuals and can be interpreted as individual tolerances. The expression of tolerance is the result of many genetic and environmental factors and thus can be considered a random variable. In consequence, the shape of its associated dose-response curve has no physicochemical bearings; instead, they are originated from random biological variations. Due to the randomness of tolerance there is no reason to use deterministic equations for its analysis; on the contrary, statistical models are the appropriate tools for analyzing these dose-response relations.
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-10-01
Hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to a fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of CTA calibration curve will be of great practical application. In this paper, a novel approach based on the conventional neural network and self-organizing map (SOM) method has been proposed to extrapolate CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach for the extrapolation of the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, this approach for the extrapolation of the CTA calibration curve below its lower limit produces standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation on the whole measurement range (0.7-30 m/s) is about 1.5%. PMID:23724368
Effects of curved approach paths and advanced displays on pilot scan patterns
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Mixon, R. W.
1981-01-01
The effect of both an advanced cockpit and advanced maneuvers on pilot scan behavior was assessed. A series of straight-in and curved landing approaches were performed in the Terminal Configured Vehicle (TCV) simulator. Two comparisons of pilot scan behavior were made: (1) pilot scan behavior for straight-in approaches compared with scan behavior previously obtained in a conventionally equipped simulator, and (2) pilot scan behavior for straight-in approaches compared with scan behavior for curved approaches. The results indicate very similar scanning patterns during the straight-in approaches in the conventional and advanced cockpits. However, for the curved approaches pilot attention shifted to the electronic horizontal situation display (moving map), and a new eye scan path appeared between the map and the airspeed indicator. The very high dwell percentage and dwell times on the electronic displays in the TCV simulator during the final portions of the approaches suggest that the electronic attitude direction indicator was well designed for these landing approaches.
Wavelet transform approach for fitting financial time series data
NASA Astrophysics Data System (ADS)
Ahmed, Amel Abdoullah; Ismail, Mohd Tahir
2015-10-01
This study investigates a newly developed technique, a combined wavelet filtering and VEC model, to study the dynamic relationship among financial time series. A wavelet filter has been used to remove noise from the daily data set of the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the series generated by the wavelet filter and of the original series are then analyzed by a cointegration test and a VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
Basal ganglia necrosis: a 'best-fit' approach.
Boca, Mihaela; Lloyd, Katie; Likeman, Marcus; Jardine, Philip; Whone, Alan
2016-12-01
A previously well 16-year-old boy developed a rapid-onset hypokinetic syndrome, coupled with a radiological appearance of extensive and highly symmetrical basal ganglia and white matter change. The diagnostic process was challenging and we systematically considered potential causes. After excluding common causes of this clinico-radiological picture, we considered common disorders with this unusual radiological picture and vice versa, before finally concluding that this was a rare presentation of a rare disease. We considered the broad categories of: metabolic; toxic; infective; inflammatory, postinfective and immune-mediated; neoplastic; paraneoplastic and heredodegenerative. Long-term follow-up gave insight into the nature of the insult, confirming the monophasic course. During recovery, and following presumed secondary aberrant reinnervation, his disorder evolved from predominantly hypokinetic to hyperkinetic. Here, we explore the process of finding a 'best-fit' diagnosis: in this case, acute necrotising encephalopathy.
ERIC Educational Resources Information Center
Chernyshenko, Oleksandr S.; Stark, Stephen; Williams, Alex
2009-01-01
The purpose of this article is to offer a new approach to measuring person-organization (P-O) fit, referred to here as "Latent fit." Respondents were administered unidimensional forced choice items and were asked to choose the statement in each pair that better reflected the correspondence between their values and those of the…
NASA Astrophysics Data System (ADS)
Bhattacharya, Kolahal; Banerjee, Sudeshna; Mondal, Naba K.
2016-07-01
In the context of track fitting problems by a Kalman filter, the appropriate functional forms of the elements of the random process noise matrix are derived for tracking through thick layers of dense materials and magnetic field. This work complements the form of the process noise matrix obtained by Mankel [1].
Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Chotard, N.; Copin, Y.; Gangler, E.; and others
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
Consensus among flexible fitting approaches improves the interpretation of cryo-EM data
Ahmed, Aqeel; Whitford, Paul C.; Sanbonmatsu, Karissa Y.; Tama, Florence
2011-01-01
Cryo-electron microscopy (Cryo-EM) can provide important structural information of large macromolecular assemblies in different conformational states. Recent years have seen an increase in structures deposited in the Protein Data Bank (PDB) by fitting a high-resolution structure into its low-resolution cryo-EM map. A commonly used protocol for accommodating the conformational changes between the X-ray structure and the cryo-EM map is rigid body fitting of individual domains. With the emergence of different flexible fitting approaches, there is a need to compare and revise these different protocols for the fitting. We have applied three diverse automated flexible fitting approaches on a protein dataset for which rigid domain fitting (RDF) models have been deposited in the PDB. In general, a consensus is observed in the conformations, which indicates a convergence from these theoretically different approaches to the most probable solution corresponding to the cryo-EM map. However, the result shows that the convergence might not be observed for proteins with complex conformational changes or with missing densities in cryo-EM map. In contrast, RDF structures deposited in the PDB can represent conformations that not only differ from the consensus obtained by flexible fitting but also from X-ray crystallography. Thus, this study emphasizes that a “consensus” achieved by the use of several automated flexible fitting approaches can provide a higher level of confidence in the modeled configurations. Following this protocol not only increases the confidence level of fitting, but also highlights protein regions with uncertain fitting. Hence, this protocol can lead to better interpretation of cryo-EM data. PMID:22019767
An optimization approach for fitting canonical tensor decompositions.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2009-02-01
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
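The alternating least squares baseline the abstract describes can be sketched in a few lines of NumPy; the gradient-based methods the authors propose replace the per-mode solves below with gradient steps on the same objective. The tensor sizes and rank here are illustrative.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a 3-way tensor from its CP factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_als(X, rank, iters=500, seed=0):
    """CANDECOMP/PARAFAC via alternating least squares: each sweep solves a
    linear least-squares problem for one factor with the other two fixed."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in X.shape)
    for _ in range(iters):
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Recover an exact rank-2 tensor from random factors
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
X = cp_reconstruct(At, Bt, Ct)
A, B, C = cp_als(X, rank=2)
rel_err = np.linalg.norm(cp_reconstruct(A, B, C) - X) / np.linalg.norm(X)
```

Each mode update uses the identity that the Khatri-Rao Gram matrix equals the elementwise product of the factor Gram matrices, which keeps every solve small (rank × rank).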
Ristanović, D; Ristanović, D; Malesević, J; Milutinović, B
1983-01-01
Plasma kinetics of bromsulphalein (BSP) after a single injection into the bloodstream of the rat with total obstruction of the common bile duct was examined. The concentrations of BSP were determined colorimetrically. A monoexponential plus a general first-degree function in time with four unknown parameters was fitted. Two programs were developed for the Texas Instruments 59 programmable calculator to estimate the values of all the parameters by an iteration procedure. The programs executed at about twice normal speed.
ATWS Analysis with an Advanced Boiling Curve Approach within COBRA 3-CP
Gensler, A.; Knoll, A.; Kuehnel, K.
2007-07-01
In 2005 the German Reactor Safety Commission issued specific requirements on core coolability demonstration for PWR ATWS (anticipated transients without scram). Thereupon AREVA NP performed detailed analyses for all German PWRs. For a German KONVOI plant the results of an ATWS licensing analysis are presented. The plant dynamic behavior is calculated with NLOOP, while the hot channel analysis is performed with the thermal hydraulic computer code COBRA 3-CP. The application of the fuel rod model included in COBRA 3-CP is essential for this type of analysis. Since DNB (departure from nucleate boiling) occurs, the advanced post DNB model (advanced boiling curve approach) of COBRA 3-CP is used. The results are compared with those gained with the standard BEEST model. The analyzed ATWS case is the emergency power case 'loss of main heat sink with station service power supply unavailable'. Due to the decreasing coolant flow rate during the transient the core attains film boiling conditions. The results of the hot channel analysis strongly depend on the performance of the boiling curve model. The BEEST model is based on pool boiling conditions whereas typical PWR conditions - even in most transients - are characterized by forced flow for which the advanced boiling curve approach is particularly suitable. Compared with the BEEST model the advanced boiling curve approach in COBRA 3-CP yields earlier rewetting, i.e. a shorter period in film boiling. Consequently, the fuel rod cladding temperatures, that increase significantly due to film boiling, drop back earlier and the high temperature oxidation is significantly diminished. The Baker-Just-Correlation was used to calculate the value of equivalent cladding reacted (ECR), i.e. the reduction of cladding thickness due to corrosion throughout the transient. Based on the BEEST model the ECR value amounts to 0.4% whereas the advanced boiling curve only leads to an ECR value of 0.2%. Both values provide large margins to the 17
Hill, K.
1988-06-01
The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest that men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources which provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.
Liao, Fei; Zhu, Xiao-Yun; Wang, Yong-Mei; Zuo, Yu-Ping
2005-01-31
The estimation of enzyme kinetic parameters by nonlinear fitting of the reaction curve to the integrated Michaelis-Menten rate equation ln(S0/S) + (S0 - S)/Km = (Vm/Km)*t (Eq. (1)) was investigated and compared to fitting to (S0 - S)/t = Vm - Km*[ln(S0/S)/t] (Eq. (2)) (Atkins GL, Nimmo IA. The reliability of Michaelis-Menten constants and maximum velocities estimated by using the integrated Michaelis-Menten equation. Biochem J 1973;135:779-84), with uricase as the model. The uricase reaction curve was simulated with a random absorbance error of 0.001 at 0.075 mmol/l uric acid. The experimental reaction curve was monitored by absorbance at 293 nm. For both CV and deviation <20% by simulation, Km from 5 to 100 micromol/l could be estimated with Eq. (1), whereas Km from 5 to 50 micromol/l could be estimated with Eq. (2). The background absorbance and the error in the lag time of the steady-state reaction resulted in negative Km with Eq. (2), but did not affect Km estimated with Eq. (1). Both equations gave better estimates of Vm. The computation time and the goodness of fit with Eq. (1) were 40-fold greater than those with Eq. (2). By experimentation, Eq. (1) yielded Km consistent with Lineweaver-Burk plot analysis, but Eq. (2) gave many negative parameters. Apparent Km estimated by Eq. (1) increased linearly with xanthine concentration while Vm remained constant, and the inhibition constant was consistent with Lineweaver-Burk plot analysis. These results suggest that the integrated rate equation using reaction time as the predictor variable is reliable for the estimation of enzyme kinetic parameters and applicable to the characterization of enzyme inhibitors.
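The two fitting forms above can be illustrated compactly. The sketch below simulates a noise-free progress curve from the integrated Michaelis-Menten equation (Eq. (1), solved for S by bisection) and then recovers Km and Vm from the linearized form (Eq. (2)) by ordinary least squares; the function names and parameter values are illustrative, not those of the study.

```python
import math

def simulate_progress_curve(S0, Km, Vm, times):
    """Solve ln(S0/S) + (S0 - S)/Km = (Vm/Km)*t for S at each time t.
    The left side minus the right is monotonic in S, so bisection works."""
    out = []
    for t in times:
        lo, hi = 1e-12, S0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            f = math.log(S0 / mid) + (S0 - mid) / Km - (Vm / Km) * t
            if f > 0:      # S too small: move lower bound up
                lo = mid
            else:
                hi = mid
        out.append(0.5 * (lo + hi))
    return out

def fit_atkins_nimmo(S0, times, S):
    """Fit (S0 - S)/t = Vm - Km*[ln(S0/S)/t] by ordinary least squares."""
    x = [math.log(S0 / s) / t for s, t in zip(S, times)]
    y = [(S0 - s) / t for s, t in zip(S, times)]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
             / sum((xi - xb) ** 2 for xi in x))
    intercept = yb - slope * xb
    return -slope, intercept   # Km, Vm

times = [5, 10, 20, 40, 80, 160]                      # s, illustrative
S = simulate_progress_curve(75.0, 20.0, 1.0, times)   # S0, Km, Vm illustrative
Km_est, Vm_est = fit_atkins_nimmo(75.0, times, S)
```

With noise-free data both parameters are recovered essentially exactly; the study's point is that the two forms diverge once measurement error enters.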
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
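The Pareto-optimality criterion described above is straightforward to implement: an input set survives unless some other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch, with hypothetical input-set names and target errors (lower is better):

```python
def pareto_frontier(input_sets):
    """Return the input sets on the Pareto frontier.
    Each element is (name, [error on target 1, error on target 2, ...])."""
    def dominates(a, b):
        # a dominates b: no worse on every target, strictly better on one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [(n, e) for n, e in input_sets
            if not any(dominates(e2, e) for _, e2 in input_sets if e2 != e)]

candidates = [
    ("theta1", [0.10, 0.50]),   # good on target 1, poor on target 2
    ("theta2", [0.40, 0.20]),   # the reverse trade-off
    ("theta3", [0.12, 0.55]),   # dominated by theta1
    ("theta4", [0.45, 0.60]),   # dominated by both
]
frontier = pareto_frontier(candidates)
```

Note that theta1 and theta2 both survive without any weighting choice: the frontier keeps every defensible trade-off between the targets.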
NASA Technical Reports Server (NTRS)
Degnan, J. J.; Walker, H. E.; Mcelroy, J. H.; Mcavoy, N.; Zagwodski, T.
1972-01-01
A least squares curve-fitting algorithm is derived which allows the simultaneous estimation of the small signal gain and the saturation intensity from an arbitrary number of data points relating power output to the incidence angle of an internal coupling plate. The method is used to study the dependence of the two parameters on tube pressure and discharge current in a waveguide CO2 laser having a 2 mm diameter capillary. It is found that, at pressures greater than 28 torr, rising CO2 temperature degrades the small signal gain at current levels as low as three milliamperes.
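The paper's algorithm fits power output versus coupling-plate angle; as a simplified stand-in for simultaneous estimation of the two laser parameters, the sketch below recovers small-signal gain g0 and saturation intensity Is from the homogeneous saturated-gain law g(I) = g0/(1 + I/Is), linearized so that 1/g is a straight line in I. All values are illustrative assumptions, not data from the study.

```python
# Synthetic saturated-gain measurements (intensity in arbitrary units)
g0_true, Is_true = 0.65, 120.0
I = [10, 30, 60, 100, 150, 220]
g = [g0_true / (1 + i / Is_true) for i in I]

# Linearize: 1/g = 1/g0 + I/(g0*Is), then fit a straight line by least squares.
x, y = I, [1.0 / gi for gi in g]
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
intercept = yb - slope * xb

g0_est = 1.0 / intercept          # intercept = 1/g0
Is_est = 1.0 / (slope * g0_est)   # slope = 1/(g0*Is)
```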
Hickson, Louise
2006-01-01
Successful hearing aid fitting occurs when the person fitted wears the aid/s on a regular basis and reports benefit when the aid/s is used. A significant number of people fitted with unilateral or bilateral hearing aids for the first time do not continue to use one or both aids in the long term. In this paper, factors consistently found in previous research to be associated with unsuccessful fitting are explored; in particular, the negative attitudes of some clients towards hearing aids, their lack of motivation for seeking help, inability to identify goals for rehabilitation, and problems with the management of the devices. It is argued here that success in hearing aid fitting involves the same dynamics as found with other assistive technologies (e.g., wheelchairs, walking frames), and is dependent on a match between the characteristics of a prospective user, the technology itself, and the environments of use (Scherer, 2002). It is recommended that for clients who identify concerns about hearing aids, or who are unsure about when they would use them, and/or are likely to have problems with aid management, only one aid be fitted in the first instance, if hearing aid fitting is to proceed at all. Rehabilitation approaches to promote successful fitting are discussed in light of results obtained from a survey of clients who experienced both successful and unsuccessful aid fitting.
Split calibration curve: an approach to avoid repeat analysis of the samples exceeding ULOQ.
Basu, Sudipta; Basit, Abdul; Ravindran, Selvan; Patel, Vandana B; Vangala, Subrahmanyam; Patel, Hitesh
2012-10-01
The current practice of using calibration curves with narrow concentration ranges during bioanalysis of new chemical entities has some limitations and is time consuming. In the present study we describe a split calibration curve approach, where sample dilution and repeat analysis can be avoided without compromising the quality and integrity of the data obtained. A split calibration curve approach is employed to determine the drug concentration in plasma samples with accuracy and precision over a wide dynamic range of approximately 0.6 to 15,000 ng/ml for dapsone and approximately 1 to 25,000 ng/ml for cyclophosphamide and glipizide. A wide dynamic range of concentrations for these three compounds was used in the current study to construct split calibration curves and was successfully validated for sample analysis in a single run. Using this method, repeat analysis of samples can be avoided. This is useful for the bioanalysis of toxicokinetic studies with wide dose ranges and studies where the sample volume is limited.
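The split-calibration idea, choosing between a low-range and a high-range curve according to the measured response, can be sketched as follows. The detector model, concentration ranges, and split point below are hypothetical, not those of the study.

```python
def linfit(x, y):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    m = (sum((a - xb) * (b - yb) for a, b in zip(x, y))
         / sum((a - xb) ** 2 for a in x))
    return m, yb - m * xb

def resp(c):
    """Hypothetical, ideally linear detector response for concentration c."""
    return 0.002 * c

# Two overlapping calibration segments (ng/ml), fitted response -> concentration
low_conc  = [1, 5, 10, 50, 100]
high_conc = [100, 500, 1000, 5000, 15000]
low_fit  = linfit([resp(c) for c in low_conc],  low_conc)
high_fit = linfit([resp(c) for c in high_conc], high_conc)
SPLIT_RESPONSE = resp(100)   # boundary where the two curves overlap

def quantify(response):
    """Read concentration from the low or high curve, chosen by response."""
    m, b = low_fit if response <= SPLIT_RESPONSE else high_fit
    return m * response + b
```

A sample reading anywhere in the combined range is quantified in a single pass, with no dilution and repeat analysis of samples above a narrow ULOQ.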
A Bayesian approach for estimating calibration curves and unknown concentrations in immunoassays.
Feng, Feng; Sales, Ana Paula; Kepler, Thomas B
2011-03-01
Immunoassays are primary diagnostic and research tools throughout the medical and life sciences. The common approach to the processing of immunoassay data involves estimation of the calibration curve followed by inversion of the calibration function to read off the concentration estimates. This approach, however, does not lend itself easily to acceptable estimation of confidence limits on the estimated concentrations. Such estimates must account for uncertainty in the calibration curve as well as uncertainty in the target measurement. Even point estimates can be problematic: because of the non-linearity of calibration curves and error heteroscedasticity, the neglect of components of measurement error can produce significant bias. We have developed a Bayesian approach for the estimation of concentrations from immunoassay data that treats the propagation of measurement error appropriately. The method uses Markov Chain Monte Carlo (MCMC) to approximate the posterior distribution of the target concentrations and numerically compute the relevant summary statistics. Software implementing the method is freely available for public use. The new method was tested on both simulated and experimental datasets with different measurement error models. The method outperformed the common inverse method on samples with large measurement errors. Even in cases with extreme measurements where the common inverse method failed, our approach always generated reasonable estimates for the target concentrations. Project name: Baecs; Project home page: www.computationalimmunology.org/utilities/; Operating systems: Linux, MacOS X and Windows; Programming language: C++; License: Free for Academic Use.
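A minimal version of the Bayesian inversion described above, assuming a known four-parameter logistic (4PL) calibration curve and Gaussian measurement error, can be sketched with a random-walk Metropolis sampler. All parameter values are illustrative; the published method additionally propagates uncertainty in the calibration curve itself.

```python
import math, random

def four_pl(x, a, d, c, b):
    """Four-parameter logistic calibration curve."""
    return d + (a - d) / (1 + (x / c) ** b)

def posterior_concentration(y_obs, sigma, n_iter=20000, burn=5000, seed=0):
    """Metropolis sampler for the unknown concentration given one measurement.
    Flat prior on log10(x) over [0, 3]; Gaussian measurement error."""
    rng = random.Random(seed)
    a, d, c, b = 0.05, 2.0, 100.0, 1.2   # calibration parameters, assumed known

    def loglik(logx):
        mu = four_pl(10 ** logx, a, d, c, b)
        return -0.5 * ((y_obs - mu) / sigma) ** 2

    logx, samples = 1.5, []
    for i in range(n_iter):
        prop = logx + rng.gauss(0, 0.1)
        if 0.0 <= prop <= 3.0 and math.log(rng.random()) < loglik(prop) - loglik(logx):
            logx = prop
        if i >= burn:
            samples.append(10 ** logx)
    return sum(samples) / len(samples)   # posterior mean concentration

x_hat = posterior_concentration(y_obs=four_pl(80.0, 0.05, 2.0, 100.0, 1.2),
                                sigma=0.05)
```

The same chain of samples also yields credible intervals directly, which is the advantage over simply inverting the calibration function.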
A One-Phase Approach for Predicting the Melting Curve of MgO
NASA Astrophysics Data System (ADS)
Okamoto, Kazuma; Fuchizaki, Kazuhiro
2017-06-01
The melting curve of MgO, an important compound dissociated from the component of the Earth's lower mantle, was predicted in this work by using a one-phase approach. The existing data for the melting points under pressures were used as input. The necessary thermodynamic information was supplemented by constructing the equation of state. The melting point near the core-mantle boundary was estimated to be approximately 6000 K.
Lebon, M; Reiche, I; Fröhlich, F; Bahain, J-J; Falguères, C
2008-12-01
Derivative Fourier transform infrared (FTIR) spectroscopy and curve fitting have been used to investigate the effect of thermal treatment on the ν1ν3 PO4 domain of modern bones. This method was efficient for identifying mineral matter modifications during heating. In particular, the 961, 1022, 1061, and 1092 cm⁻¹ components show an important wavenumber shift between 120 and 700 °C, attributed to the decrease of the distortions induced by the removal of CO3²⁻ and HPO4²⁻ ions from the mineral lattice. The so-called 1030/1020 ratio was used to evaluate crystalline growth above 600 °C. The same analytical protocol was applied to Magdalenian fossil bones from the Bize-Tournal Cave (France). Although the band positions seem to have been affected by diagenetic processes, a wavenumber index, established by summing the 961, 1022, and 1061 cm⁻¹ peak positions, discriminated heated bones better than the 1030/1020 ratio and the splitting factor frequently used to identify burnt bones in an archaeological context. This study suggests that the combination of derivative and curve-fitting analysis may afford a sensitive evaluation of the maximum temperature reached, and thus contribute to the fossil-derived knowledge of human activities related to the use of fire.
Song, Zhi-li; Li, Sheng; George, Thomas F
2010-01-18
Through retrofitting the descriptor of a scale-invariant feature transform (SIFT) and developing a new similarity measure function based on trajectories generated from Lissajous curves, a new remote sensing image registration approach is constructed that is more robust and accurate than prior approaches. In complex cases where the correct rate of feature matching is below 20%, the retrofitted SIFT descriptor improves the correct rate to nearly 100%. Moreover, the similarity measure function makes it possible to quantitatively analyze the temporal change of the same geographic position.
CADAVERIC STUDY ON THE LEARNING CURVE OF THE TWO-APPROACH GANZ PERIACETABULAR OSTEOTOMY
Ferro, Fernando Portilho; Ejnisman, Leandro; Miyahara, Helder Souza; Trindade, Christiano Augusto de Castro; Faga, Antônio; Vicente, José Ricardo Negreiros
2016-01-01
Objective: The Bernese periacetabular osteotomy (PAO) is a widely used technique for the treatment of non-arthritic, dysplastic, painful hips. It is considered a highly complex procedure with a steep learning curve. In an attempt to minimize complications, a double anterior-posterior approach has been described. We report on our experience while performing this technique on cadaveric hips followed by meticulous dissection to verify possible complications. Methods: We operated on 15 fresh cadaveric hips using a combined posterior Kocher-Langenbeck and an anterior Smith-Petersen approach, without fluoroscopic control. The PAO cuts were performed and the acetabular fragment was mobilized. A meticulous dissection was carried out to verify the precision of the cuts. Results: Complications were observed in seven specimens (46%). They included a posterior column fracture, and posterior and anterior articular fractures. The incidence of complications decreased over time, from 60% in the first five procedures to 20% in the last five procedures. Conclusions: We concluded that PAO using a combined anterior-posterior approach is a reproducible technique that allows all cuts to be done under direct visualization. The steep learning curve described in the classic single-incision approach was also observed when using two approaches. Evidence Level: IV, Cadaveric Study. PMID:26981046
B-737 flight test of curved-path and steep-angle approaches using MLS guidance
NASA Technical Reports Server (NTRS)
Branstetter, J. R.; White, W. F.
1989-01-01
A series of flight tests was conducted to collect data for jet transport aircraft flying curved-path and steep-angle approaches using Microwave Landing System (MLS) guidance. During the test, 432 approaches comprising seven different curved paths and four glidepath angles varying from 3 to 4 degrees were flown in NASA Langley's Boeing 737 aircraft (Transport Systems Research Vehicle) using an MLS ground station at the NASA Wallops Flight Facility. Subject pilots from Piedmont Airlines flew the approaches using conventional cockpit instrumentation (flight director and Horizontal Situation Indicator (HSI)). The data collected will be used by FAA procedures specialists to develop standards and criteria for designing MLS terminal approach procedures (TERPS). The use of flight simulation techniques greatly aided the preliminary stages of approach development work and saved a significant amount of costly flight time. This report is intended to complement a data report to be issued by the FAA Office of Aviation Standards, which will contain all detailed data analysis and statistics.
Duval, M; Guilarte Moreno, V; Grün, R
2013-12-01
This work deals with three main sources of uncertainty in electron spin resonance (ESR) dosimetry/dating of fossil tooth enamel: (1) the precision of the ESR measurements, (2) long-term signal fading, and (3) the selection of the fitting function. These show different influences on the equivalent dose (D(E)) estimates. Repeated ESR measurements were performed on 17 different samples: results show a mean coefficient of variation of the ESR intensities of 1.20 ± 0.23%, inducing a mean relative variability of 3.05 ± 2.29% in the D(E) values. ESR signal fading over 5 y was also observed: its magnitude seems to be quite sample dependent but is nevertheless especially important for the most irradiated aliquots. This fading has an apparently random effect on the D(E) estimates. Finally, the authors provide new insights and recommendations about the fitting of ESR dose-response curves of fossil enamel with a double saturating exponential (DSE) function. The potential of a new variation of the DSE was also explored. Results of this study also show that the choice of the fitting function is of major importance, maybe more than the other sources previously mentioned, in order to get accurate final D(E) values.
Konzen, Kevin; Brey, Richard
2012-05-01
²²²Rn (radon) and ²²⁰Rn (thoron) progeny are known to interfere with determining the presence of long-lived transuranic radionuclides, such as plutonium and americium, and analyses may require from several hours up to several days for conclusive results. Methods are proposed that should expedite the analysis of air samples for determining the amount of transuranic radionuclides present using low-resolution alpha spectroscopy systems available from typical alpha continuous air monitors (CAMs) with multi-channel analyzer (MCA) capabilities. An alpha spectra simulation program was developed in Microsoft Excel Visual Basic that employed Monte Carlo numerical methods and serial-decay differential equations to produce spectra resembling actual ones. Transuranic radionuclides could be quantified with statistical certainty by applying peak-fitting equations using the method of least squares. Initial favorable results were achieved when samples containing radon progeny were decayed 15 to 30 min, and samples containing both radon and thoron progeny were decayed at least 60 min. The effort indicates that timely decisions can be made when determining transuranic activity using available alpha CAMs with alpha spectroscopy capabilities for counting retrospective air samples, if accompanied by analyses that consider the characteristics of serial decay.
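The serial-decay differential equations underlying such simulations have a closed-form Bateman solution; a two-member sketch for the short-lived radon progeny chain (standard half-lives for ²¹⁸Po and ²¹⁴Pb), showing how the daughter activity rises and then decays away, is given below.

```python
import math

def bateman_activity(N1_0, lam1, lam2, t):
    """Daughter activity in a two-member decay chain (parent -> daughter),
    from the Bateman solution: N2(t) = N1_0*lam1/(lam2-lam1)*(e^-lam1*t - e^-lam2*t)."""
    N2 = N1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return lam2 * N2   # activity = decay constant * number of atoms

# 218Po (half-life ~3.05 min) decaying into 214Pb (half-life ~26.8 min)
lam_po = math.log(2) / 3.05     # per minute
lam_pb = math.log(2) / 26.8
N0 = 1e6                        # initial 218Po atoms, illustrative
A_pb_30  = bateman_activity(N0, lam_po, lam_pb, 30.0)   # near the peak
A_pb_200 = bateman_activity(N0, lam_po, lam_pb, 200.0)  # long after decay-out
```

The rapid fall-off of this daughter activity is what makes a 15 to 60 min decay period sufficient before counting for transuranics.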
Nair, S P; Righetti, R
2015-05-07
Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least-squares error (LSE) parameter estimation. It is known that the strain-versus-time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we develop and test a method, which we call Resimulation of Noise (RoN), to provide a measure of non-linear LSE parameter estimate reliability. RoN estimates the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested here only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
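The RoN idea, refitting resimulated noisy realizations of the fitted curve to estimate the spread of the estimator, resembles a parametric bootstrap and can be sketched as follows. The single-exponential strain model and grid-search fit below are illustrative simplifications of the paper's poroelastic model.

```python
import math, random

def fit_tau(times, y):
    """Grid-search LSE fit of y = exp(-t/tau) for the time constant tau."""
    best = None
    for tau in (0.05 * k for k in range(1, 401)):   # tau grid over (0, 20]
        sse = sum((yi - math.exp(-t / tau)) ** 2 for t, yi in zip(times, y))
        if best is None or sse < best[0]:
            best = (sse, tau)
    return best[1]

def ron_spread(times, y, n_resim=50, seed=1):
    """Resimulation of Noise: estimate the fit's noise level, generate new
    noisy realizations of the fitted curve, refit each, and report the spread."""
    rng = random.Random(seed)
    tau_hat = fit_tau(times, y)
    model = [math.exp(-t / tau_hat) for t in times]
    sigma = math.sqrt(sum((yi - mi) ** 2 for yi, mi in zip(y, model)) / len(y))
    refits = [fit_tau(times, [mi + rng.gauss(0, sigma) for mi in model])
              for _ in range(n_resim)]
    mean = sum(refits) / len(refits)
    var = sum((r - mean) ** 2 for r in refits) / (len(refits) - 1)
    return tau_hat, math.sqrt(var)

rng = random.Random(0)
times = [0.1 * k for k in range(1, 51)]
y = [math.exp(-t / 2.0) + rng.gauss(0, 0.02) for t in times]   # synthetic data
tau_hat, spread = ron_spread(times, y)
```

The returned spread plays the role of the reliability measure: a single acquired curve yields both the estimate and an indication of how far repeated experiments would scatter.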
Fernández-Portales, Javier; Valdesuso, Raúl; Carreras, Raúl; Jiménez-Candil, Javier; Serrador, Ana; Romaní, Sebastián
2006-10-01
There are anatomical differences between right and left radial artery approaches for coronary catheterization that could influence application of the technique. We present the results of a randomized study that compared the effectiveness of the two approaches and identified factors associated with failure of the procedure. The study involved 351 consecutive patients: a left radial approach was used in 180, and a right radial approach, in 171. The procedure could not be completed using the initial approach selected in 15 patients (11 right radial vs. 4 left radial; P=.007). Use of a right radial approach, lack of catheterization experience, patient age >70 years, and the absence of hypertension were found to be independently associated with prolonged fluoroscopy duration and failure using the initial approach. Use of the right radial approach in patients aged over 70 years was associated with a 6-fold increase in the risk of an adverse event. Consequently, use of the right radial approach should be avoided in patients aged over 70 years when trainee practitioners are on the learning curve.
Analysis of epistatic interactions and fitness landscapes using a new geometric approach
Beerenwinkel, Niko; Pachter, Lior; Sturmfels, Bernd; Elena, Santiago F; Lenski, Richard E
2007-01-01
Background Understanding interactions between mutations and how they affect fitness is a central problem in evolutionary biology that bears on such fundamental issues as the structure of fitness landscapes and the evolution of sex. To date, analyses of fitness landscapes have focused either on the overall directional curvature of the fitness landscape or on the distribution of pairwise interactions. In this paper, we propose and employ a new mathematical approach that allows a more complete description of multi-way interactions and provides new insights into the structure of fitness landscapes. Results We apply the mathematical theory of gene interactions developed by Beerenwinkel et al. to a fitness landscape for Escherichia coli obtained by Elena and Lenski. The genotypes were constructed by introducing nine mutations into a wild-type strain and constructing a restricted set of 27 double mutants. Despite the absence of mutants higher than second order, our analysis of this genotypic space points to previously unappreciated gene interactions, in addition to the standard pairwise epistasis. Our analysis confirms Elena and Lenski's inference that the fitness landscape is complex, so that an overall measure of curvature obscures a diversity of interaction types. We also demonstrate that some mutations contribute disproportionately to this complexity. In particular, some mutations are systematically better than others at mixing with other mutations. We also find a strong correlation between epistasis and the average fitness loss caused by deleterious mutations. In particular, the epistatic deviations from multiplicative expectations tend toward more positive values in the context of more deleterious mutations, emphasizing that pairwise epistasis is a local property of the fitness landscape. Finally, we determine the geometry of the fitness landscape, which reflects many of these biologically interesting features. Conclusion A full description of complex fitness
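The pairwise epistasis referred to above, the deviation from a multiplicative expectation, reduces to a one-line log-ratio of fitness values; the numbers below are illustrative, not data from the Elena and Lenski landscape.

```python
import math

def epistasis(w_ab, w_a, w_b, w_wt=1.0):
    """Log-scale deviation of a double mutant from the multiplicative
    expectation: e = log(w_ab * w_wt) - log(w_a * w_b).
    e > 0 is positive epistasis, e < 0 negative epistasis."""
    return math.log(w_ab * w_wt) - math.log(w_a * w_b)

# Illustrative fitness values, relative to wild type = 1.0
e_neg = epistasis(w_ab=0.60, w_a=0.90, w_b=0.85)  # worse than 0.90*0.85 predicts
e_pos = epistasis(w_ab=0.80, w_a=0.90, w_b=0.85)  # better than 0.90*0.85 predicts
```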
Devereux, Mike; Gresh, Nohad; Piquemal, Jean-Philip; Meuwly, Markus
2014-08-05
A supervised, semiautomated approach to force field parameter fitting is described and applied to the SIBFA polarizable force field. The I-NoLLS interactive, nonlinear least squares fitting program is used as an engine for parameter refinement while keeping parameter values within a physical range. Interactive fitting is shown to avoid many of the stability problems that frequently afflict highly correlated, nonlinear fitting problems occurring in force field parametrizations. The method is used to obtain parameters for the H2O, formamide, and imidazole molecular fragments and their complexes with the Mg(2+) cation. Reference data obtained from ab initio calculations using an aug-cc-pVTZ basis set exploit advances in modern computer hardware to provide a more accurate parametrization of SIBFA than has previously been available.
Moshirfar, Majid; Calvo, Charles M; Kinard, Krista I; Williams, Lloyd B; Sikder, Shameema; Neuffer, Marcus C
2011-01-01
This study analyzes the characteristics of donor and recipient tissue preparation between the Hessburg-Barron and Hanna punch and trephine systems by using elliptical curve fitting models, light microscopy, and anterior segment optical coherence tomography (AS-OCT). Eight millimeter Hessburg-Barron and Hanna vacuum trephines and punches were used on six cadaver globes and six corneal-scleral rims, respectively. Eccentricity data were generated using measurements from photographs of the corneal buttons and were used to generate an elliptical curve fit to calculate properties of the corneal button. The trephination angle and punch angle were measured by digital protractor software from light microscopy and AS-OCT images to evaluate the consistency with which each device cuts the cornea. The Hanna trephine showed a trend towards producing a more circular recipient button than the Barron trephine (ratio of major axis to minor axis), ie, 1.059 ± 0.041 versus 1.110 ± 0.027 (P = 0.147) and the Hanna punch showed a trend towards producing a more circular donor cut than the Barron punch, ie, 1.021 ± 0.022 versus 1.046 ± 0.039 (P = 0.445). The Hanna trephine was demonstrated to have a more consistent trephination angle than the Barron trephine when assessing light microscopy images, ie, ±14.39° (95% confidence interval [CI] 111.9-157.7) versus ±19.38° (95% CI 101.9-150.2, P = 0.492) and OCT images, ie, ±8.08° (95% CI 106.2-123.3) versus ±11.16° (95% CI 109.3-132.6, P = 0.306). The angle created by the Hanna punch had less variability than the Barron punch from both the light microscopy, ie, ±4.81° (95% CI 101.6-113.9) versus ±11.28° (95% CI 84.5-120.6, P = 0.295) and AS-OCT imaging, ie, ±9.96° (95% CI 95.7-116.4) versus ±14.02° (95% CI 91.8-123.7, P = 0.825). Statistical significance was not achieved. The Hanna trephine and punch may be more accurate and consistent in cutting corneal buttons than the Hessburg-Barron trephine and punch when evaluated using
Understanding the distribution of fitness effects of mutations by a biophysical-organismal approach
NASA Astrophysics Data System (ADS)
Bershtein, Shimon
2011-03-01
The distribution of fitness effects of mutations is central to many questions in evolutionary biology. However, it remains poorly understood, primarily because a fundamental connection that exists between the fitness of organisms and the molecular properties of the proteins encoded by their genomes is largely overlooked by traditional research approaches. Past efforts to bridge this gap followed the ``evolution first'' paradigm, whereby populations were subjected to selection under certain conditions, and mutations which emerged in adapted populations were analyzed using genomic approaches. The results obtained in the framework of this approach, while often useful, are not easily interpretable because mutations become fixed due to a convolution of multiple causes. We have undertaken a conceptually opposite strategy: mutations with known biophysical and biochemical effects on E. coli's essential proteins (based on computational analysis and in vitro measurements) were introduced into the organism's chromosome, and the resulting fitness effects were monitored. Studying the distribution of fitness effects of such fully controlled replacements revealed a very complex fitness landscape, where the impact of the microscopic properties of the mutated proteins (folding, stability, and function) is modulated on a macroscopic, whole-genome level. Furthermore, the magnitude of the cellular response to the introduced mutations seems to depend on the thermodynamic status of the mutant.
A new semi-empirical approach to performance curves of polymer electrolyte fuel cells
NASA Astrophysics Data System (ADS)
Pisani, L.; Murgia, G.; Valentini, M.; D'Aguanno, B.
We derive a semi-empirical equation to describe the performance curves of polymer electrolyte membrane fuel cells (PEMFCs). The derivation is based on the observation that the main non-linear contributions to the cell voltage deterioration of H2/air feed cells derive from the cathode reactive region. To evaluate such contributions we assumed that the diffusion region of the cathode is made of a network of pores able to transport gas and liquid mixtures, while the reactive region is made of a different network of pores for gas transport in a liquid-permeable matrix. The mathematical model is largely mechanistic, with most terms deriving from phenomenological mass transport and conservation equations. The only fully empirical term in the performance equation is the Ohmic overpotential, which is assumed to be linear in the cell current density. The resulting equation is similar to other published performance equations, but with the advantage of having coefficients with a precise physical origin and a precise physical meaning. Our semi-empirical equation is used to fit several sets of published experimental data, and the fits always showed good agreement between the model results and the experimental data. The values of the fitting coefficients, together with their associated physical meaning, allow us to assess and quantify the phenomenology which sets in at the cathode as the cell current density is increased. More precisely, we observe the development of flooding and the local decrease of the oxygen concentration. Further developments of such a model for the cathode compartment of the fuel cell are discussed.
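A minimal sketch of fitting a fuel cell performance curve, assuming the classic Tafel-plus-ohmic form V = E0 - b*log10(i) - R*i rather than the authors' extended equation; the function name and three-parameter form are illustrative assumptions. Because the model is linear in (E0, b, R), the normal equations can be solved directly:

```python
import math

def fit_polarization(i_data, v_data):
    """Least-squares fit of V = E0 - b*log10(i) - R*i.

    Linear in (E0, b, R), so we solve the 3x3 normal equations directly.
    """
    # Design matrix rows: [1, -log10(i), -i]
    X = [[1.0, -math.log10(i), -i] for i in i_data]
    n = len(X)
    # Normal equations: (X^T X) p = X^T v
    A = [[sum(X[k][r] * X[k][c] for k in range(n)) for c in range(3)]
         for r in range(3)]
    b = [sum(X[k][r] * v_data[k] for k in range(n)) for r in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    p = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        p[r] = (b[r] - sum(A[r][c] * p[c] for c in range(r + 1, 3))) / A[r][r]
    return p  # (E0, Tafel slope b, ohmic resistance R)
```

Fitting synthetic data generated from known coefficients recovers them, which is a quick sanity check before applying the form to real polarization data.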
Barton, Zachary J; Rodríguez-López, Joaquín
2017-03-07
We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.
Guidance studies for curved, descending approaches using the Microwave Landing System (MLS)
NASA Technical Reports Server (NTRS)
Feather, J. B.
1986-01-01
Results for the Microwave Landing System (MLS) guidance algorithm development conducted under the Advanced Transport Operating Systems (ATOPS) Technology Studies (NAS1-16202) are documented. The study consisted of evaluating guidance laws for vertical and lateral path control, as well as speed control, for approaches not possible with the present Instrument Landing System (ILS) equipment. Several specific approaches were simulated using the MD-80 aircraft simulation program, including curved, descending (segmented glide slope), and decelerating paths. Emphasis was placed on development of guidance algorithms specifically for approaches at Burbank, where proposed flight demonstrations are planned. Results of this simulation phase are suitable for use in future fixed-base simulator evaluations employing actual hardware (autopilot and a performance management system).
Estimating flood-frequency curves with scarce data: a physically-based analytic approach
NASA Astrophysics Data System (ADS)
Basso, Stefano; Schirmer, Mario; Botter, Gianluca
2016-04-01
Predicting the magnitude and frequency of floods is a key issue for hazard assessment and mitigation. While observations and statistical methods provide good estimates when long data series are available, their performance deteriorates with limited data. Moreover, the outcome of varying hydroclimatic drivers can hardly be evaluated by these methods. Physically-based approaches embodying the mechanics of streamflow generation provide a valuable alternative that may improve purely statistical estimates and cope with human-induced alteration of climate and landscape. In this work, a novel analytic approach is proposed to derive seasonal flood-frequency curves, and to estimate the recurrence intervals of seasonal maxima. The method builds on a stochastic description of daily streamflows, arising from rainfall and soil moisture dynamics in the catchment. The limited number of parameters involved in the formulation embodies climate and landscape attributes of the contributing catchment, and can be specified from daily rainfall and streamflow data. The application to two case studies suggests the model's ability to provide reliable estimates of seasonal flood-frequency curves in different climatic settings, and to mimic the shapes of flood-frequency curves emerging in persistent and erratic flow regimes. The method is especially valuable when only short data series are available (e.g. newly or temporarily gauged catchments, modified climatic or landscape features). Indeed, estimates provided by the model for high-flow events characterized by recurrence times greater than the available sample size do not deteriorate significantly, as compared to the performance of purely statistical methods. The proposed physically-based analytic approach represents a first step toward a probabilistic characterization of extremes based on climate and landscape attributes, which may be especially valuable to assess flooding hazard in data-scarce regions and support the development of reliable mitigation
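For contrast with the statistical estimates discussed above, the textbook empirical flood-frequency construction from annual maxima can be sketched with Weibull plotting positions; this is the standard baseline method, not the authors' analytic model:

```python
def flood_frequency_curve(annual_maxima):
    """Empirical flood-frequency curve via Weibull plotting positions.

    Returns (discharge, return_period_years) pairs, largest flood first.
    """
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    # Weibull formula: exceedance probability p = m / (n + 1) for rank m,
    # so the return period is T = 1 / p = (n + 1) / m.
    return [(q, (n + 1) / m) for m, q in enumerate(ranked, start=1)]
```

Note that the largest observed flood in an n-year record is assigned a return period of only n + 1 years, which is exactly the short-record limitation the physically-based approach aims to overcome.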
Using a Space Filling Curve Approach for the Management of Dynamic Point Clouds
NASA Astrophysics Data System (ADS)
Psomadaki, S.; van Oosterom, P. J. M.; Tijssen, T. P. M.; Baart, F.
2016-10-01
Point cloud usage has increased over the years. The development of low-cost sensors now makes it possible to acquire frequent point cloud measurements over short time periods (day, hour, second). Based on requirements coming from the coastal monitoring domain, we have developed, implemented and benchmarked a spatio-temporal point cloud data management solution. For this purpose, we make use of the flat model approach (one point per row) in an Index Organised Table within an RDBMS and an improved spatio-temporal organisation using a Space Filling Curve approach. Two variants coming from the two extremes of the space-time continuum are also taken into account, along with two treatments of the z dimension: as attribute or as part of the space filling curve. Through executing a benchmark we elaborate on the performance (loading and querying time) and storage required by those different approaches. Finally, we validate the correctness and suitability of our method through an out-of-the-box way of managing dynamic point clouds.
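The space filling curve organisation can be illustrated with a Morton (Z-order) key, one common choice of space filling curve; the 21-bit-per-dimension width and the decision to interleave time as a third dimension are assumptions for illustration, not necessarily the paper's exact encoding:

```python
def morton_key(x, y, t, bits=21):
    """Interleave the bits of non-negative integers x, y, t into one
    Z-order key, so nearby points in space-time get nearby keys.

    With 21 bits per dimension the key fits in 63 bits, matching a
    signed 64-bit integer column in an Index Organised Table.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)      # x bit -> position 3i
        key |= ((y >> i) & 1) << (3 * i + 1)  # y bit -> position 3i+1
        key |= ((t >> i) & 1) << (3 * i + 2)  # t bit -> position 3i+2
    return key
```

Storing this single integer as the table's index key clusters rows along the curve, which is what makes range queries over space and time efficient in the flat-model approach.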
NASA Technical Reports Server (NTRS)
Benner, M. S.; Sawyer, R. H.; Mclaughlin, M. D.
1973-01-01
A real-time, fixed-base simulation study has been conducted to determine the curved, descending approach paths (within passenger-comfort limits) that would be acceptable to pilots, the flight-director-system logic requirements for curved-flight-path guidance, and the paths which can be flown within proposed microwave landing system (MLS) coverage angles. Two STOL aircraft configurations were used in the study. Generally, no differences in the results between the two STOL configurations were found. The investigation showed that paths with a 1828.8 meter turn radius and a 1828.8 meter final-approach distance were acceptable without winds and with winds up to at least 15 knots for airspeeds from 75 to 100 knots. The altitude at roll-out from the final turn determined which final-approach distances were acceptable. Pilots preferred to have an initial straight leg of about 1 n. mi. after MLS guidance acquisition before turn intercept. The size of the azimuth coverage angle necessary to meet passenger and pilot criteria depends on the size of the turn angle: plus or minus 60 deg was adequate to cover all paths except ones with a 180 deg turn.
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.
2014-01-01
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Age-Infusion Approach to Derive Injury Risk Curves for Dummies from Human Cadaver Tests.
Yoganandan, Narayan; Banerjee, Anjishnu; Pintar, Frank A
2015-01-01
Injury criteria and risk curves are needed for anthropomorphic test devices (dummies) to assess injuries for improving human safety. The present state of knowledge is based on using injury outcomes and biomechanical metrics from post-mortem human subject (PMHS) tests and mechanical records from dummy tests. Data from these models are combined to develop dummy injury assessment risk curves (IARCs)/dummy injury assessment risk values (IARVs). This simple substitution approach involves duplicating dummy metrics for PMHS tested under similar conditions and pairing them with PMHS injury outcomes. It does not directly account for the age of each specimen tested in the PMHS group. Current substitution methods for injury risk assessments use age as a covariate, and dummy metrics (e.g., accelerations) are not modified so that age can be directly included in the model. The age-infusion methodology presented in this perspective article accommodates an annual rate factor that modifies the dummy injury risk assessment responses to account for the age of the PMHS that the injury data were based on. The annual rate factor is determined using human injury risk curves. The dummy metrics are modulated based on individual PMHS age and rate factor, thus "infusing" age into the dummy data. Using PMHS injuries and accelerations from side-impact experiments, matched-pair dummy tests, and logistic regression techniques, the methodology demonstrates the process of age-infusion to derive the IARCs and IARVs.
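The logistic-regression step behind such risk curves can be sketched as follows. This is a generic gradient-ascent fit on toy normalized dummy metrics, not the paper's matched-pair, age-infused procedure; `fit_logistic` and `injury_risk` are hypothetical helper names:

```python
import math

def fit_logistic(metrics, injured, lr=0.05, epochs=20000):
    """Fit P(injury) = 1 / (1 + exp(-(b0 + b1*metric))) by gradient
    ascent on the log-likelihood (toy-scale data assumed)."""
    b0, b1 = 0.0, 0.0
    n = len(metrics)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(metrics, injured):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)        # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def injury_risk(metric, b0, b1):
    """Evaluate the fitted injury risk curve at a given dummy metric."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * metric)))
```

Age infusion would then rescale each specimen's metric before this fit; that modulation step is omitted here because the rate-factor formula is specific to the paper.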
Zhang, Gang-Chun; Lin, Hong-Liang; Lin, Shan-Yang
2012-07-01
The cocrystal formation of indomethacin (IMC) and saccharin (SAC) by mechanical cogrinding or thermal treatment was investigated. The formation mechanism and stability of IMC-SAC cocrystal prepared by the cogrinding process were explored. Typical IMC-SAC cocrystal was also prepared by a solvent evaporation method. All the samples were identified and characterized by differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) microspectroscopy with curve-fitting analysis. The physical stability of different IMC-SAC ground mixtures before and after storage for 7 months was examined. Stepwise measurements carried out at specific intervals over a continuous cogrinding process showed continuous growth in cocrystal formation between IMC and SAC. The main IR spectral shifts from 3371 to 3347 cm(-1) and 1693 to 1682 cm(-1) for IMC, as well as from 3094 to 3136 cm(-1) and 1718 to 1735 cm(-1) for SAC, suggested that the OH and NH groups in both chemical structures took part in hydrogen bonding, leading to the formation of IMC-SAC cocrystal. A melting point at 184 °C for the 30-min IMC-SAC ground mixture was almost the same as that of the solvent-evaporated IMC-SAC cocrystal. The 30-min IMC-SAC ground mixture was also confirmed to have components and contents similar to those of the solvent-evaporated IMC-SAC cocrystal by curve-fitting analysis of the IR spectra. The thermally induced IMC-SAC cocrystal formation was also found to depend on the treatment temperature. Different IMC-SAC ground mixtures stored at 25 °C/40% RH for 7 months showed an increased tendency toward IMC-SAC cocrystallization.
Highly curved image sensors: a practical approach for improved optical performance
NASA Astrophysics Data System (ADS)
Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff
2017-06-01
The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation and create a high-stress, stretch-dominated state. Our process creates a bridge between the high-precision, low-cost, but planar CMOS process and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state-of-the-art image sensor formats. As an example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach
Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin
2014-01-01
Background: To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single "goodness-of-fit" (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods: We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results: For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500–87,600] vs. $139,700 [95%CI: 79,900–182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900–156,200] per QALY gained). The TAVR model yielded similar results. Conclusions: Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
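The Pareto-frontier notion of best-fitting input sets follows directly from its definition and can be sketched in a few lines; the labels and per-target error tuples below are illustrative:

```python
def pareto_frontier(input_sets):
    """Return the input sets not dominated on any calibration target.

    `input_sets` maps a label to a tuple of per-target errors (lower is
    better). A set is dominated if another is at least as good on every
    target and strictly better on at least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b)) and
                any(x < y for x, y in zip(a, b)))

    return {name: errs for name, errs in input_sets.items()
            if not any(dominates(other, errs)
                       for o_name, other in input_sets.items()
                       if o_name != name)}
```

No weights appear anywhere in this computation, which is the point of the approach: the frontier is invariant to how one might have scored the targets.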
Lin, Shan-Yang; Lin, Hong-Liang; Chi, Ying-Ting; Huang, Yu-Ting; Kao, Chi-Yu; Hsieh, Wei-Hsien
2015-12-30
The amorphous form of a drug has higher water solubility and faster dissolution rate than its crystalline form. However, the amorphous form is less thermodynamically stable and may recrystallize during manufacturing and storage. Maintaining the amorphous state of drug in a solid dosage form is extremely important to ensure product quality. The purpose of this study was to quantitatively determine the amount of amorphous indomethacin (INDO) formed in the Soluplus® solid dispersions using thermoanalytical and Fourier transform infrared (FTIR) spectral curve-fitting techniques. The INDO/Soluplus® solid dispersions with various weight ratios of both components were prepared by air-drying and heat-drying processes. A predominate IR peak at 1683cm(-1) for amorphous INDO was selected as a marker for monitoring the solid state of INDO in the INDO/Soluplus® solid dispersions. The physical stability of amorphous INDO in the INDO/Soluplus® solid dispersions prepared by both drying processes was also studied under accelerated conditions. A typical endothermic peak at 161°C for γ-form of INDO (γ-INDO) disappeared from all the differential scanning calorimetry (DSC) curves of INDO/Soluplus® solid dispersions, suggesting the amorphization of INDO caused by Soluplus® after drying. In addition, two unique IR peaks at 1682 (1681) and 1593 (1591)cm(-1) corresponded to the amorphous form of INDO were observed in the FTIR spectra of all the INDO/Soluplus® solid dispersions. The quantitative amounts of amorphous INDO formed in all the INDO/Soluplus® solid dispersions were increased with the increase of γ-INDO loaded into the INDO/Soluplus® solid dispersions by applying curve-fitting technique. However, the intermolecular hydrogen bonding interaction between Soluplus® and INDO were only observed in the samples prepared by heat-drying process, due to a marked spectral shift from 1636 to 1628cm(-1) in the INDO/Soluplus® solid dispersions. The INDO/Soluplus® solid
NASA Astrophysics Data System (ADS)
Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.
2015-12-01
Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling, such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use, and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
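The construction behind a cost abatement curve can be sketched as follows, assuming each measure is summarized by an annualized cost and annual resource savings; the measure names and numbers are illustrative, not the study's data:

```python
def abatement_curve(measures):
    """Order efficiency measures by cost per unit saved and accumulate
    savings -- the construction behind a cost abatement curve.

    `measures` is a list of (name, annualized_cost, annual_savings)
    tuples. Returns (name, cost_per_unit, cumulative_savings) tuples in
    implementation order (cheapest abatement first).
    """
    ranked = sorted(measures, key=lambda m: m[1] / m[2])
    curve, total = [], 0.0
    for name, cost, savings in ranked:
        total += savings
        curve.append((name, cost / savings, total))
    return curve
```

Reading the curve left to right then answers the study's question directly: to hit a target cumulative reduction, implement measures up to that point on the x-axis.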
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2-chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimates of the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimates of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
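A grid-search fit over the whole A-Cc curve can be sketched with a simplified Farquhar-type model; this version omits mesophyll conductance and the TPU limitation, and the kinetic constants (gamma_star, km) are illustrative placeholders, not the study's values:

```python
def a_cc(cc, vcmax, jmax, rd, gamma_star=40.0, km=700.0):
    """Simplified FvCB net assimilation: the minimum of the
    Rubisco-limited and RuBP-regeneration-limited rates, minus day
    respiration. Units and constants are illustrative."""
    ac = vcmax * (cc - gamma_star) / (cc + km)
    aj = jmax * (cc - gamma_star) / (4.0 * cc + 8.0 * gamma_star)
    return min(ac, aj) - rd

def grid_search_fit(cc_obs, a_obs, vc_grid, j_grid, rd_grid):
    """Pick (Vcmax, Jmax, Rd) minimizing the sum of squared errors over
    the whole A-Cc curve, i.e. a one-step simultaneous fit rather than
    the sequential conventional method."""
    best, best_sse = None, float("inf")
    for vc in vc_grid:
        for j in j_grid:
            for rd in rd_grid:
                sse = sum((a_cc(c, vc, j, rd) - a) ** 2
                          for c, a in zip(cc_obs, a_obs))
                if sse < best_sse:
                    best, best_sse = (vc, j, rd), sse
    return best, best_sse
```

In practice the grid result would seed a non-linear refinement step, which is the combination the authors recommend.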
Liang, Junjie; Hu, Youzhu; Zhao, Qiong; Li, Qiang
2015-07-01
Endoscopic thyroidectomy via complete areola approach (ETCAA) is becoming the preferred choice of some patients due to its excellent cosmetic result. The endoscope holder plays an important role in the procedure. Research on the learning curve is helpful for training endoscope holders and improving the whole procedure. This prospective study investigated 100 consecutive patients who underwent ETCAA performed by a single experienced surgeon and a single inexperienced endoscope holder. Patients were equally divided into ten groups chronologically. One-way analysis of variance, the Student-Newman-Keuls test, and Pearson's Chi-square test were used to assess statistical significance for clinical data. The correlations between the operative time and the case number, the endoscope holding score and the case number, the operative time and the interval between neighboring procedures, and the endoscope holding score and the interval between neighboring procedures were analyzed with linear regression analysis. The mean operative time was 96.30 ± 13.10 min, and the mean endoscope holding score was 74.65 ± 14.08. There were significant differences among the mean operative times (P < 0.0001) and the mean endoscope holding scores (P < 0.0001). Multiple comparison revealed that the mean operative times of groups 7-10 were shorter than those of groups 4-6, which in turn were shorter than those of groups 1-3. Moreover, the mean endoscope holding scores of groups 7-10 were higher than those of groups 4-6, which in turn were higher than those of groups 1-3. Linear regression analysis showed negative correlation between the operative time and the case number (r = -0.746, P < 0.0001), positive correlation between the endoscope holding score and the case number (r = 0.765, P < 0.0001), positive correlation between the operative time and the interval of neighboring procedures (r = 0.777, P = 0.008), and negative
NASA Astrophysics Data System (ADS)
Koohestani, Behrooz; Corne, David W.
2009-04-01
The Bandwidth Minimization Problem (BMP) is a graph layout problem which is known to be NP-complete. Since 1960, a considerable number of algorithms have been developed to address the BMP. At present, meta-heuristics (such as evolutionary algorithms and tabu search) are popular and successful approaches to the BMP. In such algorithms, the design of the fitness function (i.e. the metric that attempts to guide the search towards high-quality solutions) plays a key role in performance; the fitness function, along with the operators, induces the 'search landscape', and careful attention to these issues may lead to landscapes that are more amenable to successful search. For example, rather than simply use the most obvious quality measure (in this case, the bandwidth itself), it is often helpful to design a more informative measure, indicating not only a solution's quality but also encapsulating (for example) an indication of how distant this particular solution is from even better solutions. In this paper, a new fitness function and an associated new mutation operator are presented for the BMP. These are incorporated within a simple Evolutionary Algorithm (EA), and evaluated on a set of 27 instances of the BMP (from the Harwell-Boeing sparse matrix collection). The results of this EA are compared with results obtained using the standard fitness function (used in almost all previous research on metaheuristics applied to the BMP). The results indicate clearly that the new fitness function and operator provide significantly superior results in the reduction of bandwidth.
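The idea of a fitness more informative than raw bandwidth can be sketched as follows; the tie-breaking refinement shown (penalizing how many edges attain the maximum distance) illustrates the principle only and is not the paper's actual fitness function:

```python
def bandwidth(order, edges):
    """Bandwidth of a layout: the largest label distance across an edge."""
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

def informative_fitness(order, edges):
    """A to-be-minimized fitness that refines raw bandwidth: layouts
    with equal bandwidth are distinguished by how many edges attain the
    maximum distance, giving the search a gradient between them."""
    pos = {v: i for i, v in enumerate(order)}
    dists = [abs(pos[u] - pos[v]) for u, v in edges]
    bw = max(dists)
    # Fractional penalty in [0, 1) so it never overrides the bandwidth.
    return bw + dists.count(bw) / (len(dists) + 1.0)
```

Two layouts with the same bandwidth but different numbers of maximal edges now receive different scores, which is exactly the kind of landscape-shaping the abstract describes.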
Aerobic fitness ecological validity in elite soccer players: a metabolic power approach.
Manzi, Vincenzo; Impellizzeri, Franco; Castagna, Carlo
2014-04-01
The aim of this study was to examine the association between match metabolic power (MP) categories and aerobic fitness in elite-level male soccer players. Seventeen male professional soccer players were tested for VO2max, maximal aerobic speed (MAS), VO2 at ventilatory threshold (VO2VT and %VO2VT), and speed at a selected blood lactate concentration (4 mmol·L(-1), V(L4)). Aerobic fitness tests were performed at the end of preseason and after 12 and 24 weeks during the championship. Aerobic fitness and MP variables were considered as the mean of all seasonal tests and of 16 championship home matches, respectively. Results showed that VO2max (from 0.55 to 0.68), MAS (from 0.52 to 0.72), VO2VT (from 0.72 to 0.83), %VO2maxVT (from 0.62 to 0.65), and V(L4) (from 0.56 to 0.73) showed significant (p < 0.05 to 0.001) large to very large associations with MP variables. These results provide evidence for the ecological validity of aerobic fitness in male professional soccer. Strength and conditioning professionals should consider aerobic fitness in their training programs when dealing with professional male soccer players. The MP method proved an interesting approach for tracking external load in male professional soccer players.
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Xu, Di
2015-01-01
Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…
Bouabidi, A; Talbi, M; Bourichi, H; Bouklouze, A; El Karbane, M; Boulanger, B; Brik, Y; Hubert, Ph; Rozet, E
2012-12-01
An innovative, versatile strategy using Total Error has been proposed to decide on a method's validity; it controls the risk of accepting an unsuitable assay together with the ability to predict the reliability of future results. This strategy is based on the simultaneous combination of the systematic (bias) and random (imprecision) errors of analytical methods. Using validation standards, both types of error are combined through the use of a prediction interval or β-expectation tolerance interval. Finally, an accuracy profile is built by connecting, on one hand, all the upper tolerance limits and, on the other hand, all the lower tolerance limits. This profile, combined with pre-specified acceptance limits, allows the evaluation of the validity of any quantitative analytical method and thus its fitness for its intended purpose. In this work, the accuracy profile approach was evaluated on several types of analytical methods encountered in the pharmaceutical industry, covering different pharmaceutical matrices. The four studied examples illustrate the flexibility and applicability of this approach for matrices ranging from tablets to syrups, techniques such as liquid chromatography or UV spectrophotometry, and categories of assays commonly encountered in the pharmaceutical industry, i.e. content assays, dissolution assays, and quantitative impurity assays. The accuracy profile approach assesses the fitness for purpose of these methods for their future routine application. It also allows the selection of the most suitable calibration curve, the adequate evaluation of a potential matrix effect with proposal of efficient solutions, and the correct definition of the limits of quantification of the studied analytical procedures.
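A β-expectation tolerance interval per concentration level, the building block of an accuracy profile, can be sketched as follows; the normal-quantile approximation (instead of Student-based tolerance factors with variance components) and the ±15% acceptance limit are simplifying assumptions for illustration:

```python
from statistics import NormalDist, mean, stdev

def accuracy_profile(runs_by_level, acceptance=15.0, beta=0.95):
    """Approximate beta-expectation tolerance interval per level.

    `runs_by_level` maps a nominal concentration to recovered values
    (% of nominal). Returns {level: (lower bias %, upper bias %,
    inside_acceptance)} -- the per-level points of an accuracy profile.
    """
    z = NormalDist().inv_cdf(0.5 + beta / 2.0)
    profile = {}
    for level, values in runs_by_level.items():
        n = len(values)
        bias = mean(values) - 100.0
        # Half-width inflated by sqrt(1 + 1/n) for a future single result.
        half = z * stdev(values) * (1.0 + 1.0 / n) ** 0.5
        lo, hi = bias - half, bias + half
        profile[level] = (lo, hi, -acceptance <= lo and hi <= acceptance)
    return profile
```

Connecting the lower and upper limits across levels, and checking them against the acceptance limits, reproduces the profile-based validity decision the abstract describes.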
Corvettes, Curve Fitting, and Calculus
ERIC Educational Resources Information Center
Murawska, Jaclyn M.; Nabb, Keith A.
2015-01-01
Sometimes the best mathematics problems come from the most unexpected situations. Last summer, a Corvette raced down a local quarter-mile drag strip. The driver, a family member, provided the spectators with time and distance-traveled data from his time slip and asked "Can you calculate how many seconds it took me to go from 0 to 60…
Beyond the SCS curve number: A new stochastic spatial runoff approach
NASA Astrophysics Data System (ADS)
Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.
2015-12-01
The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm-event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land-use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
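The traditional SCS-CN relation that this abstract extends can be sketched in a few lines. This is a minimal Python sketch of the standard textbook equation only (not the paper's stochastic extension); the function name and parameter values are illustrative.

```python
def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
    """Event runoff depth (mm) from the standard SCS-CN equation.

    p_mm: event rainfall depth in mm; cn: curve number (0-100];
    lambda_ia: initial-abstraction ratio (0.2 is the classic value).
    """
    s = 25400.0 / cn - 254.0   # potential maximum retention S, in mm
    ia = lambda_ia * s         # initial abstraction before runoff begins
    if p_mm <= ia:
        return 0.0             # rainfall fully abstracted: no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 50 mm storm on a CN = 80 watershed yields roughly 14 mm of runoff
q = scs_cn_runoff(50.0, 80)
```

Note the single-threshold behavior (zero runoff until rainfall exceeds the initial abstraction), which is exactly the spatially lumped assumption the paper generalizes.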
Connecting thermal performance curve variation to the genotype: a multivariate QTL approach.
Latimer, C A L; Foley, B R; Chenoweth, S F
2015-01-01
Thermal performance curves (TPCs) are continuous reaction norms that describe the relationship between organismal performance and temperature and are useful for understanding trade-offs involved in thermal adaptation. Although thermal trade-offs such as those between generalists and specialists or between hot- and cold-adapted phenotypes are known to be genetically variable and evolve during thermal adaptation, little is known of the genetic basis of TPCs - specifically, the loci involved and the directionality of their effects across different temperatures. To address this, we took a multivariate approach, mapping quantitative trait loci (QTL) for locomotor activity TPCs in the fly, Drosophila serrata, using a panel of 76 recombinant inbred lines. The distribution of additive genetic (co)variance in the mapping population was remarkably similar to the distribution of mutational (co)variance for these traits. We detected 11 TPC QTL in females and 4 in males. Multivariate QTL effects were closely aligned with the major axes of genetic (co)variation between temperatures; most QTL effects corresponded to variation for either overall increases or decreases in activity, with a smaller number indicating possible trade-offs between activity at high and low temperatures. QTL representing changes in curve shape such as the 'generalist-specialist' trade-off, thought key to thermal adaptation, were poorly represented in the data. We discuss these results in the light of genetic constraints on thermal adaptation. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology.
NASA Astrophysics Data System (ADS)
Akhunov, T. A.; Wertz, O.; Elyiv, A.; Gaisin, R.; Artamonov, B. P.; Dudinov, V. N.; Nuritdinov, S. N.; Delvaux, C.; Sergeyev, A. V.; Gusev, A. S.; Bruevich, V. V.; Burkhonov, O.; Zheleznyak, A. P.; Ezhkova, O.; Surdej, J.
2017-03-01
We present new photometric observations of H1413+117 acquired during seasons between 2001 and 2008 in order to estimate the time delays between the lensed quasar images and to best characterize the ongoing micro-lensing events. We propose a high-performing photometric method called adaptive point spread function fitting and have successfully tested it on a large number of simulated frames. This has enabled us to estimate the photometric error bars affecting our observational results. We analysed the V- and R-band light curves and V-R colour variations of the A-D components, which show short- and long-term brightness variations correlated with colour variations. Using the χ2 and dispersion methods, we estimated the time delays on the basis of the R-band light curves over the seasons between 2003 and 2006. We have derived the new values: ΔtAB = -17.4 ± 2.1, ΔtAC = -18.9 ± 2.8 and ΔtAD = 28.8 ± 0.7 d using the χ2 method (B and C are leading, D is trailing) with 1σ confidence intervals. We also used available observational constraints (namely the lensed image positions, the flux ratios in the mid-IR, and the two sets of time delays derived in the present work) to update the lens redshift estimate. We obtained z_l = 1.95^{+0.06}_{-0.10}, which is in good agreement with previous estimations. We propose to characterize two kinds of micro-lensing events: micro-lensing for the A, B, C components corresponds to typical variations of ∼10-4 mag d-1 during all the seasons, while the D component shows an unusually strong micro-lensing effect with variations of up to ∼10-3 mag d-1 during 2004 and 2005.
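The χ2 time-delay estimation mentioned in the abstract can be sketched as a grid search: shift one light curve by a trial delay, allow a free magnitude offset between images, and keep the delay minimizing χ2. This is a minimal Python sketch; the function name, the interpolation choice, and the free-offset treatment are assumptions, not the authors' exact procedure.

```python
import numpy as np

def chi2_delay(t_a, m_a, sig_a, t_b, m_b, trial_delays):
    """Return the trial delay minimizing chi^2 between light curve A and
    a time-shifted, magnitude-offset copy of light curve B."""
    best_chi2, best_dt = np.inf, None
    for dt in trial_delays:
        # shift B by dt and resample it on A's observation epochs
        mb = np.interp(t_a, t_b + dt, m_b)
        dm = np.mean(m_a - mb)            # free magnitude offset between images
        chi2 = np.sum(((m_a - mb - dm) / sig_a) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_dt = chi2, dt
    return best_dt
```

A dispersion-method variant would replace the χ2 sum with a dispersion statistic of the merged curve; the grid-search structure is the same.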
NASA Astrophysics Data System (ADS)
Afshar, Abbas; Emami Skardi, Mohammad J.; Masoumi, Fariborz
2015-09-01
Efficient reservoir management requires the implementation of generalized optimal operating policies that manage storage volumes and releases while optimizing a single objective or multiple objectives. Reservoir operating rules stipulate the actions that should be taken under the current state of the system. This study develops a set of piecewise linear operating rule curves for water supply and hydropower reservoirs, employing an imperialist competitive algorithm in a parameterization-simulation-optimization approach. The adaptive penalty method is used for constraint handling and proved to work efficiently in the proposed scheme. Its performance is tested deriving an operation rule for the Dez reservoir in Iran. The proposed modelling scheme converged to near-optimal solutions efficiently in the case examples. It was shown that the proposed optimum piecewise linear rule may perform quite well in reservoir operation optimization as the operating period extends from very short to fairly long periods.
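A piecewise linear operating rule of the kind derived in this study maps the current system state (e.g., storage) to a release through linear interpolation between breakpoints. This is a minimal Python sketch; the breakpoints, units, and values are illustrative, not the Dez reservoir rule.

```python
import numpy as np

def release_rule(storage, breakpoints, releases):
    """Piecewise-linear reservoir operating rule: release as a linear
    interpolation of current storage between optimized breakpoints."""
    return float(np.interp(storage, breakpoints, releases))

# hypothetical rule: hold back water at low storage, release more when fuller
bp = [0.0, 100.0, 250.0, 400.0]   # storage breakpoints (Mm^3)
rl = [0.0, 20.0, 60.0, 90.0]      # corresponding releases (Mm^3/month)
```

In the parameterization-simulation-optimization scheme, the optimizer (here, an imperialist competitive algorithm) searches over the breakpoint/release values while a simulation model scores each candidate rule.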
NASA Technical Reports Server (NTRS)
White, W. F. (Compiler)
1978-01-01
The Terminal Configured Vehicle (TCV) program operates a Boeing 737 modified to include a second cockpit and a large amount of experimental navigation, guidance and control equipment for research on advanced avionics systems. Demonstration flights, which included curved approaches and automatic landings, were tracked by a phototheodolite system. For 50 approaches during the demonstration flights, the following results were obtained: the navigation system, using TRSB guidance, delivered the aircraft onto the 3 nautical mile final approach leg with an average overshoot of 25 feet past centerline, subject to a 2-sigma dispersion of 90 feet. Lateral tracking data showed a mean error of 4.6 feet left of centerline at the category 1 decision height (200 feet) and 2.7 feet left of centerline at the category 2 decision height (100 feet). These values were subject to a sigma dispersion of about 10 feet. Finally, the glidepath tracking errors were 2.5 feet and 3.0 feet high at the category 1 and 2 decision heights, respectively, with a 2-sigma value of 6 feet.
NASA Astrophysics Data System (ADS)
Chang, Liyun; Ho, Sheng-Yow; Lee, Tsair-Fwu; Yeh, Shyh-An; Ding, Hueisch-Jy; Chen, Pang-Yu
2015-03-01
EBT2 film is a convenient dosimetry quality-assurance (QA) tool with high 2D dosimetry resolution and a self-development property, used to verify radiation therapy treatment planning and in special projects; however, the user suffers a relatively high degree of uncertainty (more than ±6%, Hartmann et al. [29]) and the trouble of cutting one piece of film into small pieces and then reintegrating them each time. To avoid this tedious cutting work and to save calibration time and budget, a dose-range analysis is presented in this study for EBT2 film calibration using the percentage-depth-dose (PDD) method. Different combinations of the three dose ranges, 9-26 cGy, 33-97 cGy and 109-320 cGy, with two types of curve-fitting algorithms, converting either film pixel values or net optical densities into doses, were tested and compared. With the lowest error and an acceptable inaccuracy of less than 3 cGy over the clinical dose range (9-320 cGy), a single film calibrated with the net-optical-density algorithm over the dose range 109-320 cGy is suggested for routine calibration.
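The two calibration quantities compared in the abstract can be sketched briefly: net optical density is the log-ratio of unexposed to exposed pixel values, and a calibration curve is then fitted mapping that quantity to dose. This is a minimal Python sketch; the paper does not specify its fitting function here, so the polynomial form (and all numbers) are assumptions.

```python
import numpy as np

def net_optical_density(pv_exposed, pv_unexposed):
    """Net optical density from film pixel values before and after exposure."""
    return np.log10(np.asarray(pv_unexposed) / np.asarray(pv_exposed))

def fit_dose_curve(net_od, dose, deg=2):
    """Least-squares polynomial calibration dose = f(netOD).

    A polynomial is an assumed stand-in for the paper's fitting algorithm;
    np.polyfit returns coefficients from highest degree to lowest.
    """
    return np.polyfit(np.asarray(net_od), np.asarray(dose), deg)
```

The alternative algorithm in the study fits doses directly against raw pixel values, skipping the log-ratio step.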
NASA Astrophysics Data System (ADS)
Khalil-Ur-Rehman; Malik, M. Y.; Bilal, S.; Bibi, M.; Ali, U.
The present analysis examines the effects of thermal and solutal stratification on magneto-hydrodynamic mixed-convection boundary-layer stagnation-point flow of a non-Newtonian fluid over an inclined stretching cylindrical surface. The flow is explored in the presence of a heat generation process. The temperature and concentration near the inclined cylindrical surface are assumed to be higher than those of the ambient fluid. A suitable similarity transformation is applied to transform the governing flow equations (mathematically modelled) into a system of coupled non-linear ordinary differential equations. These coupled equations are solved numerically by a shooting scheme combined with a fifth-order Runge-Kutta algorithm. The impact of various pertinent flow-controlling parameters on the dimensionless velocity, temperature and concentration distributions is inspected. Further, straight-line and parabolic curve fits are presented for the skin friction coefficient and the heat and mass transfer rates. This appears to be a first step in this direction and should serve as a resource for subsequent studies.
Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan
2013-01-01
Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automatize a number of tasks including contour recognition, optimization of fitting the contour via hill-climbing, derivation of the path curves, computation of Lambda and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255
New approach in the evaluation of a fitness program at a worksite.
Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T
1999-03-01
The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented in a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
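The firm's-problem framing above amounts to maximizing profit = willingness-to-pay × H(n) − cost × n over the number of classes n, with H a cubic health production function. This is a minimal Python sketch of that optimization; the coefficients, cost, and willingness-to-pay values are hypothetical, not the study's estimates.

```python
def optimal_classes(wtp, cost_per_class, h_coeffs, n_max=60):
    """Number of classes n maximizing profit = wtp * H(n) - cost * n,
    where H(n) = a0 + a1*n + a2*n^2 + a3*n^3 (cubic health production)."""
    a0, a1, a2, a3 = h_coeffs

    def profit(n):
        health = a0 + a1 * n + a2 * n ** 2 + a3 * n ** 3
        return wtp * health - cost_per_class * n

    return max(range(n_max + 1), key=profit)
```

With an estimated production function, sweeping the willingness-to-pay argument reproduces the kind of graph the abstract describes (optimal n as a function of willingness-to-pay).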
Real time refractivity from clutter using a best fit approach improved with physical information
NASA Astrophysics Data System (ADS)
Douvenot, RéMi; Fabbro, Vincent; Gerstoft, Peter; Bourlier, Christophe; Saillard, Joseph
2010-02-01
Refractivity from clutter (RFC) retrieves the radio frequency refractive conditions along a propagation path by inverting the measured radar sea clutter return. In this paper, a real-time RFC technique is proposed called "Improved Best Fit" (IBF). It is based on finding the environment with best fit to one of many precomputed, modeled radar returns for different environments in a database. The method is improved by considering the mean slope of the propagation factor, and physical considerations are added: smooth variations of refractive conditions with azimuth and smooth variations of duct height with range. The approach is tested on data from 1998 Wallops Island, Virginia, measurement campaign with good results on most of the data, and questionable results are detected with a confidence criterion. A comparison between the refractivity structures measured during the measurement campaign and the ones retrieved by inversion shows a good match. Radar coverage simulations obtained from these inverted refractivity structures demonstrate the potential utility of IBF.
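The database-search core of the best-fit idea can be sketched as a nearest-candidate lookup: score each precomputed modeled return against the observation and keep the best environment. This is a minimal Python sketch; the misfit combining level and mean-slope terms (and its weight) is an assumption inspired by, not taken from, the IBF description.

```python
import numpy as np

def best_fit_environment(observed, database):
    """Pick the precomputed environment whose modeled clutter return best
    fits the observed return; misfit mixes level and mean-slope differences
    (the 10.0 weight is hypothetical)."""
    def misfit(modeled):
        level = np.mean((observed - modeled) ** 2)
        slope = (np.mean(np.diff(observed)) - np.mean(np.diff(modeled))) ** 2
        return level + 10.0 * slope

    return min(database, key=lambda env: misfit(database[env]))
```

The paper's physical refinements (smooth variation of refractivity with azimuth and of duct height with range) would act as constraints across successive lookups rather than inside a single one.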
Sewell, Philip; Noroozi, Siamak; Vinney, John; Amali, Ramin; Andrews, Stephen
2012-01-01
It has been recognised in a review of the developments of lower-limb prosthetic socket fitting processes that the future demands new tools to aid in socket fitting. This paper presents the results of research to design and clinically test an artificial intelligence approach, specifically inverse problem analysis, for the determination of the pressures at the limb/prosthetic socket interface during stance and ambulation. Inverse problem analysis is based on accurately calculating the external loads or boundary conditions that can generate a known amount of strain, stress or displacement at pre-determined locations on a structure. In this study a backpropagation artificial neural network (ANN) is designed and validated to predict the interfacial pressures at the residual limb/socket interface from strain data collected from the socket surface. The subject of this investigation was a 45-year-old male unilateral trans-tibial (below-knee) traumatic amputee who had been using a prosthesis for 22 years. When the ANN-predicted interfacial pressures on 16 patches within the socket were compared with the actual pressures applied to the socket, the difference was 8.7%, validating the methodology. Investigation of varying axial load through the subject's prosthesis, alignment of the subject's prosthesis, and pressure at the limb/socket interface during walking demonstrates that the validated ANN is able to give an accurate full-field study of the static and dynamic interfacial pressure distribution. To conclude, a methodology has been developed that enables a prosthetist to quantitatively analyse the distribution of pressures within the prosthetic socket in a clinical environment. This will aid in facilitating the "right first time" approach to socket fitting, which will benefit both the patient, in terms of comfort, and the prosthetist, by reducing the time and associated costs of providing a high level of socket fit. Copyright © 2011 Elsevier B.V. All rights reserved.
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-06-26
A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of a time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. ). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination (R²), while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
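The horizontal-translation idea can be sketched compactly: sort segments by their vertex value, then shift each succeeding segment in time so its vertex lands on the curve assembled so far. This is a simplified Python sketch of the concept only (the paper's implementation is an Excel VBA algorithm); the data layout and linear interpolation onto the preceding segment are assumptions.

```python
import numpy as np

def master_recession_curve(segments):
    """Build a master recession curve by horizontally translating each
    recession segment so its vertex (highest value) lands on the curve
    assembled from the preceding segments.

    segments: list of (times, values) arrays with values decreasing in time.
    """
    segments = sorted(segments, key=lambda s: s[1][0], reverse=True)
    t0, v0 = segments[0]
    master_t, master_v = list(t0), list(v0)
    for t, v in segments[1:]:
        # time at which the master curve reaches this segment's vertex value
        # (master values decrease with time, so interpolate on reversed arrays)
        t_hit = np.interp(v[0], master_v[::-1], master_t[::-1])
        shift = t_hit - t[0]
        master_t.extend(tt + shift for tt in t)
        master_v.extend(v)
    order = np.argsort(master_t)
    return np.array(master_t)[order], np.array(master_v)[order]
```

For a well-behaved recession (e.g., near-exponential), the translated segments overlap into a single monotonically decreasing curve.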
Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach
NASA Astrophysics Data System (ADS)
Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic
2015-04-01
Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are used, among other purposes, for calibrating hydrological models, managing water quality and classifying catchments. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions estimate each quantile separately from catchments' attributes (climatic or physical features). The advantage of this category of methods is that it is non-parametric and informative about the underlying processes. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions whose quantiles are not always increasing. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemes, 1986) to a set of 600 French catchments. Half of the catchments are treated as gauged and used to calibrate the regression and compute its residuals. The QS approach consists of a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters and finally exploits the spatial correlation of the residuals. The innovation is the exploitation of the parameters' continuity across quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24
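For the gauged half of the catchments, the empirical FDC is the starting point: sort the streamflow record and read off the flow exceeded with a given probability. This is a minimal Python sketch using the Weibull plotting position; the function name and plotting-position choice are assumptions.

```python
import numpy as np

def flow_duration_curve(flows, probs):
    """Empirical FDC: flow exceeded with probability p, for each p in probs,
    using the Weibull plotting position i/(n+1) on the sorted record."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]   # flows, descending
    n = q.size
    exceed = np.arange(1, n + 1) / (n + 1.0)            # exceedance probabilities
    return np.interp(probs, exceed, q)
```

An empirical FDC built this way is increasing in flow as the exceedance probability decreases by construction; the 'quantile solidarity' constraint aims to preserve that property in regression-based reconstructions for ungauged catchments.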
IEFIT - An Interactive Approach to High Temperature Fusion Plasma Magnetic Equilibrium Fitting
Peng, Q.; Schachter, J.; Schissel, D.P.; Lao, L.L.
1999-06-01
An interactive IDL-based wrapper, IEFIT, has been created for the magnetic equilibrium reconstruction code EFIT, written in FORTRAN. It allows high-temperature fusion physicists to rapidly optimize a plasma equilibrium reconstruction by eliminating the unnecessarily repeated initialization of the conventional approach and immediately displaying the fitting results of each input variation. It uses a new IDL-based graphics package, GaPlotObj, developed in cooperation with Fanning Software Consulting, that provides a unified interface with great flexibility in presenting and analyzing scientific data. The overall interactivity reduces the process from the usual hours to minutes.
Introducing a Bayesian Approach to Determining Degree of Fit With Existing Rorschach Norms.
Giromini, Luciano; Viglione, Donald J; McCullaugh, Joseph
2015-01-01
This article offers a new methodological approach to investigate the degree of fit between an independent sample and two existing sets of norms. Specifically, with a new adaptation of a Bayesian method, we developed a user-friendly procedure to compare the mean values of a given sample to those of two different sets of Rorschach norms. To illustrate our technique, we used a small U.S. community sample of 80 adults and tested whether it resembled more closely the standard Comprehensive System norms (CS 600; Exner, 2003) or a recently introduced, internationally based set of Rorschach norms (Meyer, Erdberg, & Shaffer, 2007). Strengths and limitations of this new statistical technique are discussed.
Burnham, A K
2006-05-17
Chemical kinetic modeling has been used for many years in process optimization, estimating real-time material performance, and lifetime prediction. Chemists have tended towards developing detailed mechanistic models, while engineers have tended towards global or lumped models. Many, if not most, applications use global models by necessity, since it is impractical or impossible to develop a rigorous mechanistic model. Model fitting acquired a bad name in the thermal analysis community after that community realized a decade after other disciplines that deriving kinetic parameters for an assumed model from a single heating rate produced unreliable and sometimes nonsensical results. In its place, advanced isoconversional methods (1), which have their roots in the Friedman (2) and Ozawa-Flynn-Wall (3) methods of the 1960s, have become increasingly popular. In fact, as pointed out by the ICTAC kinetics project in 2000 (4), valid kinetic parameters can be derived by both isoconversional and model fitting methods as long as a diverse set of thermal histories are used to derive the kinetic parameters. The current paper extends the understanding from that project to give a better appreciation of the strengths and weaknesses of isoconversional and model-fitting approaches. Examples are given from a variety of sources, including the former and current ICTAC round-robin exercises, data sets for materials of interest, and simulated data sets.
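The isoconversional methods discussed above share one core computation: at a fixed conversion, the activation energy follows from the slope of ln(rate) versus 1/T across several thermal histories (Friedman's differential form). This is a minimal Python sketch of that single step; the synthetic Arrhenius values in the test are illustrative.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def friedman_ea(temps_K, rates):
    """Friedman isoconversional estimate of activation energy at one fixed
    conversion: Ea = -R * slope of ln(rate) vs 1/T across thermal histories."""
    x = 1.0 / np.asarray(temps_K, dtype=float)
    y = np.log(np.asarray(rates, dtype=float))
    slope, _ = np.polyfit(x, y, 1)   # least-squares line through the points
    return -slope * R
```

Repeating this at many conversion levels gives the Ea(α) curve whose constancy (or lack of it) is one diagnostic the ICTAC exercises examined; model fitting instead assumes a reaction model and fits its parameters globally across the same data.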
The BestFIT trial: A SMART Approach to Developing Individualized Weight Loss Treatments
Sherwood, Nancy E.; Butryn, Meghan L.; Forman, Evan M.; Almirall, Daniel; Seburg, Elisabeth M.; Crain, A Lauren; Kunin-Batson, Alicia S; Hayes, Marcia G.; Levy, Rona L; Jeffery, Robert W.
2016-01-01
Behavioral weight loss programs help people achieve clinically meaningful weight losses (8–10% of starting body weight). Despite data showing that only half of participants achieve this goal, a “one size fits all” approach is normative. This weight loss intervention science gap calls for adaptive interventions that provide the “right treatment at the right time for the right person.” Sequential Multiple Assignment Randomized Trials (SMART) use experimental design principles to answer questions for building adaptive interventions, including whether, how, or when to alter treatment intensity, type, or delivery. This paper describes the rationale and design of the BestFIT study, a SMART designed to evaluate the optimal timing for intervening with sub-optimal responders to weight loss treatment and the relative efficacy of two treatments that address self-regulation challenges which impede weight loss: 1) augmenting treatment with portion-controlled meals (PCM), which decrease the need for self-regulation; and 2) switching to acceptance-based behavior treatment (ABT), which boosts capacity for self-regulation. The primary aim is to evaluate the benefit of changing treatment with PCM versus ABT. The secondary aim is to evaluate the best time to intervene with sub-optimal responders. BestFIT results will lead to the empirically supported construction of an adaptive intervention that will optimize weight loss outcomes and associated health benefits. PMID:26825020
Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use
Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil
2013-01-01
The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital-number histograms of image objects and evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California, was used to assess the utility of curve-matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
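A histogram-matching classifier of the kind evaluated here compares an object's digital-number histogram to per-class reference histograms and assigns the best-matching class. This is a minimal Python sketch using histogram intersection as the similarity; the paper's exact matching measures are not spelled out here, so this metric (and the toy data) is an assumption.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: sum of bin-wise minima
    (1.0 for identical normalized histograms, 0.0 for disjoint ones)."""
    return float(np.minimum(h1, h2).sum())

def classify(obj_hist, class_hists):
    """Assign the image object to the class whose reference histogram
    intersects the object's histogram the most."""
    return max(class_hists,
               key=lambda c: histogram_intersection(obj_hist, class_hists[c]))
```

The contrast with the nearest-neighbor-to-mean rule is that the whole histogram shape is compared, not just a single central-tendency statistic per band.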
A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2015-01-01
Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…
Adaptive BP-Dock: An Induced Fit Docking Approach for Full Receptor Flexibility.
Bolia, Ashini; Ozkan, S Banu
2016-04-25
We present an induced fit docking approach called Adaptive BP-Dock that integrates perturbation response scanning (PRS) with the flexible docking protocol of RosettaLigand in an adaptive manner. We first perturb the binding pocket residues of a receptor and obtain a new conformation based on the residue response fluctuation profile using PRS. Next, we dock a ligand to this new conformation by RosettaLigand, where we repeat these steps for several iterations. We test this approach on several protein test sets including difficult unbound docking cases such as HIV-1 reverse transcriptase and HIV-1 protease. Adaptive BP-Dock results show better correlation with experimental binding affinities compared to other docking protocols. Overall, the results imply that Adaptive BP-Dock can easily capture binding induced conformational changes by simultaneous sampling of protein and ligand conformations. This can provide faster and efficient docking of novel targets for rational drug design.
Aldridge, C.L.; Boyce, M.S.
2007-01-01
Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Secondly, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel
Aldridge, Cameron L; Boyce, Mark S
2007-03-01
Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Secondly, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel
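The two-stage logic described above (occurrence from logistic regression, risk from survival models) amounts to crossing a probability-of-use surface with a hazard surface. A minimal sketch of that final classification step, with hypothetical cell values and thresholds rather than the authors' fitted models:

```python
def classify_habitat(p_use, hazard, p_thresh=0.5, h_thresh=0.5):
    """Combine relative probability of use with relative risk of failure:
    high use + low risk  -> source habitat
    high use + high risk -> attractive sink (ecological trap)
    low use              -> non-habitat (avoided or rarely used)
    Thresholds are illustrative, not the study's cut-offs."""
    if p_use >= p_thresh and hazard < h_thresh:
        return "source"
    if p_use >= p_thresh and hazard >= h_thresh:
        return "attractive sink"
    return "non-habitat"

# Hypothetical grid cells: (relative probability of use, relative hazard)
cells = [(0.8, 0.2), (0.9, 0.7), (0.1, 0.9)]
labels = [classify_habitat(p, h) for p, h in cells]
```

In the study itself the two surfaces come from separate fitted models (logistic regression and Cox proportional hazards); the quadrant rule above is only the combination step.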
Johann, C; Garidel, P; Mennicke, L; Blume, A
1996-01-01
A simulation program using least-squares minimization was developed to calculate and fit heat capacity (cp) curves to experimental thermograms of dilute aqueous dispersions of phospholipid mixtures determined by high-sensitivity differential scanning calorimetry. We analyzed cp curves and phase diagrams of the pseudobinary aqueous lipid systems 1,2-dimyristoyl-sn-glycero-3-phosphatidylglycerol/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPG/DPPC) and 1,2-dimyristoyl-sn-glycero-3-phosphatidic acid/1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DMPA/DPPC) at pH 7. The simulation of the cp curves is based on regular solution theory using two nonideality parameters rho g and rho l for symmetric nonideal mixing in the gel and the liquid-crystalline phases. The broadening of the cp curves owing to limited cooperativity is incorporated into the simulation by convolution of the cp curves calculated for infinite cooperativity with a broadening function derived from a simple two-state transition model with the cooperative unit size n = delta HvH/delta Hcal as an adjustable parameter. The nonideality parameters and the cooperative unit size turn out to be functions of composition. In a second step, phase diagrams were calculated and fitted to the experimental data by use of regular solution theory with four different model assumptions. The best fits were obtained with a four-parameter model based on nonsymmetric, nonideal mixing in both phases. The simulations of the phase diagrams show that the absolute values of the nonideality parameters can be changed in a certain range without large effects on the shape of the phase diagram as long as the difference of the nonideality parameters rho g for the gel and rho l for the liquid-crystalline phase remains constant. The miscibility in DMPG/DPPC and DMPA/DPPC mixtures differs remarkably because, for DMPG/DPPC, delta rho = rho l - rho g is negative, whereas for DMPA/DPPC this difference is positive. For DMPA/DPPC, this
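The broadening function described above comes from a simple two-state transition model with the cooperative unit size n = delta HvH/delta Hcal as the adjustable parameter. A minimal numerical sketch of such a curve, using a hypothetical transition temperature, enthalpy, and cooperative unit size rather than the paper's fitted values, and checking that it integrates back to the calorimetric enthalpy:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def two_state_cp(T, Tm, dH_cal, n):
    """Excess heat capacity of a two-state transition with van't Hoff
    enthalpy dH_vH = n * dH_cal (cooperative unit size n)."""
    dH_vH = n * dH_cal
    K = math.exp(-(dH_vH / R) * (1.0 / T - 1.0 / Tm))
    theta = K / (1.0 + K)  # fraction converted
    # cp = dH_cal * d(theta)/dT, and d(theta)/dT = theta*(1-theta)*dH_vH/(R*T^2)
    return dH_cal * theta * (1.0 - theta) * dH_vH / (R * T * T)

# Hypothetical lipid transition: Tm = 310 K, dH_cal = 30 kJ/mol, n = 100
dT = 0.01
Ts = [300.0 + i * dT for i in range(2000)]  # scan 300-320 K
cp = [two_state_cp(T, 310.0, 30000.0, 100.0) for T in Ts]
area = sum(c * dT for c in cp)  # numerical integral, should recover ~dH_cal
```

In the paper's scheme a curve of this form plays the role of the broadening kernel that is convolved with the cp curve calculated for infinite cooperativity.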
Reconstruction of Galaxy Star Formation Histories through SED Fitting: The Dense Basis Approach
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Gawiser, Eric
2017-04-01
We introduce the dense basis method for Spectral Energy Distribution (SED) fitting. It accurately recovers traditional SED parameters, including M*, SFR, and dust attenuation, and reveals previously inaccessible information about the number and duration of star formation episodes and the timing of stellar mass assembly, as well as uncertainties in these quantities. This is done using basis star formation histories (SFHs) chosen by comparing the goodness-of-fit of mock galaxy SEDs to the goodness-of-reconstruction of their SFHs. We train and validate the method using a sample of realistic SFHs at z = 1 drawn from stochastic realizations, semi-analytic models, and a cosmological hydrodynamical galaxy formation simulation. The method is then applied to a sample of 1100 CANDELS GOODS-S galaxies at 1 < z < 1.5 to illustrate its capabilities at moderate S/N with 15 photometric bands. Of the six parametrizations of SFHs considered, we adopt linear-exponential, Bessel-exponential, log-normal, and Gaussian SFHs, and reject the traditional parametrizations of constant (Top-Hat) and exponential SFHs. We quantify the bias and scatter of each parametrization. 15% of galaxies in our CANDELS sample exhibit multiple episodes of star formation, with this fraction decreasing for M* > 10^9.5 M⊙. About 40% of the CANDELS galaxies have SFHs whose maximum occurs at or near the epoch of observation. The dense basis method is scalable and offers a general approach to a broad class of data-science problems.
Fitting additive hazards models for case-cohort studies: a multiple imputation approach.
Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook
2016-07-30
In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd.
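After each of the m imputed datasets is analyzed, multiple-imputation estimates are conventionally combined with Rubin's rules: the point estimates are averaged, and within- and between-imputation variability are pooled into a total variance. A minimal sketch of that pooling step (the estimates below are made-up numbers, not results from the paper):

```python
def pool_rubin(estimates, variances):
    """Combine m multiply-imputed analyses by Rubin's rules:
    returns the pooled point estimate and its total variance."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    ubar = sum(variances) / m                              # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = ubar + (1 + 1 / m) * b                             # total variance
    return qbar, t

# Hypothetical regression coefficient from m = 5 imputed datasets
est, var = pool_rubin([0.50, 0.55, 0.45, 0.52, 0.48], [0.01] * 5)
```

The between-imputation term inflates the variance to reflect uncertainty due to the missing exposure data, which is the point of imputing multiply rather than once.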
A Probabilistic Approach to Fitting Period–luminosity Relations and Validating Gaia Parallaxes
NASA Astrophysics Data System (ADS)
Sesar, Branimir; Fouesneau, Morgan; Price-Whelan, Adrian M.; Bailer-Jones, Coryn A. L.; Gould, Andy; Rix, Hans-Walter
2017-04-01
Pulsating stars, such as Cepheids, Miras, and RR Lyrae stars, are important distance indicators and calibrators of the “cosmic distance ladder,” and yet their period–luminosity–metallicity (PLZ) relations are still constrained using simple statistical methods that cannot take full advantage of available data. To enable optimal usage of data provided by the Gaia mission, we present a probabilistic approach that simultaneously constrains parameters of PLZ relations and uncertainties in Gaia parallax measurements. We demonstrate this approach by constraining PLZ relations of type ab RR Lyrae stars in near-infrared W1 and W2 bands, using Tycho-Gaia Astrometric Solution (TGAS) parallax measurements for a sample of ≈100 type ab RR Lyrae stars located within 2.5 kpc of the Sun. The fitted PLZ relations are consistent with previous studies, and in combination with other data, deliver distances precise to 6% (once various sources of uncertainty are taken into account). To a precision of 0.05 mas (1σ), we do not find a statistically significant offset in TGAS parallaxes for this sample of distant RR Lyrae stars (median parallax of 0.8 mas and distance of 1.4 kpc). With only minor modifications, our probabilistic approach can be used to constrain PLZ relations of other pulsating stars, and we intend to apply it to Cepheid and Mira stars in the near future.
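The core of such a probabilistic approach is a likelihood that compares model parallaxes, predicted from a period-luminosity relation, directly with the noisy observed parallaxes rather than inverting them into distances. A toy single-parameter sketch (made-up star data, metallicity term omitted, fixed slope; not the paper's model, which also fits the parallax offset and relation parameters jointly):

```python
import math

def log_likelihood(M0, stars, slope=-2.4):
    """Gaussian log-likelihood of observed parallaxes given a
    PL relation M = slope*log10(P) + M0, evaluated in parallax
    space where TGAS-like errors are close to Gaussian."""
    ll = 0.0
    for period, m_app, plx_obs, plx_err in stars:
        M = slope * math.log10(period) + M0
        d_pc = 10 ** ((m_app - M + 5) / 5)  # invert the distance modulus
        plx_model = 1000.0 / d_pc           # model parallax in mas
        ll += -0.5 * ((plx_obs - plx_model) / plx_err) ** 2
    return ll

# Hypothetical stars: (period [d], apparent mag, parallax [mas], error [mas])
stars = [(0.55, 10.2, 0.85, 0.2), (0.62, 10.8, 0.70, 0.2)]
# crude grid search over the zero-point M0
best_M0 = max((m0 / 100 for m0 in range(-200, 200)),
              key=lambda m0: log_likelihood(m0, stars))
```

Working in parallax space is what lets low-significance or even negative parallaxes contribute information instead of being discarded.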
Lifting a veil on diversity: a Bayesian approach to fitting relative-abundance models.
Golicher, Duncan J; O'Hara, Robert B; Ruíz-Montoya, Lorena; Cayuela, Luis
2006-02-01
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation.
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from an empirical study using animals as the subject of experiment or is derived from mathematical equations. However, the determination of the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios obtained at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
More basic approach to the analysis of multiple specimen R-curves for determination of J_c
Carlson, K.W.; Williams, J.A.
1980-02-01
Multiple specimen J-R curves were developed for groups of 1T compact specimens with different a/W values and depth of side grooving. The purpose of this investigation was to determine J_c (J at onset of crack extension) for each group. Judicious selection of points on the load versus load-line deflection record at which to unload and heat tint specimens permitted direct observation of approximate onset of crack extension. It was found that the present recommended procedure for determining J_c from multiple specimen R-curves, which is being considered for standardization, consistently yielded nonconservative J_c values. A more basic approach to analyzing multiple specimen R-curves is presented, applied, and discussed. This analysis determined J_c values that closely corresponded to the actual observed onset of crack extension.
Zhang, J George; Ho, Thuy; Callendrello, Alanna L; Clark, Robert J; Santone, Elizabeth A; Kinsman, Sarah; Xiao, Deqing; Fox, Lisa G; Einolf, Heidi J; Stresser, David M
2014-09-01
Cytochrome P450 (P450) induction is often considered a liability in drug development. Using calibration curve-based approaches, we assessed the induction parameters R3 (a term indicating the amount of P450 induction in the liver, expressed as a ratio between 0 and 1), relative induction score, Cmax/EC50, and area under the curve (AUC)/F2 (the concentration causing a 2-fold increase from baseline of the dose-response curve), derived from concentration-response curves of CYP3A4 mRNA and enzyme activity data in vitro, as predictors of CYP3A4 induction potential in vivo. Plated cryopreserved human hepatocytes from three donors were treated with 20 test compounds, including several clinical inducers and noninducers of CYP3A4. After the 2-day treatment, CYP3A4 mRNA levels and testosterone 6β-hydroxylase activity were determined by real-time reverse transcription polymerase chain reaction and liquid chromatography-tandem mass spectrometry analysis, respectively. Our results demonstrated a strong and predictive relationship between the extent of midazolam AUC change in humans and the various parameters calculated from both CYP3A4 mRNA and enzyme activity. The relationships exhibited with non-midazolam in vivo probes, in aggregate, were unsatisfactory. In general, the models yielded better fits when unbound rather than total plasma Cmax was used to calculate the induction parameters, as evidenced by higher R² and lower root mean square error (RMSE) and geometric mean fold error. With midazolam, the R3 cut-off value of 0.9, as suggested by US Food and Drug Administration guidance, effectively categorized strong inducers but was less effective in classifying midrange or weak inducers. This study supports the use of calibration curves generated from in vitro mRNA induction response curves to predict CYP3A4 induction potential in humans. With the caveat that most compounds evaluated here were not strong inhibitors of enzyme activity, testosterone 6β-hydroxylase activity was
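Parameters of this kind are derived from a fitted Emax/EC50 concentration-response curve together with a clinical exposure estimate. A rough sketch of two of the simpler quantities, using a commonly cited form of the relative induction score and a basic Emax fold-induction model with invented inducer values (this is an illustration of the calculation shape, not the paper's fitted curves or the regulatory definitions verbatim):

```python
def fold_induction(c, emax, ec50):
    """Basic Emax model for fold-induction over baseline at concentration c."""
    return 1.0 + emax * c / (ec50 + c)

def relative_induction_score(emax, ec50, cmax_u):
    """Relative induction score from fitted Emax/EC50 and unbound plasma
    Cmax; all concentrations must share the same units."""
    return emax * cmax_u / (ec50 + cmax_u)

# Hypothetical inducer: Emax = 10-fold, EC50 = 1 uM, unbound Cmax = 0.5 uM
ris = relative_induction_score(10.0, 1.0, 0.5)
```

In practice the RIS (or R3, Cmax/EC50, AUC/F2) for a set of calibrator compounds is regressed against observed midazolam AUC changes to build the calibration curve, which is then applied to new compounds.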
Morphing ab initio potential energy curve of beryllium monohydride
NASA Astrophysics Data System (ADS)
Špirko, Vladimír
2016-12-01
Effective (mass-dependent) potential energy curves of the ground electronic states of 9BeH, 9BeD, and 9BeT are constructed by morphing a very accurate MR-ACPF ab initio potential of Koput (2011) within the framework of the reduced potential energy curve approach of Jenč (1983). The morphing is performed by fitting the RPC parameters to available experimental ro-vibrational data. The resulting potential energy curves provide a fairly quantitative reproduction of the fitted data. This allows for a reliable prediction of the so-far unobserved molecular states in terms of only a small number of fitting parameters.
Exploring Person Fit with an Approach Based on Multilevel Logistic Regression
ERIC Educational Resources Information Center
Walker, A. Adrienne; Engelhard, George, Jr.
2015-01-01
The idea that test scores may not be valid representations of what students know, can do, and should learn next is well known. Person fit provides an important aspect of validity evidence. Person fit analyses at the individual student level are not typically conducted and person fit information is not communicated to educational stakeholders. In…
NASA Astrophysics Data System (ADS)
Báldi, András; Kisbenedek, Tibor
1999-03-01
The distribution of orthopterans was studied in 27 steppe patches in the Buda Hills, Hungary. The smallest patches were about 300 m²; the largest 'continents' were over 100,000 m². We collected 692 imagoes of 32 species and 1201 imagoes of 28 species in July 1992 and July 1993, respectively. We found that the best-fit models for the species-area curves were both the power function and exponential models. The multivariate regression model incorporated area and distance from large patches as significant factors in determining the number of species. The correlation analysis revealed that the elevation and the height of grass vegetation also influenced the distribution of species. We applied three methods for testing whether the distribution of orthopterans was random or not. First, we compared the observed species-area curves with the expected curves. Second, we compared the small-to-large and large-to-small cumulative curves. Finally, we compared the observed species-area curves with the rarefaction curves. All three methods for both years showed that the occurrence of orthopterans in the steppe patches was not random. A collection of small islands harboured more orthopteran species than one or two large patches of the same area.
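Both species-area models mentioned above are linear after a transformation: the power function S = cA^z is linear in log S versus log A, while the exponential model S = c + z log A is linear in S versus log A, so each can be fitted by ordinary least squares. A minimal sketch with hypothetical patch data (not the study's counts):

```python
import math

def ols(x, y):
    """Slope and intercept by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical patch areas (m^2) and species counts
areas = [300, 1000, 5000, 20000, 100000]
species = [5, 8, 13, 18, 26]

logA = [math.log10(a) for a in areas]
# power model S = c*A^z  ->  log10 S = log10 c + z*log10 A
z_pow, logc = ols(logA, [math.log10(s) for s in species])
# exponential model S = c + z*log10 A
z_exp, c_exp = ols(logA, [float(s) for s in species])
```

Comparing residuals (or an information criterion) of the two transformed fits is then the usual way to say which form describes the curve better.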
Variance analysis for model updating with a finite element based subspace fitting approach
NASA Astrophysics Data System (ADS)
Gautier, Guillaume; Mevel, Laurent; Mencik, Jean-Mathieu; Serra, Roger; Döhler, Michael
2017-07-01
Recently, a subspace fitting approach has been proposed for vibration-based finite element model updating. The approach makes use of subspace-based system identification, where the extended observability matrix is estimated from vibration measurements. Finite element model updating is performed by correlating the model-based observability matrix with the estimated one, by using a single set of experimental data. Hence, the updated finite element model only reflects this single test case. However, estimates from vibration measurements are inherently exposed to uncertainty due to unknown excitation, measurement noise and finite data length. In this paper, a covariance estimation procedure for the updated model parameters is proposed, which propagates the data-related covariance to the updated model parameters by considering a first-order sensitivity analysis. In particular, this propagation is performed through each iteration step of the updating minimization problem, by taking into account the covariance between the updated parameters and the data-related quantities. Simulated vibration signals are used to demonstrate the accuracy and practicability of the derived expressions. Furthermore, an application is shown on experimental data of a beam.
R-Curve Approach to Describe the Fracture Resistance of Tool Steels
NASA Astrophysics Data System (ADS)
Picas, Ingrid; Casellas, Daniel; Llanes, Luis
2016-06-01
This work addresses the events involved in the fracture of tool steels, aiming to understand the effect of primary carbides, inclusions, and the metallic matrix on their effective fracture toughness and strength. Microstructurally different steels were investigated. It is found that cracks nucleate on carbides or inclusions at stress values lower than the fracture resistance. It is experimentally evidenced that such cracks exhibit an increasing growth resistance as they progressively extend, i.e., R-curve behavior. Ingot cast steels present a rising R-curve, which implies that the effective toughness developed by small cracks is lower than that determined with long artificial cracks. On the other hand, cracks grow steadily in the powder metallurgy tool steel, yielding as a result a flat R-curve. Accordingly, effective toughness for this material is mostly independent of the crack size. Thus, differences in fracture toughness values measured using short and long cracks must be considered when assessing fracture resistance of tool steels, especially when tool performance is controlled by short cracks. Hence, material selection for tools or development of new steel grades should take into consideration R-curve concepts, in order to avoid unexpected tool failures or to optimize microstructural design of tool steels, respectively.
NASA Astrophysics Data System (ADS)
Mikhasenko, Mikhail; Jackura, Andrew; Ketzer, Bernhard; Szczepaniak, Adam
2017-03-01
We derive a unitarized model for the peripheral production of the three-pion system in the isobar approximation. The production process takes into account long-range t-channel pion exchange. The K-matrix approach is chosen for the parameterization of the scattering amplitude. Five coupled channels are used to fit the COMPASS spin-density matrices for the J^PC M^ε = 2^{-+} 0^+ sector. Preliminary results of the fit are presented.
Evan Brooks; Valerie Thomas; Wynne Randolph; John Coulston
2012-01-01
With the advent of free Landsat data stretching back decades, there has been a surge of interest in utilizing remotely sensed data in multitemporal analysis for estimation of biophysical parameters. Such analysis is confounded by cloud cover and other image-specific problems, which result in missing data at various aperiodic times of the year. While there is a wealth...
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
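A Bayesian fit of a yield curve with a Markov chain Monte Carlo sampler can be sketched compactly. The example below uses a random-walk Metropolis-Hastings chain on invented data with a simple linear-threshold stand-in for the yield formula (the survey assesses more elaborate fit formulas, and the actual work may use a different sampler and priors):

```python
import math, random

def yield_model(E, A, E_th):
    """Stand-in threshold sputter yield curve: Y = A*(E - E_th) above
    threshold, 0 below. Illustrative only."""
    return A * max(E - E_th, 0.0)

def log_post(params, data, sigma=0.05):
    """Gaussian log-likelihood with flat priors over a bounded box."""
    A, E_th = params
    if A <= 0.0 or not 0.0 < E_th < 100.0:
        return -math.inf
    return -0.5 * sum(((y - yield_model(E, A, E_th)) / sigma) ** 2
                      for E, y in data)

random.seed(0)
data = [(50, 0.1), (100, 0.35), (200, 0.85), (300, 1.35)]  # (eV, yield), made up
cur = (0.005, 30.0)
cur_lp = log_post(cur, data)
chain = []
for _ in range(5000):  # random-walk Metropolis-Hastings
    prop = (cur[0] + random.gauss(0, 5e-4), cur[1] + random.gauss(0, 2.0))
    lp = log_post(prop, data)
    if lp - cur_lp > math.log(random.random()):
        cur, cur_lp = prop, lp
    chain.append(cur)
# posterior median of the scale parameter A
A_med = sorted(a for a, _ in chain[1000:])[len(chain[1000:]) // 2]
```

The retained chain characterizes the uncertainty in the fitted parameters, which is what propagates into uncertainty bands on the yield curves.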
Brunori, Paola; Masi, Piergiorgio; Faggiani, Luigi; Villani, Luciano; Tronchin, Michele; Galli, Claudio; Laube, Clarissa; Leoni, Antonella; Demi, Maila; La Gioia, Antonio
2011-04-11
Neonatal jaundice might lead to severe clinical consequences. Measurement of bilirubin in samples is interfered with by hemolysis. Over a method-dependent cut-off value of measured hemolysis, the bilirubin value is not accepted and a new sample is required for evaluation, although this is not always possible, especially with newborns and cachectic oncological patients. When the use of different methods, less prone to interference, is not feasible, an alternative method to recover the analytical significance of rejected data might help clinicians take appropriate decisions. We studied the effects of hemolysis on total bilirubin measurement, comparing the hemolysis-interfered bilirubin measurement with the non-interfered value. Interference curves were extrapolated over a wide range of bilirubin (0-30 mg/mL) and hemolysis (H Index 0-1100). Interference "altitude" curves were calculated and plotted. A bimodal acceptance table was calculated. The non-interfered bilirubin of given samples was calculated by linear interpolation between the nearest lower and upper interference curves. Rejection of interference-sensitive data from hemolysed samples should be based, for every method, not upon the interferent concentration but upon a more complex algorithm based upon the concentration-dependent bimodal interaction between the interfered analyte and the measured interferent. The altitude-curve cartography approach to interfered assays may help laboratories build up their own method-dependent algorithm and improve the trueness of their data by choosing a cut-off value different from the one (-10% interference) proposed by manufacturers. When re-sampling or an alternative method is not available, the altitude-curve cartography approach might also represent an alternative way to recover the analytical significance of rejected data. Copyright © 2011 Elsevier B.V. All rights reserved.
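The recovery step, linear interpolation between the nearest lower and upper interference "altitude" curves at the sample's hemolysis index, can be sketched as follows. The interference model and all numbers are hypothetical stand-ins, not the paper's calibration:

```python
def correct_bilirubin(measured, h_index, curves):
    """Estimate the non-interfered bilirubin by linear interpolation
    between the two interference 'altitude' curves that bracket the
    measured value at the sample's hemolysis index.
    `curves` maps a true bilirubin level to a function of h_index that
    returns the expected measured (interfered) value."""
    pts = sorted((f(h_index), true) for true, f in curves.items())
    for (m_lo, t_lo), (m_hi, t_hi) in zip(pts, pts[1:]):
        if m_lo <= measured <= m_hi:
            frac = (measured - m_lo) / (m_hi - m_lo)
            return t_lo + frac * (t_hi - t_lo)
    return None  # outside the charted range: no recovery possible

# Hypothetical interference: measured = true * (1 - 0.0002 * H)
curves = {lvl: (lambda H, lvl=lvl: lvl * (1 - 0.0002 * H))
          for lvl in (0, 10, 20, 30)}
true_est = correct_bilirubin(9.0, 500, curves)
```

Returning None outside the charted range mirrors the paper's point that recovery is only meaningful within the region covered by the interference cartography.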
3D Modeling of Spectra and Light Curves of Hot Jupiters with PHOENIX; a First Approach
NASA Astrophysics Data System (ADS)
Jiménez-Torres, J. J.
2016-04-01
A detailed global circulation model was used to feed the PHOENIX code and calculate 3D spectra and light curves of hot Jupiters. Cloud free and dusty radiative fluxes for the planet HD179949b were modeled to show differences between them. The PHOENIX simulations can explain the broad features of the observed 8 μm light curves, including the fact that the planet-star flux ratio peaks before the secondary eclipse. The PHOENIX reflection spectrum matches the Spitzer secondary-eclipse depth at 3.6 μm and underpredicts eclipse depths at 4.5, 5.8 and 8.0 μm. These discrepancies result from the chemical composition and suggest the incorporation of different metallicities in future studies.
NASA Astrophysics Data System (ADS)
Chris, Leong; Yoshiyuki, Yokoo
2017-04-01
Islands, which are concentrated in developing countries, have poor hydrological research data, which contributes to stress on hydrological resources through unmonitored human influence and negligence. As island studies are relatively young, there is a need to understand these stresses and influences through building-block research specifically targeting islands. The flow duration curve (FDC) is a simple start-up hydrological tool that can be used in initial studies of islands. This study disaggregates the FDC into three sections (top, middle, and bottom), and in each section runoff is estimated with simple hydrological models. The study is based on the Hawaiian Islands, toward estimating runoff in ungauged island catchments in the humid tropics. Runoff in the top and middle sections is estimated using the Curve Number (CN) method and the Regime Curve (RC), respectively. The bottom section is presented as a separate study from this one. The results showed that for the majority of the catchments the RC can be used for estimations in the middle section of the FDC. They also showed that in order for the CN method to make stable estimations, it had to be calibrated. This study identifies simple methodologies that can be useful for making runoff estimations in ungauged island catchments.
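The Curve Number method referred to above converts a rainfall depth into direct runoff through a single tabulated parameter, CN. A minimal sketch of the standard SCS relation in millimetres (the CN value and rainfall depth are illustrative; the study calibrates CN rather than using tabulated values directly):

```python
def scs_runoff(P_mm, CN):
    """SCS Curve Number direct runoff (depths in mm), using the
    conventional initial abstraction Ia = 0.2*S."""
    S = 25400.0 / CN - 254.0        # potential maximum retention (mm)
    Ia = 0.2 * S                    # initial abstraction
    if P_mm <= Ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# Hypothetical storm: 100 mm of rain on a catchment with CN = 80
q = scs_runoff(100.0, 80)
```

The study's finding that CN needed calibration amounts to tuning CN (and implicitly S) per catchment so that estimates in the top section of the FDC are stable.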
Schumacher, Jonathan A; Scott Reading, N; Szankasi, Philippe; Matynia, Anna P; Kelley, Todd W
2015-08-01
Acute myeloid leukemia patients with recurrent cytogenetic abnormalities including inv(16);CBFB-MYH11 and t(15;17);PML-RARA may be assessed by monitoring the levels of the corresponding abnormal fusion transcripts by quantitative reverse transcription-PCR (qRT-PCR). Such testing is important for evaluating the response to therapy and for the detection of early relapse. Existing qRT-PCR methods are well established and in widespread use in clinical laboratories but they are laborious and require the generation of standard curves. Here, we describe a new method to quantitate fusion transcripts in acute myeloid leukemia by qRT-PCR without the need for standard curves. Our approach uses a plasmid calibrator containing both a fusion transcript sequence and a reference gene sequence, representing a perfect normalized copy number (fusion transcript copy number/reference gene transcript copy number; NCN) of 1.0. The NCN of patient specimens can be calculated relative to that of the single plasmid calibrator using experimentally derived PCR efficiency values. We compared the data obtained using the plasmid calibrator method to commercially available assays using standard curves and found that the results obtained by both methods are comparable over a broad range of values with similar sensitivities. Our method has the advantage of simplicity and is therefore lower in cost and may be less subject to errors that may be introduced during the generation of standard curves.
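With a single plasmid calibrator that contains one copy each of the fusion and reference sequences (normalized copy number, NCN, of exactly 1.0), a sample's NCN can be computed from efficiency-corrected Ct differences against the calibrator, in the spirit of the Pfaffl ratio method. A sketch with hypothetical Ct values and idealized efficiencies (the paper derives its efficiencies experimentally):

```python
def normalized_copy_number(ct_fus, ct_ref, cal_ct_fus, cal_ct_ref,
                           eff_fus=2.0, eff_ref=2.0):
    """Fusion-transcript NCN relative to a plasmid calibrator whose
    NCN is 1.0 by construction. Efficiency 2.0 means perfect doubling
    per PCR cycle; real assays use experimentally derived values."""
    fus = eff_fus ** (cal_ct_fus - ct_fus)  # fold difference vs calibrator
    ref = eff_ref ** (cal_ct_ref - ct_ref)
    return fus / ref                        # calibrator ratio is 1.0

# Hypothetical sample: fusion amplifies 3 cycles later than the
# calibrator; reference gene matches the calibrator exactly.
ncn = normalized_copy_number(ct_fus=28.0, ct_ref=20.0,
                             cal_ct_fus=25.0, cal_ct_ref=20.0)
```

Because the calibrator fixes the ratio scale, no dilution series (standard curve) is needed on each run, which is the labor saving the abstract describes.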
Arai, Takahide; Lefèvre, Thierry; Hovasse, Thomas; Hayashida, Kentaro; Watanabe, Yusuke; O'Connor, Stephen A; Benamer, Hakim; Garot, Philippe; Cormier, Bertrand; Bouvier, Erik; Morice, Marie-Claude; Chevalier, Bernard
2016-01-15
The aim of this study was to evaluate the learning curve in performing transfemoral TAVI (TF-TAVI). Between October 2006 and October 2013, 312 consecutive TF-TAVI cases using the Edwards Sapien valve and 104 using the CoreValve, performed by 6 interventional cardiologists, were included in the present analysis. Cumulative sum (CUSUM) failure analysis of the combined 30-day safety endpoint was used to evaluate learning curves. The CUSUM analysis revealed a learning curve regarding the occurrence of 30-day adverse events, with an improvement after the initial 86 cases using the Edwards valve and 40 cases using the CoreValve. We divided the Edwards valve cases into two groups (early experience: Cases 1 to 86; late experience: Cases 87 to 312). The rate of 30-day mortality and 1-year mortality significantly decreased in the late experience group (17% to 7%, p=0.019; 34% to 21%, p=0.035, respectively). We divided the CoreValve cases into two groups (early experience: Cases 1 to 40; late experience: Cases 41 to 104). The rate of 30-day mortality and 1-year mortality significantly decreased in the late experience group (20% to 6%, p=0.033; 38% to 15%, p=0.040, respectively). The groups including both valves were also analyzed after propensity matching (early [n=52] vs late [n=52]). This model also showed that 30-day and 1-year mortality rates were significantly lower in the late experience group (13% to 1%, p=0.028; 34% to 20%, p=0.042, respectively). An appropriate level of experience is needed to reduce the complication rate and mortality in TF-TAVI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
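A basic CUSUM failure chart of the kind used here accumulates a penalty for each failure and a credit for each success against an acceptable failure rate, so the curve rises while the operator underperforms and turns downward once performance beats the benchmark. A sketch with an invented case series and an arbitrary benchmark rate (the study's endpoint definition and boundary lines are not reproduced):

```python
def cusum_failures(outcomes, p0=0.15):
    """Running CUSUM of failures against an acceptable failure rate p0:
    each failure adds (1 - p0), each success subtracts p0, so the path
    trends downward once the observed failure rate drops below p0."""
    path, s = [], 0.0
    for failed in outcomes:
        s += (1.0 - p0) if failed else -p0
        path.append(s)
    return path

# Hypothetical case series: failures clustered early, then mostly successes
series = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
curve = cusum_failures(series)
```

The inflection point of the path (its maximum, after which it declines) is what marks the end of the learning curve, analogous to the 86-case and 40-case break points reported above.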
NASA Astrophysics Data System (ADS)
Chavanis, Pierre-Henri; Matos, Tonatiuh
2017-01-01
We develop a hydrodynamic representation of the Klein-Gordon-Maxwell-Einstein equations. These equations combine quantum mechanics, electromagnetism, and general relativity. We consider the case of an arbitrary curved spacetime, the case of weak gravitational fields in a static or expanding background, and the nonrelativistic (Newtonian) limit. The Klein-Gordon-Maxwell-Einstein equations govern the evolution of a complex scalar field, possibly describing self-gravitating Bose-Einstein condensates, coupled to an electromagnetic field. They may find applications in the context of dark matter, boson stars, and neutron stars with a superfluid core.
Kim, Wan Wook; Jung, Jin Hyang; Park, Ho Yong
2015-10-01
The purpose of this study was to examine the learning curve for robotic thyroidectomy using a bilateral axillo-breast approach. We examined the first 100 robotic thyroidectomies with central lymph node dissection due to papillary thyroid cancer between April 2010 and August 2011. We evaluated the clinical characteristics, operative time, pathologic data, and complications. Operative time was reduced significantly after 40 cases; therefore, the patients were divided into 2 groups: group A (1 to 40 cases) and group B (41 to 100 cases). The mean operative time in group A (232.6±10.0 min) was longer than that in group B (188.9±6.0 min) with statistical significance (P=0.001). Other data, including characteristics, drainage amount, hospital stay, retrieved lymph nodes, thyroglobulin, and complications, were not different between the 2 groups. The learning curves with lobectomy and total thyroidectomy were reached at the same time. The learning curve for robotic thyroidectomy with central lymph node dissection using bilateral axillo-breast approach was 40 cases for beginner surgeons. Robotic total thyroidectomy was performed effectively and safely after experience with 40 cases, as with lobectomy.
NASA Astrophysics Data System (ADS)
Menafoglio, A.; Guadagnini, A.; Secchi, P.
2016-08-01
We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curves and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) effectively deal with the data dimensionality and constraints and (b) to develop a simulation method for PSCs based upon a suitable and well defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.
Approach-Avoidance Motivational Profiles in Early Adolescents to the PACER Fitness Test
ERIC Educational Resources Information Center
Garn, Alex; Sun, Haichun
2009-01-01
The use of fitness testing is a practical means for measuring components of health-related fitness, but there is currently substantial debate over the motivating effects of these tests. Therefore, the purpose of this study was to examine the cross-fertilization of achievement and friendship goal profiles for early adolescents involved in the…
2013-01-01
Objective: This article provides a detailed description and evaluation of the next Nucleus® cochlear implant fitting suite. A new fitting methodology is presented that, at its simplest level, requires a single volume adjustment, and at its advanced level, provides access to 22-channel fitting. It is implemented on multiple platforms, including a mobile platform (Remote Assistant Fitting) and an accessible PC application (Nucleus Fitting Software). Additional tools for home care and surgical care are also described. Design: Two trials were conducted, comparing the fitting methodology with the existing Custom Sound™ methodology, as fitted by the recipient and by an experienced cochlear implant audiologist. Study sample: Thirty-seven subjects participated in the trials. Results: No statistically significant differences were observed between the group mean scores, whether fitted by the recipient or by an experienced audiologist. The lower bounds of the 95% confidence intervals of the differences represented clinically insignificant differences. No statistically significant differences were found in the subjective program preferences of the subjects. Conclusions: Equivalent speech perception outcomes were demonstrated when compared to current best practice. As such, the new technology has the potential to expand the capacity of audiological care without compromising efficacy. PMID:23617610
A Model-Based Approach to Goodness-of-Fit Evaluation in Item Response Theory
ERIC Educational Resources Information Center
Oberski, Daniel L.; Vermunt, Jeroen K.
2013-01-01
These authors congratulate Albert Maydeu-Olivares on his lucid and timely overview of goodness-of-fit assessment in IRT models, a field to which he himself has contributed considerably in the form of limited information statistics. In this commentary, Oberski and Vermunt focus on two aspects of model fit: (1) what causes there may be of misfit;…
ERIC Educational Resources Information Center
Pargament, Kenneth I.; Sweeney, Patrick J.
2011-01-01
This article describes the development of the spiritual fitness component of the Army's Comprehensive Soldier Fitness (CSF) program. Spirituality is defined in the human sense as the journey people take to discover and realize their essential selves and higher order aspirations. Several theoretically and empirically based reasons are articulated…
Meites, L; Barry, D M
1973-11-01
A new technique for distinguishing diacidic from monoacidic weak bases (or dibasic from monobasic weak acids) is based on fitting the data obtained in a potentiometric acid-base titration to theoretical equations for the titration of a monoacidic base (or monobasic acid). If the substance titrated is not monofunctional the best fit to these equations will involve systematic deviations that, when plotted against the volume of reagent, yield a "deviation pattern" with a shape characteristic of polyfunctional behaviour. Ancillary criteria based on the values of the parameters obtained from the fit are also described. There is a range of uncertainty associated with each of these criteria in which the ratios of successive dissociation constants are so close to the statistical values that it is impossible in the face of the errors of measurement to decide whether the substance is monofunctional or polyfunctional. If the data from one titration prove to lie within that range, the decision may be based on the results of a second titration performed at a different ionic strength. Further fitting to the equations describing more complex behaviour provides a basis for distinguishing non-statistical difunctional substances from trifunctional ones, trifunctional ones from tetrafunctional ones, and so on.
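The deviation-pattern idea above can be sketched numerically: fit a monofunctional model to titration-like data and inspect the residuals for systematic structure. The snippet below is a schematic illustration only, using a generic sigmoid in place of the actual acid-base titration equations; every function name, constant, and noise level is invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(v, v_eq, k, e0, de):
    # single-sigmoid stand-in for a monofunctional titration curve
    return e0 + de / (1.0 + np.exp(-k * (v - v_eq)))

rng = np.random.default_rng(0)
v = np.linspace(0, 10, 200)

# "difunctional" data: two merged sigmoidal steps plus measurement noise
truth = mono(v, 3.0, 2.5, 0.0, 1.0) + mono(v, 7.0, 2.5, 0.0, 1.0)
data = truth + rng.normal(0, 0.005, v.size)

popt, _ = curve_fit(mono, v, data, p0=[5.0, 1.0, 0.0, 2.0])
resid = data - mono(v, *popt)

# a monofunctional sample would leave only noise; systematic structure
# in the residuals (the "deviation pattern") flags polyfunctional behaviour
n_sign_changes = np.count_nonzero(np.diff(np.sign(resid)))
print(n_sign_changes, resid.std())
```

A genuinely monofunctional sample would leave residuals at the noise level with many random sign changes; the merged two-step data instead leaves a smooth deviation pattern with large residuals and few sign changes.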
Rather, Manzoor A; Bhat, Bilal A; Qurishi, Mushtaq A
2013-12-15
Natural product based drugs constitute a substantial proportion of the pharmaceutical market, particularly in the therapeutic areas of infectious diseases and oncology. The primary focus of any drug development program so far has been to design selective ligands (drugs) that act on single selective disease targets to obtain highly efficacious and safe drugs with minimal side effects. Although this approach has been successful for many diseases, there is a significant decline in the number of new drug candidates being introduced into clinical practice over the past few decades. This serious innovation deficit that the pharmaceutical industries are facing is due primarily to the post-marketing failures of blockbuster drugs. Many analysts believe that the current capital-intensive "one drug to fit all" model will be unsustainable in the future and that a new "less investment, more drugs" model is necessary for further scientific growth. It is now well established that many diseases are multi-factorial in nature and that cellular pathways operate more like webs than highways. There are often multiple ways or alternate routes that may be switched on in response to the inhibition of a specific target. This gives rise to resistant cells or resistant organisms under the selective pressure of a targeted agent, resulting in drug resistance and clinical failure of the drug. Drugs designed to act against individual molecular targets cannot usually combat multifactorial diseases like cancer, or diseases that affect multiple tissues or cell types such as diabetes and immunoinflammatory diseases. Combination drugs that affect multiple targets simultaneously are better at controlling complex disease systems and are less prone to drug resistance. This multicomponent therapy forms the basis of phytotherapy or phytomedicine, where the holistic therapeutic effect arises as a result of complex positive (synergistic) or negative (antagonistic) interactions between
Pandurangan, Arun Prasad; Shakeel, Shabih; Butcher, Sarah Jane; Topf, Maya
2014-01-01
Fitting of atomic components into electron cryo-microscopy (cryoEM) density maps is routinely used to understand the structure and function of macromolecular machines. Many fitting methods have been developed, but a standard protocol for successful fitting and assessment of fitted models has yet to be agreed upon among the experts in the field. Here, we created and tested a protocol that highlights important issues related to homology modelling, density map segmentation, rigid and flexible fitting, as well as the assessment of fits. As part of it, we use two different flexible fitting methods (Flex-EM and iMODfit) and demonstrate how combining the analysis of multiple fits and model assessment could result in an improved model. The protocol is applied to the case of the mature and empty capsids of Coxsackievirus A7 (CAV7) by flexibly fitting homology models into the corresponding cryoEM density maps at 8.2 and 6.1 Å resolution. As a result, and due to the improved homology models (derived from recently solved crystal structures of a close homolog – EV71 capsid – in mature and empty forms), the final models present an improvement over previously published models. In close agreement with the capsid expansion observed in the EV71 structures, the new CAV7 models reveal that the expansion is accompanied by ∼5° counterclockwise rotation of the asymmetric unit, predominantly contributed by the capsid protein VP1. The protocol could be applied not only to viral capsids but also to many other complexes characterised by a combination of atomic structure modelling and cryoEM density fitting. PMID:24333899
Yap, John Stephen; Wang, Chenguang; Wu, Rongling
2007-06-20
Whether and how thermal reaction norm is under genetic control is fundamental to understanding the mechanistic basis of adaptation to novel thermal environments. However, the genetic study of thermal reaction norm is difficult because it is often expressed as a continuous function or curve. Here we derive a statistical model for dissecting thermal performance curves into individual quantitative trait loci (QTL) with the aid of a genetic linkage map. The model is constructed within the maximum likelihood context and implemented with the EM algorithm. It integrates the biological principle of responses to temperature into a framework for genetic mapping through rigorous mathematical functions established to describe the pattern and shape of thermal reaction norms. The biological advantages of the model lie in the decomposition of the genetic causes for thermal reaction norm into its biologically interpretable modes, such as hotter-colder, faster-slower and generalist-specialist, as well as the formulation of a series of hypotheses at the interface between genetic actions/interactions and temperature-dependent sensitivity. The model is also meritorious in statistics because the precision of parameter estimation and power of QTL detection can be increased by modeling the mean-covariance structure with a small set of parameters. The results from simulation studies suggest that the model displays favorable statistical properties and can be robust in practical genetic applications. The model provides a conceptual platform for testing many ecologically relevant hypotheses regarding organismic adaptation within the Eco-Devo paradigm.
ROC-Curve Approach for Determining the Detection Limit of a Field Chemical Sensor
Fraga, Carlos G.; Melville, Angela M.; Wright, Bob W.
2007-01-31
The detection limit of a field chemical sensor under realistic operating conditions is determined by receiver operator characteristic (ROC) curves. The chemical sensor is an ion mobility spectrometry (IMS) device used to detect a chemical marker in diesel fuel. The detection limit is the lowest concentration of the marker in diesel fuel that obtains the desired true-positive probability (TPP) and false-positive probability (FPP). A TPP of 0.90 and a FPP of 0.10 were selected as acceptable levels for the field sensor in this study. The detection limit under realistic operating conditions is found to be between 2 and 4 ppm (w/w), the upper value being the detection limit under challenging conditions. The ROC-based detection limit is very reliable because it is determined from multiple and repetitive sensor analyses under realistic circumstances. ROC curves also clearly illustrate and gauge the effects that data preprocessing and sampling environments have on the sensor’s detection limit.
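The ROC-based procedure described here can be sketched with simulated data: estimate a decision threshold from blank runs at the target FPP, then find the lowest concentration whose TPP clears the target. Everything below (the response model, noise level, and candidate concentrations) is hypothetical and is not the IMS sensor's actual behaviour.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # repeated analyses per condition

# hypothetical sensor: mean signal grows linearly with marker
# concentration (ppm); blanks (0 ppm) carry the same noise
def responses(conc_ppm):
    return conc_ppm * 1.0 + rng.normal(0.0, 1.5, n)

blanks = responses(0.0)
# threshold chosen so the false-positive probability (FPP) is 0.10
threshold = np.quantile(blanks, 0.90)

detection_limit = None
for conc in [1, 2, 3, 4, 5, 6]:
    tpp = np.mean(responses(conc) > threshold)  # true-positive probability
    if tpp >= 0.90 and detection_limit is None:
        detection_limit = conc
print(threshold, detection_limit)
```

The detection limit is then the lowest tested concentration at which TPP ≥ 0.90 while FPP is held at 0.10 by construction of the threshold.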
Srivastava, Aneesh; Sureka, Sanjoy Kumar; Vashishtha, Saurabh; Agarwal, Shikhar; Ansari, Md Saleh; Kumar, Manoj
2016-01-01
CONTEXT: The retroperitoneoscopic or retroperitoneal (RP) surgical approach has not become as popular as the transperitoneal (TP) one due to the steeper learning curve. AIMS: Our single-institution experience focuses on the feasibility, advantages and complications of retroperitoneoscopic surgeries (RS) performed over the past 10 years. Tips and tricks have been discussed to overcome the steep learning curve and these are emphasised. SETTINGS AND DESIGN: This study made a retrospective analysis of computerised hospital data of patients who underwent RP urological procedures from 2003 to 2013 at a tertiary care centre. PATIENTS AND METHODS: Between 2003 and 2013, 314 cases of RS were performed for various urological procedures. We analysed the operative time, peri-operative complications, time to return of bowel sound, length of hospital stay, and advantages and difficulties involved. Post-operative complications were stratified into five grades using modified Clavien classification (MCC). RESULTS: RS were successfully completed in 95.5% of patients, with 4% of the procedures electively performed by the combined approach (both RP and TP); 3.2% required open conversion and 1.3% were converted to the TP approach. The most common cause for conversion was bleeding. Mean hospital stay was 3.2 ± 1.2 days and the mean time for returning of bowel sounds was 16.5 ± 5.4 h. Of the patients, 1.4% required peri-operative blood transfusion. A total of 16 patients (5%) had post-operative complications and the majority were grades I and II as per MCC. The rates of intra-operative and post-operative complications depended on the difficulty of the procedure, but the complications diminished over the years with the increasing experience of surgeons. CONCLUSION: Retroperitoneoscopy has proven an excellent approach, with certain advantages. The tips and tricks that have been provided and emphasised should definitely help to minimise the steep learning curve. PMID:27073300
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.
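The modeling principle contrasted here, asking data to fit a preconceived model versus letting a sufficiently flexible model fit the observed data, can be illustrated with a toy example. The dilation course, noise level, and model choices below are invented for illustration and are not the authors' dataset or their actual labor-curve model.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical dilation-vs-time data: slow latent phase, faster active
# phase, no deceleration near full dilation (shape chosen for illustration)
t = np.linspace(0, 12, 60)                     # hours in labor
dilation = 3 + 7 / (1 + np.exp(-(t - 7)))      # cm, smooth logistic course
obs = dilation + rng.normal(0, 0.3, t.size)

# "preconceived" model: a straight line forced onto the data
line = np.polyval(np.polyfit(t, obs, 1), t)
# "flexible" model: a higher-order fit allowed to follow the observed shape
flex = np.polyval(np.polyfit(t, obs, 5), t)

rmse = lambda f: float(np.sqrt(np.mean((f - dilation) ** 2)))
print(rmse(line), rmse(flex))
```

The rigid model leaves large systematic error against the true course, while the flexible model tracks it closely; this is the statistical point at issue between a preconceived curve shape and a data-driven one.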
Iannella, Giannicola; Magliulo, Giuseppe
2016-10-01
Analyze the surgical outcomes of endoscopic stapes surgery, comparing the results with conventional stapes surgery performed under a microscopic approach. Estimate the operating time of each surgical approach and show a learning curve of endoscopic stapes surgery. Retrospective study. Tertiary referral center. Twenty patients who underwent endoscopic stapedotomy for otosclerosis and 20 patients who underwent microscopic stapedotomy for otosclerosis. Endoscopic and microscopic stapes surgery. Operating time, preoperative and postoperative hearing, intraoperative findings, postoperative complications, and postoperative pain. The group of patients who underwent endoscopic stapes surgery showed a mean operative time of 45.0 min. The group of patients treated by the microscopic approach had an estimated mean value of 36.5 min. A statistical difference was evident (p value = 0.01). The average duration of endoscopic surgery varied as the surgeon gained experience. There were no statistical differences between the average surgical times for the endoscopic and microscopic approaches (p >0.05) in the last 4-month period of surgery. Through the endoscopic approach the percentage of ears with a postoperative air-bone gap ≤20 dB was 95%, no different from the percentage in the microscopic group (90%) (p >0.05). No difference regarding the incidence of intraoperative findings and postoperative complications between the endoscopic and microscopic approaches was found. Audiological outcomes achieved by endoscopic surgery are similar to the results obtained through a microscopic approach. Longer initial operative times and a learning curve are the principal grounds that might discourage most ear-surgeons from commencing endoscopic stapes surgery.
BayeSED: A GENERAL APPROACH TO FITTING THE SPECTRAL ENERGY DISTRIBUTION OF GALAXIES
Han, Yunkun; Han, Zhanwen
2014-11-01
We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual and Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
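Bayesian model comparison of the kind performed with BayeSED rests on the marginal likelihood (evidence) of each candidate model. The toy below computes evidences for two one-parameter models by brute-force integration over a uniform prior; the data, the two models, and the noise level are invented for illustration and bear no relation to the actual stellar population synthesis models.

```python
import numpy as np

rng = np.random.default_rng(4)
# mock "photometry": data actually generated by the linear model
x = np.linspace(0, 1, 30)
y = 2.0 * x + rng.normal(0, 0.1, 30)

def log_evidence(model, grid):
    # brute-force marginal likelihood over a uniform parameter prior,
    # with a log-sum-exp trick for numerical stability
    logls = np.array([-0.5 * np.sum((y - model(x, p)) ** 2) / 0.1**2
                      for p in grid])
    m = logls.max()
    return m + np.log(np.mean(np.exp(logls - m)))

grid = np.linspace(0, 4, 400)
zA = log_evidence(lambda x, p: p * x, grid)      # linear model
zB = log_evidence(lambda x, p: p * x**2, grid)   # quadratic model
print(zA - zB)  # log Bayes factor; positive favors the linear model
```

The sign and magnitude of the log Bayes factor zA - zB is what "greater Bayesian evidence" refers to; the model that generated the data wins decisively here.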
Ferreiro-González, Marta; Barbero, Gerardo F; Álvarez, José A; Ruiz, Antonio; Palma, Miguel; Ayuso, Jesús
2017-04-01
Adulteration of olive oil is not only a major economic fraud but can also have major health implications for consumers. In this study, a combination of visible spectroscopy with a novel multivariate curve resolution method (CR), principal component analysis (PCA) and linear discriminant analysis (LDA) is proposed for the authentication of virgin olive oil (VOO) samples. VOOs are well-known products with the typical properties of a two-component system due to the two main groups of compounds that contribute to the visible spectra (chlorophylls and carotenoids). Application of the proposed CR method to VOO samples provided the two pure-component spectra for the aforementioned families of compounds. A correlation study of the real spectra and the resolved component spectra was carried out for different types of oil samples (n=118). LDA using the correlation coefficients as variables to discriminate samples allowed the authentication of 95% of virgin olive oil samples.
O(N)-Scalar Model in Curved Spacetime with Boundaries: A Renormalization Group Approach
NASA Astrophysics Data System (ADS)
Brevik, I.; Odintsov, S. D.
1996-04-01
We discuss the volume and surface running couplings for O(N) scalar theory in curved spacetime with boundaries. The IR limit of the theory, in which it becomes asymptotically conformally invariant, is studied, and the existence of IR fixed points for all couplings (also in D = 4 - ɛ dimensions) is shown. At N = 4 the behaviour of some gravitational couplings in the IR limit changes qualitatively, from growth for N <= 4 to decrease for N > 4. The non-local renormalization group (RG) improved effective action, account being taken of the boundary terms, is found. For O(N) scalar theory and for scalar electrodynamics, the RG improved effective action in the spherical cap is constructed. The relevance of surface effects for the effective equations of motion in the spherical cap is considered, which may be important in quantum cosmology. Some preliminary remarks on the connection with Casimir theory are also given.
Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe
2017-06-26
In recent years significant progress has been achieved in the field of design and fabrication of optical systems based on freeform optical surfaces. They provide a possibility to build fast, wide-angle and high-resolution systems which are very compact and free of obscuration. However, the field of freeform surface design techniques still remains underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, which was developed earlier for the tasks of wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
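A square-aperture orthogonal-polynomial description of a surface, of the Legendre type considered here, can be sketched with NumPy's Legendre tools: build a separable 2D basis and project a sag function onto it by least squares. The sag function and basis order below are arbitrary illustrations, not the telescope design in the paper.

```python
import numpy as np
from numpy.polynomial import legendre

# sample grid on the normalized square aperture [-1, 1] x [-1, 1]
u = np.linspace(-1, 1, 41)
U, V = np.meshgrid(u, u)

def leg2d(i, j, U, V):
    # separable 2D Legendre term P_i(u) * P_j(v)
    ci = np.zeros(i + 1); ci[i] = 1.0
    cj = np.zeros(j + 1); cj[j] = 1.0
    return legendre.legval(U, ci) * legendre.legval(V, cj)

# hypothetical freeform sag: mild astigmatism plus a coma-like term
sag = 0.5 * (U**2 - V**2) + 0.1 * U**3

# least-squares projection onto the low-order 2D Legendre basis
basis = [leg2d(i, j, U, V).ravel() for i in range(4) for j in range(4)]
A = np.stack(basis, axis=1)
coef, *_ = np.linalg.lstsq(A, sag.ravel(), rcond=None)
resid = sag.ravel() - A @ coef
print(float(np.abs(resid).max()))
```

Because the example sag is itself a low-order polynomial, the basis reproduces it to machine precision; a real freeform surface would instead be truncated at some residual level chosen by the designer.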
Searching events in AFM force-extension curves: A wavelet approach.
Benítez, R; Bolós, V J
2017-01-01
An algorithm, based on the wavelet scalogram energy, for automatically detecting events in force-extension AFM force spectroscopy experiments is introduced. The events to be detected are characterized by a discontinuity in the signal. It is shown how the wavelet scalogram energy has different decay rates at different points depending on the degree of regularity of the signal, showing faster decay rates at regular points and slower rates at singular points (jumps). It is shown that these differences produce peaks in the scalogram energy plot at the event points. Finally, the algorithm is illustrated in a tether analysis experiment by using it for the detection of events in the AFM force-extension curves susceptible to being considered tethers. Microsc. Res. Tech. 80:153-159, 2017. © 2016 Wiley Periodicals, Inc.
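The scalogram-energy idea, fine-scale wavelet energy persisting at jump discontinuities while decaying at regular points, can be sketched with a Haar wavelet implemented directly in NumPy. The paper's algorithm is more elaborate; the signal shape, scales, and noise here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# force-extension-like signal with a jump (the "event") at index 300
n = 600
x = 0.002 * np.arange(n) + rng.normal(0, 0.02, n)
x[300:] -= 0.8                                  # abrupt force drop

def haar_cwt(sig, scale):
    # Haar wavelet coefficients at one scale via convolution
    w = np.concatenate([np.ones(scale), -np.ones(scale)]) / np.sqrt(2 * scale)
    return np.convolve(sig, w, mode="same")

# scalogram energy across fine scales: singular points (jumps) retain
# energy there, regular points do not
energy = sum(haar_cwt(x, s) ** 2 for s in (2, 4, 8, 16))
event = int(np.argmax(energy))
print(event)
```

The energy peak localizes the discontinuity to within roughly the largest scale used, which is the property the detection algorithm exploits.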
NASA Astrophysics Data System (ADS)
Guo, Feng; Zhang, Hong; Hu, Hai-Quan; Cheng, Xin-Lu; Zhang, Li-Yan
2015-11-01
We investigate the Hugoniot curve, shock-particle velocity relations, and Chapman-Jouguet conditions of the hot dense system through molecular dynamics (MD) simulations. The detailed pathways from crystal nitromethane to reacted state by shock compression are simulated. The phase transition of N2 and CO mixture is found at about 10 GPa, and the main reason is that the dissociation of the C-O bond and the formation of C-C bond start at 10.0-11.0 GPa. The unreacted state simulations of nitromethane are consistent with shock Hugoniot data. The complete pathway from unreacted to reacted state is discussed. Through chemical species analysis, we find that the C-N bond breaking is the main event of the shock-induced nitromethane decomposition. Project supported by the National Natural Science Foundation of China (Grant No. 11374217) and the Shandong Provincial Natural Science Foundation, China (Grant No. ZR2014BQ008).
Factors influencing community health centers' efficiency: a latent growth curve modeling approach.
Marathe, Shriram; Wan, Thomas T H; Zhang, Jackie; Sherin, Kevin
2007-10-01
The objective of this study is to examine factors affecting the variation in technical and cost efficiency of community health centers (CHCs). A panel study design was formulated to examine the relationships among the contextual, organizational structural, and performance variables. Data Envelopment Analysis (DEA) of technical efficiency and latent growth curve modeling of multi-wave technical and cost efficiency were performed. Regardless of the efficiency measures, CHC efficiency was influenced more by contextual factors than organizational structural factors. The study confirms the independent and additive influences of contextual and organizational predictors on efficiency. The change in CHC technical efficiency positively affects the change in CHC cost efficiency. The practical implication of this finding is that healthcare managers can simultaneously optimize both technical and cost efficiency through appropriate use of inputs to generate optimal outputs. An innovative solution is to employ decision support software to prepare an expert system to assist poorly performing CHCs to achieve better cost efficiency through optimizing technical efficiency.
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Lee, M. G.
1985-01-01
The navigation and flight director guidance systems implemented in the NASA/FAA helicopter microwave landing system (MLS) curved-approach flight test program are described. Flight tests were conducted at the U.S. Navy's Crows Landing facility, using the NASA Ames UH-1H helicopter equipped with the V/STOLAND avionics system. The purpose of these tests was to investigate the feasibility of flying complex, curved, and descending approaches to a landing using MLS flight director guidance. A description of the navigation aids, avionics system, cockpit instrumentation, and on-board navigation equipment used for the flight tests is provided. Three generic reference flight paths were developed and flown during the tests: U-turn, S-turn, and straight-in profiles. These profiles and their geometries are described in detail. A 3-cue flight director was implemented on the helicopter, and the formulation and implementation of the flight director laws are also presented. Performance data and analysis are presented for one pilot conducting the flight director approaches.
Effect of motion cues during complex curved approach and landing tasks: A piloted simulation study
NASA Technical Reports Server (NTRS)
Scanlon, Charles H.
1987-01-01
A piloted simulation study was conducted to examine the effect of motion cues using a high fidelity simulation of commercial aircraft during the performance of complex approach and landing tasks in the Microwave Landing System (MLS) signal environment. The data from these tests indicate that in a high complexity MLS approach task with moderate turbulence and wind, the pilot uses motion cues to improve path tracking performance. No significant differences in tracking accuracy were noted for the low and medium complexity tasks, regardless of the presence of motion cues. Higher control input rates were measured for all tasks when motion was used. Pilot eye scan, as measured by instrument dwell time, was faster when motion cues were used regardless of the complexity of the approach tasks. Pilot comments indicated a preference for motion. With motion cues, pilots appeared to work harder in all levels of task complexity and to improve tracking performance in the most complex approach task.
Curved descending landing approach guidance and control. M.S. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Crawford, D. J.
1974-01-01
Linear optimal regulator theory is applied to a nonlinear simulation of a transport aircraft performing a helical landing approach. A closed form expression for the quasi-steady nominal flight path is presented along with the method for determining the corresponding constant nominal control inputs. The Jacobian matrices and the weighting matrices in the cost functional are time varying. A method of solving for the optimal feedback gains is reviewed. The control system is tested on several alternative landing approaches using both three and six degree flight path angles. On each landing approach, the aircraft was subjected to large random initial state errors and to randomly directed crosswinds. The system was also tested for sensitivity to changes in the parameters of the aircraft and of the atmosphere. Performance of the optimal controller on all the three degree approaches was very good, and the control system proved to be reasonably insensitive to parametric uncertainties.
Keilmann, Annerose M; Bohnert, Andrea M; Gosepath, Jan; Mann, Wolf J
2009-12-01
More and more patients with residual hearing on the contralateral side are becoming candidates for cochlear implant (CI) surgery as CI indications broaden. The major benefits of regular binaural hearing are spatial hearing, localization, and signal source discrimination in both quiet and noisy surroundings. In most reports, hearing aid fitting was carried out without balancing the two devices. Twelve children and eight adults with residual hearing on the non-operated side were binaurally fitted. Our fitting procedure for the hearing aid was based on the desired sensation level [i/o] method. A loudness scaling was used to adjust the loudness perception monaurally and to balance the volume of both devices. Speech audiometry in quiet and noisy surroundings was conducted both monaurally and in the bimodal mode. The fitting was modified according to the respective test results. In all children and six adults, a measurable gain and/or a subjective improvement of speech perception was achieved. Two adult patients did not accept the new fitting. In seven younger children, loudness scaling could not be performed because of their age; the same was true of speech audiometry for two children. A structured bimodal fitting using loudness scaling for both the cochlear implant and the hearing aid results in a subjective and objective amelioration of the patient's hearing and speech perception.
NASA Astrophysics Data System (ADS)
Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.; Young, L. A.; Stern, S. A.
2017-05-01
Light curves produced from color observations taken during New Horizons' approach to the Pluto system by its Multispectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty-seven observations, obtained between 9 April and 3 July 2015 at a phase angle of 14.5° to 15.1°, a sub-observer latitude of 51.2°N to 51.5°N, and a sub-solar latitude of 41.2°N, were analyzed. MVIC has four color channels; all are discussed for completeness, but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy, respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which lie at shorter wavelengths than the MVIC Blue and Red channels, respectively.
2010-04-21
…of RP-1 and RP-2 with the Advanced Distillation Curve Approach
Windom, Bret C.; Tara …
The report examines fuel properties including density, heat of combustion, and aromatic content. Previous analysis of RP-1 has shown the fuel to be a complex mixture of compounds.
An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.
Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard
2016-09-01
Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially; that is, time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUClogd and AUCord, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUClogd and AUCord. Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting.
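The trapezoidal AUC and the two proposed delay transformations are simple enough to sketch directly. The delay-indifference data below are hypothetical, and the function is an illustration of the idea rather than the authors' exact code:

```python
import numpy as np

def discounting_auc(delays, indifference, transform=None):
    """Area under the delay-discounting curve on normalized axes.

    delays: ascending delay values (e.g. days); indifference: indifference
    points as a proportion of the delayed amount. transform=None gives the
    conventional AUC; "log10" and "ordinal" give the proposed variants.
    """
    d = np.asarray(delays, dtype=float)
    y = np.asarray(indifference, dtype=float)
    if transform == "log10":
        d = np.log10(d + 1.0)                 # +1 keeps delay 0 finite
    elif transform == "ordinal":
        d = np.arange(len(d), dtype=float)    # equal spacing by rank
    x = d / d[-1]                             # normalize delays to [0, 1]
    # sum the trapezoids formed by successive delay-indifference pairs
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

delays = [0, 1, 7, 30, 180, 365]              # pseudoexponential spacing
indiff = [1.0, 0.9, 0.8, 0.6, 0.4, 0.3]       # hypothetical indifference points
print(discounting_auc(delays, indiff))             # dominated by long delays
print(discounting_auc(delays, indiff, "log10"))
print(discounting_auc(delays, indiff, "ordinal"))  # every interval weighted equally
```

With this spacing, the two longest intervals carry over 90% of the conventional AUC, which is precisely the imbalance the transformed indices remove.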
SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH
Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook
2012-04-10
The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (≈90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
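The FM forecast itself is easy to sketch. Here it is for a deliberately simple two-parameter linear "SED" model with independent Gaussian errors; the observation grid and error bars are invented for the illustration:

```python
import numpy as np

# Fisher-matrix forecast for a toy model y = a + b*x observed at points x
# with independent Gaussian errors sigma. The Jacobian J holds
# d(model)/d(parameter); F = J^T C^{-1} J, and the forecast 1-sigma
# uncertainties are the square roots of the diagonal of F^{-1}.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma = np.full_like(x, 0.1)

J = np.column_stack([np.ones_like(x), x])   # columns: d/da, d/db
Cinv = np.diag(1.0 / sigma**2)
F = J.T @ Cinv @ J
errors = np.sqrt(np.diag(np.linalg.inv(F)))
print(errors)  # forecast uncertainties on (a, b)
```

Because the model is linear in its parameters, the likelihood here really is Gaussian and the FM forecast is exact; for nonlinear SED models it is only the quadratic approximation the abstract warns about.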
Pediatric art preferences: countering the "one-size-fits-all" approach.
Nanda, Upali; Chanaud, Cheryl M; Brown, Linda; Hart, Robyn; Hathorn, Kathy
2009-01-01
three operational stages, so one should be careful before using the "one-size-fits-all" approach. Child art, typically used in pediatric wards, is better suited for younger children than for older children.
Effect of Motion Cues During Complex Curved Approach and Landing Tasks - A Piloted Simulation Study
1987-12-01
Gabrieli, Andrea; Sant, Marco; Demontis, Pierfranco; Suffritti, Giuseppe B
2015-08-11
Two major improvements to the state-of-the-art Repeating Electrostatic Potential Extracted Atomic (REPEAT) method, for generating accurate partial charges for molecular simulations of periodic structures, are developed here. The first, D-REPEAT, consists of the simultaneous fit of the electrostatic potential (ESP) together with the total dipole fluctuations (TDF) of the framework. The second, M-REPEAT, allows the fit of multiple ESP configurations at once. When both techniques are fused into the single DM-REPEAT method, the resulting charges become remarkably stable over a large set of fitting regions, giving a robust and physically sound solution to the buried-atom problem. The method's capabilities are extensively studied in the ZIF-8 framework, and subsequently applied to the IRMOF-1 and ITQ-29 crystal structures. To our knowledge, this is the first time that this approach has been proposed in the context of periodic systems.
Scherf, Fanny W A C; Arnold, Laure P
2014-11-01
The results show that the fitting of a contralateral hearing aid (HA) in the non-implanted ear of cochlear implant (CI) recipients is now well established as standard clinical practice. However, there is a lack of experience in HA fitting within the CI centres and the use of published bimodal fitting procedures is poor. The HA is often not refitted after CI switch-on and this may contribute to rejection. Including a bimodal fitting prescription and process in the CI fitting software would make applying a balancing procedure easier and may increase its implementation in routine clinical practice. This survey was designed to investigate and understand the current approach to bimodal fitting of HAs and CIs across different countries and the recommendations made to recipients. Clinicians working with HAs and/or CIs were invited to participate in an international multicentre clinical survey, designed to obtain information on the various approaches towards bimodal hearing and CI and HA device fitting. Forty-one questions were presented to clinicians in experienced CI centres across a range of countries and answers were collected via an online survey. In all, 65 responses were obtained from 12 different countries. All clinicians said they would advise a CI user to wear a contralateral HA if indicated. However, a significant number (45%) had either never fitted HAs before or had less than 1 year of experience. In general, there were no specific criteria for selecting candidates to fit with an HA. A strategy to balance the HA with the CI was not used as a standard practice for any of the adults and was used in only 12% of the children. Only half the respondents were aware of the bimodal literature. The majority of professionals (18/30) did not refit the HA after CI switch-on. However, if users complained of sound quality or loudness issues or had poor test results, a follow-up session was provided. The main benefit reported by recipients was improvement in overall sound
NASA Astrophysics Data System (ADS)
Zhou, Yuhong
Light detection and ranging (LiDAR) waveform data have been increasingly available for performing land cover classification. To utilize waveforms, numerous studies have focused on either discretizing waveforms into multiple returns or extracting metrics from waveforms to characterize their shapes. The direct use of the waveform curve itself, which contains more comprehensive and accurate information on the vertical structure of ground features, has been scarcely investigated. The first objective of this study was to utilize the complete waveform curve directly to differentiate among objects having distinct vertical structures using different curve matching approaches. Six curve matching approaches were developed, including curve root sum squared differential area (CRSSDA), curve angle mapper (CAM), Kolmogorov-Smirnov (KS) distance, Kullback-Leibler (KL) divergence, cumulative curve root sum squared differential area (CCRSSDA), and cumulative curve angle mapper (CCAM), to quantify the similarity between two full waveforms. To evaluate the performances of curve matching approaches, a widely adopted metrics-based method was also implemented. The second objective of this study was to further incorporate spectral information from hyperspatial resolution imagery with waveform from LiDAR at the object level to achieve a more detailed land cover classification using the same curve matching approaches. To fuse LiDAR waveform and image objects, object-level pseudo-waveforms were first synthesized using discrete-return LiDAR data and then fused with the object-level spectral histograms from hyperspatial resolution WorldView-2 imagery to classify image objects using one of the curve matching approaches. Results showed that the use of the full-waveform curve to discriminate between objects with distinct vertical structures over level terrain provided an alternative to existing metrics-based methods using a limited number of parameters derived from the waveforms. By taking the
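Two of the six similarity measures are simple enough to sketch directly. The implementations below are hedged reconstructions from the measure names alone (CAM as the angle between waveform vectors, KS as the maximum gap between normalized cumulative curves), with invented test waveforms:

```python
import numpy as np

def curve_angle_mapper(a, b):
    """CAM: angle (radians) between two waveforms treated as vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def ks_distance(a, b):
    """KS: max absolute gap between the normalized cumulative curves."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ca, cb = np.cumsum(a) / np.sum(a), np.cumsum(b) / np.sum(b)
    return float(np.max(np.abs(ca - cb)))

t = np.arange(100)
pulse = np.exp(-0.5 * ((t - 40) / 5.0) ** 2)     # synthetic pseudo-waveform
shifted = np.exp(-0.5 * ((t - 60) / 5.0) ** 2)   # same shape, different height bin
print(curve_angle_mapper(pulse, pulse), ks_distance(pulse, pulse))
print(curve_angle_mapper(pulse, shifted), ks_distance(pulse, shifted))
```

Identical waveforms score zero under both measures, while the same pulse shape at a different vertical position scores large, which is the property that lets these measures separate objects with distinct vertical structures.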
Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach
Hawkley, Louise C.; Capitanio, John P.
2015-01-01
Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationship—as indexed by perceived social isolation (i.e. loneliness)—with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic pituitary–adrenocortical activity, a pro-inflammatory gene expression profile and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and its physiological and psychological underpinnings, in non-human species may be a valuable direction to better understand the persistence of a ‘lonely’ phenotype in social species, and its consequences for health and fitness. PMID:25870400
A bootstrap approach to evaluating person and item fit to the Rasch model.
Wolfe, Edward W
2013-01-01
Historically, rule-of-thumb critical values have been employed for interpreting fit statistics that depict anomalous person and item response patterns in applications of the Rasch model. Unfortunately, prior research has shown that these values are not appropriate in many contexts. This article introduces a bootstrap procedure for identifying reasonable critical values for Rasch fit statistics and compares the results of that procedure to applications of rule-of-thumb critical values for three example datasets. The results indicate that rule-of-thumb values may over- or under-identify the number of misfitting items or persons.
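The bootstrap idea generalizes readily: simulate model-conforming data, compute the fit statistic on each replicate, and read the critical value off the empirical distribution. A minimal parametric-bootstrap sketch for a person outfit mean-square statistic under the Rasch model (abilities, difficulties, and replicate counts all invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for persons theta and items b."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def outfit(x, P):
    """Outfit mean-square per person: mean standardized squared residual."""
    return np.mean((x - P) ** 2 / (P * (1 - P)), axis=1)

# hypothetical calibration: 20 items, 500 persons
b = np.linspace(-2.0, 2.0, 20)
theta = rng.normal(0.0, 1.0, 500)
P = rasch_prob(theta, b)

# parametric bootstrap: simulate model-conforming response matrices and
# collect the distribution of the fit statistic
stats = []
for _ in range(200):
    x = (rng.random(P.shape) < P).astype(float)
    stats.append(outfit(x, P))
critical = float(np.quantile(np.concatenate(stats), 0.95))
print(critical)  # data-driven critical value instead of a rule of thumb
```

The resulting 95th percentile reflects this particular test length and person distribution, which is exactly why it can differ from a fixed rule-of-thumb cutoff.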
Souza, Michele; Eisenmann, Joey; Chaves, Raquel; Santos, Daniel; Pereira, Sara; Forjaz, Cláudia; Maia, José
2016-10-01
In this paper, three different statistical approaches were used to investigate short-term tracking of cardiorespiratory and performance-related physical fitness among adolescents. Data were obtained from the Oporto Growth, Health and Performance Study and comprised 1203 adolescents (549 girls) divided into two age cohorts (10-12 and 12-14 years) followed for three consecutive years, with annual assessment. Cardiorespiratory fitness was assessed with 1-mile run/walk test; 50-yard dash, standing long jump, handgrip, and shuttle run test were used to rate performance-related physical fitness. Tracking was expressed in three different ways: auto-correlations, multilevel modelling with crude and adjusted model (for biological maturation, body mass index, and physical activity), and Cohen's Kappa (κ) computed in IBM SPSS 20.0, HLM 7.01 and Longitudinal Data Analysis software, respectively. Tracking of physical fitness components was (1) moderate-to-high when described by auto-correlations; (2) low-to-moderate when crude and adjusted models were used; and (3) low according to Cohen's Kappa (κ). These results demonstrate that when describing tracking, different methods should be considered since they provide distinct and more comprehensive views about physical fitness stability patterns.
Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André
2015-07-07
The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.
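The ICC mechanism can be sketched as a merit-order calculation: sort producers by unit cost, fill demand from the cheapest up, and see which producers a cheaper entrant pushes out. Producer names, capacities, and costs below are purely illustrative:

```python
# Hedged sketch of the industry cost-curve (ICC) idea: a new low-cost
# technology enters the merit order and displaces the highest-cost
# producer(s); the surviving mix determines the product's impact profile.
def supply_mix(producers, demand):
    """Return (name, quantity) pairs filling demand from cheapest producer up.

    producers: list of (name, capacity, unit_cost) tuples.
    """
    mix, remaining = [], demand
    for name, capacity, cost in sorted(producers, key=lambda p: p[2]):
        q = min(capacity, remaining)
        if q > 0:
            mix.append((name, q))
            remaining -= q
    return mix

incumbents = [("plant_A", 40, 5.0), ("plant_B", 30, 7.0), ("plant_C", 50, 9.0)]
demand = 100
print(supply_mix(incumbents, demand))
# a cheaper new technology squeezes out the highest-cost supply (plant_C)
print(supply_mix(incumbents + [("new_tech", 35, 6.0)], demand))
```

In the second call the entrant takes the slot just above plant_A, plant_B is partially displaced, and plant_C leaves the mix entirely; in an LCA the impact profile would then be recomputed over the surviving producers.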
Boerrigter-Eenling, Rita; Alewijn, Martin; Weesepoel, Yannick; van Ruth, Saskia
2017-04-01
Fresh/chilled chicken breasts retail at a higher price than their frozen/thawed counterparts. Verification of the fresh/thawed status of chicken meat is determined by measuring β-hydroxyacyl-Coenzyme A-hydrogenase (HADH) activity present in meat intra-cellular liquids spectrophotometrically. However, considerable numbers of reference samples are required for the current arithmetic method, adding to laboratory costs. Therefore, two alternative mathematical approaches which do not require such reference samples were developed and evaluated: curve fitting and multivariate classification. The approaches were developed using 55 fresh/thawed fillet samples. The performance of the methods was examined by an independent validation set which consisted of 16 samples. Finally, the approach was tested in practice in a market study. With the exception of two minor false classifications, both newly proposed methods performed as well as the classical method. All three methods were able to identify two apparent fraudulent cases in the market study. Therefore, the experiments showed that the costs of HADH measurements can be reduced by adopting alternative mathematics.
Pellicer-Chenoll, Maite; Garcia-Massó, Xavier; Morales, Jose; Serra-Añó, Pilar; Solana-Tramunt, Mònica; González, Luis-Millán; Toca-Herrera, José-Luis
2015-06-01
The relationship among physical activity, physical fitness and academic achievement in adolescents has been widely studied; however, controversy concerning this topic persists. The methods used thus far to analyse the relationship between these variables have included mostly traditional lineal analysis according to the available literature. The aim of this study was to perform a visual analysis of this relationship with self-organizing maps and to monitor the subject's evolution during the 4 years of secondary school. Four hundred and forty-four students participated in the study. The physical activity and physical fitness of the participants were measured, and the participants' grade point averages were obtained from the five participant institutions. Four main clusters representing two primary student profiles with few differences between boys and girls were observed. The clustering demonstrated that students with higher energy expenditure and better physical fitness exhibited lower body mass index (BMI) and higher academic performance, whereas those adolescents with lower energy expenditure exhibited worse physical fitness, higher BMI and lower academic performance. With respect to the evolution of the students during the 4 years, ∼25% of the students originally clustered in a negative profile moved to a positive profile, and there was no movement in the opposite direction. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Pre-Service Music Teachers' Satisfaction: Person-Environment Fit Approach
ERIC Educational Resources Information Center
Perkmen, Serkan; Cevik, Beste; Alkan, Mahir
2012-01-01
Guided by three theoretical frameworks in vocational psychology, (i) theory of work adjustment, (ii) two factor theory, and (iii) value discrepancy theory, the purpose of this study was to investigate Turkish pre-service music teachers' values and the role of fit between person and environment in understanding vocational satisfaction. Participants…
ERIC Educational Resources Information Center
Pellicer-Chenoll, Maite; Garcia-Massó, Xavier; Morales, Jose; Serra-Añó, Pilar; Solana-Tramunt, Mònica; González, Luis-Millán; Toca-Herrera, José-Luis
2015-01-01
The relationship among physical activity, physical fitness and academic achievement in adolescents has been widely studied; however, controversy concerning this topic persists. The methods used thus far to analyse the relationship between these variables have included mostly traditional lineal analysis according to the available literature. The…
[Methodic approaches to determining the level of occupational fitness in jobs prone to trauma].
Iushkova, O I; Matiukhin, V V; Poroshenko, A S; Iampol'skaia, E G
2006-01-01
The article deals with the main methods to evaluate the functional state of various body systems and with the methods-based criteria of occupational fitness for miscellaneous activities. Examination of 11 types of jobs prone to trauma helped to specify an integral parameter for occupational selection.
On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis
ERIC Educational Resources Information Center
Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas
2011-01-01
The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…
ERIC Educational Resources Information Center
Beheshti, Behzad; Desmarais, Michel C.
2015-01-01
This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…
Hydrothermal germination models: comparison of two data-fitting approaches with probit optimization
USDA-ARS?s Scientific Manuscript database
Probit models for estimating hydrothermal germination rate yield model parameters that have been associated with specific physiological processes. The desirability of linking germination response to seed physiology must be weighed against expectations of model fit and the relative accuracy of predi...
Health and Fitness Courses in Higher Education: A Historical Perspective and Contemporary Approach
ERIC Educational Resources Information Center
Bjerke, Wendy
2013-01-01
The prevalence of obesity among 18- to 24-year-olds has steadily increased. Given that the majority of young American adults are enrolled in colleges and universities, the higher education setting could be an appropriate environment for health promotion programs. Historically, health and fitness in higher education have been provided via…
Lu, Yehu; Song, Guowen; Li, Jun
2014-11-01
Garment fit plays an important role in protective performance, comfort and mobility. The purpose of this study is to quantify the air gap between clothing and the human body in order to characterize three-dimensional (3-D) garment fit, using a 3-D body scanning technique. A method for processing the scanned data was developed to investigate the air gap size and distribution between the clothing and the human body. The mesh models formed from the nude and clothed body scans were aligned, superimposed and sectioned using Rapidform software. The air gap size and distribution over the body surface were analyzed, and the total air volume was also calculated. The effects of fabric properties and garment size on air gap distribution were explored. The results indicated that the average air gap of well-fitting clothing was around 25-30 mm and the overall air gap distribution was similar. The air gap was unevenly distributed over the body and was strongly associated with body part, fabric properties and garment size. The research will help in understanding overall clothing fit and its association with protection, thermal and movement comfort, and will provide guidelines for clothing engineers to improve thermal performance and reduce physiological burden. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Computerized detection of retina blood vessel using a piecewise line fitting approach
NASA Astrophysics Data System (ADS)
Gu, Suicheng; Zhen, Yi; Wang, Ningli; Pu, Jiantao
2013-03-01
Retina vessels are important landmarks in fundus images; an accurate segmentation of the vessels may be useful for automated screening for several eye diseases or systemic diseases, such as diabetes. A new method is presented for automated segmentation of blood vessels in two-dimensional color fundus images. First, a coherence filter followed by a mean filter is applied to the green channel of the image, which is selected because the vessels have maximal contrast there. The coherence filter enhances the line strength of the original image, and the mean filter suppresses the intensity variance among different regions. Since the vessels are darker than the surrounding tissue depicted in the image, pixels with small intensity are then retained as points of interest (POI). A new line fitting algorithm is proposed to identify line-like structures in each local circle of the POI. The proposed line fitting method is less sensitive to noise than least-squares fitting. The fitted lines with higher scores are regarded as vessels. To evaluate the performance of the proposed method, the publicly available DRIVE database with 20 test images was selected for experiments. The mean accuracy on these images is 95.7%, which is comparable to the state of the art.
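The local line-like test can be illustrated with a PCA-style anisotropy score. This is not the authors' line fitting algorithm; the neighborhoods and thresholds are invented for the illustration:

```python
import numpy as np

def line_strength(points):
    """Anisotropy of a 2-D point set: near 1 for a line, smaller for a blob.

    Based on the eigenvalue ratio of the local covariance; vessel pixels
    form elongated neighborhoods, background noise forms round ones.
    """
    pts = np.asarray(points, dtype=float)
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))
    return float(1.0 - evals[0] / evals[1])

rng = np.random.default_rng(0)
u = np.linspace(0.0, 10.0, 50)
line_pts = np.column_stack([u, 2.0 * u + 0.05 * rng.normal(size=50)])  # noisy line
blob_pts = rng.normal(size=(50, 2))                                    # round cluster
print(line_strength(line_pts))  # close to 1
print(line_strength(blob_pts))  # clearly lower
```

Scoring each local circle of POI this way and keeping the high-scoring neighborhoods mirrors the abstract's "fitted lines with higher scores are regarded as vessels" step.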
Robust statistical approaches for local planar surface fitting in 3D laser scanning data
NASA Astrophysics Data System (ADS)
Nurunnabi, Abdul; Belton, David; West, Geoff
2014-10-01
This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. A search of the literature revealed that many authors frequently use Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals) necessary for many point cloud processing tasks. Consider one example data set consisting of 100 points with 20% outliers representing a plane. The proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (the angle between the planes fitted with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average to fit a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC and two other robust statistical methods, respectively. The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature
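The bias-angle comparison described above can be reproduced in miniature with a plain (non-robust) PCA plane fit; the data set, noise level and outlier placement below are assumptions for illustration, not the paper's test data:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to 3D points with plain PCA: the plane normal is the
    eigenvector of the covariance matrix with the smallest eigenvalue.
    This is the non-robust baseline the abstract compares against."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return centroid, eigvecs[:, 0]           # smallest -> normal

def bias_angle(n1, n2):
    """Angle between two plane normals in degrees (sign-insensitive)."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(0)
# 80 inliers near the z = 0 plane plus 20 gross outliers above it (20%)
inliers = np.c_[rng.uniform(-1, 1, (80, 2)), rng.normal(0, 0.01, 80)]
outliers = np.c_[rng.uniform(-1, 1, (20, 2)), rng.uniform(2, 3, 20)]
_, n_clean = fit_plane_pca(inliers)
_, n_all = fit_plane_pca(np.vstack([inliers, outliers]))
```

With 20% outliers the PCA normal swings far from the true plane normal, which is exactly the failure mode the robust estimators are designed to avoid.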
Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene
2011-10-28
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.
A learning curve approach to projecting cost and performance for photovoltaic technologies
NASA Astrophysics Data System (ADS)
Cody, George D.; Tiedje, Thomas
1997-04-01
The current cost of electricity generated by PV power is still extremely high with respect to power supplied by the utility grid, and there remain questions as to whether PV power can ever be competitive with electricity generated by fossil fuels. An objective approach to this important question was given in a previous paper by the authors which introduced analytical tools to define and project the technical/economic status of PV power from 1988 through the year 2010. In this paper, we apply these same tools to update the conclusions of our earlier study in the context of recent announcements by Amoco/Enron-Solarex of projected sales of PV power at rates significantly less than the US utility average.
Learning curve approach to projecting cost and performance for photovoltaic technologies
NASA Astrophysics Data System (ADS)
Cody, George D.; Tiedje, Thomas
1997-10-01
The current cost of electricity generated by PV power is still extremely high with respect to power supplied by the utility grid, and there remain questions as to whether PV power can ever be competitive with electricity generated by fossil fuels. An objective approach to this important question was given in a previous paper by the authors which introduced analytical tools to define and project the technical/economic status of PV power from 1988 through the year 2010. In this paper, we apply these same tools to update the conclusions of our earlier study in the context of recent announcements by Amoco/Enron-Solar of projected sales of PV power at rates significantly less than the U.S. utility average.
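The analytical tool underlying such projections is the standard experience (learning) curve, in which each doubling of cumulative production multiplies unit cost by a fixed progress ratio. A minimal sketch, using an assumed 80% progress ratio rather than the authors' fitted parameters:

```python
import math

def learning_curve_cost(cum_volume, c0, v0, progress_ratio):
    """Experience-curve cost projection: unit cost falls as a power law
    of cumulative production, C(V) = C0 * (V / V0) ** (-b), where the
    learning exponent b is set by the progress ratio (cost multiplier
    per doubling). Generic model; parameters here are illustrative."""
    b = -math.log(progress_ratio, 2)      # e.g. 0.8 -> b ~ 0.32
    return c0 * (cum_volume / v0) ** (-b)

# four doublings of cumulative volume at an 80% progress ratio
cost = learning_curve_cost(16.0, 10.0, 1.0, 0.8)
```

After four doublings the cost is 10 x 0.8^4, i.e. about 41% of its starting value, which is the arithmetic behind projecting when PV power crosses utility parity.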
Smith, Justin D.; Van Ryzin, Mark J.; Fowler, J. Christopher; Handler, Leonard
2013-01-01
In a modest body of research, personality functioning assessed via performance-based instruments has been found to validly predict treatment outcome and, to some extent, differential response to treatment. However, state-of-the-science longitudinal and mixture modeling techniques, which are common in many areas of clinical psychology, have rarely been used. In this article, we compare multilevel growth curve modeling (MLM) and latent class growth modeling (LCGM) approaches with the same dataset to illustrate the different research questions that can be addressed by each method. Global Assessment of Functioning (GAF) scores collected at six points during the course of a long-term multimodal inpatient treatment of 58 severely and persistently mentally ill adults were used to model the trajectory of treatment outcome. Pretreatment personality functioning and other markers of psychiatric severity were examined as covariates in each modeling approach. The results of both modeling approaches generally indicated that more psychologically impaired clients responded less favorably to treatment. The LCGM approach revealed two unique trajectories of improvement (a persistently low group and a higher starting, improving group). Personality functioning and baseline psychiatric variables significantly predicted group membership and the rate of change within the groups. A side-by-side examination of these two methods was found to be useful in predicting differential treatment response with personality functioning variables. PMID:24066712
Curved Finite Elements and Curve Approximation
NASA Technical Reports Server (NTRS)
Baart, M. L.
1985-01-01
The approximation of parameterized curves by segments of parabolas that pass through the endpoints of each curve segment arises naturally in all quadratic isoparametric transformations. While not as popular as cubics in curve design problems, the use of parabolas allows the introduction of a geometric measure of the discrepancy between given and approximating curves. The free parameters of the parabola may be used to optimize the fit, and constraints that prevent overspill and curve degeneracy are introduced. This leads to a constrained optimization problem in two variables that can be solved quickly and reliably by a simple method that takes advantage of the special structure of the problem. For applications in the field of computer-aided design, the given curves are often cubic polynomials, and the coefficients may be calculated in closed form in terms of polynomial coefficients by using a symbolic machine language so that families of curves can be approximated with no further integration. For general curves, numerical quadrature may be used, as in the implementation where Romberg quadrature is applied. The coefficient functions C sub 1 (gamma) and C sub 2 (gamma) are expanded as polynomials in gamma, so that for given A(s) and B(s) the integrations need only be done once. The method was used to find optimal constrained parabolic approximations to a wide variety of given curves.
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1980-01-01
Simple procedures are presented for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is provided for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are provided for determining the two phases of life. The procedure involves two steps, each similar to the conventional application of the commonly used linear damage rule. When the sum of cycle ratios based on phase 1 lives reaches unity, phase 1 is presumed complete, and further loadings are summed as cycle ratios on phase 2 lives. When the phase 2 sum reaches unity, failure is presumed to occur. No other physical properties or material constants than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons of both methods are discussed.
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1981-01-01
Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.
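The two-step bookkeeping can be sketched directly from the description above; the block-loading interface and the single-level example lives are illustrative assumptions:

```python
def cycles_to_failure(blocks, phase1_lives, phase2_lives):
    """Double linear damage rule as described in the abstract: cycle
    ratios are summed against Phase I lives until that sum reaches unity,
    after which further loading is summed against Phase II lives; failure
    occurs when the Phase II sum reaches unity. `blocks` is a repeating
    sequence of (load_level, cycles_per_block) pairs -- an illustrative
    interface, not the paper's notation."""
    d1 = d2 = total = 0.0
    while True:
        for level, n in blocks:
            n_left = n
            if d1 < 1.0:                      # Phase I accumulation
                step = min(n_left, (1.0 - d1) * phase1_lives[level])
                d1 += step / phase1_lives[level]
                n_left -= step
            if d1 >= 1.0 and n_left > 0:      # Phase II accumulation
                step = min(n_left, (1.0 - d2) * phase2_lives[level])
                d2 += step / phase2_lives[level]
                n_left -= step
            total += n - n_left               # cycles actually endured
            if d2 >= 1.0:
                return total

# single load level: Phase I life 1000 cycles, Phase II life 2000 cycles,
# applied in blocks of 125 cycles -> failure at 1000 + 2000 = 3000 cycles
life = cycles_to_failure([("A", 125)], {"A": 1000.0}, {"A": 2000.0})
```

With a single load level the rule reduces to the conventional linear damage rule applied twice in sequence, as the abstract notes.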
A statistical physics approach to learning curves for the inverse Ising problem
NASA Astrophysics Data System (ADS)
Bachschmid-Romano, Ludovica; Opper, Manfred
2017-06-01
Using methods of statistical physics, we analyse the error of learning couplings in large Ising models from independent data (the inverse Ising problem). We concentrate on learning based on local cost functions, such as the pseudo-likelihood method for which the couplings are inferred independently for each spin. Assuming that the data are generated from a true Ising model, we compute the reconstruction error of the couplings using a combination of the replica method with the cavity approach for densely connected systems. We show that an explicit estimator based on a quadratic cost function achieves minimal reconstruction error, but requires the length of the true coupling vector as prior knowledge. A simple mean field estimator of the couplings which does not need such knowledge is asymptotically optimal, i.e. when the number of observations is much larger than the number of spins. Comparison of the theory with numerical simulations shows excellent agreement for data generated from two models with random couplings in the high temperature region: a model with independent couplings (Sherrington-Kirkpatrick model), and a model where the matrix of couplings has a Wishart distribution.
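A minimal example of the mean-field style of coupling estimation the abstract discusses, applied to an exactly sampled two-spin model; the naive inversion J ≈ -(C^-1) off-diagonal is a textbook mean-field estimator and stands in for the paper's exact construction:

```python
import numpy as np

def mean_field_couplings(samples):
    """Naive mean-field estimate of Ising couplings from +/-1 spin
    samples: take the sample covariance C of the spins and read the
    couplings off the negated off-diagonal of C^-1. This is the family
    of simple mean-field estimators the abstract refers to; the paper's
    exact estimator differs in detail."""
    C = np.cov(samples.T)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)
    return J

# exact sampling of a two-spin Ising model with coupling J_true = 0.5:
# P(s1, s2) is proportional to exp(J * s1 * s2), so s2 equals s1 with
# probability e^J / (e^J + e^-J)
rng = np.random.default_rng(2)
J_true = 0.5
p_same = np.exp(J_true) / (np.exp(J_true) + np.exp(-J_true))
s1 = rng.choice([-1.0, 1.0], size=20000)
same = rng.uniform(size=20000) < p_same
s2 = np.where(same, s1, -s1)
J_est = mean_field_couplings(np.c_[s1, s2])
```

The estimate recovers the correct sign and symmetry of the coupling; its systematic bias at strong coupling is one reason the abstract's asymptotic analysis matters.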
NASA Astrophysics Data System (ADS)
Duckstein, L.; Bobée, B.; Ashkar, F.
1991-09-01
The problem of fitting a probability distribution, here the log-Pearson Type III distribution, to extreme floods is considered from the point of view of two numerical and three non-numerical criteria. The six fitting techniques considered include classical techniques (maximum likelihood, moments of logarithms of flows) and new methods such as mixed moments and the generalized method of moments developed by two of the co-authors. The latter method consists of fitting the distribution using moments of different orders; in particular, the SAM method (Sundry Averages Method) uses the moments of order 0 (geometric mean), 1 (arithmetic mean) and -1 (harmonic mean), and leads to a smaller variance of the parameters. The criteria used to select the method of parameter estimation are: the two statistical criteria of mean square error and bias; the two computational criteria of program availability and ease of use; and the user-related criterion of acceptability. These criteria are transformed into value functions or fuzzy set membership functions, and then three Multiple Criteria Decision Modelling (MCDM) techniques, namely composite programming, ELECTRE and MCQA, are applied to rank the estimation techniques.
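The three moments used by the SAM variant are simple to compute; this sketch uses arbitrary example flows:

```python
import math

def sundry_averages(flows):
    """The three 'sundry averages' used by the SAM variant of the
    generalized method of moments described above: moments of order 1
    (arithmetic mean), 0 (geometric mean) and -1 (harmonic mean) of the
    flow series. Flows must be positive."""
    n = len(flows)
    arithmetic = sum(flows) / n
    geometric = math.exp(sum(math.log(x) for x in flows) / n)
    harmonic = n / sum(1.0 / x for x in flows)
    return arithmetic, geometric, harmonic

a, g, h = sundry_averages([10.0, 40.0])
```

By the mean inequality a >= g >= h always holds; for the pair (10, 40) the three averages are 25, 20 and 16.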
Healthy Lifestyle Fitness Camp: A Summer Approach to Prevent Obesity in Low-Income Youth.
George, Gretchen Lynn; Schneider, Constance; Kaiser, Lucia
2016-03-01
To examine the effect of participation in a summer camp focused on nutrition and fitness among low-income youth. In 2011-2012, overweight and obese youth (n = 126) from Fresno, CA participated in a free 6-week summer program, Healthy Lifestyle Fitness Camp (HLFC), which included 3 h/wk of nutrition education provided by University of California CalFresh and 3 hours of daily physical activity through Fresno Parks and Recreation. The researchers used repeated-measures ANOVA to examine changes in weight, waist circumference, and waist-to-height ratio (WHtR) between HLFC and the comparison group (n = 29). Significant pre-post WHtR reductions were observed in HLFC: 0.64 to 0.61 (P < .001). In addition, WHtR reductions were maintained in HLFC 2 months afterward whereas an increase occurred in the comparison group (P < .007). Understanding the impact of nutrition- and fitness-themed summer camps during unstructured months of summer is integral to obesity prevention among low-income youth. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ruta, Sergiu; Hovorka, Ondrej; Huang, Pin-Wei; Wang, Kangkang; Ju, Ganping; Chantrell, Roy
2017-03-01
The generic problem of extracting information on intrinsic particle properties from the whole class of interacting magnetic fine particle systems is a long standing and difficult inverse problem. As an example, the Switching Field Distribution (SFD) is an important quantity in the characterization of magnetic systems, and its determination in many technological applications, such as recording media, is especially challenging. Techniques such as the first order reversal curve (FORC) methods, were developed to extract the SFD from macroscopic measurements. However, all methods rely on separating the contributions to the measurements of the intrinsic SFD and the extrinsic effects of magnetostatic and exchange interactions. We investigate the underlying physics of the FORC method by applying it to the output predictions of a kinetic Monte-Carlo model with known input parameters. We show that the FORC method is valid only in cases of weak spatial correlation of the magnetisation and suggest a more general approach.
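The FORC distribution itself is conventionally defined as a mixed second derivative of the magnetisation over the (reversal field, applied field) grid; a finite-difference sketch of that definition (the kinetic Monte-Carlo model and the SFD extraction are beyond this snippet):

```python
import numpy as np

def forc_distribution(M, dHr, dH):
    """FORC distribution computed as the mixed second derivative
    rho = -(1/2) d^2 M / (dHr dH) on a regular grid, via central
    differences. M[i, j] is the magnetisation measured at reversal
    field Hr_i and applied field H_j."""
    dM_dH = np.gradient(M, dH, axis=1)       # derivative along H
    rho = -0.5 * np.gradient(dM_dH, dHr, axis=0)  # then along Hr
    return rho

# synthetic bilinear magnetisation M = Hr * H, whose mixed derivative
# is exactly 1, so rho should be -0.5 everywhere
hr = np.linspace(0.0, 0.4, 5)
h = np.linspace(0.0, 0.8, 5)
rho = forc_distribution(np.outer(hr, h), hr[1] - hr[0], h[1] - h[0])
```

Central differences are exact for this bilinear test surface, which makes it a convenient sanity check before applying the transform to measured reversal curves.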
Ruta, Sergiu; Hovorka, Ondrej; Huang, Pin-Wei; Wang, Kangkang; Ju, Ganping; Chantrell, Roy
2017-01-01
The generic problem of extracting information on intrinsic particle properties from the whole class of interacting magnetic fine particle systems is a long standing and difficult inverse problem. As an example, the Switching Field Distribution (SFD) is an important quantity in the characterization of magnetic systems, and its determination in many technological applications, such as recording media, is especially challenging. Techniques such as the first order reversal curve (FORC) methods, were developed to extract the SFD from macroscopic measurements. However, all methods rely on separating the contributions to the measurements of the intrinsic SFD and the extrinsic effects of magnetostatic and exchange interactions. We investigate the underlying physics of the FORC method by applying it to the output predictions of a kinetic Monte-Carlo model with known input parameters. We show that the FORC method is valid only in cases of weak spatial correlation of the magnetisation and suggest a more general approach. PMID:28338056
Lloyd, Graeme T
2012-02-23
Modelling has been underdeveloped with respect to constructing palaeobiodiversity curves, but it offers an additional tool for removing sampling from their estimation. Here, an alternative to subsampling approaches, which often require large sample sizes, is explored by the extension and refinement of a pre-existing modelling technique that uses a geological proxy for sampling. Application of the model to the three main clades of dinosaurs suggests that much of their diversity fluctuations cannot be explained by sampling alone. Furthermore, there is new support for a long-term decline in their diversity leading up to the Cretaceous-Paleogene (K-Pg) extinction event. At present, use of this method with data that includes either Lagerstätten or 'Pull of the Recent' biases is inappropriate, although partial solutions are offered.
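The core of such a proxy-based modelling approach can be sketched as a log-log regression of observed diversity on the geological sampling proxy, with the residuals read as the sampling-corrected signal; this is a deliberate simplification of the paper's refined model:

```python
import math

def detrended_diversity(diversity, proxy):
    """Fit log(diversity) against log(sampling proxy) by ordinary least
    squares and return the residuals, i.e. the diversity fluctuations
    that the sampling proxy cannot explain. Pure-Python sketch of the
    modelling idea; the published method is more refined."""
    n = len(diversity)
    x = [math.log(p) for p in proxy]
    y = [math.log(d) for d in diversity]
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# if diversity tracks the proxy exactly, every residual is ~0:
# nothing remains once sampling is accounted for
res = detrended_diversity([10.0, 20.0, 40.0, 80.0], [1.0, 2.0, 4.0, 8.0])
```

A genuine biological signal, such as the long-term pre-K-Pg decline the abstract reports, would show up as a systematic trend in these residuals rather than noise around zero.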
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
AGNfitter: A Bayesian MCMC Approach to Fitting Spectral Energy Distributions of AGNs
NASA Astrophysics Data System (ADS)
Calistro Rivera, Gabriela; Lusso, Elisabetta; Hennawi, Joseph F.; Hogg, David W.
2016-12-01
We present AGNfitter, a publicly available open-source algorithm implementing a fully Bayesian Markov Chain Monte Carlo method to fit the spectral energy distributions (SEDs) of active galactic nuclei (AGNs) from the sub-millimeter to the UV, allowing one to robustly disentangle the physical processes responsible for their emission. AGNfitter makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the nuclear and host galaxy emission simultaneously. The model consists of four physical emission components: an accretion disk, a torus of AGN heated dust, stellar populations, and cold dust in star-forming regions. AGNfitter determines the posterior distributions of numerous parameters that govern the physics of AGNs with a fully Bayesian treatment of errors and parameter degeneracies, allowing one to infer integrated luminosities, dust attenuation parameters, stellar masses, and star-formation rates. We tested AGNfitter's performance on real data by fitting the SEDs of a sample of 714 X-ray selected AGNs from the XMM-COSMOS survey, spectroscopically classified as Type1 (unobscured) and Type2 (obscured) AGNs by their optical-UV emission lines. We find that two independent model parameters, namely the reddening of the accretion disk and the column density of the dusty torus, are good proxies for AGN obscuration, allowing us to develop a strategy for classifying AGNs as Type1 or Type2, based solely on an SED-fitting analysis. Our classification scheme is in excellent agreement with the spectroscopic classification, giving a completeness fraction of ~86% and ~70%, and an efficiency of ~80% and ~77%, for Type1 and Type2 AGNs, respectively.
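The Bayesian machinery can be illustrated with a toy one-parameter version: a Metropolis sampler fitting the amplitude of a fixed template. The template shape, noise level and step size below are assumptions; AGNfitter's real model has four emission components and many parameters:

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step, rng):
    """Minimal Metropolis MCMC sampler: propose a Gaussian step, accept
    with probability min(1, posterior ratio). Toy stand-in for the
    fully Bayesian sampling AGNfitter performs."""
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

rng = np.random.default_rng(1)
wav = np.linspace(1.0, 10.0, 20)
template = 1.0 / wav                     # toy "SED template" (assumption)
sigma, truth = 0.05, 3.0
data = truth * template + rng.normal(0.0, sigma, wav.size)

def log_post(amp):                       # flat prior, Gaussian likelihood
    return -0.5 * np.sum(((data - amp * template) / sigma) ** 2)

chain = metropolis(log_post, 1.0, 5000, 0.05, rng)
amp_est = chain[2000:].mean()            # discard burn-in
```

The post-burn-in chain concentrates around the true amplitude; in the real code the same posterior exploration yields luminosities, dust attenuation, stellar masses and star-formation rates with full uncertainty estimates.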
The GODFIT Direct Fitting Algorithm: A New Approach for Total Ozone Retrieval From GOME
NASA Astrophysics Data System (ADS)
Spurr, R. J.; van Roozendael, M.; Lambert, J.; Fayt, C.
2004-05-01
We present a new Direct Fitting algorithm (GODFIT) for the retrieval of total ozone amounts from nadir viewing remote sensing spectrometers (such as GOME, SCIAMACHY, OMI and GOME-2) which take earthshine measurements in the UV ozone Huggins bands. The algorithm is designed for direct comparison with measurements, and all radiative transfer (RT) calculations are done from scratch. We use the linearized RT model LIDORT, which has a single-call facility for simultaneous computations of radiances and fast analytic calculations of Jacobians with respect to surface and atmospheric properties. RT calculations require an input profile of ozone partial columns; we use a column-classified ozone profile climatology (the TOMS Version 8 data set) which provides a unique map between the fitted total column and the input RT profile. To compensate for lack of knowledge of tropospheric aerosol, we perform calculations in a Rayleigh atmosphere and fit for the surface albedo as an internal closure parameter; the algorithm is less sensitive to the presence of aerosol than DOAS-AMF algorithms customarily used for this retrieval. The Ring effect is important in the UV, and GODFIT contains a new treatment for the correction of interference effects due to the filling-in of ozone molecular features by inelastic rotational Raman scattering. The algorithm is flexible and direct, and operates without the need for extensive look-up tables. The algorithm was applied to a subset of some 2000 GOME orbits used in validation studies for the total ozone product. The algorithm can process one orbit (~2000 scenes) in under half an hour. Results were compared with ground data from a well-documented network of surface stations, with TOMS total ozone measurements (Version 8), and also with GOME-derived columns from the latest version of the GDP (operational GOME Data Processor DOAS-type total ozone algorithm). With the new results, previously observed seasonality and solar angle dependencies are greatly
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
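The geometric heart of line-based moving-object detection is that a constant-rate mover appears at collinear, evenly spaced positions in equally spaced exposures; a sketch of that test (A-Track's MILD algorithm is more elaborate):

```python
def is_moving_object(p1, p2, p3, tol=1.0):
    """Test whether three detections from three equally spaced exposures
    are consistent with a single object moving at constant rate: they
    must be collinear and evenly spaced (to within tol pixels).
    Illustrative reduction of line-based detection, not A-Track's code."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # collinearity: perpendicular distance of p2 from the line p1-p3
    area2 = abs((x3 - x1) * (y2 - y1) - (x2 - x1) * (y3 - y1))
    base = ((x3 - x1) ** 2 + (y3 - y1) ** 2) ** 0.5
    if base == 0:
        return False
    offset = area2 / base
    # even spacing: the midpoint of p1-p3 should coincide with p2
    mid = ((x1 + x3) / 2, (y1 + y3) / 2)
    spacing = ((mid[0] - x2) ** 2 + (mid[1] - y2) ** 2) ** 0.5
    return offset < tol and spacing < tol

moving = is_moving_object((0, 0), (5, 5), (10, 10))
stationary_noise = is_moving_object((0, 0), (5, 9), (10, 10))
```

Spurious detections (cosmic rays, noise) rarely line up across three frames, which is why this simple geometric filter is so effective.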
Computer-aided fit testing: an approach for examining the user/equipment interface
NASA Astrophysics Data System (ADS)
Corner, Brian D.; Beecher, Robert M.; Paquette, Steven
1997-03-01
Developments in laser digitizing technology now make it possible to capture very accurate 3D images of the surface of the human body in less than 20 seconds. Applications for the images range from animation of movie characters to the design and visualization of clothing and individual equipment (CIE). In this paper we focus on modeling the user/equipment interface. Defining the relative geometry between user and equipment provides a better understanding of equipment performance, and can make the design cycle more efficient. Computer-aided fit testing (CAFT) is the application of graphical and statistical techniques to visualize and quantify the human/equipment interface in virtual space. In short, CAFT looks to measure the relative geometry between a user and his or her equipment. The design cycle changes with the introduction of CAFT: some evaluation may now be done in the CAD environment prior to prototyping. CAFT may be applied in two general ways: (1) to aid in the creation of new equipment designs and (2) to evaluate current designs for compliance with performance specifications. We demonstrate the application of CAFT with two examples. First, we show how a prototype helmet may be evaluated for fit, and second we demonstrate how CAFT may be used to measure body armor coverage.
Pargament, Kenneth I; Sweeney, Patrick J
2011-01-01
This article describes the development of the spiritual fitness component of the Army's Comprehensive Soldier Fitness (CSF) program. Spirituality is defined in the human sense as the journey people take to discover and realize their essential selves and higher order aspirations. Several theoretically and empirically based reasons are articulated for why spirituality is a necessary component of the CSF program: Human spirituality is a significant motivating force, spirituality is a vital resource for human development, and spirituality is a source of struggle that can lead to growth or decline. A conceptual model developed by Sweeney, Hannah, and Snider (2007) is used to identify several psychological structures and processes that facilitate the development of the human spirit. From this model, an educational, computer-based program has been developed to promote spiritual resilience. This program consists of three tiers: (a) building awareness of the self and the human spirit, (b) building awareness of resources to cultivate the human spirit, and (c) building awareness of the human spirit of others. Further research will be needed to evaluate the effectiveness of this innovative and potentially important program.
Inward leakage variability between respirator fit test panels - Part II. Probabilistic approach.
Liu, Yuewei; Zhuang, Ziqing; Coffey, Christopher C; Rengasamy, Samy; Niezgoda, George
2016-08-01
This study aimed to quantify the variability between different anthropometric panels in determining the inward leakage (IL) of N95 filtering facepiece respirators (FFRs) and elastomeric half-mask respirators (EHRs). We enrolled 144 experienced and non-experienced users as subjects in this study. Each subject was assigned five randomly selected FFRs and five EHRs, and performed quantitative fit tests to measure IL. Based on the NIOSH bivariate fit test panel, we randomly sampled 10,000 pairs of 35- and 25-member anthropometric panels without replacement from the 144 study subjects. For each pair of sampled panels, a Chi-Square test was used to test the hypothesis that the passing rates for the two panels were not different. The probability of passing the IL test for each respirator was also determined from the 20,000 panels and by binomial calculation. We also randomly sampled 500,000 panels with replacement to estimate the coefficient of variation (CV) for inter-panel variability. For both 35- and 25-member panels, the probability that passing rates were not significantly different between two randomly sampled pairs of panels was higher than 95% for all respirators. All efficient (passing rate ≥80%) and inefficient (passing rate ≤60%) respirators yielded consistent results (probability >90%) for two randomly sampled panels; somewhat efficient respirators (passing rate between 60% and 80%) yielded inconsistent results. The passing probabilities and error rates were found to be significantly different between the simulation and the binomial calculation. The CV for the 35-member panel was 16.7%, which was slightly lower than that for the 25-member panel (19.8%). Our results suggested that IL inter-panel variability exists, but is relatively small. The variability may be affected by passing level and passing rate. Facial dimension-based fit test panel stratification was also found to have a significant impact on inter-panel variability, i.e., it can reduce alpha
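The binomial calculation mentioned above has a compact form: the probability that at least a given number of panel members achieve an adequate fit when each passes independently. The individual pass probability and passing level below are illustrative values, not the study's measured rates:

```python
from math import comb

def panel_pass_probability(p_individual, panel_size, passing_level):
    """Binomial probability that at least `passing_level` of
    `panel_size` subjects pass, when each passes independently with
    probability p_individual. Note the study actually draws panels
    without replacement from 144 subjects, so the binomial is only an
    approximation to that sampling scheme."""
    return sum(
        comb(panel_size, k)
        * p_individual ** k
        * (1 - p_individual) ** (panel_size - k)
        for k in range(passing_level, panel_size + 1)
    )

# 80% passing level on a 35-member panel: at least 28 of 35 must pass
good = panel_pass_probability(0.9, 35, 28)   # well-fitting respirator
poor = panel_pass_probability(0.5, 35, 28)   # poorly fitting respirator
```

The sharp separation between the two cases mirrors the abstract's finding that clearly efficient and clearly inefficient respirators give consistent panel verdicts, while borderline ones do not.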
NASA Astrophysics Data System (ADS)
Xia, Qiangwei; Wang, Tiansong; Park, Yoonsuk; Lamont, Richard J.; Hackett, Murray
2007-01-01
Differential analysis of whole cell proteomes by mass spectrometry has largely been applied using various forms of stable isotope labeling. While metabolic stable isotope labeling has been the method of choice, it is often not possible to apply such an approach. Four different label-free ways of calculating expression ratios in a classic "two-state" experiment are compared: signal intensity at the peptide level, signal intensity at the protein level, spectral counting at the peptide level, and spectral counting at the protein level. The quantitative data were mined from a dataset of 1245 qualitatively identified proteins, about 56% of the protein-encoding open reading frames from Porphyromonas gingivalis, a Gram-negative intracellular pathogen being studied under extracellular and intracellular conditions. Two different control populations were compared against P. gingivalis internalized within a model human target cell line. The q-value statistic, a measure of false discovery rate previously applied to transcription microarrays, was applied to proteomics data. For spectral counting, the most logically consistent estimate of random error came from applying the locally weighted scatter plot smoothing procedure (LOWESS) to the most extreme ratios generated from a control technical replicate, thus setting upper and lower bounds for the region of experimentally observed random error.
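Of the four ratio strategies compared, spectral counting at the protein level is the simplest to sketch; total-count normalisation and a pseudocount are common conventions assumed here, not necessarily the paper's exact recipe, and the protein names and counts are invented:

```python
def spectral_count_ratio(counts_a, counts_b, pseudocount=1.0):
    """Label-free expression ratios from protein-level spectral counts
    between two states. Counts are normalised by the total counts of
    each state; a pseudocount handles proteins absent from one state.
    Both conventions are assumptions for illustration."""
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    ratios = {}
    for protein in set(counts_a) | set(counts_b):
        a = (counts_a.get(protein, 0) + pseudocount) / total_a
        b = (counts_b.get(protein, 0) + pseudocount) / total_b
        ratios[protein] = a / b
    return ratios

# hypothetical counts for two P. gingivalis proteins in two states
state1 = {"gingipain": 40, "fimA": 10}
state2 = {"gingipain": 10, "fimA": 10}
r = spectral_count_ratio(state1, state2)
```

Because each state is normalised by its own total, a protein with unchanged raw counts can still show a ratio away from 1 when other proteins shift, which is one reason error modelling (e.g. the LOWESS bounds above) is needed before calling a change significant.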
NASA Astrophysics Data System (ADS)
Staśkiewicz, B.; Okrasiński, W.
2012-04-01
We propose a simple analytical form of the vapor-liquid equilibrium curve near the critical point for Lennard-Jones fluids. Coexistence density curves and vapor pressure have been determined using the Van der Waals and Dieterici equations of state. In the described method, Bernoulli differential equations, critical exponent theory and a form of Maxwell's criterion are used. The presented approach has not previously been used to determine an analytical form of the phase curves, as is done in this Letter. Lennard-Jones fluids are considered for the analysis, and a comparison with experimental data is made. The accuracy of the method is described.
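Near the critical point, a mean-field equation of state such as Van der Waals or Dieterici yields a coexistence curve with the classical critical exponent beta = 1/2; a sketch of that leading-order form (the amplitude B is fluid-specific and assumed here):

```python
def coexistence_densities(T, Tc, rho_c, B, beta=0.5):
    """Leading-order shape of the vapor-liquid coexistence curve near
    the critical point: rho_liquid/vapor = rho_c +/- B * (Tc - T)**beta.
    beta = 1/2 is the classical mean-field exponent appropriate to the
    Van der Waals and Dieterici equations of state; the amplitude B is
    an assumed fluid-specific constant. Requires T <= Tc."""
    half_width = B * (Tc - T) ** beta
    return rho_c + half_width, rho_c - half_width

# reduced Lennard-Jones-like numbers, purely illustrative
rho_l, rho_v = coexistence_densities(1.2, 1.3, 0.3, 0.5)
```

The two branches straddle the critical density symmetrically and merge at T = Tc, which is the qualitative behaviour any analytical fit of the phase curves must reproduce.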
Roth, Arnd; Häusser, Michael
2010-01-01
Cerebellar Purkinje cells display complex intrinsic dynamics. They fire spontaneously, exhibit bistability, and via mutual network interactions are involved in the generation of high frequency oscillations and travelling waves of activity. To probe the dynamical properties of Purkinje cells we measured their phase response curves (PRCs). PRCs quantify the change in spike phase caused by a stimulus as a function of its temporal position within the interspike interval, and are widely used to predict neuronal responses to more complex stimulus patterns. Significant variability in the interspike interval during spontaneous firing can lead to PRCs with a low signal-to-noise ratio, requiring averaging over thousands of trials. We show using electrophysiological experiments and simulations that the PRC calculated in the traditional way by sampling the interspike interval with brief current pulses is biased. We introduce a corrected approach for calculating PRCs which eliminates this bias. Using our new approach, we show that Purkinje cell PRCs change qualitatively depending on the firing frequency of the cell. At high firing rates, Purkinje cells exhibit single-peaked, or monophasic PRCs. Surprisingly, at low firing rates, Purkinje cell PRCs are largely independent of phase, resembling PRCs of ideal non-leaky integrate-and-fire neurons. These results indicate that Purkinje cells can act as perfect integrators at low firing rates, and that the integration mode of Purkinje cells depends on their firing rate. PMID:20442875
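The reference case mentioned at the end, that an ideal non-leaky integrate-and-fire neuron has a phase-independent (flat) PRC, can be checked numerically: a charge pulse advances the next spike by the same amount wherever it lands in the interspike interval. All parameters below are illustrative:

```python
def spike_time_with_pulse(T, q, t_pulse, dt=1e-4):
    """Ideal non-leaky integrate-and-fire neuron: membrane V integrates
    a constant drive from 0 to threshold 1 over interval T; a brief
    pulse adds charge q at time t_pulse. Returns the spike time."""
    v, t = 0.0, 0.0
    while v < 1.0:
        if abs(t - t_pulse) < dt / 2:   # deliver the pulse once
            v += q
        v += dt / T                      # constant drive
        t += dt
    return t

T, q = 0.1, 0.2                          # 10 Hz baseline, 20% charge pulse
phases = [0.2, 0.5, 0.8]                 # pulse position within the ISI
prc = [(T - spike_time_with_pulse(T, q, p * T)) / T for p in phases]
```

The phase advance equals q at every phase tested, i.e. the PRC is flat: this is the integration mode the abstract reports for Purkinje cells firing at low rates.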
Jalali-Heravi, Mehdi; Parastar, Hadi
2011-08-15
Essential oils (EOs) are valuable natural products that are popular worldwide owing to their effects on human health and their role in preventing and curing diseases. In addition, EOs have a broad range of applications in foods, perfumes, cosmetics and human nutrition. Among the different techniques for the analysis of EOs, gas chromatography-mass spectrometry (GC-MS) has been the most important one in recent years. However, there are some fundamental problems in GC-MS analysis, including baseline drift, spectral background, noise, low signal-to-noise (S/N) ratio, changes in peak shapes, and co-elution. Multivariate curve resolution (MCR) approaches cope with these ongoing challenges and are able to handle these problems. This review focuses on the application of MCR techniques for improving the GC-MS analysis of EOs published between January 2000 and December 2010. In the first part, the importance of EOs in human life and their relevance to analytical chemistry are discussed. In the second part, an insight into some basics needed to understand the prospects and limitations of the MCR techniques is given. In the third part, the significance of combining MCR approaches with GC-MS analysis of EOs is highlighted. Furthermore, the commonly used algorithms for preprocessing, chemical rank determination, local rank analysis and multivariate resolution in the field of EO analysis are reviewed.
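The alternating least squares step at the heart of many MCR algorithms can be sketched on synthetic co-eluting peaks. The component shapes, the four-channel spectra, and the simple clipping used to enforce non-negativity are illustrative simplifications, not a full MCR-ALS implementation.

```python
import numpy as np

def mcr_als(D, C0, n_iter=200):
    """Multivariate curve resolution by alternating least squares:
    D (elution times x m/z channels) ~ C @ S.T, with non-negativity
    imposed on both concentration profiles C and spectra S."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

# Two co-eluting Gaussian peaks with distinct (made-up) 4-channel spectra
t = np.linspace(0.0, 10.0, 120)
C_true = np.column_stack([np.exp(-0.5 * ((t - 4.0) / 0.8) ** 2),
                          np.exp(-0.5 * ((t - 5.5) / 0.8) ** 2)])
S_true = np.array([[1.0, 0.2, 0.6, 0.1],
                   [0.1, 0.9, 0.3, 0.8]]).T
D = C_true @ S_true.T

# Initial guess: shifted, broader elution profiles
C0 = np.column_stack([np.exp(-0.5 * ((t - 3.5) / 1.2) ** 2),
                      np.exp(-0.5 * ((t - 6.0) / 1.2) ** 2)])
C_hat, S_hat = mcr_als(D, C0)
```

Real MCR workflows add further constraints (unimodality, closure) and initialization strategies; the sketch only shows why the bilinear model can pull apart overlapped peaks.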
Hiltrop, Dennis; Masa, Justus; Botz, Alexander J R; Lindner, Armin; Schuhmann, Wolfgang; Muhler, Martin
2017-03-31
A spectroelectrochemical cell is presented that allows investigations of electrochemical reactions by means of attenuated total reflection infrared (ATR-IR) spectroscopy. The electrode holder for the working (WE), counter, and reference electrodes, as mounted in the IR spectrometer, causes the formation of a thin electrolyte layer between the internal reflection element (IRE) and the surface of the WE. The thickness of this thin electrolyte layer (dTL) was estimated by performing a scanning electrochemical microscopy (SECM)-like approach of a Pt microelectrode (ME), which was leveled with the WE toward the IRE surface. The precise lowering of the ME/WE plane toward the IRE was enabled by a micrometer screw. The approach curve was recorded in the negative feedback mode of SECM and revealed the contact point of the ME and WE on the IRE, which was used as a reference point to perform the electro-oxidation of ethanol over a drop-casted Pd/NCNT catalyst on the WE at different thin-layer thicknesses by cyclic voltammetry. The reaction products were detected in the liquid electrolyte by IR spectroscopy, and the effects of variations in dTL on the current densities and IR spectra were analyzed and discussed. The obtained data identify dTL as an important variable in thin-layer experiments with electrochemical reactions and FTIR readout.
A-Track: A New Approach for Detection of Moving Objects in FITS Images
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat
2016-07-01
Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, making use of various scientific modules in Python to process the FITS images. We tested the code on photometric data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
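The core idea of flagging a moving object as detections that line up across sequential frames can be sketched in a few lines. ILDA itself is more elaborate; this is an illustrative reimplementation with invented coordinates, not A-Track's code.

```python
import numpy as np

def is_linear_track(detections, tol=0.5):
    """Decide whether per-frame detections [(t, x, y), ...] are consistent
    with a single object moving at constant rate along a straight line."""
    d = np.asarray(detections, dtype=float)
    t, x, y = d[:, 0], d[:, 1], d[:, 2]
    ax, bx = np.polyfit(t, x, 1)   # x(t) = ax*t + bx
    ay, by = np.polyfit(t, y, 1)   # y(t) = ay*t + by
    residual = np.hypot(x - (ax * t + bx), y - (ay * t + by))
    return bool(residual.max() < tol), (ax, ay)

# A slow mover: 1.2 px/frame in x, -0.4 px/frame in y (synthetic positions)
moving = [(0, 10.0, 50.0), (1, 11.2, 49.6), (2, 12.4, 49.2), (3, 13.6, 48.8)]
# Spurious detections scattered across the field
noise = [(0, 10.0, 50.0), (1, 30.0, 12.0), (2, 5.0, 44.0), (3, 22.0, 60.0)]

track_ok, rates = is_linear_track(moving)
noise_ok, _ = is_linear_track(noise)
```

In a real pipeline the candidate sets would come from source extraction on each FITS frame, and the fitted rates give the object's apparent motion for follow-up.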
AGNfitter: An MCMC Approach to Fitting SEDs of AGN and galaxies
NASA Astrophysics Data System (ADS)
Calistro Rivera, Gabriela; Lusso, Elisabeta; Hennawi, Joseph; Hogg, David W.
2016-08-01
I will present AGNfitter: a tool to robustly disentangle the physical processes responsible for the emission of active galactic nuclei (AGN). AGNfitter is the first open-source algorithm based on a Markov Chain Monte Carlo method to fit the spectral energy distributions of AGN from the sub-mm to the UV. The code makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the host galaxy and the nuclear emission simultaneously. The model consists of four physical components: stellar populations, cold dust distributions in star-forming regions, the accretion disk, and hot dust torus emission. AGNfitter is well suited to infer the numerous parameters that govern the physics of AGN, with proper handling of their confidence levels through sampling and an assumption-free calculation of their posterior probability distributions. The resulting parameters include, among many others, accretion disk luminosities, dust attenuation for both the galaxy and the accretion disk, stellar masses, and star formation rates. We describe the relevance of this fitting machinery and the technicalities of the code, and show its capabilities in the context of unobscured and obscured AGN. The analyzed data comprise a sample of 714 X-ray selected AGN from the XMM-COSMOS survey, spectroscopically classified as Type 1 and Type 2 sources by their optical emission lines. The inference of independent obscuration parameters allows AGNfitter to achieve a classification in close agreement with the spectroscopic one for ~86% of Type 1 and ~70% of Type 2 AGNs. The variety and large number of physical properties inferred by AGNfitter can contribute to a wide scope of science cases related to studies of both active and quiescent galaxies.
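The MCMC machinery can be illustrated on a toy two-component SED, with a plain Metropolis sampler standing in for the ensemble sampler AGNfitter actually uses. The component shapes, priors, and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SED: flux = a_gal * galaxy component + a_agn * AGN component
wave = np.linspace(0.1, 3.0, 60)
comp_gal = np.exp(-0.5 * ((wave - 1.6) / 0.6) ** 2)   # stellar bump (made up)
comp_agn = wave ** -1.0                               # power-law nucleus (made up)
err = 0.05
flux = 2.0 * comp_gal + 0.5 * comp_agn + rng.normal(0.0, err, wave.size)

def log_post(theta):
    """Gaussian log-likelihood with flat non-negative priors on amplitudes."""
    a_gal, a_agn = theta
    if a_gal < 0.0 or a_agn < 0.0:
        return -np.inf
    model = a_gal * comp_gal + a_agn * comp_agn
    return -0.5 * np.sum(((flux - model) / err) ** 2)

def metropolis(start, step=0.05, n=6000):
    theta = np.array(start, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n, theta.size))
    for i in range(n):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis([1.0, 1.0])
a_gal_hat, a_agn_hat = chain[1000:].mean(axis=0)   # discard burn-in
```

The posterior samples directly yield the credible intervals on each amplitude, which is the "proper handling of confidence levels" the abstract refers to.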
Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif
2017-01-01
Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data from 32,555 patients free of any known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 5-year follow-up. By the end of the fifth year, 5,099 of those patients had developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an ensemble-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of the class imbalance on the constructed model was handled by the Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model classifier was improved by the ensemble machine learning approach using the Vote method with three decision-tree classifiers (Naïve Bayes Tree, Random Forest, and Logistic Model Tree) and achieved high prediction accuracy (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
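SMOTE's core step, interpolating between a minority-class sample and one of its nearest minority-class neighbours, can be sketched in a few lines. The feature matrix below is a random stand-in for the patient records, and the brute-force neighbour search is for clarity only.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote(X_min, n_new, k=5):
    """Synthetic Minority Oversampling: each synthetic point lies on the
    segment between a random minority sample and one of its k nearest
    minority-class neighbours."""
    X_min = np.asarray(X_min, dtype=float)
    out = np.empty((n_new, X_min.shape[1]))
    for s in range(n_new):
        i = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]   # exclude the point itself
        j = rng.choice(neighbours)
        out[s] = X_min[i] + rng.random() * (X_min[j] - X_min[i])
    return out

X_minority = rng.normal(size=(20, 3))   # e.g. 20 incident-diabetes patients
X_synth = smote(X_minority, n_new=50)
```

Because each synthetic sample is a convex combination of two real minority samples, the oversampled class stays inside the original feature envelope rather than duplicating points, which is what lets the downstream ensemble see a balanced training set.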
Geo-fit Approach to the Analysis of Limb-Scanning Satellite Measurements.
Carlotti, M; Dinelli, B M; Raspollini, P; Ridolfi, M
2001-04-20
We propose a new approach to the analysis of limb-scanning measurements of the atmosphere that are continually recorded from an orbiting platform. The retrieval is based on the simultaneous analysis of observations taken along the whole orbit. This approach accounts for the horizontal variability of the atmosphere, hence avoiding the errors caused by the assumption of horizontal homogeneity along the line of sight of the observations. A computer program that implements the proposed approach has been designed; its performance is shown with a simulated retrieval analysis based on a satellite experiment planned to fly during 2001. This program has also been used for determining the size and the character of the errors that are associated with the assumption of horizontal homogeneity. A computational strategy that reduces the large amount of computational resources apparently demanded by the proposed inversion algorithm is described.
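The advantage of retrieving all orbit positions simultaneously can be shown with a toy linear forward model. The line-of-sight weights and atmospheric values are invented for illustration; the point is only that a joint inversion undoes the horizontal mixing that a per-observation retrieval cannot.

```python
import numpy as np

# Five atmospheric states along a closed orbit; each limb observation sees a
# weighted mix of the state at its own position and the next one downtrack.
true_state = np.array([200.0, 210.0, 230.0, 260.0, 300.0])
n = true_state.size
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.6                 # assumed line-of-sight weights
    A[i, (i + 1) % n] = 0.4       # orbit wraps around, so the system is square
obs = A @ true_state

# Geo-fit: invert all observations along the orbit jointly
geo_fit = np.linalg.solve(A, obs)

# Horizontal-homogeneity assumption: each observation taken at face value
homogeneous = obs.copy()
```

The joint solve recovers the true horizontally varying states exactly, while the homogeneous assignment is badly biased wherever the atmosphere changes along the track.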
Electrically detected magnetic resonance modeling and fitting: An equivalent circuit approach
NASA Astrophysics Data System (ADS)
Leite, D. M. G.; Batagin-Neto, A.; Nunes-Neto, O.; Gómez, J. A.; Graeff, C. F. O.
2014-01-01
The physics of electrically detected magnetic resonance (EDMR) quadrature spectra is investigated. An equivalent circuit model is proposed in order to retrieve crucial information in a variety of different situations. This model allows the discrimination and determination of spectroscopic parameters associated to distinct resonant spin lines responsible for the total signal. The model considers not just the electrical response of the sample but also features of the measuring circuit and their influence on the resulting spectral lines. As a consequence, from our model, it is possible to separate different regimes, which depend basically on the modulation frequency and the RC constant of the circuit. In what is called the high frequency regime, it is shown that the sign of the signal can be determined. Recent EDMR spectra from Alq3 based organic light emitting diodes, as well as from a-Si:H reported in the literature, were successfully fitted by the model. Accurate values of g-factor and linewidth of the resonant lines were obtained.
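The role of the modulation frequency relative to the circuit's RC constant can be sketched with the transfer function of a first-order RC low-pass, a generic stand-in for the paper's full equivalent circuit. The component values are arbitrary.

```python
import numpy as np

def quadrature_components(f_mod, rc):
    """In-phase and quadrature amplitudes of a modulated signal seen through
    a first-order RC low-pass, H = 1 / (1 + i*omega*RC)."""
    h = 1.0 / (1.0 + 1j * 2.0 * np.pi * f_mod * rc)
    return h.real, -h.imag

# Low-frequency regime: the signal appears almost entirely in phase
ip_lo, q_lo = quadrature_components(f_mod=10.0, rc=1e-5)
# High-frequency regime (omega*RC >> 1): the quadrature channel dominates
ip_hi, q_hi = quadrature_components(f_mod=1e7, rc=1e-5)
```

The crossover between the two regimes is what allows the modulation frequency to be chosen so that, as in the abstract, the sign of the underlying resonance can be read off the quadrature spectra.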
Reconstruction of Galaxy Star Formation Histories through SED Fitting: The Dense Basis Approach
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Gawiser, Eric J.
2017-01-01
The standard assumption of a simplified parametric form for galaxy Star Formation Histories (SFHs) during Spectral Energy Distribution (SED) fitting biases estimations of physical quantities (Stellar Mass, SFR, age) and underestimates their true uncertainties. Here, we describe the Dense Basis formalism, which uses an atlas of well-motivated basis SFHs to provide robust reconstructions of galaxy SFHs and provides estimates of previously inaccessible quantities like the number of star formation episodes in a galaxy's past. We train and validate the method using a sample of realistic SFHs at z=1 drawn from current Semi Analytic Models and Hydrodynamical simulations, as well as SFHs generated using a stochastic prescription. We then apply the method on ~1100 CANDELS galaxies at 1
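The basis idea is that any smooth SFH can be approximated as a non-negative combination of a small atlas of well-motivated curves. A linear-algebra sketch with an invented lognormal atlas (the real Dense Basis atlas and fitting machinery are more elaborate):

```python
import numpy as np

t = np.linspace(0.05, 10.0, 200)   # cosmic time in Gyr (toy grid)

def lognormal_sfh(t, t_peak, width):
    """One basis star formation history (illustrative functional form)."""
    return np.exp(-0.5 * ((np.log(t) - np.log(t_peak)) / width) ** 2)

# Atlas of four basis SFHs peaking at different epochs
atlas = np.column_stack([lognormal_sfh(t, tp, 0.4) for tp in (0.5, 1.5, 3.0, 6.0)])

# A "galaxy" with two star formation episodes (weights invented)
weights_true = np.array([0.2, 1.0, 0.0, 0.5])
sfh = atlas @ weights_true

# Recover the decomposition of the observed SFH in the basis
weights_fit, *_ = np.linalg.lstsq(atlas, sfh, rcond=None)
```

Counting the non-zero recovered weights is the toy analogue of the "number of star formation episodes" the abstract says the formalism makes accessible.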
Imprecision Medicine: A One-Size-Fits-Many Approach for Muscle Dystrophy.
Breitbart, Astrid; Murry, Charles E
2016-04-07
There is still no curative treatment for Duchenne muscular dystrophy (DMD). In this issue of Cell Stem Cell, Young et al. (2016) demonstrate a genome editing approach applicable to 60% of DMD patients with CRISPR/Cas9 using one pair of guide RNAs.
ERIC Educational Resources Information Center
Marsh, Herbert W.; Hau, Kit-Tai; Wen, Zhonglin
2004-01-01
Goodness-of-fit (GOF) indexes provide "rules of thumb" (recommended cutoff values) for assessing fit in structural equation modeling. Hu and Bentler (1999) proposed a more rigorous approach to evaluating decision rules based on GOF indexes and, on this basis, proposed new and more stringent cutoff values for many indexes. This article discusses…
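Two of the most common GOF indexes discussed in this literature can be computed directly from model and baseline chi-square statistics using their standard formulas; the chi-square values below are made up for illustration.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index relative to the baseline (independence) model."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, d_model, 0.0)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Hypothetical model: chi2 = 85 on 40 df, N = 200; baseline chi2 = 900 on 45 df
rmsea_val = rmsea(85.0, 40, 200)
cfi_val = cfi(85.0, 40, 900.0, 45)
```

Against Hu and Bentler's proposed cutoffs (RMSEA near .06 or below, CFI near .95 or above), this hypothetical model would pass on RMSEA but sit just under the CFI cutoff, illustrating why the article treats such thresholds as rules of thumb rather than golden rules.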
Caserta, Michael S; Lund, Dale A; Utz, Rebecca L; Tabler, Jennifer Lyn
2016-06-01
We concluded in a recent study that a "one size fits all" approach typical of group interventions often does not adequately accommodate the range of situations, life experiences, and current needs of participants. We describe how this limitation informed the design and implementation of an individually-delivered intervention format more specifically tailored to the unique needs of each bereaved person. The intervention comprises one of three interrelated studies within Partners in Hospice Care (PHC), which examines the trajectory from end-of-life care through bereavement among cancer caregivers using hospice. The PHC intervention employs an initial needs assessment in order to tailor the session content, delivery, and sequencing to the most pressing, yet highly diverse needs of the bereaved spouses/partners. Although an individually-delivered format has its own challenges, these can be effectively addressed through standardized interventionist training, regular communication among staff, as well as a flexible approach toward participants' preferences and circumstances.
NASA Astrophysics Data System (ADS)
Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.
2016-10-01
Using a 4D grid of ˜2 million model parameters (Δz = 0.005) adapted from Cosmological Origins Survey photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov Chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384 662 objects, we show our approach samples ˜40 times more efficiently compared to a `brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.
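The combination of stochastic sampling with simulated annealing can be sketched on a one-dimensional multimodal objective. The likelihood shape, cooling schedule, and step size below are invented; real photo-z surfaces live on a much larger grid.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(z):
    """Toy photo-z chi-square surface: global minimum near z = 2.0 plus a
    shallower local minimum near z = 0.5 (shapes are invented)."""
    return (1.0 - np.exp(-((z - 2.0) / 0.1) ** 2)
                - 0.6 * np.exp(-((z - 0.5) / 0.1) ** 2))

def anneal(z0, steps=8000, t0=1.0, sigma=0.2):
    """Simulated annealing: random proposals with Metropolis acceptance at a
    temperature that cools linearly toward zero."""
    z, e = z0, objective(z0)
    best_z, best_e = z, e
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-3
        z_new = float(np.clip(z + rng.normal(0.0, sigma), 0.0, 6.0))
        e_new = objective(z_new)
        if e_new < e or rng.random() < np.exp((e - e_new) / temp):
            z, e = z_new, e_new
        if e < best_e:
            best_z, best_e = z, e
    return best_z

z_best = anneal(0.5)   # start inside the wrong (local) minimum
```

A pure gradient-descent start from z = 0.5 would stay trapped in the shallow minimum; the high-temperature phase of the annealer lets the walker escape it, which is the failure mode of "simplistic gradient-descent optimization schemes" the abstract describes.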
Differences in Adolescent Physical Fitness: A Multivariate Approach and Meta-analysis.
Schutte, Nienke M; Nederend, Ineke; Hudziak, James J; de Geus, Eco J C; Bartels, Meike
2016-03-01
Physical fitness can be defined as a set of components that determine exercise ability and influence performance in sports. This study investigates the genetic and environmental influences on individual differences in explosive leg strength (vertical jump), handgrip strength, balance, and flexibility (sit-and-reach) in 227 healthy monozygotic and dizygotic twin pairs and 38 of their singleton siblings (mean age 17.2 ± 1.2). Heritability estimates were 49% (95% CI 35-60%) for vertical jump, 59% (95% CI 46-69%) for handgrip strength, 38% (95% CI 22-52%) for balance, and 77% (95% CI 69-83%) for flexibility. In addition, a meta-analysis was performed on all twin studies in children, adolescents and young adults reporting heritability estimates for these phenotypes. Fifteen studies, including results from our own study, were meta-analyzed by computing the weighted average heritability. This showed that genetic factors explained most of the variance in vertical jump (62%; 95% CI 47-77%, N = 874), handgrip strength (63%; 95% CI 47-73%, N = 4516) and flexibility (50%; 95% CI 38-61%, N = 1130) in children and young adults. For balance this was 35% (95% CI 19-51%, N = 978). Finally, multivariate modeling showed that the phenotypic correlations between the phenotypes in current study (0.07 < r < 0.27) were mostly driven by genetic factors. It is concluded that genetic factors contribute significantly to the variance in muscle strength, flexibility and balance; factors that may play a key role in the individual differences in adolescent exercise ability and sports performance.
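The study pools estimates weighted by sample size; a closely related inverse-variance weighting, with each study's standard error backed out of its reported 95% CI, can be sketched as follows (the two input studies are hypothetical):

```python
import numpy as np

def pooled_h2(h2, ci_low, ci_high):
    """Weighted-average heritability across studies. Weights are inverse
    variances, with each study's SE recovered from its 95% CI width."""
    h2 = np.asarray(h2, dtype=float)
    se = (np.asarray(ci_high, dtype=float)
          - np.asarray(ci_low, dtype=float)) / (2.0 * 1.96)
    w = 1.0 / se ** 2
    mean = np.sum(w * h2) / np.sum(w)
    se_mean = np.sqrt(1.0 / np.sum(w))
    return mean, (mean - 1.96 * se_mean, mean + 1.96 * se_mean)

# Two hypothetical twin studies of handgrip strength heritability
mean_h2, ci = pooled_h2([0.5, 0.7], [0.3, 0.7], [0.7, 0.9])
```

The more precise study (narrower CI) dominates the pooled estimate, which is the intended behaviour of any meta-analytic weighting scheme.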
Beyond one-size-fits-all: Tailoring diversity approaches to the representation of social groups.
Apfelbaum, Evan P; Stephens, Nicole M; Reagans, Ray E
2016-10-01
When and why do organizational diversity approaches that highlight the importance of social group differences (vs. equality) help stigmatized groups succeed? We theorize that social group members' numerical representation in an organization, compared with the majority group, influences concerns about their distinctiveness, and consequently, whether diversity approaches are effective. We combine laboratory and field methods to evaluate this theory in a professional setting, in which White women are moderately represented and Black individuals are represented in very small numbers. We expect that focusing on differences (vs. equality) will lead to greater performance and persistence among White women, yet less among Black individuals. First, we demonstrate that Black individuals report greater representation-based concerns than White women (Study 1). Next, we observe that tailoring diversity approaches to these concerns yields greater performance and persistence (Studies 2 and 3). We then manipulate social groups' perceived representation and find that highlighting differences (vs. equality) is more effective when groups' representation is moderate, but less effective when groups' representation is very low (Study 4). Finally, we content-code the diversity statements of 151 major U.S. law firms and find that firms that emphasize differences have lower attrition rates among White women, whereas firms that emphasize equality have lower attrition rates among racial minorities (Study 5).
Generalized curve fit and plotting (GECAP) program
NASA Technical Reports Server (NTRS)
Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.; Schlagheck, R. A.
1974-01-01
The program generates graphs on 8 1/2 by 11 inch paper and is designed to be used by engineers and scientists who are not necessarily professional programmers. It provides a fast and efficient method for displaying plotted data without having to generate additional FORTRAN instructions.
Fitting Learning Curves with Orthogonal Polynomials
1986-12-01
Technology Systems Technical Area, Training Research Laboratory, U.S. Army Research Institute for the Behavioral and Social Sciences, 5001 Eisenhower Avenue, Alexandria, VA 22333-5600. December 1986. Approved for public release; distribution unlimited.
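The title's technique, representing a learning curve in an orthogonal polynomial basis, can be sketched with NumPy's Legendre module. The synthetic power-law curve and the polynomial degree are illustrative choices, not the report's data or method.

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic learning curve: response time falls off with practice trials
trials = np.arange(1.0, 41.0)
rt = 2.0 * trials ** -0.3

# Map trial numbers onto [-1, 1], the natural domain of Legendre polynomials
x = 2.0 * (trials - trials.min()) / (trials.max() - trials.min()) - 1.0

coef = legendre.legfit(x, rt, deg=3)   # orthogonal-polynomial coefficients
fit = legendre.legval(x, coef)
```

Because the basis is orthogonal, each coefficient summarizes an independent aspect of the curve's shape (level, linear trend, curvature, and so on), which is what makes such fits convenient for comparing learning across subjects.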
Ecosystems Biology Approaches To Determine Key Fitness Traits of Soil Microorganisms
NASA Astrophysics Data System (ADS)
Brodie, E.; Zhalnina, K.; Karaoz, U.; Cho, H.; Nuccio, E. E.; Shi, S.; Lipton, M. S.; Zhou, J.; Pett-Ridge, J.; Northen, T.; Firestone, M.
2014-12-01
Theoretical approaches such as trait-based modeling represent powerful tools to explain, and perhaps predict, complex patterns in microbial distribution and function across environmental gradients in space and time. These models are mostly deterministic and, where available, are built upon a detailed understanding of microbial physiology and response to environmental factors. However, as most soil microorganisms have not been cultivated, our understanding of the majority is limited to insights from environmental 'omic information. Information gleaned from 'omic studies of complex systems should be regarded as providing hypotheses, and these hypotheses should be tested under controlled laboratory conditions if they are to be propagated into deterministic models. In a semi-arid Mediterranean grassland system we are attempting to dissect microbial communities into functional guilds with defined physiological traits and are using a range of 'omics approaches to characterize their metabolic potential and niche preference. Initially, two physiologically relevant time points (peak plant activity and prior to wet-up) were sampled and metagenomes were sequenced deeply (600-900 Gbp). Following assembly, differential coverage and nucleotide frequency binning were carried out to yield draft genomes. In addition, using a range of cultivation media we have isolated a broad range of bacteria representing abundant genotypes and, with genome sequences of almost 40 isolates, are testing genomic predictions regarding growth rate, temperature and substrate utilization in vitro. This presentation will discuss the opportunities and challenges in parameterizing microbial functional guilds from environmental 'omic information for use in trait-based models.
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
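The first stage of such two-stage fitting, turning an observed time series into derivative estimates, can be sketched in the spirit of GLLA: fit a local polynomial over each time-delay embedded window, and read the derivatives off the fitted coefficients. The window length and polynomial order below are illustrative.

```python
import math
import numpy as np

def glla_derivatives(x, dt, embed=7, order=2):
    """Local linear approximation of derivatives: least-squares polynomial
    fit over each sliding window of `embed` samples; fitted coefficient k
    estimates the k-th derivative at the window centre."""
    offsets = (np.arange(embed) - (embed - 1) / 2.0) * dt
    # Design matrix column k is offsets^k / k!, so coefficients are derivatives
    L = np.column_stack([offsets ** k / math.factorial(k)
                         for k in range(order + 1)])
    W = np.linalg.pinv(L)   # (order+1) x embed weight matrix
    windows = np.lib.stride_tricks.sliding_window_view(x, embed)
    return windows @ W.T    # row i holds [x, x', x''] at window centre i

t = np.arange(0.0, 6.0, 0.01)
est = glla_derivatives(np.sin(t), dt=0.01)
centres = t[3:-3]           # window centres for embed = 7
```

In the second stage, these estimated derivative columns become the outcome and predictor variables of the ODE model, so the quality of the downstream mixed effects fit hinges on the embedding dimension chosen here, exactly the trade-off the study investigates.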
On the convexity of ROC curves estimated from radiological test results
Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.
2010-01-01
Rationale and Objectives: Although an ideal observer's receiver operating characteristic (ROC) curve must be convex (i.e., its slope must decrease monotonically), published fits to empirical data often display "hooks." Such fits are sometimes accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex.
Materials and Methods: This paper views non-convex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks.
Results: We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore are usually untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone, and how some hooked ROCs found in the literature can easily be explained as fitting artifacts or modeling issues.
Conclusion: In general, ROC curve fits that show hooks should be regarded with suspicion unless other arguments justify their presence. PMID:20599155
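The convexity condition, slope decreasing monotonically as the false positive fraction increases, can be checked directly on a set of empirical operating points. The two curves below are invented examples: one proper, one with the kind of "hook" the paper warns about.

```python
import numpy as np

def is_convex_roc(fpf, tpf):
    """A proper ROC curve is convex: the slope between successive operating
    points must decrease monotonically as FPF increases."""
    order = np.argsort(fpf)
    x = np.asarray(fpf, dtype=float)[order]
    y = np.asarray(tpf, dtype=float)[order]
    slopes = np.diff(y) / np.diff(x)
    return bool(np.all(np.diff(slopes) <= 1e-12))

# A proper curve: segment slopes fall monotonically from left to right
proper = ([0.0, 0.1, 0.3, 0.6, 1.0], [0.0, 0.45, 0.70, 0.87, 1.0])
# A "hooked" curve: the slope increases between the first two segments
hooked = ([0.0, 0.1, 0.3, 0.6, 1.0], [0.0, 0.20, 0.70, 0.87, 1.0])
```

Operating points where the slope increases can always be dominated by randomizing between their neighbours, which is the paper's argument that hooks imply an irrational decision strategy.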
Zhou, Zhixiong; Ren, Hong; Yin, Zenong; Wang, Lihong; Wang, Kaizhen
2014-05-05
The prevalence of obesity increased while certain measures of physical fitness deteriorated in preschool children in China over the past decade. This study tested the effectiveness of a multifaceted intervention that integrated childcare centers, families, and the community to promote healthy growth and physical fitness in preschool Chinese children. This 12-month study was conducted using a quasi-experimental pretest/posttest design with a comparison group. The participants were 357 children (mean age = 4.5 years) enrolled in three grade levels in two childcare centers in Beijing, China. The intervention included: 1) childcare center intervention (physical activity policy changes, teacher training, physical education curriculum and food services training), 2) family intervention (parent education, internet website for support, and family events), and 3) community intervention (playground renovation and community health promotion events). The study outcome measures included body composition (percent body fat, fat mass, and muscle mass), Body Mass Index (BMI) and BMI z-score, and physical fitness scores in the 20-meter agility run (20M-AR), broad jump for distance (BJ), timed 10-jumps, tennis ball throwing (TBT), sit and reach (SR), balance beam walk (BBW), 20-meter crawl (20M-C), and 30-meter sprint (30M-S) from a norm-referenced test. Measures of process evaluation included monitoring of children's physical activity (activity time and intensity) and food preparation records, and fidelity of intervention protocol implementation. Children in the intervention center significantly lowered their body fat percent (-1.2%, p < 0.0001), fat mass (-0.55 kg, p < 0.0001), and body weight (0.36 kg, p < 0.02) and increased muscle mass (0.48 kg, p < 0.0001), compared to children in the control center. They also improved all measures of physical fitness except timed 10-jumps (20M-AR: -0.74 seconds, p < 0.0001; BJ: 8.09 cm, p < 0.0001; TBT: 0.52 meters, p < 0.006; SR: 0.88 cm, p < 0.03; BBW: -2
Three-dimensional modeling of the cochlea by use of an arc fitting approach.
Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S
2016-12-01
A cochlea modeling approach is presented that allows for a user-defined degree of geometry simplification which automatically adjusts to the patient-specific anatomy. Model generation can be performed in a straightforward manner owing to error estimation prior to the actual generation, thus minimizing modeling time. The presented technique is therefore well suited for a wide range of applications, including finite element analyses, where geometrical simplifications are often inevitable. The method is demonstrated for n=5 cochleae, which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas, while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Owing to the volume generation based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time-efficient manner. The proposed simplification method was proven to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time-efficient way to create cochlea models optimally suited for the desired task.
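Reconstructing contours from arc segments rests on fitting circles to sampled points. One simple way to do that is the standard algebraic (Kasa) least-squares circle fit, shown here on an invented arc; this is an illustration of the general technique, not necessarily the authors' fitting procedure.

```python
import numpy as np

def fit_arc(points):
    """Algebraic (Kasa) circle fit: x^2 + y^2 = 2ax + 2by + c is linear in
    (a, b, c), giving centre (a, b) and radius sqrt(c + a^2 + b^2)."""
    p = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * p[:, 0], 2.0 * p[:, 1], np.ones(len(p))])
    rhs = (p ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

# Noise-free samples of an arc segment: centre (1, 2), radius 3
angles = np.linspace(0.2, 1.4, 8)
arc = np.column_stack([1.0 + 3.0 * np.cos(angles),
                       2.0 + 3.0 * np.sin(angles)])
centre, radius = fit_arc(arc)
```

Because the fit is linear, it is fast enough to run per cross section, and the residual of the fit provides exactly the kind of local error estimate the approach evaluates before generating the full 3D model.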
Gregg, Evan O; Minet, Emmanuel; McEwan, Michael
2013-09-01
There are established guidelines for bioanalytical assay validation and qualification of biomarkers. In this review, they were applied to a panel of urinary biomarkers of tobacco smoke exposure as part of a "fit for purpose" approach to the assessment of smoke constituent exposure in groups of tobacco product smokers. Clinical studies have allowed the identification of a group of tobacco exposure biomarkers demonstrating a good dose-response relationship, whilst others, such as dihydroxybutyl mercapturic acid and 2-carboxy-1-methylethylmercapturic acid, did not reproducibly discriminate between smokers and non-smokers. Furthermore, there are currently no agreed common reference standards to measure absolute concentrations, and few inter-laboratory trials have been performed to establish consensus values for interim standards. Thus, we also discuss in this review additional requirements for the generation of robust data on urinary biomarkers, including toxicant metabolism and disposition, method validation, and qualification for use in tobacco product comparison studies.
Gregg, Evan O.; Minet, Emmanuel
2013-01-01
There are established guidelines for bioanalytical assay validation and qualification of biomarkers. In this review, they were applied to a panel of urinary biomarkers of tobacco smoke exposure as part of a “fit for purpose” approach to the assessment of smoke constituent exposure in groups of tobacco product smokers. Clinical studies have allowed the identification of a group of tobacco exposure biomarkers demonstrating a good dose-response relationship, whilst others, such as dihydroxybutyl mercapturic acid and 2-carboxy-1-methylethylmercapturic acid, did not reproducibly discriminate between smokers and non-smokers. Furthermore, there are currently no agreed common reference standards to measure absolute concentrations, and few inter-laboratory trials have been performed to establish consensus values for interim standards. Thus, we also discuss in this review additional requirements for the generation of robust data on urinary biomarkers, including toxicant metabolism and disposition, method validation, and qualification for use in tobacco product comparison studies. PMID:23902266
NASA Astrophysics Data System (ADS)
Nigmatullin, R.; Rakhmatullin, R.
2014-12-01
yields the description of the identified QP process. To suggest a computing algorithm for fitting the QP data to the analytical function that follows from the solution of the corresponding functional equation. The content of this paper is organized as follows. In Section 2 we try to find the answers to the problems posed in this introductory section. It also contains the mathematical description of the QP process and an interpretation of the meaning of the generalized Prony's spectrum (GPS). The GPS includes the conventional Fourier decomposition as a special case. Section 3 contains the experimental details associated with the acquisition of the desired data. Section 4 includes some important details explaining specific features of the application of the general algorithm to concrete data. In Section 5 we summarize the results and outline the perspectives of this approach for the quantitative description of time-dependent random data recorded in different complex systems and experimental devices. Here we should note that by a complex system we mean a system for which a conventional model is absent [6]. By simplicity of the acceptable model we mean the proper hypothesis (the "best fit" model) containing the minimal number of fitting parameters that quantitatively describes the behavior of the system considered. The different approaches that currently exist for the description of these systems are collected in a recent review [7].
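The generalized Prony spectrum itself is beyond a short sketch, but its core idea, decomposing a signal into a sum of (damped) exponentials by linear prediction followed by root finding, can be illustrated with classic Prony's method. This is our own illustrative code, not the authors' algorithm:

```python
import numpy as np

def prony(x, M):
    """Classic Prony decomposition: x[n] ~ sum_k a_k * z_k**n with M modes."""
    N = len(x)
    # 1) Linear prediction: x[n] = sum_{m=1..M} c_m * x[n-m]
    A = np.column_stack([x[M - m:N - m] for m in range(1, M + 1)])
    c = np.linalg.lstsq(A, x[M:], rcond=None)[0]
    # 2) The modes z_k are the roots of z^M - c_1 z^(M-1) - ... - c_M
    z = np.roots(np.concatenate(([1.0], -c)))
    # 3) Amplitudes a_k from a Vandermonde least-squares problem
    V = z[None, :] ** np.arange(N)[:, None]
    a = np.linalg.lstsq(V, x, rcond=None)[0]
    return a, z

n = np.arange(30)
sig = 2.0 * 0.9**n + 1.0 * 0.5**n   # two decaying modes
a, z = prony(sig, M=2)
```

The Fourier decomposition appears as the special case where all modes lie on the unit circle.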
Morawe, Ch; Guigay, J-P; Mocella, V; Ferrero, C
2008-09-29
Aberration effects are studied in parabolic and elliptic multilayer mirrors for hard x-rays, based on a simple analytical approach. The interpretation of the underlying equations provides insight into fundamental limitations of the focusing properties of curved multilayers. Using realistic values for the multilayer parameters, the potential impact on the broadening of the focal spot is evaluated. Within the limits of this model, systematic contributions to the spot size can be described. The work is complemented by a comparison with experimental results obtained with a W/B4C curved multilayer mirror.
2014-01-01
Background The prevalence of obesity increased while certain measures of physical fitness deteriorated in preschool children in China over the past decade. This study tested the effectiveness of a multifaceted intervention that integrated childcare center, families, and community to promote healthy growth and physical fitness in preschool Chinese children. Methods This 12-month study was conducted using a quasi-experimental pretest/posttest design with comparison group. The participants were 357 children (mean age = 4.5 years) enrolled in three grade levels in two childcare centers in Beijing, China. The intervention included: 1) childcare center intervention (physical activity policy changes, teacher training, physical education curriculum and food services training), 2) family intervention (parent education, internet website for support, and family events), and 3) community intervention (playground renovation and community health promotion events). The study outcome measures included body composition (percent body fat, fat mass, and muscle mass), Body Mass Index (BMI) and BMI z-score and physical fitness scores in 20-meter agility run (20M-AR), broad jump for distance (BJ), timed 10-jumps, tennis ball throwing (TBT), sit and reach (SR), balance beam walk (BBW), 20-meter crawl (20M-C), and 30-meter sprint (30M-S) from a norm-referenced test. Measures of process evaluation included monitoring of children’s physical activity (activity time and intensity) and food preparation records, and fidelity of intervention protocol implementation. Results Children in the intervention center significantly lowered their body fat percent (−1.2%, p < 0.0001), fat mass (−0.55 kg, p < 0.0001), and body weight (0.36 kg, p < 0.02) and increased muscle mass (0.48 kg, p < 0.0001), compared to children in the control center. They also improved all measures of physical fitness except timed 10-jumps (20M-AR: −0.74 seconds, p < 0.0001; BJ: 8.09 cm, p < 0.0001; TBT: 0
2010-01-01
Background Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and by an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. Methods First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or no treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. Results We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed it in terms of probability of disease. Conclusions We present a novel method for eliciting decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more
Tsalatsanis, Athanasios; Hozo, Iztok; Vickers, Andrew; Djulbegovic, Benjamin
2010-09-16
Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and by an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or no treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed it in terms of probability of disease. We present a novel method for eliciting decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to a decision-maker, particularly
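The net benefit that the authors show to be equivalent to their Net Expected Regret Difference is standard DCA arithmetic: benefits of true positives traded against harms of unnecessary treatment, weighted by the threshold odds. A sketch with variable names of our own choosing:

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Net benefit of the strategy 'treat if predicted risk >= pt'.

    NB = TP/N - (FP/N) * pt/(1 - pt), where pt is the threshold
    probability expressing the decision maker's preferences.
    """
    y_true = np.asarray(y_true)
    treat = np.asarray(y_prob) >= pt
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

y = np.array([1, 1, 0, 0])
p = np.array([0.9, 0.8, 0.2, 0.1])
nb_model = net_benefit(y, p, 0.5)          # model-based strategy
nb_all = net_benefit(y, np.ones(4), 0.5)   # treat-all strategy
```

Plotting net benefit against pt for the model, treat-all, and treat-none strategies gives the decision curve being reformulated here.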
Kohli, Nidhi; Hughes, John; Wang, Chun; Zopluoglu, Cengiz; Davison, Mark L
2015-06-01
A linear-linear piecewise growth mixture model (PGMM) is appropriate for analyzing segmented (disjointed) change in individual behavior over time, where the data come from a mixture of 2 or more latent classes, and the underlying growth trajectories in the different segments of the developmental process within each latent class are linear. A PGMM allows the knot (change point), the time of transition from 1 phase (segment) to another, to be estimated (when it is not known a priori) along with the other model parameters. To assist researchers in deciding which estimation method is most advantageous for analyzing this kind of mixture data, the current research compares 2 popular approaches to inference for PGMMs: maximum likelihood (ML) via an expectation-maximization (EM) algorithm, and Markov chain Monte Carlo (MCMC) for Bayesian inference. Monte Carlo simulations were carried out to investigate and compare the ability of the 2 approaches to recover the true parameters in linear-linear PGMMs with unknown knots. The results show that MCMC for Bayesian inference outperformed ML via EM in nearly every simulation scenario. Real data examples are also presented, and the corresponding computer codes for model fitting are provided in the Appendix to aid practitioners who wish to apply this class of models.
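Neither ML via EM nor MCMC fits in a few lines, but the underlying linear-linear model with an unknown knot can be profiled directly: for any fixed knot the model is linear, so a grid search over candidate knots with ordinary least squares inside illustrates the estimation problem. This is a single-class sketch under our own simplifications, not a mixture model:

```python
import numpy as np

def fit_piecewise(t, y, knots):
    """Profile the knot: OLS at each candidate knot, keep the best SSE.

    Model: y = b0 + b1*t + b2*max(0, t - knot), so the slope changes
    from b1 to b1 + b2 at the knot (change point).
    """
    best = None
    for k in knots:
        X = np.column_stack([np.ones_like(t), t, np.maximum(0.0, t - k)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, k, beta)
    return best[1], best[2]

t = np.linspace(0, 10, 101)
y = 1.0 + 0.5 * t + 1.5 * np.maximum(0.0, t - 5.0)  # true knot at t = 5
knot, beta = fit_piecewise(t, y, np.linspace(1, 9, 81))
```

A PGMM additionally mixes several such trajectories with latent class membership, which is what makes the EM-versus-MCMC comparison in the paper nontrivial.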
Shimizu, Satoru; Osawa, Shigeyuki; Sekiguchi, Tomoko; Mochizuki, Takahiro; Oka, Hidehiro; Kumabe, Toshihiro
2015-12-01
Objectives The one-piece supraorbital approach is a rational approach for the removal of orbital tumors. However, cutting the roof through the orbit is often difficult. We modified the technique to facilitate the osteotomy and improve the cosmetic effect. Design Three burr holes are made: the first, the MacCarty keyhole (burr hole 1), exposes the anterior cranial fossa and orbit; the second is placed above the supraorbital nerve (burr hole 2); and the third on the superior temporal line. Through burr hole 2, a small hole is created on the roof, 10 mm in depth. Next the roof is rongeured through burr hole 1 toward the preexisting small hole. Seamless osteotomies using a diamond-coated threadwire saw and the preexisting four holes are performed. Lastly the flap is removed. On closure, sutures are passed through holes in the cuts made with the threadwire saw, and tied. Results We applied our technique to address orbital tumors in two adult patients. The osteotomies in the roof were easy, and most parts of the roof were repositioned. Conclusions Our modification results in orbital osteotomies with greater preservation of the roof. Because the self-fitting flap does not require the use of fixation devices, the reconstruction is cosmetically satisfactory.
Shimizu, Satoru; Osawa, Shigeyuki; Sekiguchi, Tomoko; Mochizuki, Takahiro; Oka, Hidehiro; Kumabe, Toshihiro
2015-01-01
Objectives The one-piece supraorbital approach is a rational approach for the removal of orbital tumors. However, cutting the roof through the orbit is often difficult. We modified the technique to facilitate the osteotomy and improve the cosmetic effect. Design Three burr holes are made: the first, the MacCarty keyhole (burr hole 1), exposes the anterior cranial fossa and orbit; the second is placed above the supraorbital nerve (burr hole 2); and the third on the superior temporal line. Through burr hole 2, a small hole is created on the roof, 10 mm in depth. Next the roof is rongeured through burr hole 1 toward the preexisting small hole. Seamless osteotomies using a diamond-coated threadwire saw and the preexisting four holes are performed. Lastly the flap is removed. On closure, sutures are passed through holes in the cuts made with the threadwire saw, and tied. Results We applied our technique to address orbital tumors in two adult patients. The osteotomies in the roof were easy, and most parts of the roof were repositioned. Conclusions Our modification results in orbital osteotomies with greater preservation of the roof. Because the self-fitting flap does not require the use of fixation devices, the reconstruction is cosmetically satisfactory. PMID:26682124
Sinus floor elevation with a crestal approach using a press-fit bone block: a case series.
Isidori, M; Genty, C; David-Tchouda, S; Fortin, T
2015-09-01
This prospective study aimed to provide detailed clinical information on a sinus augmentation procedure, i.e., transcrestal sinus floor elevation with a bone block using the press-fit technique. A bone block is harvested with a trephine burr to obtain a cylinder. This block is inserted into the antrum via a crestal approach after creation of a circular crestal window. Thirty-three patients were treated with a fixed prosthesis supported by implants placed on 70 cylindrical bone blocks. The mean bone augmentation was 6.08±2.87 mm, ranging from 0 to 12.7 mm. Only one graft failed before implant placement. During surgery and the subsequent observation period, no complications were recorded, one implant was lost, and no infection or inflammation was observed. This proof-of-concept study suggests that the use of a bone block inserted into the sinus cavity via a crestal approach can be an alternative to the sinus lift procedure with the creation of a lateral window. It reduces the duration of surgery, cost of treatment, and overall discomfort.
Iozzi, Fabrizio; Trusiano, Francesco; Chinazzi, Matteo; Billari, Francesco C.; Zagheni, Emilio; Merler, Stefano; Ajelli, Marco; Del Fava, Emanuele; Manfredi, Piero
2010-01-01
Knowledge of social contact patterns still represents the most critical step for understanding the spread of directly transmitted infections. Data on social contact patterns are, however, expensive to obtain. A major issue is then whether the simulation of synthetic societies might be helpful to reliably reconstruct such data. In this paper, we compute a variety of synthetic age-specific contact matrices through simulation of a simple individual-based model (IBM). The model is informed by Italian Time Use data and routine socio-demographic data (e.g., school and workplace attendance, household structure, etc.). The model is named “Little Italy” because each artificial agent is a clone of a real person. In other words, each agent's daily diary is the one observed in a corresponding real individual sampled in the Italian Time Use Survey. We also generated contact matrices from the socio-demographic model underlying the Italian IBM for pandemic prediction. These synthetic matrices are then validated against recently collected Italian serological data for Varicella (VZV) and ParvoVirus (B19). Their performance in fitting sero-profiles is compared with other matrices available for Italy, such as the Polymod matrix. Synthetic matrices show the same qualitative features of the ones estimated from sample surveys: for example, strong assortativeness and the presence of super- and sub-diagonal stripes related to contacts between parents and children. Once validated against serological data, Little Italy matrices fit worse than the Polymod one for VZV, but better than concurrent matrices for B19. This is the first occasion where synthetic contact matrices are systematically compared with real ones, and validated against epidemiological data. The results suggest that simple, carefully designed, synthetic matrices can provide a fruitful complementary approach to questionnaire-based matrices. The paper also supports the idea that, depending on the transmissibility level of
Iozzi, Fabrizio; Trusiano, Francesco; Chinazzi, Matteo; Billari, Francesco C; Zagheni, Emilio; Merler, Stefano; Ajelli, Marco; Del Fava, Emanuele; Manfredi, Piero
2010-12-02
Knowledge of social contact patterns still represents the most critical step for understanding the spread of directly transmitted infections. Data on social contact patterns are, however, expensive to obtain. A major issue is then whether the simulation of synthetic societies might be helpful to reliably reconstruct such data. In this paper, we compute a variety of synthetic age-specific contact matrices through simulation of a simple individual-based model (IBM). The model is informed by Italian Time Use data and routine socio-demographic data (e.g., school and workplace attendance, household structure, etc.). The model is named "Little Italy" because each artificial agent is a clone of a real person. In other words, each agent's daily diary is the one observed in a corresponding real individual sampled in the Italian Time Use Survey. We also generated contact matrices from the socio-demographic model underlying the Italian IBM for pandemic prediction. These synthetic matrices are then validated against recently collected Italian serological data for Varicella (VZV) and ParvoVirus (B19). Their performance in fitting sero-profiles is compared with other matrices available for Italy, such as the Polymod matrix. Synthetic matrices show the same qualitative features of the ones estimated from sample surveys: for example, strong assortativeness and the presence of super- and sub-diagonal stripes related to contacts between parents and children. Once validated against serological data, Little Italy matrices fit worse than the Polymod one for VZV, but better than concurrent matrices for B19. This is the first occasion where synthetic contact matrices are systematically compared with real ones, and validated against epidemiological data. The results suggest that simple, carefully designed, synthetic matrices can provide a fruitful complementary approach to questionnaire-based matrices. The paper also supports the idea that, depending on the transmissibility level of the
Ying Ouyang; Prem B. Parajuli; Daniel A. Marion
2013-01-01
Pollution of surface water with harmful chemicals and eutrophication of rivers and lakes with excess nutrients are serious environmental concerns. This study estimated surface water quality in a stream within the Yazoo River Basin (YRB), Mississippi, USA, using the duration curve and recurrence interval analysis techniques. Data from the US Geological Survey (USGS)...
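A flow duration curve of the kind used in such analyses ranks observed flows against the percentage of time each flow is equalled or exceeded. A minimal sketch, using the Weibull plotting position as one common convention (not necessarily the one adopted in the study):

```python
import numpy as np

def flow_duration_curve(flows):
    """Return (exceedance %, flow) pairs for a flow duration curve.

    Exceedance probability uses the Weibull plotting position
    P = 100 * rank / (n + 1), with flows ranked in descending order.
    """
    q = np.sort(np.asarray(flows, dtype=float))[::-1]   # descending
    ranks = np.arange(1, len(q) + 1)
    exceedance = 100.0 * ranks / (len(q) + 1)
    return exceedance, q

exc, q = flow_duration_curve([10.0, 40.0, 20.0, 30.0])
```

The same construction applied to concentration data, weighted by flow, yields the load duration curves used to flag exceedances of water quality criteria across flow regimes.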
ERIC Educational Resources Information Center
Wimmers, Paul F.; Lee, Ming
2015-01-01
To determine the direction and extent to which medical student scores (as observed by small-group tutors) on four problem-based-learning-related domains change over nine consecutive blocks during a two-year period (Domains: Problem Solving/Use of Information/Group Process/Professionalism). Latent growth curve modeling is used to analyze…
ERIC Educational Resources Information Center
Wimmers, Paul F.; Lee, Ming
2015-01-01
To determine the direction and extent to which medical student scores (as observed by small-group tutors) on four problem-based-learning-related domains change over nine consecutive blocks during a two-year period (Domains: Problem Solving/Use of Information/Group Process/Professionalism). Latent growth curve modeling is used to analyze…
Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad
2013-01-14
Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the early important months of life. The objective of this study is to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology. The results are compared with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. A total of 70737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004 for 20 days from those attending community clinics for routine health checks as a part of a national survey. Anthropometric measurements were done by trained health staff using WHO methodology. The nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used for the estimation of curves and normal values. The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Among all age groups from 0 to 5 years, the median weight values of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. The differences between the growth patterns of children living in the northeast of Iran and international ones necessitate using local and regional growth charts. International normal values may not properly recognize the populations at risk for growth problems among Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for the estimation of reference curves and normal values.
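The local constant kernel estimator of a conditional quantile amounts to a kernel-weighted sample quantile computed at each age. A compact sketch under our own simplifications (Gaussian kernel, fixed bandwidth, synthetic data standing in for the survey):

```python
import numpy as np

def local_quantile(x, y, x0, tau, h):
    """Kernel-weighted tau-quantile of y near x0 (local constant estimator)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    order = np.argsort(y)
    cum = np.cumsum(w[order]) / np.sum(w)
    # Smallest y whose cumulative weight reaches tau
    return y[order][np.searchsorted(cum, tau)]

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 2001)                        # ages (years)
y = 3.0 + 2.0 * x + rng.normal(0, 0.1, x.size)     # weight-like trend
med = local_quantile(x, y, x0=2.5, tau=0.5, h=0.3)  # near 3 + 2*2.5 = 8
```

Sweeping x0 over a grid of ages and tau over the centile set (3rd, 15th, 50th, 85th, 97th, say) traces out the reference curves; bandwidth selection is the step this sketch omits.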
Probing exoplanet clouds with optical phase curves.
Muñoz, Antonio García; Isaak, Kate G
2015-11-03
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve, that is, from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique case study. Here we investigate the information on the coverage and optical properties of the planet's clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4-0.5.
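For reflected starlight, the disk-integrated phase dependence of a Lambert sphere is the standard baseline against which such cloud models are compared. A sketch of the planet-to-star flux ratio under that assumption (this is the textbook Lambert phase function, not the paper's multiple-scattering model; the numbers are illustrative only):

```python
import numpy as np

def lambert_phase(alpha):
    """Lambert-sphere phase function, normalized to 1 at full phase (alpha=0)."""
    return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

def flux_ratio(ag, rp_over_a, alpha):
    """Planet/star flux ratio: geometric albedo * (Rp/a)^2 * phase function."""
    return ag * rp_over_a**2 * lambert_phase(alpha)

alpha = np.linspace(0, np.pi, 181)        # phase angle grid
# Illustrative close-in giant planet values: Ag ~ 0.35, Rp/a ~ 0.013
curve = flux_ratio(0.35, 0.013, alpha)
```

Deviations of a measured phase curve from this symmetric Lambertian shape (e.g. a shifted brightness peak) are precisely what carry the information about cloud location and scattering anisotropy discussed above.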
NASA Astrophysics Data System (ADS)
Wassmann, A.; Borsdorff, T.; aan de Brugh, J. M. J.; Hasekamp, O. P.; Aben, I.; Landgraf, J.
2015-10-01
We present a sensitivity study of the direct fitting approach to retrieve total ozone columns from the clear sky Global Ozone Monitoring Experiment 2/MetOp-A (GOME-2/MetOp-A) measurements between 325 and 335 nm in the period 2007-2010. The direct fitting of the measurement is based on adjusting the scaling of a reference ozone profile and requires accurate simulation of GOME-2 radiances. In this context, we study the effect of three aspects that introduce forward model errors if not addressed appropriately: (1) the use of a clear sky model atmosphere in the radiative transfer, demanding cloud filtering, (2) different approximations of Earth's sphericity to address the influence of the solar zenith angle, and (3) the need for polarization in radiative transfer modeling. We conclude that cloud filtering using the operational GOME-2 FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A band) cloud product, which is part of level 1B data, and the use of pseudo-spherical scalar radiative transfer is fully sufficient for the purpose of this retrieval. A validation with ground-based measurements at 36 stations confirms this, showing a global mean bias of -0.1 % with a standard deviation (SD) of 2.7 %. The regularization effect inherent to the profile scaling approach is thoroughly characterized by the total column averaging kernel for each individual retrieval. It characterizes the effect of the particular choice of the ozone profile to be scaled by the inversion and is part of the retrieval product. Two different interpretations of the data product are possible: first, regarding the retrieval product as an estimate of the true column, a direct comparison of the retrieved column with total ozone columns from ground-based measurements can be done. This requires accurate a priori knowledge of the reference ozone profile and the column averaging kernel is not needed. Alternatively, the retrieval product can be interpreted as an effective column defined by the total column
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration
2013-10-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.
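The escalation strategy described above is easy to prototype. In this sketch, scipy's Levenberg-Marquardt curve_fit stands in for the fast local NCG stage (an assumption of ours, not the authors' choice), and differential evolution is the globally convergent fallback:

```python
import numpy as np
from scipy.optimize import curve_fit, differential_evolution

def gauss(x, amp, mu, sigma, off):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + off

def fit_profile(x, y):
    """Try a fast local fit first; escalate to a global method if it fails."""
    p0 = [y.max() - y.min(), x[np.argmax(y)], 0.1 * (x[-1] - x[0]), y.min()]
    try:
        popt, _ = curve_fit(gauss, x, y, p0=p0, maxfev=2000)
        return popt
    except RuntimeError:
        # Globally convergent fallback: minimize chi-square over bounds
        chi2 = lambda p: float(np.sum((gauss(x, *p) - y) ** 2))
        span = x[-1] - x[0]
        bounds = [(0.0, 2.0 * np.ptp(y)), (x[0], x[-1]),
                  (1e-3 * span, span), (y.min() - 1.0, y.max())]
        return differential_evolution(chi2, bounds, seed=0).x

x = np.linspace(-5.0, 5.0, 200)
y = gauss(x, 2.0, 1.0, 0.5, 0.1)
popt = fit_profile(x, y)
```

The data-derived initial guess (peak position, amplitude range, a fraction of the scan span for sigma) is what lets the local stage succeed on clean scans, so the expensive global stage runs only on pathological signals.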
NASA Astrophysics Data System (ADS)
Guinan, Edward F.; Hollon, N.; Prsa, A.; Devinney, E.
2007-12-01
Advances in observing technology will greatly increase discovery rates of eclipsing binaries (EBs). For example, missions such as LSST and GAIA are expected to yield hundreds of thousands (even millions) of new EBs. Current personal interactive (and time consuming) methods of determining the orbital and physical parameters of EBs from their light curves will be totally inadequate to keep up with the overwhelming flood of new data. At present the currently used methods require significant technical skill, and even experienced light curve solvers take 2-3 weeks to model a single binary. We are therefore developing an Artificial Intelligence / Neural Network system with the hope of creating a fully automated, high throughput process for gleaning the orbital and physical properties of EB-systems from the observations of tens of thousands of eclipsing binaries at a time. This project is called EBAI - Studying Eclipsing Binaries with Artificial Intelligence (See: http://www.eclipsingbinaries.org). A preliminary test of the neural network's performance has been conducted, using as input the normalized Johnson V-filter flux curves for five detached EBs: KP Aql, AY Cam, WX Cep, DI Her, and BP Vul. These systems have well determined properties from previous detailed photometric and radial velocity analyses. The neural network system has met with promising success in analyzing these systems. The results of this test and additional tests on larger samples of stars will be presented and discussed. This research is supported by NSF/RUI Grant No. AST-05-07542 which we gratefully acknowledge.
From principal curves to granular principal curves.
Zhang, Hongyun; Pedrycz, Witold; Miao, Duoqian; Wei, Zhihua
2014-06-01
Principal curves, arising as an essential construct in dimensionality reduction and data analysis, have recently attracted much attention from both theoretical and practical perspectives. In many real-world situations, however, the efficiency of existing principal curves algorithms is often arguable, in particular when dealing with massive data, owing to the associated high computational complexity. A certain drawback of these constructs stems from the fact that in several applications principal curves cannot fully capture some essential problem-oriented facets of the data dealing with width, aspect ratio, width change, etc. Information granulation is a powerful tool supporting the processing and interpretation of massive data. In this paper, invoking the underlying ideas of information granulation, we propose a granular principal curves approach, regarded as an extension of principal curves algorithms, to improve efficiency and achieve a sound accuracy-efficiency tradeoff. First, large amounts of numerical data are granulated into C intervals (information granules) developed with the use of fuzzy C-means clustering and two criteria of information granulation, which significantly reduces the amount of data to be processed at the later phase of the overall design. Granular principal curves are then constructed by determining the upper and lower bounds of the interval data. Finally, we develop an objective function using the criteria of information confidence and specificity to evaluate the granular output formed by the principal curves. We also optimize the granular principal curves by adjusting the level of information granularity (the number of clusters), which is realized with the aid of particle swarm optimization. A number of numeric studies completed for synthetic and real-world datasets provide a useful quantifiable insight into the effectiveness of the proposed algorithm.
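The granulation step above (numerical data compressed into C information granules via fuzzy C-means) can be sketched in one dimension. This is a simplified reconstruction under our own assumptions: random membership initialization, a fixed iteration count, and granules formed as the min/max interval of each cluster's max-membership points, rather than the paper's two-criteria granulation.

```python
import numpy as np

def fuzzy_cmeans(x, c, m=2.0, iters=100, seed=0):
    """Minimal 1-D fuzzy C-means: returns cluster centers and the
    c-by-n membership matrix U (columns sum to 1)."""
    rng = np.random.default_rng(seed)
    n = x.size
    u = rng.random((c, n))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # fuzzily weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))        # standard FCM membership update
        u = inv / inv.sum(axis=0)
    return centers, u

def granulate(x, c):
    """Replace n points by c interval granules: each point joins its
    max-membership cluster; the granule is that cluster's [min, max]."""
    centers, u = fuzzy_cmeans(x, c)
    labels = u.argmax(axis=0)
    return [(x[labels == k].min(), x[labels == k].max()) for k in range(c)]
```

Downstream, a principal-curve algorithm would then operate on the c intervals instead of the n raw points, which is where the efficiency gain described in the abstract comes from.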
NASA Astrophysics Data System (ADS)
Drilleau, M.; Beucler, É.; Mocquet, A.; Verhoeven, O.; Moebs, G.; Burgos, G.; Montagner, J.-P.; Vacher, P.
2013-11-01
Mineralogical transformations and material transfers within the Earth's mantle make the 350-1000 km depth range (referred to here as the mantle transition zone) highly heterogeneous and anisotropic. Most of the 3-D global tomographic models are anchored on small perturbations from 1-D models such as PREM, and are then interpreted in terms of temperature and composition distributions. However, the degree of heterogeneity in the transition zone can be strong enough that the concept of a 1-D reference seismic model must be questioned. To avoid the use of any seismic reference model, we present in this paper a Markov chain Monte Carlo algorithm to directly interpret surface wave dispersion curves in terms of temperature and radial anisotropy distributions, here considering a given composition of the mantle. These interpretations are based on laboratory measurements of elastic moduli and the Birch-Murnaghan equation of state. A novel feature of the algorithm is its ability to explore both smoothly varying models and first-order discontinuities, using C1-Bézier curves, which interpolate the randomly chosen values for depth, temperature and radial anisotropy. This parametrization generates a self-adapting parameter-space exploration while reducing the computing time. Thanks to a Bayesian exploration, the probability distributions on temperature and anisotropy are governed by uncertainties in the data set. The method is applied to both synthetic data and real dispersion curves. Though surface wave data are weakly sensitive to the sharpness of the mid-mantle seismic discontinuities, the interpretation of the temperature distribution is highly dependent on the chosen composition and on the modelling of mineralogical phase transformations. Surface wave measurements along the Vanuatu-California path suggest a strong anisotropy above 400 km depth, which decreases below, and a monotonic temperature distribution between 350 and 1000 km depth.
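The C1-Bézier parametrization mentioned above can be illustrated with a small evaluator for a piecewise-cubic Bézier curve through (depth, value) knots. This is only a sketch of the idea: the finite-difference (Catmull-Rom style) tangent choice is our assumption, made so that adjacent segments share knot tangents and the curve is C1.

```python
import numpy as np

def bezier_c1_eval(z, v, zq):
    """Evaluate a C1 piecewise-cubic Bezier curve through knots (z, v)
    at query points zq. Knot tangents come from finite differences, and
    the inner control points are placed to reproduce those tangents."""
    z = np.asarray(z, float)
    v = np.asarray(v, float)
    zq = np.atleast_1d(np.asarray(zq, float))
    m = np.gradient(v, z)                    # knot slopes (shared by segments)
    out = np.empty_like(zq)
    for j, zj in enumerate(zq):
        i = int(np.clip(np.searchsorted(z, zj) - 1, 0, z.size - 2))
        h = z[i + 1] - z[i]
        t = (zj - z[i]) / h
        p0, p3 = v[i], v[i + 1]
        p1 = p0 + m[i] * h / 3.0             # control points encode tangents
        p2 = p3 - m[i + 1] * h / 3.0
        out[j] = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                  + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
    return out
```

In an MCMC setting, the knots (z, v) would be the randomly proposed depth/temperature values, so adding or moving a knot reshapes the whole profile smoothly, which is what makes the parameter-space exploration self-adapting.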
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy to handle the sample-size problem is to adjust the sample size in the analysis of fit. An alternative is to adopt a random-sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample-size function works as well as the random-sample approach. In contrast, when adjustments are applied to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when the adjusted sample-size function is used. Although there are large differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
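In covariance-structure testing the fit statistic scales with sample size as T = (N - 1)·F_min, so an "adjusted sample size" chi-square amounts to a rescaling that holds the fit function F_min fixed. The sketch below shows that rescaling; it is our reading of the idea, and the specific adjustment function used in the study may differ.

```python
from scipy.stats import chi2 as chi2_dist

def adjust_chi2(t_obs, n_orig, n_adj):
    """Rescale a model-fit chi-square from sample size n_orig to n_adj,
    using T = (N - 1) * F_min with F_min held fixed."""
    return t_obs * (n_adj - 1) / (n_orig - 1)

def chi2_p_value(t, df):
    """Upper-tail p-value for the (re-scaled) fit statistic."""
    return chi2_dist.sf(t, df)
```

A random-sample approach would instead refit the model to an actual subsample of n_adj cases; the study's point is that the two agree for moderate reductions (21,000 down to about 5,000) but diverge for stronger ones.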
Jaime-Pérez, José C; Monreal-Robles, Roberto; Rodríguez-Romo, Laura N; Mancías-Guerra, Consuelo; Herrera-Garza, José Luís; Gómez-Almaguer, David
2011-11-01
The objective of the study was to evaluate the current standard practice of using volume and total nucleated cell (TNC) count for the selection of cord blood (CB) units for cryopreservation and further transplantation. Data on 794 CB units whose CD34+ cell content was determined by flow cytometry were analyzed by using a receiver operating characteristic (ROC) curve model to validate the performance of volume and TNC count for the selection of CB units for grafting purposes. The TNC count was the best parameter to identify CB units having 2 × 10(6) or more CD34+ cells, with an area under the ROC curve of 0.828 (95% confidence interval, 0.800-0.856; P < .01) and an efficiency of 75.4%. Combination of parameters (TNC/mononuclear cells [MNCs], efficiency 74.7%; TNC/volume, efficiency 68.9%; and volume/MNCs, efficiency 68.3%) did not lead to improvement in CB selection. All CB units having a TNC count of 8 × 10(8) or more had the required CD34+ cell dose for patients weighing 10 kg or less.
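The ROC analysis above (area under the curve, plus a cut-off chosen for efficiency, i.e. overall accuracy) can be reproduced with a few lines of NumPy. This is a generic sketch on synthetic scores, not the study's data; the rank-based AUC assumes continuous scores without ties.

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based AUC, equivalent to the Mann-Whitney U statistic
    (ties in the scores are ignored)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    order = np.argsort(scores)
    ranks = np.empty(scores.size)
    ranks[order] = np.arange(1, scores.size + 1)
    n_pos = labels.sum()
    n_neg = (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def best_threshold(scores, labels):
    """Cut-off maximizing the efficiency (overall accuracy) of the rule
    'score >= threshold means positive'."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    candidates = np.unique(scores)
    accuracy = [((scores >= t) == labels).mean() for t in candidates]
    i = int(np.argmax(accuracy))
    return candidates[i], accuracy[i]
```

In the study's terms, `scores` would be the TNC counts and `labels` the indicator of 2 × 10(6) or more CD34+ cells; the reported 0.828 AUC and 75.4% efficiency are exactly these two quantities.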
NASA Astrophysics Data System (ADS)
Jiménez-Forteza, Xisco; Keitel, David; Husa, Sascha; Hannam, Mark; Khan, Sebastian; Pürrer, Michael
2017-03-01
Numerical relativity is an essential tool in studying the coalescence of binary black holes (BBHs). It is still computationally prohibitive to cover the BBH parameter space exhaustively, making phenomenological fitting formulas for BBH waveforms and final-state properties important for practical applications. We describe a general hierarchical bottom-up fitting methodology to design and calibrate fits to numerical relativity simulations for the three-dimensional parameter space of quasicircular nonprecessing merging BBHs, spanned by mass ratio and by the individual spin components orthogonal to the orbital plane. Particular attention is paid to incorporating the extreme-mass-ratio limit and to the subdominant unequal-spin effects. As an illustration of the method, we provide two applications, to the final spin and final mass (or equivalently: radiated energy) of the remnant black hole. Fitting to 427 numerical relativity simulations, we obtain results broadly consistent with previously published fits, but improving in overall accuracy and particularly in the approach to extremal limits and for unequal-spin configurations. We also discuss the importance of data quality studies when combining simulations from diverse sources, how detailed error budgets will be necessary for further improvements of these already highly accurate fits, and how this first detailed study of unequal-spin effects helps in choosing the most informative parameters for future numerical relativity runs.
Probing exoplanet clouds with optical phase curves
Muñoz, Antonio García; Isaak, Kate G.
2015-01-01
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve—from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4–0.5. PMID:26489652
Lien, Laura L.; Steggell, Carmen D.; Iwarsson, Susanne
2015-01-01
Older adults prefer to age in place, necessitating a match between person and environment, or person-environment (P-E) fit. In occupational therapy practice, home modifications can support independence, but more knowledge is needed to optimize interventions targeting the housing situation of older adults. In response, this study aimed to explore the accessibility and usability of the home environment to further understand adaptive environmental behaviors. Mixed methods data were collected using objective and perceived indicators of P-E fit among 12 older adults living in community-dwelling housing. Quantitative data described objective P-E fit in terms of accessibility, while qualitative data explored perceived P-E fit in terms of usability. While accessibility problems were prevalent, participants’ perceptions of usability revealed a range of adaptive environmental behaviors employed to meet functional needs. A closer examination of the P-E interaction suggests that objective accessibility does not always stipulate perceived usability, which appears to be malleable with age, self-perception, and functional competency. Findings stress the importance of evaluating both objective and perceived indicators of P-E fit to provide housing interventions that support independence. Further exploration of adaptive processes in older age may serve to deepen our understanding of both P-E fit frameworks and theoretical models of aging well. PMID:26404352
NASA Astrophysics Data System (ADS)
Dobberschütz, Sören; Böhm, Michael
2010-02-01
The behaviour of a free fluid flow above a porous medium, the two being separated by a curved interface, is investigated. By carrying out a coordinate transformation, we obtain a description of the flow in a domain with a straight interface. Using periodic homogenisation, the effective behaviour of the transformed partial differential equations in the porous part is given by a Darcy law with a non-constant permeability matrix. The fluid behaviour at the porous-liquid interface is then obtained with the help of generalised boundary-layer functions: whereas the velocity in the normal direction is continuous across the interface, a jump appears in the tangential direction. Its magnitude appears to be related to the slope of the interface. The results therefore indicate a generalised law of Beavers and Joseph.
Bauza, María C; Ibañez, Gabriela A; Tauler, Romà; Olivieri, Alejandro C
2012-10-16
A new equation is derived for estimating the sensitivity when the multivariate curve resolution-alternating least-squares (MCR-ALS) method is applied to second-order multivariate calibration data. The validity of the expression is substantiated by extensive Monte Carlo noise addition simulations. The multivariate selectivity can be derived from the new sensitivity expression. Other important figures of merit, such as limit of detection, limit of quantitation, and concentration uncertainty of MCR-ALS quantitative estimations can be easily estimated from the proposed sensitivity expression and the instrumental noise. An experimental example involving the determination of an analyte in the presence of uncalibrated interfering agents is described in detail, involving second-order time-decaying sensitized lanthanide luminescence excitation spectra. The estimated figures of merit are reasonably correlated with the analytical features of the analyzed experimental system.
Choi, Eunhee; Tang, Fengyan; Kim, Sung-Geun; Turk, Phillip
2016-10-01
This study examined the longitudinal relationships between functional health in later years and three types of productive activities: volunteering, full-time, and part-time work. Using the data from five waves (2000-2008) of the Health and Retirement Study, we applied multivariate latent growth curve modeling to examine the longitudinal relationships among individuals 50 or over. Functional health was measured by limitations in activities of daily living. Individuals who volunteered, worked either full time or part time exhibited a slower decline in functional health than nonparticipants. Significant associations were also found between initial functional health and longitudinal changes in productive activity participation. This study provides additional support for the benefits of productive activities later in life; engagement in volunteering and employment are indeed associated with better functional health in middle and old age. © The Author(s) 2016.
Feng, Lang; Song, Jian; Zhang, Daoxin; Tian, Ye
2017-01-01
To analyze prospectively the mentor-based learning curve of a single surgeon performing transurethral plasmakinetic enucleation and resection of the prostate (PKERP). Ninety consecutive PKERP operations performed by one resident under the supervision of an experienced endourologist were studied. Operations were analyzed in cohorts of 10 cases to determine when a plateau was reached for variables such as operation efficiency, enucleation efficiency and frequency of mentor advice (FMA). Patient demographic variables, perioperative data, complications and 12-month follow-up data were analyzed and compared with the results of a senior urologist. The mean operative efficiency and enucleation efficiency increased from a mean of 0.49±0.09 g/min and 1.11±0.28 g/min for the first 10 procedures to a mean of 0.63±0.08 g/min and 1.62±0.36 g/min for case numbers 31-40 (p=0.003 and p=0.002). The mean value of FMA decreased from a mean of 6.7±1.5 for the first 10 procedures to a mean of 2.8±1.2 for case numbers 31-40 (p<0.01). The senior urologist had a mean operative efficiency and enucleation efficiency equivalent to those of the senior resident after 40 cases. There was significant improvement in the 3-, 6- and 12-month parameters compared with preoperative values (p<0.001). PKERP can be performed safely and efficiently even during the initial learning curve of the surgeon when closely mentored. Further well-designed trials with several surgeons are needed to confirm the results. Copyright® by the International Brazilian Journal of Urology.
Testing MONDian dark matter with galactic rotation curves
Edmonds, Doug; Farrah, Duncan; Minic, Djordje; Takeuchi, Tatsu; Ho, Chiu Man; Ng, Y. Jack
2014-09-20
MONDian dark matter (MDM) is a new form of dark matter quantum that naturally accounts for Milgrom's scaling, usually associated with modified Newtonian dynamics (MOND), and theoretically behaves like cold dark matter (CDM) at cluster and cosmic scales. In this paper, we provide the first observational test of MDM by fitting rotation curves to a sample of 30 local spiral galaxies (z ≈ 0.003). For comparison, we also fit the galactic rotation curves using MOND and CDM. We find that all three models fit the data well. The rotation curves predicted by MDM and MOND are virtually indistinguishable over the range of observed radii (∼1 to 30 kpc). The best-fit MDM and CDM density profiles are compared. We also compare with MDM the dark matter density profiles arising from MOND if Milgrom's formula is interpreted as Newtonian gravity with an extra source term instead of as a modification of inertia. We find that discrepancies between MDM and MOND will occur near the center of a typical spiral galaxy. In these regions, instead of continuing to rise sharply, the MDM mass density turns over and drops as we approach the center of the galaxy. Our results show that MDM, which restricts the nature of the dark matter quantum by accounting for Milgrom's scaling, accurately reproduces observed rotation curves.
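The MOND comparison baseline in the abstract above can be illustrated for the simplest possible case: the circular speed around a point mass, using the widely used "simple" interpolating function. This is a toy sketch, not the paper's MDM model; the interpolating-function choice and the point-mass baryon distribution are our assumptions.

```python
import numpy as np

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10     # Milgrom's acceleration scale, m s^-2

def v_mond_point_mass(r, m_baryon):
    """Circular speed around a point mass under MOND, using the 'simple'
    interpolating function nu(y) = 1/2 + sqrt(1/4 + 1/y), where
    y = a_Newton / a0. Recovers Newtonian gravity for y >> 1 and the
    flat asymptotic curve v^4 = G * M * a0 for y << 1."""
    a_newton = G * m_baryon / r ** 2
    y = a_newton / A0
    nu = 0.5 + np.sqrt(0.25 + 1.0 / y)
    return np.sqrt(a_newton * nu * r)
```

The flat outer rotation curve with v⁴ = G·M·a₀ is exactly Milgrom's scaling that MDM is built to reproduce, so a fit of this baseline against CDM and MDM profiles is the natural comparison the paper carries out.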
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression through the use of cubic spline approximation is taken and extended by investigating the additional compressibility achievable through the exploitation of curve-to-curve structure. One of the models under investigation is reported on.
NASA Astrophysics Data System (ADS)
Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.
2009-04-01
The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling and few previous studies have addressed the water balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the Generalised Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.
Lubke, Gitta H; Campbell, Ian
2016-01-01
Inference and conclusions drawn from model-fitting analyses are commonly based on a single "best-fitting" model. If model selection and inference are carried out using the same data, model selection uncertainty is ignored. We illustrate the Type I error inflation that can result from using the same data for model selection and inference, and we then propose a simple bootstrap-based approach to quantify model selection uncertainty in terms of model selection rates. A selection rate can be interpreted as an estimate of the replication probability of a fitted model. The benefits of bootstrapping model selection uncertainty are demonstrated in a growth mixture analysis of data from the National Longitudinal Study of Youth and a two-group measurement invariance analysis of the Holzinger-Swineford data.
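The bootstrap selection-rate idea above is model-agnostic and easy to sketch. As a stand-in for the paper's growth mixture and invariance models, the toy below competes a linear against a quadratic polynomial by AIC on bootstrap resamples; the selection rate is the fraction of resamples in which the simpler model wins, an estimate of its replication probability.

```python
import numpy as np

def aic(y, yhat, k):
    """Gaussian-residual AIC up to an additive constant: n*log(RSS/n) + 2k."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def selection_rate(x, y, n_boot=200, seed=0):
    """Bootstrap estimate of how often a linear model is preferred (by AIC)
    over a quadratic one on resampled data."""
    rng = np.random.default_rng(seed)
    n = x.size
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample (x, y) pairs with replacement
        xb, yb = x[idx], y[idx]
        lin = np.polyval(np.polyfit(xb, yb, 1), xb)
        quad = np.polyval(np.polyfit(xb, yb, 2), xb)
        wins += aic(yb, lin, 2) <= aic(yb, quad, 3)
    return wins / n_boot
```

A selection rate well below 1 signals that inference conditioned on the single best-fitting model understates uncertainty, which is exactly the Type I error inflation the abstract warns about.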
Crystallography on Curved Surfaces
NASA Astrophysics Data System (ADS)
Vitelli, Vincenzo; Lucks, Julius; Nelson, David
2007-03-01
We present a theoretical and numerical study of the static and dynamical properties that distinguish two dimensional curved crystals from their flat space counterparts. Experimental realizations include block copolymer mono-layers on lithographically patterned substrates and self-assembled colloidal particles on a curved interface. At the heart of our approach lies a simple observation: the packing of interacting spheres constrained to lie on a curved surface is necessarily frustrated even in the absence of defects. As a result, whenever lattice imperfections or topological defects are introduced in the curved crystal they couple to the pre-stress of geometric frustration giving rise to elastic potentials. These geometric potentials are non-local functions of the Gaussian curvature and depend on the position of the defects. They play an important role in stress relaxation dynamics, elastic instabilities and melting.
NASA Astrophysics Data System (ADS)
Guinan, E. F.; Prša, A.; Devinney, E. J.; Engle, S. G.
2009-08-01
Major advances in observing technology promise to greatly increase discovery rates of eclipsing binaries (EBs). For example, missions such as the Large Synoptic Survey Telescope (LSST), the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) and Gaia are expected to yield hundreds of thousands (even millions) of new variable stars and eclipsing binaries. Current personal, interactive (and time-consuming) methods of determining the physical and orbital parameters of eclipsing binaries from their light curves will be inadequate to keep up with the overwhelming influx of new data. At present, the currently used methods require significant technical skill and experience; it typically takes 2-3 weeks to model a single binary. We are therefore developing an Artificial Intelligence / Neural Network system with the hope of creating a fully automated, high-throughput process for gleaning the orbital and physical properties of EB systems from the observations of tens of thousands of eclipsing binaries at a time. The EBAI project -- Eclipsing Binaries with Artificial Intelligence -- aims to provide estimates of principal parameters for thousands of eclipsing binaries in a matter of seconds. Initial tests of the neural network's performance and reliability have been conducted and are presented here.
NASA Astrophysics Data System (ADS)
Cotton, W. D.
Fringe Fitting Theory; Correlator Model Delay Errors; Fringe Fitting Techniques; Baseline; Baseline with Closure Constraints; Global; Solution Interval; Calibration Sources; Source Structure; Phase Referencing; Multi-band Data; Phase-Cals; Multi- vs. Single-band Delay; Sidebands; Filtering; Establishing a Common Reference Antenna; Smoothing and Interpolating Solutions; Bandwidth Synthesis; Weights; Polarization; Fringe Fitting Practice; Phase Slopes in Time and Frequency; Phase-Cals; Sidebands; Delay and Rate Fits; Signal-to-Noise Ratios; Delay and Rate Windows; Details of Global Fringe Fitting; Multi- and Single-band Delays; Phase-Cal Errors; Calibrator Sources; Solution Interval; Weights; Source Model; Suggested Procedure; Bandwidth Synthesis
Wardenaar, Klaas J; Wanders, Rob B K; Roest, Annelieke M; Meijer, Rob R; De Jonge, Peter
2015-06-01
Observed associations between depression following myocardial infarction (MI) and adverse cardiac outcomes could be overestimated due to patients' tendency to over-report somatic depressive symptoms. This study aimed to investigate this issue with modern psychometrics, using item response theory (IRT) and person-fit statistics to determine whether the Beck Depression Inventory (BDI) measures depression or something else among MI patients. An IRT model was fit to BDI data of 1135 MI patients. Patients' adherence to this IRT model was investigated with person-fit statistics. Subgroups of "atypical" (low person-fit) and "prototypical" (high person-fit) responders were identified and compared in terms of item-response patterns, psychiatric diagnoses, socio-demographics and somatic factors. In the IRT model, somatic items had lower thresholds compared to depressive mood/cognition items. Empirically identified "atypical" responders (n = 113) had more depressive mood/cognitions, scored lower on somatic items and more often had a Comprehensive International Diagnostic Interview (CIDI) depressive diagnosis than "prototypical" responders (n = 147). Additionally, "atypical" responders were younger and more likely to smoke. In conclusion, the BDI measures somatic symptoms in most MI patients, but measures depression in a subgroup of patients with atypical response patterns. The presented approach to account for interpersonal differences in item responding could help improve the validity of depression assessments in somatic patients. Copyright © 2015 John Wiley & Sons, Ltd.
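A standard person-fit statistic of the kind used above is l_z: the standardized log-likelihood of a response pattern under the fitted IRT model, where large negative values flag atypical responders. The sketch below assumes the simplest case, a Rasch (1-parameter logistic) model with known ability and item difficulties; the study's BDI model is more elaborate (polytomous items), so this is illustrative only.

```python
import numpy as np

def lz_person_fit(responses, theta, b):
    """Standardized log-likelihood (l_z) person-fit statistic under a
    Rasch model with ability theta and item difficulties b. Values far
    below 0 indicate an atypical (model-misfitting) response pattern."""
    responses = np.asarray(responses, float)
    b = np.asarray(b, float)
    p = 1.0 / (1.0 + np.exp(-(theta - b)))           # P(correct/endorse)
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - mean) / np.sqrt(var)
```

A respondent who endorses "hard" items while rejecting "easy" ones (a Guttman violation) gets a strongly negative l_z, which is how the "atypical" subgroup in the study is separated from the "prototypical" one.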
Laffosse, Jean-Michel; Chiron, Philippe; Accadbled, Franck; Molinier, François; Tricoire, Jean-Louis; Puget, Jean
2006-12-01
We analysed the learning curve of an anterolateral minimally invasive (ALMI) approach for primary total hip replacement (THR). The first 42 THRs with large-diameter heads implanted through this approach (group 1) were compared to a cohort of 58 THRs with a 28-mm head performed through a standard-incision posterior approach (group 2). No selection was made and the groups were comparable. Implant positioning as well as early clinical results were satisfactory and were comparable in the two groups. In group 1, the rate of intraoperative complications was significantly higher (greater trochanter fracture in 4 cases, cortical perforation in 3 cases, calcar fracture in one case, nerve palsy in one case, secondary tilting of the metal back in 2 cases) than in group 2 (one nerve palsy and one calcar crack). At 6 months, one revision of the acetabular cup was performed in group 1 for persistent pain, whereas in group 2 we noted 3 dislocations (2 of which were revised) and 2 periprosthetic femoral fractures. Our study showed a high rate of intra- and perioperative complications during the learning curve of an ALMI approach. These are more likely to occur in obese or osteoporotic patients, and in those with bulky muscles or very stiff hips. Postoperative complications were rare. The early clinical results are excellent and we may expect to achieve better results with a more standardised procedure. During the initial period of the learning curve, it would be preferable to select patients with an appropriate morphology.
Fitting Surge Functions to Data
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2006-01-01
The problem of fitting a surge function to a set of data such as that for a drug response curve is considered. A variety of different techniques are applied, including using some fundamental ideas from calculus, the use of a CAS package, and the use of Excel's regression features for fitting a multivariate linear function to a set of transformed…
Vanderborght, Jan; Vereecken, Harry
2002-01-01
The local scale dispersion tensor, Dd, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local scale dispersion from time series, or breakthrough curves (BTCs), of Br concentrations that were measured at several points in a fluvial aquifer during a natural gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection-dispersion parameters: equivalent velocity, v(eq)(x), and expected equivalent dispersivity, [lambda(eq)(x)]. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of log-transformed conductivity and the local scale dispersion coefficient. The approximate Lagrangian theory illustrates that [lambda(eq)(x)] increases with increasing travel distance and is much larger than the local scale dispersivity, lambda(d). A sensitivity analysis indicates that [lambda(eq)(x)] is predominantly determined by the transverse component of the local scale dispersion and by the correlation scale of the hydraulic conductivity in the transverse to flow direction, whereas it is relatively insensitive to the longitudinal component of the local scale dispersion. By comparing predicted [lambda(eq)(x)] for a range of Dd values with [lambda(eq)(x)] obtained from locally measured BTCs, the transverse component of Dd, DdT, was estimated. The estimated transverse local scale dispersivity, lambda(dT) = DdT/U1 (U1 = mean advection velocity), is on the order of 10(1)-10(2) mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.
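The step from a locally measured BTC to equivalent convection-dispersion parameters is commonly done with temporal moments. The sketch below uses the first-order moment relations v_eq = x / t_mean and lambda_eq = x·var_t / (2·t_mean²) for a weakly dispersive convection-dispersion equation; this is a generic reconstruction, not the paper's Lagrangian estimator, and it ignores the resident- versus flux-concentration distinction.

```python
import numpy as np

def _trap(y, t):
    """Trapezoidal integration, kept explicit for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def btc_moments(t, c):
    """Normalized temporal mean and variance of a breakthrough curve C(t)."""
    m0 = _trap(c, t)
    t_mean = _trap(t * c, t) / m0
    t_var = _trap((t - t_mean) ** 2 * c, t) / m0
    return t_mean, t_var

def equivalent_cde_params(t, c, x):
    """Equivalent convection-dispersion parameters at travel distance x:
    v_eq = x / t_mean and lambda_eq = x * var_t / (2 * t_mean**2)."""
    t_mean, t_var = btc_moments(t, c)
    return x / t_mean, x * t_var / (2.0 * t_mean ** 2)
```

Applied at increasing travel distances x, this is how one would observe the growth of [lambda(eq)(x)] with distance that the Lagrangian theory in the abstract predicts.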
NASA Astrophysics Data System (ADS)
Cockburn, Bernardo; Kao, Chiu-Yen; Reitich, Fernando
2014-02-01
We present an adaptive spectral/discontinuous Galerkin (DG) method on curved elements to simulate high-frequency wavefronts within a reduced phase-space formulation of geometrical optics. Following recent work, the approach is based on the use of level sets defined by functions satisfying the Liouville equations in reduced phase-space and, in particular, it relies on the smoothness of these functions to represent them by rapidly convergent spectral expansions in the phase variables. The resulting (hyperbolic) system of equations for the coefficients in these expansions is then amenable to a high-order accurate treatment via DG approximations. In the present work, we significantly expand on the applicability and efficiency of the approach by incorporating mechanisms that allow for its use in scattering simulations and for a reduced overall computational cost. With regard to the former, we demonstrate that the incorporation of curved elements is necessary to attain any kind of accuracy in calculations that involve scattering off non-flat interfaces. With regard to efficiency, on the other hand, we also show that the level-set formulation allows for a space p-adaptive scheme that under-resolves the level-set functions away from the wavefront without incurring a loss of accuracy in the approximation of its location. As we show, these improvements enable simulations that are beyond the capabilities of previous implementations of these numerical procedures.
ERIC Educational Resources Information Center
Phelps, Joshua; Smith, Amanda; Parker, Stephany; Hermann, Janice
2016-01-01
Oklahoma Cooperative Extension Service provided elementary school students with a program that included a noncompetitive physical activity component: circuit training that combined cardiovascular, strength, and flexibility activities without requiring high skill levels. The intent was to improve fitness without focusing on body mass index as an…
A novel approach to fit testing the N95 respirator in real time in a clinical setting.
Or, Peggy; Chung, Joanne; Wong, Thomas
2016-02-01
The instant measurements provided by the Portacount fit-test instrument have been used as the gold standard for predicting the protection of an N95 respirator in a laboratory environment. The conventional Portacount fit-test method, however, cannot deliver real-time measurements of face-seal leakage when the N95 respirator is in use in clinical settings. This research was divided into two stages. Stage 1 involved developing and validating a new quantitative fit-test method called the Personal Respiratory Sampling Test (PRST). In Stage 2, PRST was evaluated during nursing activities in clinical settings. Eighty-four participants were divided randomly into four groups and were tested while performing bedside nursing procedures. In Stage 1, the new PRST method was successfully devised and validated. Results of Stage 2 showed that PRST could detect different concentrations and different particle sizes inside the respirator while the wearer performed different nursing activities. This new fit-test method can therefore detect face-seal leakage of an N95 respirator in real time during clinical activities, helping ensure that the respirator actually fulfils its function of protecting health-care workers from airborne pathogens.
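Quantitative fit testing of this kind ultimately reduces to comparing particle concentrations outside and inside the respirator. A minimal sketch of that computation (the function names and sample concentrations are hypothetical; the PRST sampling protocol itself is not reproduced here). Under OSHA 29 CFR 1910.134, a half-facepiece respirator such as the N95 passes a quantitative fit test at a fit factor of 100 or more:

```python
def fit_factor(ambient_conc, in_mask_conc):
    """Fit factor: ratio of ambient to in-mask particle concentration.
    Higher values indicate a better face seal."""
    if in_mask_conc <= 0:
        raise ValueError("in-mask concentration must be positive")
    return ambient_conc / in_mask_conc

def passes_fit_test(ff, threshold=100.0):
    """Pass/fail against the quantitative fit-test criterion for
    half-facepiece respirators (threshold of 100 per OSHA 1910.134)."""
    return ff >= threshold

# e.g. 20,000 particles/cm^3 ambient, 80 particles/cm^3 inside the mask
ff = fit_factor(20000.0, 80.0)   # -> 250.0
```

A real-time method such as PRST would evaluate this ratio continuously while the wearer moves, rather than once in a controlled exercise sequence.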
Fracture toughness curve shift method
Nanstad, R.K.; Sokolov, M.A.; McCabe, D.E.
1995-10-01
The purpose of this task is to examine the technical basis for the currently accepted methods for shifting fracture toughness curves to account for irradiation damage, and to work through national codes and standards bodies to revise those methods if a change is warranted. During this reporting period, data from all the relevant HSSI programs were acquired, stored in a database, and evaluated. The results from that evaluation have been prepared in a draft letter report and are summarized here. A method employing Weibull statistics was applied to analyze fracture toughness properties of unirradiated and irradiated pressure vessel steels. Application of the concept of a master curve for irradiated materials was examined and used to measure shifts of fracture toughness transition curves. It was shown that the maximum likelihood approach gave good estimates of the reference temperature, T0, determined by the rank method and could be used to analyze data sets for which the rank method did not prove feasible. It was shown that, on average, the fracture toughness shifts generally exceeded the Charpy 41-J shifts; a linear least-squares fit to the data set yielded a slope of 1.15. The observed dissimilarity was analyzed by taking into account differences in the effects of irradiation on Charpy impact and fracture toughness properties. Based on these comparisons, a procedure to adjust Charpy 41-J shifts for achieving a more reliable correlation with the fracture toughness shifts was evaluated. The adjustment consists of multiplying the 41-J energy level by the ratio of unirradiated to irradiated Charpy upper shelves to determine an irradiated transition temperature, and then subtracting the unirradiated transition temperature determined at 41 J. For low-upper-shelf (LUS) welds, however, an unirradiated level of 20 J (15 ft·lb) was used for the corresponding adjustment for irradiated material.
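The master curve indexes the median fracture toughness to the reference temperature T0, and the Charpy upper-shelf adjustment can be sketched with a simple tanh transition-curve model. A hedged illustration (the master-curve equation is the standard ASTM E1921 form; the tanh Charpy curve shape and all parameter values are hypothetical, chosen only to show the upper-shelf-ratio adjustment, not data from the HSSI welds):

```python
import numpy as np

def master_curve_median(T, T0):
    """ASTM E1921 master curve: median K_Jc in MPa*sqrt(m), T and T0 in deg C."""
    return 30.0 + 70.0 * np.exp(0.019 * (T - T0))

def temp_at_energy(energy, use, t_mid, c=25.0):
    """Invert a hypothetical tanh Charpy curve
    E(T) = 0.5 * USE * (1 + tanh((T - t_mid)/c)) for the temperature at E."""
    return t_mid + c * np.arctanh(2.0 * energy / use - 1.0)

# hypothetical Charpy parameters: upper-shelf energy (J), mid-transition (deg C)
use_u, tmid_u = 120.0, -20.0   # unirradiated
use_i, tmid_i = 80.0, 60.0     # irradiated

t41_u = temp_at_energy(41.0, use_u, tmid_u)   # unirradiated T at 41 J
# adjustment from the abstract: index the irradiated curve at
# 41 J scaled by the ratio of unirradiated to irradiated upper shelves
e_adj = 41.0 * use_u / use_i
shift_raw = temp_at_energy(41.0, use_i, tmid_i) - t41_u
shift_adj = temp_at_energy(e_adj, use_i, tmid_i) - t41_u
```

Because irradiation depresses the upper shelf, the adjusted indexing energy exceeds 41 J and the adjusted shift is larger than the raw 41-J shift, moving it toward the fracture toughness shift.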
Xu, Guangjian; Zhong, Xiaoxiao; Wang, Yangfan; Warren, Alan; Xu, Henglong
2014-12-01
The functional parameters of microperiphyton fauna, i.e., the estimated equilibrium species number (S_eq), the colonization rate constant (G), and the time taken to reach 90% of S_eq (T_90), have been widely used to determine water quality status in aquatic ecosystems. The objective of this investigation was to develop a protocol for determining functional parameters of microperiphyton fauna in colonization surveys for marine bioassessment based on rarefaction and regression analyses. The temporal dynamics in species richness of microperiphyton fauna during the colonization period were analyzed based on a dataset of periphytic ciliates from Chinese coastal waters of the Yellow Sea. The results showed that (1) based on observed species richness and estimated maximum species numbers, a total of 16 glass slides was required to achieve coefficients of variation of <5% in the functional parameters; (2) the rarefied average species richness and functional parameters showed weak sensitivity to sampling effort; (3) the temporal variations in average species richness were well fitted by the MacArthur-Wilson model; and (4) a sampling effort of ~8 glass slides was sufficient to achieve coefficients of variation of <5% in the equilibrium average species number (AvS_eq), the colonization rate (AvG), and the time to reach 90% of AvS_eq (AvT_90) based on the average species richness. The findings suggest that the AvS_eq, AvG, and AvT_90 values based on rarefied average species richness of microperiphyton might be used as reliable ecological indicators for the bioassessment of marine water quality in coastal habitats.
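The MacArthur-Wilson fit that yields S_eq, G, and T_90 is an ordinary nonlinear least-squares problem: S(t) = S_eq * (1 - exp(-G*t)), with T_90 = ln(10)/G following from S(T_90) = 0.9 * S_eq. A minimal sketch with synthetic colonization data (the richness values below are hypothetical, not from the Yellow Sea dataset):

```python
import numpy as np
from scipy.optimize import curve_fit

def macarthur_wilson(t, s_eq, g):
    """MacArthur-Wilson colonization model: S(t) = S_eq * (1 - exp(-G t))."""
    return s_eq * (1.0 - np.exp(-g * t))

# hypothetical survey: days of slide immersion vs. average species richness
days = np.array([1, 3, 5, 7, 10, 14, 21, 28], dtype=float)
richness = np.array([8, 19, 27, 32, 38, 42, 45, 46], dtype=float)

(s_eq, g), _ = curve_fit(macarthur_wilson, days, richness, p0=(50.0, 0.1))
t90 = np.log(10.0) / g     # time to reach 90% of S_eq
```

In the protocol described above, the same fit would be repeated on rarefied average species richness to obtain AvS_eq, AvG, and AvT_90.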