ERIC Educational Resources Information Center
Hester, Yvette
Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion, which requires a large number of events, is not easy to satisfy in practice. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not come into general use. This is primarily because, in contrast to nonlinear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the ubiquity of the fast Levenberg-Marquardt (L-M) fitting procedure for fitting nonlinear models using least squares (simple searches return ~10,000 references, not counting those who use it without knowing they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter.
This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; downward gradient methods, on the other hand, have a much wider domain of convergence but converge extremely slowly near the minimum. L-M combines the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists who want an answer quickly will fit data using L-M, get an answer, and move on. Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive nonlinear least squares fits to obtain similarly unbiased results, but this procedure is justified only by simulation, must be re-tested whenever conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates with convergence domains and rates comparable to nonlinear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure but the MLE measure for Poisson deviates.
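The key identity that makes this extension work can be sketched in a few lines: minimizing the summed squares of the signed Poisson deviance residuals is equivalent to minimizing the Poisson MLE measure, so any L-M least-squares driver can be repurposed. A minimal illustration, assuming SciPy's least_squares as the L-M driver and an invented single-exponential decay model (this is not the authors' code):

```python
import numpy as np
from scipy.optimize import least_squares

def model(t, a, tau, b):
    """Illustrative fit model: single-exponential decay plus flat background."""
    return a * np.exp(-t / tau) + b

def poisson_deviance_residuals(params, t, counts):
    """Signed square roots of the per-bin Poisson deviance.

    The sum of squares of these residuals is the Poisson MLE measure
    2*sum(f - y + y*ln(y/f)), so an L-M least-squares driver that minimizes
    it is maximizing the Poisson likelihood rather than ordinary chi-square.
    """
    f = np.maximum(model(t, *params), 1e-12)   # guard against f <= 0
    y = counts
    ylogyf = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / f), 0.0)
    dev = 2.0 * (f - y + ylogyf)
    return np.sign(y - f) * np.sqrt(np.maximum(dev, 0.0))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 256)
counts = rng.poisson(model(t, 50.0, 2.0, 1.0))

fit = least_squares(poisson_deviance_residuals, x0=[30.0, 1.0, 0.5],
                    args=(t, counts), method="lm")
print(fit.x)   # recovered [amplitude, lifetime, background]
```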
An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests
ERIC Educational Resources Information Center
Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.
2013-01-01
Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included PARSCALE's G²,…
Metafitting: Weight optimization for least-squares fitting of PTTI data
NASA Technical Reports Server (NTRS)
Douglas, Rob J.; Boulanger, J.-S.
1995-01-01
For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
Nonlinear filtering properties of detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-11-01
Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
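For reference, the standard DFA procedure discussed above can be sketched compactly; the window sizes, detrending order, and test signal below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Standard DFA: returns the RMS fluctuation F(n) for each window size n.

    The scaling exponent is the slope of log F(n) versus log n.
    """
    y = np.cumsum(x - np.mean(x))              # integrated (profile) series
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        res = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)   # piecewise least-squares detrending
            res.append(seg - np.polyval(coef, t))
        F.append(np.sqrt(np.mean(np.concatenate(res) ** 2)))
    return np.asarray(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(2**14)                 # white noise: expected exponent ~ 0.5
scales = (2 ** np.arange(4, 11)).astype(int)
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated scaling exponent: {alpha:.2f}")
```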
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
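A minimal numpy illustration of the point: the parameter covariance matrix of an ordinary least-squares fit delivers both the parameter SEs and, via a gradient vector, the propagated error of any derived quantity (the straight-line data here are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 20)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)

X = np.column_stack([np.ones_like(x), x])        # design matrix for a line
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = len(y) - X.shape[1]
s2 = res[0] / dof                                # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                # parameter covariance matrix
se = np.sqrt(np.diag(cov))                       # SEs of intercept and slope
print(beta, se)

# Propagated SE of a target quantity, e.g. y at x0 = 7: g = gradient wrt parameters
g = np.array([1.0, 7.0])
se_y7 = np.sqrt(g @ cov @ g)
print(se_y7)
```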
On the Least-Squares Fitting of Correlated Data: A Priori vs A Posteriori Weighting
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
1996-10-01
One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data," Dover, New York, 1964). In the present work the simplest case of a merge fit - that of an average as obtained from a global fit vs a two-step fit of partitioned data - is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ² distributions for variances.
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
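The essence of the approach, as described in the abstract, is to keep the robust weighted least-squares point estimate but compute the parameter errors with the full covariance matrix of the data via a sandwich formula. A sketch for a linear-in-parameters fit, with an assumed exponential covariance and invented MSD-like data (not the authors' WLS-ICE code):

```python
import numpy as np

def wls_with_correlated_errors(X, y, w, C):
    """WLS point estimate with diagonal weights w; parameter errors computed
    with the full data covariance C via the sandwich formula."""
    W = np.diag(w)
    A = np.linalg.inv(X.T @ W @ X) @ (X.T @ W)       # estimator: theta = A @ y
    theta = A @ y
    cov_theta = A @ C @ A.T                          # temporal correlations enter here
    return theta, np.sqrt(np.diag(cov_theta))

# Invented example: fit msd(t) = 2*D*t to correlated "ensemble averaged" data
t = np.linspace(0.1, 5.0, 25)
X = t[:, None]                                       # one parameter: theta = 2*D
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]))  # assumed covariance of the mean
rng = np.random.default_rng(5)
y = 2 * 0.7 * t + np.linalg.cholesky(C) @ rng.standard_normal(t.size)

theta, se = wls_with_correlated_errors(X, y, 1.0 / np.diag(C), C)
print(f"D = {theta[0] / 2:.3f} +/- {se[0] / 2:.3f}")
```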
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate distributions to near-normal; (5) a test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) a test of fit for continuous distributions based upon the generalized minimum chi-square; (7) the effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.
Permutation tests for goodness-of-fit testing of mathematical models to experimental data.
Fişek, M Hamit; Barlas, Zeynep
2013-03-01
This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition, carried out in the "standardized experimental situation" associated with the program, to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test, which has been the primary statistical tool for assessing goodness of fit in the EST research program but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi-square test, as simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives and Kendall's tau-b, a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness of fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm - and describe how our procedures may be applied to this data set.
NASA Astrophysics Data System (ADS)
Hajicek, Joshua J.; Selesnick, Ivan W.; Henin, Simon; Talmadge, Carrick L.; Long, Glenis R.
2018-05-01
Stimulus frequency otoacoustic emissions (SFOAEs) were evoked and estimated using swept-frequency tones with and without the use of swept suppressor tones. SFOAEs were estimated using a least-squares fitting procedure. The estimated SFOAEs for the two paradigms (with- and without-suppression) were similar in amplitude and phase. The fitting procedure minimizes the square error between a parametric model of total ear-canal pressure (with unknown amplitudes and phases) and ear-canal pressure acquired during each paradigm. Modifying the parametric model to allow SFOAE amplitude and phase to vary over time revealed additional amplitude and phase fine structure in the without-suppressor, but not the with-suppressor paradigm. The use of a time-varying parametric model to estimate SFOAEs without-suppression may provide additional information about cochlear mechanics not available when using a with-suppressor paradigm.
Optimal measurement of ice-sheet deformation from surface-marker arrays
NASA Astrophysics Data System (ADS)
Macayeal, D. R.
Surface strain rate is best observed by fitting a strain-rate ellipsoid to the measured movement of a stake network or other collection of surface features, using a least squares procedure. The error of the resulting fit varies as 1/(L Δt √N), where L is the stake separation, Δt is the time period between the initial and final stake surveys, and N is the number of stakes in the network. This relation suggests that if N is sufficiently high, the traditional practice of revisiting stake-network sites on successive field seasons may be replaced by a less costly single-year operation. A demonstration using Ross Ice Shelf data shows that reasonably accurate measurements are obtained from 12 stakes after only 4 days of deformation. The least squares procedure can also aid airborne photogrammetric surveys, because reducing the time interval between survey and re-survey permits better surface-feature recognition.
Sasaki, Miho; Sumi, Misa; Eida, Sato; Katayama, Ikuo; Hotokezaka, Yuka; Nakamura, Takashi
2014-01-01
Intravoxel incoherent motion (IVIM) imaging can characterize diffusion and perfusion of normal and diseased tissues, and IVIM parameters are conventionally determined using a cumbersome least-squares method. We evaluated a simple technique for determining IVIM parameters using geometric analysis of the multiexponential signal decay curve as an alternative to the least-squares method for the diagnosis of head and neck tumors. Pure diffusion coefficients (D), microvascular volume fraction (f), perfusion-related incoherent microcirculation (D*), and a perfusion parameter that is heavily weighted towards extravascular space (P) were determined geometrically (Geo D, Geo f, and Geo P) or by the least-squares method (Fit D, Fit f, and Fit D*) in normal structures and 105 head and neck tumors. The IVIM parameters were compared for their levels and diagnostic abilities between the two techniques. The IVIM parameters could not be determined in 14 tumors with the least-squares method alone and in 4 tumors with both the geometric and least-squares methods. The geometric IVIM values were significantly different (p<0.001) from Fit values (+2±4% and −7±24% for D and f values, respectively). Geo D and Fit D differentiated between lymphomas and SCCs with similar efficacy (78% and 80% accuracy, respectively). Stepwise approaches using combinations of Geo D and Geo P, Geo D and Geo f, or Fit D and Fit D* differentiated between pleomorphic adenomas, Warthin tumors, and malignant salivary gland tumors with the same efficacy (91% accuracy = 21/23). However, stepwise differentiation using Fit D and Fit f was less effective (83% accuracy = 19/23). Considering the cumbersome procedures of the least-squares method compared with the geometric method, we conclude that geometric determination of IVIM parameters can be an alternative to the least-squares method in the diagnosis of head and neck tumors. PMID:25402436
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
Nonlinear least-squares data fitting in Excel spreadsheets.
Kemmer, Gerdi; Keller, Sandro
2010-02-01
We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
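The same protocol translates directly to code: minimize the sum of squared residuals for the Michaelis-Menten equation, exactly as Solver would. A sketch with invented data, using SciPy's Nelder-Mead in place of Solver:

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([1, 2, 4, 8, 16, 32], dtype=float)      # substrate conc. (invented)
v = np.array([0.9, 1.5, 2.2, 2.9, 3.3, 3.6])         # measured rates (invented)

def ssr(p):
    vmax, km = p
    return np.sum((v - vmax * S / (km + S)) ** 2)    # the same target Solver minimizes

fit = minimize(ssr, x0=[3.0, 3.0], method="Nelder-Mead")
vmax, km = fit.x
print(f"Vmax = {vmax:.2f}, Km = {km:.2f}")
```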
Sader, John E; Yousefi, Morteza; Friend, James R
2014-02-01
Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. The number of mean squares assigned to error variance is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type I or type II error probabilities.
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
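For the polynomial case, the standard error of the fitted function at any x follows from the parameter covariance matrix as sigma_f(x)² = g(x)ᵀ C g(x), with g the gradient of the model with respect to its parameters. A short numpy illustration with invented data (not the report's closed-form expressions):

```python
import numpy as np

# Error band for a quadratic least-squares fit: sigma_f(x)^2 = g(x)^T C g(x),
# with g(x) the gradient of the fit function with respect to its parameters.
rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 40)
y = 1 + 2 * x - 3 * x**2 + rng.normal(0, 0.2, x.size)

coef, cov = np.polyfit(x, y, 2, cov=True)
xs = np.linspace(-1, 1, 200)
G = np.vander(xs, 3)              # rows are g(x) = (x^2, x, 1), matching polyfit order
band = np.sqrt(np.einsum('ij,jk,ik->i', G, cov, G))
print(band.min(), band.max())     # narrowest near the center of the data, wider at the ends
```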
Structural-Vibration-Response Data Analysis
NASA Technical Reports Server (NTRS)
Smith, W. R.; Hechenlaible, R. N.; Perez, R. C.
1983-01-01
Computer program developed as structural-vibration-response data analysis tool for use in dynamic testing of Space Shuttle. Program provides fast and efficient time-domain least-squares curve-fitting procedure for reducing transient response data to obtain structural model frequencies and dampings from free-decay records. Procedure simultaneously identifies frequencies, damping values, and participation factors for noisy multiple-response records.
NASA Astrophysics Data System (ADS)
Franzetti, Paolo; Scodeggio, Marco
2012-10-01
GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, optionally, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the Star Formation History, absolute magnitudes, and stellar mass, and their Probability Distribution Functions.
NASA Astrophysics Data System (ADS)
Ziegler, Benjamin; Rauhut, Guntram
2016-03-01
The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
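The direct-product structure that the algorithm exploits can be illustrated on a two-mode grid: the least-squares solve factorizes mode by mode and never forms the Kronecker product explicitly. A toy sketch (not the authors' implementation, which also handles higher dimensionality and other fitting bases):

```python
import numpy as np

# Separable (direct-product) least-squares fit of a 2D grid: V ~ B1 @ C @ B2.T.
# Solving dimension by dimension is equivalent to the huge Kronecker-product
# problem  (B2 kron B1) vec(C) = vec(V),  but never forms it.
n1, n2, m1, m2 = 50, 60, 8, 9
q1 = np.linspace(-1, 1, n1)
q2 = np.linspace(-1, 1, n2)
B1 = np.vander(q1, m1, increasing=True)       # polynomial basis in mode 1
B2 = np.vander(q2, m2, increasing=True)       # polynomial basis in mode 2

V = np.cos(np.pi * q1)[:, None] * np.exp(-q2**2)[None, :]   # toy "PES" grid

C = np.linalg.pinv(B1) @ V @ np.linalg.pinv(B2).T           # factored LS solve
err = np.linalg.norm(B1 @ C @ B2.T - V) / np.linalg.norm(V)
print(f"relative fit error: {err:.2e}")
```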
Multistep modeling of protein structure: application towards refinement of tyr-tRNA synthetase
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Shibata, M.; Roychoudhury, M.; Rein, R.
1987-01-01
The scope of multistep modeling (MSM) is expanded by adding a least-squares minimization step to the procedure, to fit a backbone reconstruction consistent with a set of C-alpha coordinates. The analytical solution for Phi and Psi angles that fits a set of C-alpha x-ray coordinates is used for tyr-tRNA synthetase. Phi and Psi angles for the regions where the above-mentioned method fails are obtained by minimizing the differences in C-alpha distances between the computed model and the crystal structure in a least-squares sense. We present a stepwise application of this part of MSM to the determination of the complete backbone geometry of the 321 N-terminal residues of tyrosine tRNA synthetase to a root mean square deviation of 0.47 angstroms from the crystallographic C-alpha coordinates.
Yamamura, S; Momose, Y
2001-01-16
A pattern-fitting procedure for quantitative analysis of crystalline pharmaceuticals in solid dosage forms using X-ray powder diffraction data is described. This method is based on a procedure for pattern-fitting in crystal structure refinement: observed X-ray scattering intensities were fitted to analytical expressions including some fitting parameters, i.e., scale factor, peak positions, peak widths, and degree of preferred orientation of the crystallites. All fitting parameters were optimized by a non-linear least-squares procedure. The weight fraction of each component was then determined from the optimized scale factors. In the present study, well-crystallized binary systems, zinc oxide-zinc sulfide (ZnO-ZnS) and salicylic acid-benzoic acid (SA-BA), were used as the samples. In analysis of the ZnO-ZnS system, the weight fraction of ZnO or ZnS could be determined quantitatively in the range of 5-95% for both powders and tablets. In analysis of the SA-BA system, the weight fraction of SA or BA could be determined quantitatively in the range of 20-80% for both powders and tablets. Quantitative analysis applying this pattern-fitting procedure showed better reproducibility than other X-ray methods based on the linear or integral intensities of particular diffraction peaks. Analysis using this pattern-fitting procedure also has the advantage that the preferred orientation of the crystallites in solid dosage forms can be determined in the course of quantitative analysis.
Nuclear Matter Properties with the Re-evaluated Coefficients of Liquid Drop Model
NASA Astrophysics Data System (ADS)
Chowdhury, P. Roy; Basu, D. N.
2006-06-01
The coefficients of the volume, surface, Coulomb, asymmetry, and pairing energy terms of the semiempirical liquid drop model mass formula have been determined by finding the best fit to the observed mass excesses. Slightly different sets of weighting parameters for the liquid drop model mass formula have been obtained from minimization of χ² and of the mean square deviation. The most recent experimental and estimated mass excesses from the Audi-Wapstra-Thibault atomic mass table have been used for the least-squares fitting procedure. The equation of state, nuclear incompressibility, nuclear mean free path, and the most stable nuclei for given atomic numbers are all in good agreement with experimental results.
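Because the liquid drop mass formula is linear in its coefficients, the fitting step reduces to a linear least-squares solve. A toy sketch with a handful of even-even nuclides and approximate experimental binding energies (the paper fits the full Audi-Wapstra-Thibault table; the pairing term is omitted here):

```python
import numpy as np

# Nuclides (A, Z) with approximate experimental binding energies in MeV.
data = [(16, 8, 127.6), (40, 20, 342.1), (56, 26, 492.3),
        (120, 50, 1020.5), (208, 82, 1636.4)]
A = np.array([d[0] for d in data], dtype=float)
Z = np.array([d[1] for d in data], dtype=float)
B = np.array([d[2] for d in data])

# B(A,Z) = a_v*A - a_s*A^(2/3) - a_c*Z(Z-1)/A^(1/3) - a_a*(A-2Z)^2/A
# (pairing term omitted; the nuclides above are all even-even)
X = np.column_stack([A, -A**(2 / 3), -Z * (Z - 1) / A**(1 / 3), -(A - 2 * Z)**2 / A])
coeffs, *_ = np.linalg.lstsq(X, B, rcond=None)
print(dict(zip(["a_v", "a_s", "a_c", "a_a"], np.round(coeffs, 3))))
```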
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
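A sketch of the proposed procedure for a normal model: the observed counts come from the original data, while the expected counts use the MLE from a bootstrap resample (the bin edges and degrees of freedom below are illustrative choices, not the paper's code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.normal(0.0, 1.0, 200)                  # original data

edges = np.array([-np.inf, -1.0, -0.5, 0.0, 0.5, 1.0, np.inf])
obs = np.histogram(x, edges)[0]                # observed counts from the ORIGINAL data

xb = rng.choice(x, size=x.size, replace=True)  # bootstrap resample
mu_b, sd_b = xb.mean(), xb.std()               # normal MLEs from the BOOTSTRAP sample

p = np.diff(stats.norm.cdf(edges, mu_b, sd_b))
exp = x.size * p                               # expected counts under the bootstrap MLE
T = np.sum((obs - exp) ** 2 / exp)             # Pearson statistic
df = len(obs) - 1                              # chi-squared reference, per the paper's claim
print(T, stats.chi2.sf(T, df))
```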
Model error in covariance structure models: Some implications for power and Type I error
Coffman, Donna L.
2010-01-01
The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber
NASA Astrophysics Data System (ADS)
Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit
2007-10-01
This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated due to background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution caused by a single charged particle over the strips, and classifies the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
A high resolution spectroscopic study of the oxygen molecule. Ph.D. Thesis Final Report
NASA Technical Reports Server (NTRS)
Ritter, K. J.
1984-01-01
A high resolution spectrometer which incorporates a narrow line width tunable dye laser was used to make absorption profiles of 57 spectral lines in the oxygen A-band at pressures up to one atmosphere in pure O2. The observed line profiles are compared to the Voigt profile and a collisionally narrowed profile using a least squares fitting procedure. The collisionally narrowed profile compares more favorably to the observed profiles. Values of the line strengths and self-broadening coefficients, determined from the least squares fitting process, are presented in tabular form. It is found that the expressions by Watson are in closest agreement with the experimentally determined strengths. The self-broadening coefficients are compared with the measurements of several other investigators.
On Browne's Solution for Oblique Procrustes Rotation
ERIC Educational Resources Information Center
Cramer, Elliot M.
1974-01-01
A form of Browne's (1967) solution for finding a least squares fit to a specified factor structure is given which does not involve the solution of an eigenvalue problem. It suggests the possible existence of a singularity, and a simple modification of Browne's computational procedure is proposed. (Author/RC)
Combining Approach in Stages with Least Squares for fits of data in hyperelasticity
NASA Astrophysics Data System (ADS)
Beda, Tibi
2006-10-01
The present work concerns a method of piecewise continuous approximation of a continuous function, combining the Approach in Stages with finite-domain Least Squares. An identification procedure by sub-domains is used: basic generating functions are determined step by step, permitting their weighting effects to be assessed. This procedure allows one to control the signs and, to some extent, the optimal values of the estimated parameters, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed in rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
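The core relation the technique exploits is that a pure delay τ appears in the cross-power spectrum as a linear phase, phase(f) = −2πfτ, so the delay follows from the slope of a line fitted to the measured phase. A minimal sketch of that step with synthetic signals (not the paper's full adaptive algorithm, which also weights by the standard error of the fit):

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(2)
fs = 1000.0
n = 2**14
x = rng.standard_normal(n)
delay_samples = 7
y = np.roll(x, delay_samples) + 0.1 * rng.standard_normal(n)  # delayed copy plus noise

f, Pxy = csd(x, y, fs=fs, nperseg=1024)
phase = np.unwrap(np.angle(Pxy))
# Delay from the slope of a linear fit to the cross-spectrum phase:
# phase(f) = -2*pi*f*tau  =>  tau = -slope / (2*pi)
slope = np.polyfit(f, phase, 1)[0]
print(f"estimated delay: {-slope / (2 * np.pi) * 1e3:.2f} ms "
      f"(true: {delay_samples / fs * 1e3:.2f} ms)")
```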
Some Integrated Squared Error Procedures for Multivariate Normal Data,
1986-01-01
(a linear regression or experimental design model). Our procedures have also been used widely on non-linear models, but we do not address non-linear … goodness of fit, outliers, influence functions, experimental design, cluster analysis, robustness … structured data such as multivariate experimental designs. Several illustrations are provided.
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
Scatter of X-rays on polished surfaces
NASA Technical Reports Server (NTRS)
Hasinger, G.
1981-01-01
In investigating the dispersion properties of telescope mirrors used in X-ray astronomy, the slight scattering of X-rays by statistically rough surfaces was examined. The mathematics and geometry of scattering theory are described. The measurement test assembly is described and results of measurements on samples of plane mirrors are given. Measurement results are evaluated. The direct beam, the convolution of the direct beam with the scattering halo, curve fitting by the method of least squares, various autocorrelation functions, results of the fitting procedure for slight scattering, and deviations in the kernel of the scattering distribution are presented. A procedure for quality testing of mirror systems through diagnosis of rough surfaces is described.
Direct conversion of rheological compliance measurements into storage and loss moduli
NASA Astrophysics Data System (ADS)
Evans, R. M. L.; Tassieri, Manlio; Auhl, Dietmar; Waigh, Thomas A.
2009-07-01
We remove the need for Laplace/inverse-Laplace transformations of experimental data, by presenting a direct and straightforward mathematical procedure for obtaining frequency-dependent storage and loss moduli [G'(ω) and G''(ω), respectively] from time-dependent experimental measurements. The procedure is applicable to ordinary rheological creep (stress-step) measurements, as well as all microrheological techniques, whether they access a Brownian mean-square displacement or a forced compliance. Data can be substituted directly into our simple formula, thus eliminating traditional fitting and smoothing procedures that disguise relevant experimental noise.
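A sketch of the conversion, assuming the published piecewise-linear formula is as reconstructed below (verify against the paper before use); the Maxwell-fluid example is invented for validation:

```python
import numpy as np

def moduli_from_creep(t, J, J0=0.0, eta=np.inf, omega=None):
    """Convert creep compliance samples (t_k, J_k) into G'(w) and G''(w),
    treating J(t) as piecewise linear, following Evans et al. (2009).

    J0  -- compliance extrapolated to t = 0
    eta -- steady-flow viscosity (np.inf if the material does not flow)
    """
    if omega is None:
        omega = np.logspace(np.log10(1.0 / t[-1]), np.log10(1.0 / t[0]), 60)
    iw = 1j * omega
    # iw/G*(w) = iw*J0 + (1 - exp(-iw*t1))*(J1 - J0)/t1 + exp(-iw*tN)/eta
    #            + sum_k (Jk - J_{k-1})/(tk - t_{k-1}) * (exp(-iw*t_{k-1}) - exp(-iw*tk))
    S = iw * J0 + (1.0 - np.exp(-iw * t[0])) * (J[0] - J0) / t[0] \
        + np.exp(-iw * t[-1]) / eta
    grad = np.diff(J) / np.diff(t)                       # piecewise-linear slopes
    S += np.sum(grad[None, :] * (np.exp(-iw[:, None] * t[:-1][None, :])
                                 - np.exp(-iw[:, None] * t[1:][None, :])), axis=1)
    G = iw / S                                           # complex modulus G*(w)
    return omega, G.real, G.imag                         # w, G', G''

# Example: single-mode Maxwell fluid, J(t) = (1 + t/tau)/G0, so eta = G0*tau
tau, G0 = 1.0, 10.0
t = np.logspace(-3, 2, 200)
J = (1.0 + t / tau) / G0
w, Gp, Gpp = moduli_from_creep(t, J, J0=1.0 / G0, eta=G0 * tau)
```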
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms.
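The 3-stage hybrid procedure can be sketched with SciPy stand-ins: differential evolution for the global stage (the paper used particle swarm), Powell's method for the derivative-free polish, and Levenberg-Marquardt in place of Gauss-Marquardt-Levenberg regression. The Freundlich isotherm and data below are invented:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize, least_squares

# Invented sorption data: aqueous conc. C (mg/L) vs sorbed conc. q (mg/kg)
C = np.array([0.05, 0.2, 0.8, 3.0, 12.0, 45.0])
q = np.array([3.1, 8.2, 21.0, 52.0, 118.0, 260.0])

def residuals(p):
    Kf, n = p
    return q - Kf * C**n                     # Freundlich isotherm: q = Kf * C^n

wssr = lambda p: np.sum(residuals(p) ** 2)   # weighted SSR with unit weights

# Stage 1: global search (DE here; the paper used particle swarm optimization)
g = differential_evolution(wssr, bounds=[(1e-3, 1e3), (0.1, 1.5)], seed=0)
# Stage 2: derivative-free local polish (Powell, as in the paper)
p2 = minimize(wssr, g.x, method="Powell")
# Stage 3: gradient-based non-linear regression (L-M, standing in for GML)
fit = least_squares(residuals, p2.x, method="lm")
print(fit.x, wssr(fit.x))
```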
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that, given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can be easily applied to similar problems.
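A sketch of the described escalation logic, with SciPy's Newton-CG as the locally convergent stage and differential evolution standing in for the nature-inspired global methods (the Gaussian data and initial-guess heuristic are invented):

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

rng = np.random.default_rng(4)
x = np.linspace(-5, 5, 200)
y = 2.0 * np.exp(-0.5 * ((x - 0.7) / 1.2) ** 2) + rng.normal(0, 0.05, x.size)

def chi2_and_grad(p):
    """Chi-square of a Gaussian fit and its analytic gradient."""
    a, mu, sig = p
    g = np.exp(-0.5 * ((x - mu) / sig) ** 2)
    r = y - a * g
    dfda, dfdmu = g, a * g * (x - mu) / sig**2
    dfdsig = a * g * (x - mu) ** 2 / sig**3
    return np.sum(r**2), -2 * np.array([r @ dfda, r @ dfdmu, r @ dfdsig])

p0 = [y.max(), x[np.argmax(y)], 1.0]            # data-derived initial guess
res = minimize(chi2_and_grad, p0, jac=True, method="Newton-CG")

# Escalate to a globally convergent method only if the fast local fit fails
if not res.success:
    bounds = [(0, 2 * y.max()), (x[0], x[-1]), (0.05, x[-1] - x[0])]
    res = differential_evolution(lambda p: chi2_and_grad(p)[0], bounds, seed=0)
print(res.x)
```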
Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data
NASA Technical Reports Server (NTRS)
1981-01-01
Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as the general iterative nonlinear least squares procedure used to obtain estimates of parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) altitude or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
Least-Squares Self-Calibration of Imaging Array Data
NASA Technical Reports Server (NTRS)
Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.
2004-01-01
When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least-squares error in the fitting or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least-squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least-squares polynomial in two steps. First, the data points are least-squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least-squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least-squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100-degree polynomial. All computations in the program are carried out under double precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
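The tolerance-driven behavior described above is easy to mirror in a few lines; this sketch uses np.polyfit rather than the program's orthogonal factorial polynomials and Stirling-number reduction:

```python
import numpy as np

def aklsqf_like_fit(y, tol, max_degree=100):
    """Increase the polynomial degree until the RMS least-squares error meets tol.

    Mirrors the described AKLSQF behavior for uniformly spaced data (the
    original used orthogonal factorial polynomials; np.polyfit suffices here).
    """
    x = np.arange(len(y), dtype=float)
    for deg in range(1, max_degree + 1):
        coef = np.polyfit(x, y, deg)
        err = np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2))
        if err <= tol:
            break
    return coef, deg, err

y = np.sin(np.linspace(0, np.pi, 50))
coef, deg, err = aklsqf_like_fit(y, tol=1e-6)
print(deg, err)   # a modest-degree polynomial already fits a half sine well
```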
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process that reduces the noise sensitivity of the computation.
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on maximizing a function that majorizes WLS loss function. (Author/SLD)
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions, and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
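A hedged sketch of this kind of band modelling (not the authors' code or their automated initialization): two Gaussian absorption bands on a flat continuum fitted with SciPy's Levenberg-Marquardt driver. Band positions, depths, and the noise level are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, c1, w1, a2, c2, w2):
    # Absorption bands modelled as negative Gaussians on a unit continuum
    return 1 - a1 * np.exp(-(x - c1)**2 / (2 * w1**2)) \
             - a2 * np.exp(-(x - c2)**2 / (2 * w2**2))

rng = np.random.default_rng(3)
wav = np.linspace(0.8, 2.4, 400)                  # wavelength (micron)
truth = (0.3, 1.0, 0.08, 0.2, 2.0, 0.12)
refl = two_gauss(wav, *truth) + rng.normal(0, 0.005, wav.size)

p0 = (0.2, 1.05, 0.1, 0.15, 1.95, 0.1)            # initial guesses (automated in practice)
popt, pcov = curve_fit(two_gauss, wav, refl, p0=p0, method="lm")  # Levenberg-Marquardt
print(popt, np.sqrt(np.diag(pcov)))               # band parameters and 1-sigma errors
```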
Cubison, M. J.; Jimenez, J. L.
2015-06-05
Least-squares fitting of overlapping peaks is often needed to separately quantify ions in high-resolution mass spectrometer data. A statistical simulation approach is used to assess the statistical precision of the retrieved peak intensities. The sensitivity of the fitted peak intensities to statistical noise due to ion counting is probed for synthetic data systems consisting of two overlapping ion peaks whose positions are pre-defined and fixed in the fitting procedure. The fitted intensities are sensitive to imperfections in the m/Q calibration. These propagate as a limiting precision in the fitted intensities that may greatly exceed the precision arising from counting statistics. The precision on the fitted peak intensity falls into one of three regimes. In the "counting-limited regime" (regime I), above a peak separation χ ~ 2 to 3 half-widths at half-maximum (HWHM), the intensity precision is similar to that due to counting error for an isolated ion. For smaller χ and higher ion counts (~ 1000 and higher), the intensity precision rapidly degrades as the peak separation is reduced ("calibration-limited regime", regime II). Alternatively, for χ < 1.6 but lower ion counts (e.g. 10–100), the intensity precision is dominated by the additional ion count noise from the overlapping ion and is not affected by the imprecision in the m/Q calibration ("overlapping-limited regime", regime III). The transition between the counting and m/Q calibration-limited regimes is shown to be weakly dependent on resolving power and data spacing and can thus be approximated by a simple parameterisation based only on peak intensity ratios and separation. A simple equation can be used to find potentially problematic ion pairs when evaluating results from fitted spectra containing many ions. Longer integration times can improve the precision in regimes I and III, but a given ion pair can only be moved out of regime II through increased spectrometer resolving power. As a result, studies presenting data obtained from least-squares fitting procedures applied to mass spectral peaks should explicitly consider these limits on statistical precision.
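A minimal Monte Carlo sketch of this style of precision assessment, with invented numbers: two Gaussian peaks at fixed, known positions are repeatedly resampled with Poisson counting noise and their intensities refitted (an amplitude-only linear fit here), so the spread of the fitted intensities estimates the counting-driven precision of regimes I and III. The m/Q calibration error that drives regime II is not simulated.

```python
import numpy as np
from scipy.optimize import nnls   # amplitudes only, so the fit is linear

rng = np.random.default_rng(4)
mq = np.linspace(-4, 4, 160)              # m/Q axis in HWHM-scaled units
hwhm = 1.0
sep = 1.5 * hwhm                          # peak separation chi, illustrative
sigma = hwhm / np.sqrt(2 * np.log(2))
basis = np.column_stack([np.exp(-(mq + sep / 2)**2 / (2 * sigma**2)),
                         np.exp(-(mq - sep / 2)**2 / (2 * sigma**2))])
true_amp = np.array([100.0, 100.0])       # expected counts per point at each peak

fits = []
for _ in range(500):                      # repeat with fresh ion-counting noise
    counts = rng.poisson(basis @ true_amp)
    amp, _ = nnls(basis, counts.astype(float))
    fits.append(amp)
fits = np.array(fits)
print(fits.mean(axis=0), fits.std(axis=0) / fits.mean(axis=0))  # relative precision
```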
Wrong Answers on Multiple-Choice Achievement Tests: Blind Guesses or Systematic Choices?
ERIC Educational Resources Information Center
Powell, J. C.
A multi-faceted model for the selection of answers for multiple-choice tests was developed from the findings of a series of exploratory studies. This model implies that answer selection should be curvilinear. A series of models were tested for fit using the chi square procedure. Data were collected from 359 elementary school students ages 9-12.…
Code of Federal Regulations, 2014 CFR
2014-07-01
... catalyst. Calculate the least-squared best-fit line through the data. For the data set to be useful for this purpose the data should have an approximately common intercept between 0 and 4000 miles. See the... data between one- and two-times the standard. 2. Estimate the value of R and calculate the effective...
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
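A compact sketch of the procedure the abstract describes, for a single exponential y = a·exp(-b·t): nominal estimates come from a linear fit to log(y), and the Taylor-series linearization yields a correction solved by linear least squares each cycle. Names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 5, 50)
y = 2.5 * np.exp(-1.3 * t) + rng.normal(0, 0.02, t.size)

# Initial nominal estimates from a linear fit to log(y) (valid while y > 0)
mask = y > 0
slope, intercept = np.polyfit(t[mask], np.log(y[mask]), 1)
a, b = np.exp(intercept), -slope

for _ in range(20):                      # Gauss-Newton: linearize, solve, correct
    f = a * np.exp(-b * t)
    J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])  # df/da, df/db
    delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)               # correction
    a, b = a + delta[0], b + delta[1]
    if np.max(np.abs(delta)) < 1e-10:    # predetermined convergence criterion
        break
print(a, b)
```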
Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J
2014-01-01
Fitting parameter sets of non-linear equations in cardiac single cell ionic models to reproduce experimental behaviour is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the case of a single cell, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm, at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the AP that is to be fitted) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fit parameters was also greatly reduced, with many parameters showing an order of magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserving tissue-level behaviour, and should be incorporated.
A method to estimate statistical errors of properties derived from charge-density modelling
Lecomte, Claude
2018-01-01
Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix derived from the least-squares refinement. This 'SSD methodology' procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points, and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are now also available through this procedure. The method is exemplified with the charge density of the compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
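A toy sketch of the SSD idea (not MoPro itself): draw parameter shifts consistent with the refinement's variance–covariance matrix via its Cholesky factor, propagate each deviating parameter set through a derived property, and take the sample standard deviation. The parameter vector, covariance, and property function below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical refined parameters and their variance-covariance matrix
p_hat = np.array([1.20, 0.35, -0.10])
cov = np.array([[4e-4, 1e-5, 0.0],
                [1e-5, 9e-4, 2e-5],
                [0.0,  2e-5, 1e-4]])

L = np.linalg.cholesky(cov)                  # shifts respect the covariance structure

def derived_property(p):                     # stand-in for any model-derived property
    return p[0] * np.exp(p[1]) + p[2] ** 2

samples = [derived_property(p_hat + L @ rng.standard_normal(3)) for _ in range(5000)]
print(np.std(samples, ddof=1))               # sample standard deviation of the property
```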
NASA Astrophysics Data System (ADS)
Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej
2017-04-01
The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and teeth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, back-calculating temperatures, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than the causal models, with 12-65% lower mean absolute errors. Moreover, the least-squares regression (LSM) model is more appropriate than Reduced Major Axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. The direct comparison of paleotemperature results from different laboratories and studies may only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016. Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
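The difference between the two regression strategies is easy to reproduce. In the hedged sketch below (synthetic data, invented calibration numbers), the inverted fit regresses δ18O on temperature and algebraically inverts the line, while the transposed fit regresses temperature on δ18O directly; the mean absolute errors of the back-calculated temperatures are then compared.

```python
import numpy as np

rng = np.random.default_rng(7)
T = rng.uniform(0, 25, 60)                          # "true" air temperatures
d18O = -13.0 + 0.5 * T + rng.normal(0, 0.4, 60)     # synthetic proxy relation

# Inverted (causal) fit: regress d18O on T, then solve for T
b, a = np.polyfit(T, d18O, 1)                       # slope, intercept
T_inv = (d18O - a) / b

# Transposed (predictive) fit: regress T directly on d18O
d, c = np.polyfit(d18O, T, 1)
T_tra = c + d * d18O

print("MAE inverted:  ", np.mean(np.abs(T_inv - T)))
print("MAE transposed:", np.mean(np.abs(T_tra - T)))   # typically the smaller error
```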
Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.
2017-01-01
We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
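A small sketch of the core fitting step described above (not flowQB itself): a weighted quadratic model of variance against mean intensity, with the parameter covariance supplying standard errors and weighted residuals flagging bad points. The bead values and weights are invented, and the covariance formula is only exact when the weights equal inverse variances.

```python
import numpy as np

# Hypothetical bead data: mean intensity and measured variance per bead level
mean_I = np.array([50.0, 200.0, 800.0, 3000.0, 12000.0, 48000.0])
var_I  = np.array([1.2e3, 3.9e3, 1.5e4, 5.6e4, 2.3e5, 9.8e5])
w = 1.0 / var_I**2          # illustrative weights (variance of a variance ~ var^2)

# Weighted quadratic model: var = b0 + b1*I + b2*I^2
X = np.column_stack([np.ones_like(mean_I), mean_I, mean_I**2])
W = np.diag(w)
cov = np.linalg.inv(X.T @ W @ X)           # parameter covariance if w = 1/sigma^2
beta = cov @ X.T @ W @ var_I

se = np.sqrt(np.diag(cov))                 # standard errors of the fitted parameters
resid = (var_I - X @ beta) * np.sqrt(w)    # weighted residuals flag bad data points
print(beta, se, resid)
```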
Code of Federal Regulations, 2013 CFR
2013-07-01
... for each catalyst. Calculate the least-squared best-fit line through the data. For the data set to be useful for this purpose the data should have an approximately common intercept between 0 and 4000 miles... testing yields data between one- and two-times the standard. 2. Estimate the value of R and calculate the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... for each catalyst. Calculate the least-squared best-fit line through the data. For the data set to be useful for this purpose the data should have an approximately common intercept between 0 and 4000 miles... testing yields data between one- and two-times the standard. 2. Estimate the value of R and calculate the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... for each catalyst. Calculate the least-squared best-fit line through the data. For the data set to be useful for this purpose the data should have an approximately common intercept between 0 and 4000 miles... testing yields data between one- and two-times the standard. 2. Estimate the value of R and calculate the...
Code of Federal Regulations, 2012 CFR
2012-07-01
... for each catalyst. Calculate the least-squared best-fit line through the data. For the data set to be useful for this purpose the data should have an approximately common intercept between 0 and 4000 miles... testing yields data between one- and two-times the standard. 2. Estimate the value of R and calculate the...
Performance of the Generalized S-X² Item Fit Index for the Graded Response Model
ERIC Educational Resources Information Center
Kang, Taehoon; Chen, Troy T.
2011-01-01
The utility of Orlando and Thissen's (2000, 2003) S-X² fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X² in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…
Takehira, Rieko; Momose, Yasunori; Yamamura, Shigeo
2010-10-15
A pattern-fitting procedure using an X-ray diffraction pattern was applied to the quantitative analysis of a binary system of crystalline pharmaceuticals in tablets. Orthorhombic crystals of isoniazid (INH) and mannitol (MAN) were used for the analysis. Tablets were prepared under various compression pressures using a direct compression method with various compositions of INH and MAN. Assuming that the X-ray diffraction pattern of the INH-MAN system consists of diffraction intensities from the respective crystals, the observed diffraction intensities were fitted to an analytic expression based on X-ray diffraction theory and separated into two intensities from the INH and MAN crystals by a nonlinear least-squares procedure. After separation, the contents of INH were determined by using the optimized normalization constants for INH and MAN. A correction parameter including all the factors that are beyond experimental control was required for quantitative analysis without a calibration curve. The pattern-fitting procedure made it possible to determine crystalline phases in the range of 10-90% (w/w) of INH content. Further, certain characteristics of the crystals in the tablets, such as the preferred orientation, size of crystallites, and lattice disorder, were determined simultaneously. This method can be applied to compounds whose crystal structures are known. It is a potentially powerful tool for the quantitative phase analysis and characterization of crystals in tablets and powders using X-ray diffraction patterns. Copyright 2010 Elsevier B.V. All rights reserved.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2005-12-01
In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low order polynomial to an entire multi-year time series in one step. This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1.5 months. Instabilities during large time gaps in the data are suppressed by introducing an expectation of minimum roughness on the fitted time series. Our next significant computational step involves a constrained least squares fit to the observed NDVI data. Residuals between the observed NDVI value and the predicted starting model are computed, and the inverse of these residuals provides the weights for a weighted least squares analysis whereby a set of annual eighth-order splines is fit to the 7 years of NDVI data. Although a series of independent eighth-order annual functionals over a period of 7 years is intrinsically unstable when there are significant data gaps, the splined versions for this specific application are quite stable due to explicit continuity conditions on the values and derivatives of the functionals across contiguous years, as well as a priori constraints on the predicted values vis-a-vis the assumed initial model. Our procedure allows us to robustly interpolate original unequally-spaced NDVI data with a new time series having the most appropriate, user-defined time base. We apply this approach to the temporal behavior of vegetation in our 150 x 150 km study area. Such a small area, being so rich in vegetation diversity, is particularly useful to view in map form and by animated annual and multi-year time sequences, since the interrelation between phenology, topography and specific usage patterns becomes clear.
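The "starting model" step lends itself to a compact sketch: one linear least-squares solve against a design matrix holding a low-order polynomial trend plus the 4 cosine and 4 sine terms at 12, 6, 3, and 1.5 month periods. The robust reweighting, outlier handling, and spline stages described above are omitted here; the data are synthetic.

```python
import numpy as np

def starting_model_design(t):
    """Design matrix: low-order polynomial trend plus fixed-period harmonics."""
    periods = [12.0, 6.0, 3.0, 1.5]                  # months
    cols = [np.ones_like(t), t, t**2]                # polynomial trend terms
    for P in periods:
        cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
    return np.column_stack(cols)

rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0, 84, 300))                 # unevenly sampled times (months)
ndvi = 0.3 + 0.001 * t + 0.15 * np.cos(2 * np.pi * t / 12) \
       + rng.normal(0, 0.03, t.size)

A = starting_model_design(t)
coef, *_ = np.linalg.lstsq(A, ndvi, rcond=None)      # one-step joint fit
fitted = A @ coef                                    # the "starting model" curve
```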
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizukami, Wataru, E-mail: wataru.mizukami@bristol.ac.uk; Tew, David P., E-mail: david.tew@bristol.ac.uk; Habershon, Scott, E-mail: S.Habershon@warwick.ac.uk
2014-10-14
We present a new approach to semi-global potential energy surface fitting that uses the least absolute shrinkage and selection operator (LASSO) constrained least squares procedure to exploit an extremely flexible form for the potential function, while at the same time controlling the risk of overfitting and avoiding the introduction of unphysical features such as divergences or high-frequency oscillations. Drawing from a massively redundant set of overlapping distributed multi-dimensional Gaussian functions of inter-atomic separations, we build a compact full-dimensional surface for malonaldehyde, fit to explicitly correlated coupled cluster CCSD(T)(F12*) energies with a root-mean-square deviation accuracy of 0.3%-0.5% up to 25 000 cm⁻¹ above equilibrium. Importance-sampled diffusion Monte Carlo calculations predict zero point energies for malonaldehyde and its deuterated isotopologue of 14 715.4(2) and 13 997.9(2) cm⁻¹ and hydrogen transfer tunnelling splittings of 21.0(4) and 3.2(4) cm⁻¹, respectively, which are in excellent agreement with the experimental values of 21.583 and 2.915(4) cm⁻¹.
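A one-dimensional toy version of the fitting strategy (a Morse-like curve standing in for the full malonaldehyde surface): a massively redundant Gaussian basis over a grid of centers and widths, with scikit-learn's Lasso shrinking most coefficients to zero. The alpha value is an arbitrary illustration, not the paper's setting.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
r = np.linspace(0.5, 3.0, 200)                 # a single inter-atomic coordinate
V = (1 - np.exp(-1.5 * (r - 1.0)))**2          # Morse-like "ab initio" energies

# Massively redundant basis: Gaussians on a grid of centers, several widths
centers = np.linspace(0.4, 3.1, 60)
widths = [0.05, 0.1, 0.2, 0.4]
Phi = np.hstack([np.exp(-((r[:, None] - centers[None, :])**2) / (2 * w**2))
                 for w in widths])             # 200 points x 240 basis functions

fit = Lasso(alpha=1e-5, max_iter=50000).fit(Phi, V)
print((np.abs(fit.coef_) > 1e-8).sum(), "basis functions survive the shrinkage")
print(np.sqrt(np.mean((fit.predict(Phi) - V)**2)))   # RMS deviation of the fit
```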
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
The cancellous bone multiscale morphology-elasticity relationship.
Agić, Ante; Nikolić, Vasilije; Mijović, Budimir
2006-06-01
The effective property relations of cancellous bone are analysed across two scales: the properties of a representative volume element at the microscale, and a statistical measure of trabecular trajectory orientation at the mesoscale. Anisotropy of the microstructure is described by a fabric tensor measure, with the trajectory orientation tensor serving as the bridging-scale connection. The scattered measured data (elastic modulus, trajectory orientation, apparent density) from compression tests are fitted by a stochastic interpolation procedure. The engineering constants of the elasticity tensor are estimated by a least-squares fitting procedure in multidimensional space using the Nelder-Mead simplex method. The multiaxial failure surface in strain space is constructed and interpolated by a modified super-ellipsoid.
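As a minimal illustration of a Nelder-Mead least-squares estimation of this sort (a scalar power-law modulus-density relation standing in for the full elasticity-tensor fit), with invented data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
rho = rng.uniform(0.1, 0.6, 40)                          # apparent density samples
E_meas = 8.0 * rho**1.8 * (1 + rng.normal(0, 0.1, 40))   # scattered moduli (GPa)

def sse(p):                                              # least-squares objective
    c, n = p
    return np.sum((E_meas - c * rho**n) ** 2)

res = minimize(sse, x0=[5.0, 1.5], method="Nelder-Mead")
print(res.x)                                             # fitted power-law constants
```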
Yamamura, Shigeo; Momose, Yasunori
2003-06-18
The purpose of this study is to characterize monoclinic crystals in tablets using X-ray powder diffraction data and to evaluate the deformation of the crystals during compression. Monoclinic crystals of acetaminophen and benzoic acid were used as the samples. The observed X-ray diffraction intensities were fitted to an analytic expression, and the fitting parameters, such as the lattice parameters, peak-width parameters, preferred orientation parameter, and peak asymmetry parameter, were optimized by a non-linear least-squares procedure. The Gauss and March distribution functions were used to correct for the preferred orientation of crystallites in the tablet. The March function performed better in correcting the modification of diffraction intensity by preferred orientation of crystallites, suggesting that the crystallites in the tablets had a fiber texture with axial orientation. Although a broadening of diffraction peaks was observed in acetaminophen tablets with an increase of compression pressure, little broadening was observed in the benzoic acid tablets. These results suggest that acetaminophen consolidates by fragmentation of crystalline particles, whereas benzoic acid consolidates by plastic deformation followed by rearrangement of molecules during compression. The pattern-fitting procedure is a superior method for characterizing crystalline drugs of monoclinic crystals in tablets, as with the orthorhombic isoniazid and mannitol crystals reported in the previous paper.
Goodness-of-fit tests for open capture-recapture models
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1985-01-01
General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
Assessing the fit of site-occupancy models
MacKenzie, D.I.; Bailey, L.L.
2004-01-01
Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied, when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method where a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, hence estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers toward making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
Comparison of structural and least-squares lines for estimating geologic relations
Williams, G.P.; Troutman, B.M.
1990-01-01
Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
Thermal Property Measurement of Semiconductor Melt using Modified Laser Flash Method
NASA Technical Reports Server (NTRS)
Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalla N.; Su, Ching-Hua; Lehoczky, Sandor L.
2003-01-01
This study extends the standard laser flash method to measure multiple thermal properties of semiconductor melts. The modified method can determine the thermal diffusivity, thermal conductivity, and specific heat capacity of the melt simultaneously. The transient heat transfer process in the melt and its quartz container was numerically studied in detail. A fitting procedure based on numerical simulation results and the least root-mean-square error fit to the experimental data was used to extract the values of specific heat capacity, thermal conductivity, and thermal diffusivity. This modified method is a step forward from the standard laser flash method, which is usually used to measure the thermal diffusivity of solids. The results for tellurium (Te) at 873 K (specific heat capacity 300.2 joules per kilogram kelvin, thermal conductivity 3.50 watts per meter kelvin, thermal diffusivity 2.04 × 10⁻⁶ square meters per second) are within the range reported in the literature. The uncertainty analysis showed the quantitative effect of the sample geometry, the transient temperature measured, and the energy of the laser pulse.
Year-round measurements of ozone at 66 deg S with a visible spectrometer
NASA Technical Reports Server (NTRS)
Roscoe, Howard K.; Oldham, Derek J.; Squires, James A. C.; Pommereau, Jean-Pierre; Goutail, Florence; Sarkissian, Alain
1994-01-01
In March 1990, a zenith-sky UV-visible spectrometer of the design 'Systeme Automatique d'Observation Zenithal' (SAOZ) was installed at Faraday in Antarctica (66.3 deg S, 64.3 deg W). SAOZ records spectra between 290 and 600 nm during daylight. Its analysis program fits laboratory spectra of constituents, at various wavelengths, to the differential of the ratio of the observed spectrum and a reference spectrum. The least-squares fitting procedure minimizes the sum-of-squares of residuals. Ozone is deduced from absorption in its visible bands between 500 and 560 nm. The fortunate colocation of this SAOZ with the well-calibrated Dobson at Faraday has allowed us to examine the calibration of the zero of the SAOZ, difficult at visible wavelengths because of the small depth of absorption. Here we describe recent improvements and limitations to this calibration, and discuss SAOZ measurements of ozone during winter in this important location at the edge of the Antarctic vortex.
A nonparametric smoothing method for assessing GEE models with longitudinal binary data.
Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu
2008-09-30
Studies involving longitudinal binary responses are widely applied in health and biomedical sciences research and are frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on a nonparametric smoothing approach for assessing the adequacy of GEE-fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are discussed through simulation studies. The testing procedure is demonstrated with two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.
Prediction Analysis for Measles Epidemics
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi
2003-12-01
A newly devised procedure of prediction analysis, which is a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, was proposed. This method was applied to time series data of measles case notification in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notification quantitatively. Some discussions including a predictability of chaotic time series are presented.
The Lanchester square-law model extended to a (2,2) conflict
NASA Astrophysics Data System (ADS)
Colegrave, R. K.; Hyde, J. M.
1993-01-01
A natural extension of the Lanchester (1,1) square-law model is the (M,N) linear model in which M forces oppose N forces with constant attrition rates. The (2,2) model is treated from both direct and inverse viewpoints. The inverse problem means that the model is to be fitted to a minimum number of observed force levels, i.e. the attrition rates are to be found from the initial force levels together with the levels observed at two subsequent times. An approach based on Hamiltonian dynamics has enabled the authors to derive a procedure for solving the inverse problem, which is readily computerized. Conflicts in which participants unexpectedly rally or weaken must be excluded.
Automatic evaluations and exercise setting preference in frequent exercisers.
Antoniewicz, Franziska; Brand, Ralf
2014-12-01
The goals of this study were to test whether exercise-related stimuli can elicit automatic evaluative responses and whether automatic evaluations reflect exercise setting preference in highly active exercisers. An adapted version of the Affect Misattribution Procedure was employed. Seventy-two highly active exercisers (mean age 26 ± 9.03 years; 43% female) were subliminally primed (7 ms) with pictures depicting typical fitness center scenarios or gray rectangles (control primes). After each prime, participants consciously evaluated the "pleasantness" of a Chinese symbol. Controlled evaluations were measured with a questionnaire and were more positive in participants who regularly visited fitness centers than in those who reported avoiding this exercise setting. Only center exercisers gave automatic positive evaluations of the fitness center setting (partial eta squared = .08). It is proposed that a subliminal Affect Misattribution Procedure paradigm can elicit automatic evaluations of exercising and that, in highly active exercisers, these evaluations play a role in decisions about the exercise setting rather than the amounts of physical exercise. Findings are interpreted in terms of a dual systems theory of social information processing and behavior.
Direct Mask Overlay Inspection
NASA Astrophysics Data System (ADS)
Hsia, Liang-Choo; Su, Lo-Soun
1983-11-01
In this paper, we present a mask inspection methodology and procedure that involves direct X-Y measurements. A group of dice is selected for overlay measurement; four measurement targets were laid out in the kerf of each die. The measured coordinates are then fitted to either a "historical" grid, which reflects the individual tool bias, or to an ideal grid in a least-squares fashion. Measurements are done using a Nikon X-Y laser interferometric measurement system, which provides a reference grid. The stability of the measurement system is essential. We then apply appropriate statistics to the residual after the fit to determine the overlay performance. Statistical methods play an important role in the product disposition. The acceptance criterion is, however, a compromise between the cost of mask making and the final device yield. In order to satisfy the demand on mask houses for mask quality and high volume, mixing lithographic tools in mask making has become more popular, in particular, mixing optical and E-beam tools. In this paper, we also discuss the inspection procedure for mixing different lithographic tools.
Process Simulation and Modeling for Advanced Intermetallic Alloys.
1994-06-01
...calorimetry, using a Stanton Redfera/Omnitherm DSC 1500 thermal analysis system, was the primary experimental tool for this investigation... samples during both heating and cooling in a high-purity argon atmosphere at a rate of 20 K/min. The DSC instrumental baseline was obtained using both empty... that is capable of fitting the observed data to given cell structures using a least squares procedure. RESULTS: The results of the DSC observations are...
Brown, Angus M
2006-04-01
The objective of the present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fitted and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
NASA Astrophysics Data System (ADS)
Kiamehr, Ramin
2016-04-01
A one-arc-second high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used in different disciplines and applications by geoscientists. They are essential data in geoid computation procedures, e.g., to determine the topographic, downward continuation (DWC) and atmospheric corrections, and they can also be used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface, and it is subject to errors. The most important part of the error can come from bias in the height datum. Furthermore, the accuracy of a DEM is usually published in a global sense, and it is important to have an estimate of the accuracy in the area of interest before using it. One of the best ways to obtain a reasonable indication of a DEM's accuracy is to compare its heights with precise national GPS/levelling data, by determining the root-mean-square (RMS) of the fit between the DEM and levelling heights. The errors in the DEM can be approximated by different kinds of functions in order to fit the DEM to a set of GPS/levelling data using least squares adjustment. In the current study, several models ranging from a simple linear regression to a seven-parameter similarity transformation model are used in the fitting procedure. The seven-parameter model gives the best fit, with the minimum standard deviation, among all selected DEMs in the study area. Based on 35 precise GPS/levelling points, we obtain an RMS of the seven-parameter fit for the SRTM DEM of 5.5 m. The corrective surface model is generated from the transformation parameters and added to the original SRTM model. The fit of the combined model is then assessed again with independent GPS/levelling data. The results show a great improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
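A hedged sketch of the corrective-surface step: fit a parametric datum-shift model to DEM-minus-GPS/levelling height differences by least squares and inspect the post-fit standard deviation. A four-parameter model is shown for brevity (the study's seven-parameter similarity model adds three more terms); the coordinates and offsets below are fabricated.

```python
import numpy as np

# phi, lam in radians; dh = h_DEM - h_GPS/levelling at control points (hypothetical)
rng = np.random.default_rng(11)
phi = np.radians(rng.uniform(25, 40, 35))
lam = np.radians(rng.uniform(44, 63, 35))
dh = 4.0 + 2.0 * np.cos(phi) * np.cos(lam) + rng.normal(0, 1.0, 35)

# 4-parameter datum-shift model; a 7-parameter version appends three more columns
A = np.column_stack([np.ones_like(phi),
                     np.cos(phi) * np.cos(lam),
                     np.cos(phi) * np.sin(lam),
                     np.sin(phi)])
x, *_ = np.linalg.lstsq(A, dh, rcond=None)
resid = dh - A @ x
print(x, resid.std(ddof=4))        # fit parameters and post-fit standard deviation
```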
AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods
NASA Technical Reports Server (NTRS)
Crowley, J. K.; Clark, R. N.
1992-01-01
Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990, and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are fitted, over selected wavelength intervals, to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times the band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite minerals, as well as spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method for curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants, yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
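Once the samples sit on an equal time grid, classic Prony fitting reduces to two linear solves plus a polynomial root-finding, as the sketch below shows (a generic textbook implementation, not the paper's I.G.D.S.-based program):

```python
import numpy as np

def prony(y, dt, m):
    """Classic Prony fit: y(t) ~ sum_k a_k * exp(b_k * t), equally spaced samples."""
    n = len(y)
    # Step 1: linear-prediction coefficients from the difference equation
    A = np.column_stack([y[m - j: n - j] for j in range(1, m + 1)])
    p, *_ = np.linalg.lstsq(A, y[m:], rcond=None)
    # Step 2: roots of the characteristic polynomial give the exponents directly
    z = np.roots(np.concatenate(([1.0], -p)))
    b = np.log(z.astype(complex)) / dt
    # Step 3: amplitudes from a simple linear least squares (Vandermonde) solve
    t = dt * np.arange(n)
    V = np.exp(np.outer(t, b))
    a, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return a, b

# Example: recover two decay rates from equally spaced data
t = np.linspace(0, 4, 81)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t)
a, b = prony(y, t[1] - t[0], 2)
print(np.real_if_close(a), np.real_if_close(b))  # amplitudes {2.0, 0.5}, rates {-1, -3}
```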
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 sec.
Quantitative hard x-ray phase contrast imaging of micropipes in SiC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohn, V. G.; Argunova, T. S.; Je, J. H., E-mail: jhje@postech.ac.kr
2013-12-15
Peculiarities of quantitative hard x-ray phase contrast imaging of micropipes in SiC are discussed. The micropipe is modelled as a hollow cylinder with an elliptical cross section. The major and minor diameters can be restored using a least squares fitting procedure, by comparing the experimental data, i.e. the profile across the micropipe axis, with profiles calculated from phase contrast theory. It is shown that one projection image gives information that does not allow a complete determination of the elliptical cross section if the orientation of the micropipe is not known. Another problem is the limited accuracy in estimating the diameters, partly because of the use of pink synchrotron radiation, which is necessary because a monochromatic beam's intensity is not sufficient to reveal the weak contrast from a very small object. The general problems of accuracy in estimating the two diameters using the least squares procedure are discussed. Two experimental examples are considered to demonstrate small as well as modest accuracies in estimating the diameters.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
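One common form of such a sample-size adjustment rescales only the noncentral (misfit) part of the statistic, since under misfit the chi-square grows roughly linearly with n beyond its degrees of freedom. A hedged sketch of that idea, not necessarily the exact function used in the study:

```python
def adjusted_chi_square(chi2, df, n, n_adj):
    """Rescale a test-of-fit chi-square from sample size n to n_adj.

    Assumes the noncentrality (misfit) component grows linearly with
    sample size, which is the usual motivation for such adjustments.
    """
    return df + (chi2 - df) * (n_adj / n)

# Example: chi2 = 480 on df = 24 with n = 21,000, evaluated as if n were 5,000
print(adjusted_chi_square(480.0, 24, 21_000, 5_000))
```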
Two-dimensional wavefront reconstruction based on double-shearing and least squares fitting
NASA Astrophysics Data System (ADS)
Liang, Peiying; Ding, Jianping; Zhu, Yangqing; Dong, Qian; Huang, Yuhua; Zhu, Zhen
2017-06-01
A two-dimensional wavefront reconstruction method based on double-shearing and least squares fitting is proposed in this paper. Four one-dimensional phase estimates of the measured wavefront, corresponding to the two shears and the two orthogonal directions, can be calculated from the differential phase, which solves the problem of the missing spectrum; the two-dimensional wavefront can then be reconstructed using the least squares method. Numerical simulations of the proposed algorithm are carried out to verify the feasibility of the method. The influence of noise generated from different shear amounts and different intensities on the accuracy of the reconstruction is studied and compared with the results from the algorithm based on single-shearing and least squares fitting. Finally, a two-grating lateral shearing interference experiment is carried out to verify the wavefront reconstruction algorithm based on double-shearing and least squares fitting.
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
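The representation-and-filtering half of this idea is easy to sketch with NumPy's Legendre module: expand the noisy trace on [-1, 1], keep a low-order set of coefficients, and evaluate the truncated series, which removes noise without the phase shift of a conventional lowpass filter. The degree and noise level below are arbitrary; the paper's fast parameter retrieval from the coefficients themselves is not reproduced here.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(12)
t = np.linspace(0, 1, 1000)
x = 2 * t - 1                                    # map time onto the Legendre domain
y = 1.5 * np.exp(-5 * t) + rng.normal(0, 0.05, t.size)

# The smooth exponential is captured by few Legendre coefficients, while white
# noise spreads over all orders, so truncating the expansion filters the noise.
coef = legendre.legfit(x, y, deg=12)
y_filtered = legendre.legval(x, coef)
print(np.sqrt(np.mean((y_filtered - 1.5 * np.exp(-5 * t))**2)))  # vs. clean signal
```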
CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries
NASA Technical Reports Server (NTRS)
Cutler, Andrew D.; Magnotti, Gaetano
2010-01-01
The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with the number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with noise and without it, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least squares fitting of the signal, as opposed to unweighted least squares fitting of the signal or of its square root, was shown to produce the least random error and to minimize bias error in the fitted parameters.
A New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Li, J.; Wan, Y.; Gao, X.
2012-07-01
With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges, and other structures. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighbor points and a surface fitting method based on moving least squares is proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighbor points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. A series of parallel sections obtained from a temporal series of fitted tunnel surfaces were then compared to analyse the deformation. Finally, the results of the approach in the z direction were compared with a fiber optic displacement sensor approach, and the results in the x and y directions were compared with TS; the comparison showed that the accuracy errors in the x, y, and z directions were about 1.5 mm, 2 mm, and 1 mm, respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
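The approach lends itself to a compact implementation. Below is a minimal Python sketch (not the Fortran-era programs the report describes; the function and variable names are ours) of fitting uniformly spaced data in a discretely orthonormalized polynomial basis, which is what makes the higher-order fits resistant to roundoff compared with solving the normal equations:

```python
import numpy as np

def discrete_orthonormal_fit(y, degree):
    """Least-squares polynomial fit of uniformly spaced data using a
    discrete orthonormal polynomial basis (Gram/Tchebycheff polynomials).

    Orthonormalizing the monomial basis over the sample points makes the
    coefficients independent and avoids the ill-conditioned normal
    equations of a conventional polynomial least-squares fit."""
    n = len(y)
    x = np.linspace(-1.0, 1.0, n)                  # uniform abscissae
    V = np.vander(x, degree + 1, increasing=True)  # monomial basis
    Q, _ = np.linalg.qr(V)                         # columns: orthonormal polynomials
    c = Q.T @ y                                    # independent coefficients
    return Q @ c, c                                # smoothed values, coefficients

# Example: noisy cubic, recovered with a degree-3 fit
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 101)
y = 2 - x + 0.5 * x**3 + 0.05 * rng.standard_normal(101)
smooth, coeffs = discrete_orthonormal_fit(y, 3)
```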
NASA Technical Reports Server (NTRS)
Chackerian, C., Jr.; Farrenq, R.; Guelachvili, G.; Rossetti, C.; Urban, W.
1984-01-01
Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least squares fitting procedure to obtain the ground electronic state electric-dipole-moment function of carbon monoxide valid in the range of nuclear oscillation (0.87 to 1.91 A) of about the V = 38th vibrational level. Mechanical anharmonicity intensity factors, H, are computed from this function for Delta V = 1, 2, 3, with V less than or equal to 38.
NASA Technical Reports Server (NTRS)
Chackerian, C., Jr.; Farrenq, R.; Guelachvili, G.; Rossetti, C.; Urban, W.
1984-01-01
Experimental intensity information is combined with numerically obtained vibrational wave functions in a nonlinear least-squares fitting procedure to obtain the ground electronic state electric dipole moment function of carbon monoxide valid in the range of nuclear oscillation (0.87-1.91 A) of about the V = 38th vibrational level. Vibrational transition matrix elements are computed from this function for Delta V = 1, 2, 3 with V not more than 38.
Sfakiotakis, Stelios; Vamvuka, Despina
2015-12-01
The pyrolysis of six waste biomass samples was studied and the fuels were kinetically evaluated. A modified independent parallel reactions scheme (IPR) and a distributed activation energy model (DAEM) were developed, and their validity was assessed and compared by checking their accuracy in fitting the experimental results, as well as their prediction capability under different experimental conditions. The pyrolysis experiments were carried out in a thermogravimetric analyzer, and a fitting procedure based on least squares minimization was performed simultaneously at different experimental conditions. A modification of the IPR model, considering dependence of the pre-exponential factor on heating rate, was proved to give better fit results for the same number of tuned kinetic parameters compared to the standard IPR model, and very good prediction results for stepwise experiments. The fit of calculated data to the experimental data using the developed DAEM model was also proved to be very good. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
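The criterion the abstract describes can be written down directly. A hedged sketch, assuming the common Gaussian-error form of AIC for a least squares loss (Ding and Davison's exact formulation may differ):

```python
import numpy as np

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares fit under the usual Gaussian-error form:
    AIC = n * ln(SSE / n) + 2k. Lower values indicate a better trade-off
    between goodness of fit and dimensionality (number of parameters k)."""
    n = residuals.size
    sse = float(np.sum(residuals**2))
    return n * np.log(sse / n) + 2 * n_params
```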
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β_11 is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β_11(7075)/β_11(2024) ratio of 1.363 agrees well with previous literature and earlier work.
Comments on "Different techniques for finding best-fit parameters"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.; Triplett, Laurie A.
2014-07-01
A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often-used system that depends on gradients and converges when successive iterations do not change chi-square by more than a specified amount. We point out that, in cases where the sought-after parameter weakly affects the fit, and in cases where the overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is within a specified range, and that range can be made arbitrarily small. It does not depend on the value of chi-square.
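Golden Search here is the classic golden-section bracket reduction. A minimal sketch, with an illustrative chi-square whose minimum is nearly flat (all names are ours):

```python
import numpy as np

GOLDEN = (np.sqrt(5.0) - 1.0) / 2.0  # ~0.618

def golden_search(f, a, b, tol=1e-8):
    """Minimize a unimodal 1-D function on [a, b] by golden-section search.
    Converges once the bracket is smaller than tol, regardless of how
    little the parameter changes chi-square near the minimum."""
    c = b - GOLDEN * (b - a)
    d = a + GOLDEN * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                 # minimum lies in [a, d_old]
            c = b - GOLDEN * (b - a)
        else:
            a, c = c, d                 # minimum lies in [c_old, b]
            d = a + GOLDEN * (b - a)
    return 0.5 * (a + b)

# Example: a weak parameter producing a very shallow chi-square minimum at 1.7
chi2 = lambda p: 1.0 + 1e-6 * (p - 1.7)**2
p_best = golden_search(chi2, 0.0, 10.0)
```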
Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
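A rough illustration of the idea, assuming a single exponential and using numpy's Legendre utilities; the authors' dedicated parameter-retrieval relations are not reproduced here, only the projection onto a low-dimensional Legendre space, the filtering, and a coefficient-domain fit:

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 500)                        # time axis
x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0   # map to [-1, 1]

rng = np.random.default_rng(1)
data = 3.0 * np.exp(-t / 0.8) + 0.2 * rng.standard_normal(t.size)

K = 12                                   # low-dimensional Legendre space
c_data = L.legfit(x, data, K)            # projection of the noisy data

# Filtering: reconstruct from the low-order coefficients only; this removes
# noise without the phase shift of a conventional lowpass filter
filtered = L.legval(x, c_data)

# Fitting: match the model's Legendre coefficients to the data's
def resid(p):
    a, tau = p
    return L.legfit(x, a * np.exp(-t / tau), K) - c_data

a_fit, tau_fit = least_squares(resid, x0=[1.0, 1.0],
                               bounds=([0.0, 1e-3], [np.inf, np.inf])).x
```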
NASA Astrophysics Data System (ADS)
Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens
2018-02-01
Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied by quoted uncertainties. The weights are chosen according to the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that the resulting uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight-line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
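A sketch of the procedure as we read it, assuming the common rescaling factor is estimated by the Birge ratio sqrt(chi2/dof) (the paper's exact correction may differ; the clipping to 1 is our choice):

```python
import numpy as np

def wls_with_rescaled_uncertainty(X, y, u):
    """Weighted least squares with quoted uncertainties u, followed by a
    simple correction: rescale the parameter uncertainties by the Birge
    ratio sqrt(chi2 / dof) when data and model are inconsistent."""
    W = np.diag(1.0 / u**2)
    cov = np.linalg.inv(X.T @ W @ X)          # covariance from quoted u
    beta = cov @ X.T @ W @ y
    r = y - X @ beta
    chi2 = float(r @ W @ r)
    dof = len(y) - X.shape[1]
    scale = max(1.0, np.sqrt(chi2 / dof))     # rescale only if inconsistent
    return beta, scale * np.sqrt(np.diag(cov))

# Example: straight-line model with deliberately too-small quoted uncertainties
x = np.arange(6, dtype=float)
X = np.column_stack([np.ones_like(x), x])
y = np.array([1.0, 1.4, 2.1, 2.4, 3.2, 3.3])
u = np.full(6, 0.05)
beta, u_beta = wls_with_rescaled_uncertainty(X, y, u)
```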
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of automatic fitting, which combines geostatistical principles and optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function (γ) and the number of structures (m) also affect model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, use has been made of the cross-validation method, and the best model has been introduced to the user as the output. In order to check the capability of the proposed objective function and procedure, 3 case studies have been presented.
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique
Hastrup, Ole F.; Akal, Tuncay
1980-09-01
SACLANT ASW Research Centre Memorandum SM-139.
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is presented. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves, obtained with equivalence-point-category methods such as Gran or Fortuin, are also compared with the new method.
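The inverse parabolic interpolation step has a closed form: the extremum of the parabola through three points (x1, y1), (x2, y2), (x3, y3). A minimal sketch (names are ours):

```python
def inverse_parabolic_extremum(x1, y1, x2, y2, x3, y3):
    """Analytical extremum of the parabola through three points; used here to
    locate the maximum (or minimum) of the first-derivative curve, i.e. the
    titration end point."""
    num = (x2 - x1)**2 * (y2 - y3) - (x2 - x3)**2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    return x2 - 0.5 * num / den

# Example: points on y = -(x - 2.5)**2 recover the maximum at x = 2.5
assert abs(inverse_parabolic_extremum(1, -2.25, 2, -0.25, 3, -0.25) - 2.5) < 1e-12
```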
NASA Technical Reports Server (NTRS)
Finley, Tom D.; Wong, Douglas T.; Tripp, John S.
1993-01-01
A newly developed technique for enhanced data reduction provides an improved procedure that allows least squares minimization between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. These initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
Automated generation of influence functions for planar crack problems
NASA Technical Reports Server (NTRS)
Sire, Robert A.; Harris, David O.; Eason, Ernest D.
1989-01-01
A numerical procedure for the generation of influence functions for Mode I planar problems is described. The resulting influence functions are in a form for convenient evaluation of stress-intensity factors for complex stress distributions. Crack surface displacements are obtained by a least-squares solution of the Williams eigenfunction expansion for displacements in a cracked body. Discrete values of the influence function, evaluated using the crack surface displacements, are curve fit using an assumed functional form. The assumed functional form includes appropriate limit-behavior terms for very deep and very shallow cracks. Continuous representation of the influence function provides a convenient means for evaluating stress-intensity factors for arbitrary stress distributions by numerical integration. The procedure is demonstrated for an edge-cracked strip and a radially cracked disk. Comparisons with available published results demonstrate the accuracy of the procedure.
Semivariogram modeling by weighted least squares
Jian, X.; Olea, R.A.; Yu, Y.-S.
1996-01-01
Permissible semivariogram models are fundamental for geostatistical estimation and simulation of attributes having a continuous spatiotemporal variation. The usual practice is to fit those models manually to experimental semivariograms. Fitting by weighted least squares produces results comparable to manual fitting, in less time and systematically, and provides an Akaike information criterion for the proper comparison of alternative models. We illustrate the application of a computer program with examples showing the fitting of simple and nested models. Copyright © 1996 Elsevier Science Ltd.
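A hedged sketch of such a weighted fit for a spherical model, with hypothetical lag/semivariance/pair-count data; the sigma passed to curve_fit makes the weights approximate the usual pair-count-based weighting N(h)/γ(h)²:

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical semivariogram model with range a."""
    g = nugget + sill * (1.5 * h / a - 0.5 * (h / a)**3)
    return np.where(h < a, g, nugget + sill)

# Hypothetical experimental semivariogram: lag, semivariance, pair count
h = np.array([50., 100., 150., 200., 250., 300.])
gamma = np.array([0.31, 0.55, 0.71, 0.80, 0.83, 0.84])
npairs = np.array([420, 810, 760, 650, 540, 400])

# Weight 1/sigma^2 = npairs / gamma^2, i.e. Cressie-style weighting
popt, _ = curve_fit(spherical, h, gamma, p0=[0.1, 0.8, 200.0],
                    sigma=gamma / np.sqrt(npairs), absolute_sigma=False)
```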
Atmospheric particulate analysis using angular light scattering
NASA Technical Reports Server (NTRS)
Hansen, M. Z.
1980-01-01
Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.
Fit between Africa and Antarctica: A Continental Drift Reconstruction.
Dietz, R S; Sproll, W P
1970-03-20
A computerized (smallest average misfit) best fit position is obtained for the juxtaposition of Africa and Antarctica in a continental drift reconstruction. An S-shaped portion of the Weddell and Princess Martha Coast regions of western East Antarctica is fitted into a similar profile along southeastern Africa. The total amount of overlap is 36,300 square kilometers, and the underlap is 23,600 square kilometers; the total mismatch is thus 59,900 square kilometers. The congruency along the 1000-fathom isobath is remarkably good and suggests that this reconstruction is valid within the overall framework of the Gondwana supercontinent.
Pre-processing by data augmentation for improved ellipse fitting.
Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J
2018-01-01
Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement of ellipse fitting is then demonstrated empirically in real-world application of 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
Response Surface Analysis of Experiments with Random Blocks
1988-09-01
...partitioned into a lack-of-fit sum of squares, SS_LOF, and a pure error sum of squares, SS_PE. The latter is obtained by pooling the pure error sums of squares from the blocks. Tests concerning the polynomial effects can then proceed using SS_PE as the error term in the denominators of the F test statistics. ... The pure error sum of squares pooled from the center point in each of the three blocks is equal to SS_PE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SS_LOF ...
Effect of CNC-milling on the marginal and internal fit of dental ceramics: a pilot study.
Schaefer, Oliver; Kuepper, Harald; Thompson, Geoffrey A; Cachovan, Georg; Hefti, Arthur F; Guentsch, Arndt
2013-08-01
Machined restorations have been investigated for their precision before, while detailed information on the milling step itself is lacking. Therefore, the aim of this laboratory study was to quantify the effect of a novel milling procedure on the marginal and internal fit of ceramic restorations. An acrylic model of a lower left first molar was prepared to receive a ceramic partial crown and was duplicated by one-step dual-viscosity impressions. Gypsum casts were formed and laser-scanned to produce virtual datasets, before restorations were designed, exported (PRE) and machined from lithium disilicate blanks. Crowns were digitized by a structured-light scanner to obtain post-milling data (POST). PRE and POST were virtually superimposed on the reference tooth and subjected to computer-aided inspection. Visual fit discrepancies were displayed with colors, while root mean square deviations (RMSD) and degrees of similarity (DS) were computed and analysed by t-tests for paired samples (n=5, α=0.05). The milling procedure resulted in a small increase of the marginal and internal fit discrepancies (RMSD mean: 3μm and 6μm, respectively). RMSD differences were not statistically significant (p=0.495 and p=0.160 for marginal and internal fit, respectively). These results were supported by the DS data. The products of digital dental workflows are prone to imprecision. However, the present findings suggest that differences between computer-aided-designed and actually milled restorations are small, especially when compared to typical fit discrepancies observed clinically. Imprecisions introduced by digital design or production processes are small. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Optimizing the Determination of Roughness Parameters for Model Urban Canopies
NASA Astrophysics Data System (ADS)
Huq, Pablo; Rahman, Auvi
2018-05-01
We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0, and friction velocity u_*. Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_*. The methodology has been verified against laboratory datasets of flow above model urban canopies.
FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code
NASA Astrophysics Data System (ADS)
Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya
2017-12-01
We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied, rather all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is envisaged in order to map out the effect of intrinsic spectral energy distribution degeneracies, such as age, metallicity, dust reddening on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on Integral Field Spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5, for moderately dusty systems. Code and results are publicly available.
Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra
NASA Astrophysics Data System (ADS)
Karstens, William; Smith, David
2013-03-01
Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to 1. Use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index is an even function of photon energy), and 2. Use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
Solar differential rotation in the period 1964-2016 determined by the Kanzelhöhe data set
NASA Astrophysics Data System (ADS)
Poljančić Beljan, I.; Jurdana-Šepić, R.; Brajša, R.; Sudar, D.; Ruždjak, D.; Hržina, D.; Pötzi, W.; Hanslmeier, A.; Veronig, A.; Skokić, I.; Wöhl, H.
2017-10-01
Context. Kanzelhöhe Observatory for Solar and Environmental Research (KSO) provides daily multispectral synoptic observations of the Sun using several telescopes. In this work we made use of sunspot drawings and full disk white light CCD images. Aims: The main aim of this work is to determine the solar differential rotation by tracing sunspot groups during the period 1964-2016, using the KSO sunspot drawings and white light images. We also compare the differential rotation parameters derived in this paper from the KSO with those collected from other data sets and present an investigation of the north-south rotational asymmetry. Methods: Two procedures for the determination of the heliographic positions were applied: an interactive procedure on the KSO sunspot drawings (1964-2008, solar cycles Nos. 20-23) and an automatic procedure on the KSO white light images (2009-2016, solar cycle No. 24). For the determination of the synodic angular rotation velocities two different methods have been used: a daily shift (DS) method and a robust linear least-squares fit (rLSQ) method. Afterwards, the rotation velocities had to be converted from synodic to sidereal, which were then used in the least-squares fitting for the solar differential rotation law. A comparison of the interactive and automatic procedures was performed for the year 2014. Results: The interactive procedure of position determination is fairly accurate but time consuming. In the case of the much faster automatic procedure for position determination, we found the rLSQ method for calculating rotational velocities to be more reliable than the DS method. For the test data from 2014, the rLSQ method gives a relative standard error for the differential rotation parameter B that is three times smaller than the corresponding relative standard error derived for the DS method. The best fit solar differential rotation profile for the whole time period is ω(b) = (14.47 ± 0.01) - (2.66 ± 0.10) sin²b (deg/day) for the DS method and ω(b) = (14.50 ± 0.01) - (2.87 ± 0.12) sin²b (deg/day) for the rLSQ method. A barely noticeable north-south asymmetry is observed for the whole time period 1964-2016 in the present paper. Rotation profiles, using different data sets, presented by other authors for the same time periods and the same tracer types, are in good agreement with our results. Conclusions: The KSO data set used in this paper is in good agreement with the Debrecen Photoheliographic Data and Greenwich Photoheliographic Results and is suitable for the investigation of long-term variability in the solar rotation profile. Also, the quality of the KSO sunspot drawings has gradually increased during the last 50 yr.
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Feasibility study on the least square method for fitting non-Gaussian noise data
NASA Astrophysics Data System (ADS)
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least square method in fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
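The experiment is easy to reproduce in outline. A sketch with a linear test function, assuming scipy's levy_stable for the heavy-tailed noise (the noise scaling here is illustrative, not the paper's exact levels):

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
exact = 2.0 + 3.0 * x                          # linear test function

# Gaussian vs. heavy-tailed Levy (alpha = 1.5) perturbations
noise_gauss = 0.05 * rng.standard_normal(x.size)
noise_levy = 0.05 * levy_stable.rvs(1.5, 0.0, size=x.size, random_state=rng)

A = np.column_stack([np.ones_like(x), x])
for noise in (noise_gauss, noise_levy):
    coef, *_ = np.linalg.lstsq(A, exact + noise, rcond=None)
    # maximum absolute error of the fitted line against the exact one
    print(coef, np.max(np.abs(A @ coef - exact)))
```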
Why Might Relative Fit Indices Differ between Estimators?
ERIC Educational Resources Information Center
Weng, Li-Jen; Cheng, Chung-Ping
1997-01-01
Relative fit indices using the null model as the reference point in computation may differ across estimation methods, as this article illustrates by comparing maximum likelihood, ordinary least squares, and generalized least squares estimation in structural equation modeling. The illustration uses a covariance matrix for six observed variables…
Mean gravity anomalies and sea surface heights derived from GEOS-3 altimeter data
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1978-01-01
Approximately 2000 GEOS-3 altimeter arcs were analyzed to improve knowledge of the geoid and gravity field. An adjustment procedure incorporating cross-over constraints was used to fit the sea surface heights (geoid undulations). The error model used for the fit was a one- or two-parameter model designed to remove altimeter bias and orbit error. The undulations on the adjusted arcs were used to produce geoid maps in 20 regions. The adjusted data were used to derive 301 5-degree equal-area anomalies and 9995 1 x 1 degree anomalies in areas where the altimeter data were most dense, using least squares collocation techniques. Also emphasized was the ability of the altimeter data to imply rapid anomaly changes of up to 240 mgals in adjacent 1 x 1 degree blocks.
Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O
2012-09-01
Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model with the other models. Nine empirical and 5 mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, 5 of the best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit was similar in all but one of the models in terms of fitting average lactation, but the models differed in their ability to predict individual lactations. In particular, the widely used incomplete gamma model displayed this failing most clearly. The new log-quadratic model was robust in fitting average and individual lactations, was less affected by sampled data, and was more parsimonious in having only 3 parameters, each of which lends itself to biological interpretation. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus. The linear pseudo-model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of nonlinear regression functions in 2006. In Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, development has been limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
NASA Astrophysics Data System (ADS)
Rowland, David J.; Biteen, Julie S.
2017-04-01
Single-molecule super-resolution imaging and tracking can measure molecular motions inside living cells on the scale of the molecules themselves. Diffusion in biological systems commonly exhibits multiple modes of motion, which can be effectively quantified by fitting the cumulative probability distribution of the squared step sizes in a two-step fitting process. Here we combine this two-step fit into a single least-squares minimization; this new method vastly reduces the total number of fitting parameters and increases the precision with which diffusion may be measured. We demonstrate this Global Fit approach on a simulated two-component system as well as on a mixture of diffusing 80 nm and 200 nm gold spheres to show improvements in fitting robustness and localization precision compared to the traditional Local Fit algorithm.
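A simplified single-lag sketch of the idea: one least-squares minimization of the empirical cumulative distribution of squared step sizes against a two-component model (the frame time, starting values, and simulated diffusivities are assumptions; the paper's Global Fit additionally shares parameters across lag times):

```python
import numpy as np
from scipy.optimize import least_squares

dt = 0.04  # frame time in seconds (assumed)

def cpd_model(x, f, D1, D2):
    """Two-component cumulative probability distribution of squared step
    sizes for 2-D Brownian motion: mixture of exponentials in x = r^2."""
    return 1.0 - f * np.exp(-x / (4 * D1 * dt)) \
               - (1 - f) * np.exp(-x / (4 * D2 * dt))

def global_fit(sq_steps):
    """Single least-squares minimization of the empirical CPD against the
    two-component model (fraction f and diffusivities D1, D2)."""
    x = np.sort(sq_steps)
    ecdf = np.arange(1, x.size + 1) / x.size
    resid = lambda p: cpd_model(x, *p) - ecdf
    return least_squares(resid, x0=[0.5, 1.0, 0.05],
                         bounds=([0, 1e-6, 1e-6], [1, np.inf, np.inf])).x

# Simulated mixture of fast (D = 1.0) and slow (D = 0.05 um^2/s) steps:
# squared 2-D displacements are exponential with mean 4*D*dt
rng = np.random.default_rng(3)
fast = rng.exponential(4 * 1.0 * dt, 7000)
slow = rng.exponential(4 * 0.05 * dt, 3000)
f, D1, D2 = global_fit(np.concatenate([fast, slow]))
```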
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N_4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
On Least Squares Fitting Nonlinear Submodels.
ERIC Educational Resources Information Center
Bechtel, Gordon G.
Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
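For the slope method, inverse-variance weighting enters after the logarithmic transform of the range-corrected signal. A minimal sketch (names are ours; the variance propagation is the standard first-order approximation):

```python
import numpy as np

def slope_method_wls(r, P, var_P):
    """Lidar slope method: fit ln(P r^2) = ln C - 2 sigma r by least squares
    weighted with the inverse variance of the transformed signal."""
    y = np.log(P * r**2)
    var_y = var_P / P**2               # first-order propagation through the log
    w = 1.0 / var_y                    # inverse-variance weights
    X = np.column_stack([np.ones_like(r), r])
    cov = np.linalg.inv(X.T @ (w[:, None] * X))
    b0, b1 = cov @ X.T @ (w * y)
    return np.exp(b0), -0.5 * b1       # zero-range intercept C, extinction sigma
```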
Chi-squared and C statistic minimization for low count per bin data
NASA Astrophysics Data System (ADS)
Nousek, John A.; Shue, David R.
1989-07-01
Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
Chi-squared and C statistic minimization for low count per bin data. [sampling in X ray astronomy
NASA Technical Reports Server (NTRS)
Nousek, John A.; Shue, David R.
1989-01-01
Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
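For reference, the two statistics being minimized, up to model-independent terms (a hedged sketch; binned counts and model expectations are assumed strictly positive):

```python
import numpy as np

def chi2_stat(counts, model):
    """Pearson chi-squared; its Gaussian assumption breaks down at low counts."""
    return float(np.sum((counts - model)**2 / model))

def cash_stat(counts, model):
    """Cash's C statistic, derived from the Poisson likelihood, with terms
    that do not depend on the model dropped: C = 2 * sum(m - n * ln m)."""
    return 2.0 * float(np.sum(model - counts * np.log(model)))

# At low counts per bin, minimizing C rather than chi-squared avoids the
# bias of the Gaussian approximation, consistent with the simulations above.
```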
Innovative Use of Thighplasty to Improve Prosthesis Fit and Function in a Transfemoral Amputee.
Kuiken, Todd A; Fey, Nicholas P; Reissman, Timothy; Finucane, Suzanne B; Dumanian, Gregory A
2018-01-01
Excess residual limb fat is a common problem that can impair prosthesis control and negatively impact gait. In the general population, thighplasty and liposuction are commonly performed for cosmetic reasons but not specifically to improve function in amputees. The objective of this study was to determine if these procedures could enhance prosthesis fit and function in an overweight above-knee amputee. We evaluated the use of these techniques on a 50-year-old transfemoral amputee who was overweight. The patient underwent presurgical imaging and tests to measure her residual limb tissue distribution, socket-limb interface stiffness, residual femur orientation, lower-extremity function, and prosthesis satisfaction. A medial thighplasty procedure with circumferential liposuction was performed, during which 2,812 g (6.2 lbs.) of subcutaneous fat and skin was removed from her residual limb. Imaging was repeated 5 months postsurgery; functional assessments were repeated 9 months postsurgery. The patient demonstrated notable improvements in socket fit and in performing most functional and walking tests. Her comfortable walking speed increased 13.3%, and her scores for the Sit-to-Stand and Four Square Step tests improved over 20%. Femur alignment in her socket changed from 8.13 to 4.14 degrees, and analysis showed a marked increase in the socket-limb interface stiffness. This study demonstrates the potential of using a routine plastic surgery procedure to modify the intrinsic properties of the limb and to improve functional outcomes in overweight or obese transfemoral amputees. This technique is a potentially attractive option compared with multiple reiterations of sockets, which can be time-consuming and costly.
Warenghem, Marc; Henninot, Jean François; Blach, Jean François; Buchnev, Oleksandr; Kaczmarek, Malgosia; Stchakovsky, Michel
2012-03-01
Spectroscopic ellipsometry is a technique especially well suited to measuring the effective optical properties of a composite material. However, when the sample is optically thick and anisotropic, the technique loses its accuracy for two reasons: anisotropy means that two parameters have to be determined (ordinary and extraordinary indices), and optical thickness means a large order of interference. In that case, several dielectric functions can emerge from the fitting procedure with a similar mean square error and no criterion to discriminate the right solution. In this paper, we develop a methodology to overcome that drawback. It combines ellipsometry with refractometry. The same sample is used in a total internal reflection (TIR) setup and in a spectroscopic ellipsometer. The number of parameters to be determined by the fitting procedure is reduced by analysing two spectra, and the correct final solution is found by using the TIR results both as initial values for the parameters and as a check on the final dielectric function. A pre-fitting routine is developed to enter the right initial values into the fitting procedure and so approach the right solution. As an example, this methodology is used to analyse the optical properties of BaTiO3 nanoparticles embedded in a nematic liquid crystal. Such a methodology can also be used to test experimentally the validity of mixing laws, since ellipsometry gives the effective dielectric function, which can then be compared to the dielectric functions of the components of the mixture, as shown with the example of the BaTiO3/nematic composite.
The AME2016 atomic mass evaluation (I). Evaluation of input data; and adjustment procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, W. J.; Audi, G.; Wang, Meng
This paper is the first of two articles (Part I and Part II) that present the results of the new atomic mass evaluation, Ame2016. It includes complete information on the experimental input data (also including unused and rejected ones), as well as details on the evaluation procedures used to derive the tables of recommended values given in the second part. This article describes the evaluation philosophy and procedures that were implemented in the selection of specific nuclear reaction, decay and mass-spectrometric results. These input values were entered in the least-squares adjustment for determining the best values for the atomic masses and their uncertainties. Details of the calculation and particularities of the Ame are then described. All accepted and rejected data, including outweighted ones, are presented in a tabular format and compared with the adjusted values obtained using the least-squares fit analysis. Differences with the previous Ame2012 evaluation are discussed and specific information is presented for several cases that may be of interest to Ame users. The second Ame2016 article gives a table with the recommended values of atomic masses, as well as tables and graphs of derived quantities, along with the list of references used in both the Ame2016 and the Nubase2016 evaluations (the first paper in this issue). Amdc: http://amdc.impcas.ac.cn/
Rasch fit statistics and sample size considerations for polytomous data.
Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael
2008-05-29
Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.
Rasch fit statistics and sample size considerations for polytomous data
Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael
2008-01-01
Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting dominant SVD modes of the derivative matrix that respond to the variations of the variables, the converging process of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
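The core numerical device is a truncated-SVD least-squares solve. A generic sketch (the mode selection by relative singular-value cutoff is our simplification of the automatic identification described above):

```python
import numpy as np

def svd_least_squares(A, b, keep=None, rcond=1e-3):
    """Least-squares solution of A x = b built from the dominant SVD modes
    only; discarding weak modes stabilizes the convergence of iterative
    fits to large, nearly degenerate systems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if keep is None:
        keep = int(np.sum(s > rcond * s[0]))   # automatic mode selection
    inv_s = 1.0 / s[:keep]
    return Vt[:keep].T @ (inv_s * (U[:, :keep].T @ b))
```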
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e., the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least-squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The MATLAB code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
Lewy, Serge
2008-07-01
Spinning modes generated by a ducted turbofan at a given frequency determine the acoustic free-field directivity. An inverse method starting from measured directivity patterns is interesting in providing information on the noise sources without requiring tedious spinning-mode experimental analyses. According to a previous article, the equations are based on analytical modal splitting inside a cylindrical duct and on a Rayleigh or a Kirchhoff integral over the duct exit cross-section to obtain the far-field directivity. The equations are equal in number to the free-field measurement locations, and the unknowns are the propagating mode amplitudes (there are generally more unknowns than equations). A MATLAB procedure has been implemented using either the pseudoinverse function or the backslash operator. A constraint comes from the fact that the squared modal amplitudes must be positive, which involves an iterative least squares fitting. Numerical simulations are discussed along with several examples based on tests performed by Rolls-Royce in the framework of a European project. It is assessed that the computation is very fast and fits the measured directivities well, but the solution depends on the method and is not unique. This means that the initial set of modes should be chosen according to any known physical property of the acoustic sources.
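The positivity constraint on the squared modal amplitudes is naturally handled by non-negative least squares. A sketch with an entirely hypothetical transfer matrix G standing in for the Rayleigh/Kirchhoff modal directivities (this replaces the iterative sign-constrained fitting with a single nnls call; it is not the article's MATLAB procedure):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical inversion: rows = far-field measurement angles, columns =
# propagating duct modes; G maps squared modal amplitudes to measured
# squared pressures (both assumed known here).
rng = np.random.default_rng(4)
G = np.abs(rng.standard_normal((36, 10)))      # 36 angles, 10 cut-on modes
p2_true = np.zeros(10)
p2_true[[1, 4]] = [2.0, 0.7]                   # two dominant modes
p2_meas = G @ p2_true + 0.01 * rng.random(36)  # noisy directivity data

# nnls enforces the physical constraint that squared amplitudes are >= 0
amps, resid_norm = nnls(G, p2_meas)
```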
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, S; Fan, Q; Lei, Y
Purpose: The In-Water-Output-Ratio (IWOR) plays a significant role in linac-based radiotherapy treatment planning, linking MUs to delivered radiation dose. For an open rectangular field, IWOR depends on both its width and length, and changes rapidly when one of them becomes small. In this study, a universal functional form is proposed to fit the open-field IWOR tables in the Varian TrueBeam representative datasets for all photon energies. Methods: A novel Generalized Mean formula is first used to estimate the Equivalent Square (ES) for a rectangular field. The formula's weighting factor and power index are determined by collapsing all data points as much as possible onto a single curve in the IWOR vs. ES plot. The result is then fitted with a novel universal function IWOR = 1 + b*Log(ES/10cm)/(ES/10cm)^c via a least-squares procedure to determine the optimal values for parameters b and c. The maximum relative residual error in IWOR over the entire two-dimensional measurement table with field sizes between 3 cm and 40 cm is used to evaluate the quality of fit for the function. Results: The two-step fitting strategy works very well in determining the optimal parameter values for the open-field IWOR of each photon energy in the Varian dataset. A relative residual error ≤ 0.71% is achieved for all photon energies (including Flattening-Filter-Free modes) with field sizes between 3 cm and 40 cm. The optimal parameter values change smoothly with regular photon beam quality. Conclusion: The universal functional form fits the Varian TrueBeam open-field IWOR measurement tables accurately with small relative residual errors for all photon energies. Therefore, it can be an excellent choice to represent IWOR in absolute dose and MU calculations. The functional form can also be used as a QA/commissioning tool to verify measured data quality and consistency by checking the IWOR data behavior against the function for new photon energies with arbitrary beam quality.
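The two fitted forms can be written directly. In the sketch below, only the IWOR functional form is taken from the abstract; the generalized-mean weighting and power index and the sample IWOR table are placeholders, and the natural logarithm is an assumption (the abstract's "Log" could be base 10):

```python
import numpy as np
from scipy.optimize import curve_fit

def equivalent_square(W, L, p=0.5):
    """Generalized-mean estimate of the equivalent square of a W x L field.
    The power index p (and equal weighting) used here is a placeholder; in
    the study these are fitted by collapsing the data onto one curve."""
    return (0.5 * W**p + 0.5 * L**p) ** (1.0 / p)

def iwor(es_cm, b, c):
    """Universal form from the abstract: IWOR = 1 + b*Log(ES/10cm)/(ES/10cm)^c.
    np.log (natural log) is assumed here."""
    s = es_cm / 10.0
    return 1.0 + b * np.log(s) / s**c

# Least-squares fit of b and c to a hypothetical measured IWOR table
es = np.array([3.0, 5.0, 10.0, 20.0, 30.0, 40.0])
meas = np.array([0.90, 0.95, 1.00, 1.045, 1.065, 1.075])
(b, c), _ = curve_fit(iwor, es, meas, p0=[0.1, 1.0])
```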
Seol, Hyunsoo
2016-06-01
The purpose of this study was to apply the bootstrap procedure to evaluate how the bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample sizes and test lengths in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch measurement and then a total of 1,000 replications were conducted to compute the bootstrapped CIs under each of 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, they also do not share the same critical range for the item and person misfit. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.
NASA Technical Reports Server (NTRS)
Welker, Jean Edward
1991-01-01
Since the invention of maximum and minimum thermometers in the 18th century, diurnal temperature extrema have been recorded for air worldwide. At some stations, these extrema were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of response to heating as a function of soil moisture and soil properties. Eight different curves were fitted to a wide variety of data sets for different stations and years, and both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw recovery data. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on solid propellant samples under five different stresses were carried out at 293.15 K and 323.15 K. To describe the creep properties of this solid propellant, five viscoelastic models are considered: the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid, and exponential models. The model parameters are determined for each stress level by a nonlinear least-squares fitting procedure. The study shows that the four-parameter solid model best describes the creep behavior of the propellant samples. The three-parameter solid and exponential models cannot reproduce the initial value of the creep process well, whereas the modified four-parameter models agree well with the acceleration characteristics of the creep process.
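An illustrative nonlinear least-squares creep fit is sketched below. The exact four-parameter forms used in the paper are not reproduced here; this fits a generic spring + Kelvin element + dashpot creep law to synthetic data.

```python
# Generic creep law: instantaneous strain + delayed Kelvin response + viscous flow.
import numpy as np
from scipy.optimize import curve_fit

def creep(t, e0, e1, tau, edot):
    return e0 + e1 * (1.0 - np.exp(-t / tau)) + edot * t

t = np.linspace(0, 600, 120)                       # time, s
rng = np.random.default_rng(3)
eps = creep(t, 0.002, 0.004, 90.0, 1e-6) + 5e-5 * rng.standard_normal(t.size)

popt, pcov = curve_fit(creep, t, eps, p0=(1e-3, 1e-3, 50.0, 1e-7))
print("fitted parameters:", popt)
print("1-sigma uncertainties:", np.sqrt(np.diag(pcov)))
```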
Lim, Changwon
2015-03-30
Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Haberlandt, Uwe
2007-01-01
The methods kriging with external drift (KED) and indicator kriging with external drift (IKED) are used for the spatial interpolation of hourly rainfall from rain gauges using additional information from radar, daily precipitation of a denser network, and elevation. The techniques are illustrated using data from the storm period of the 10th to the 13th of August 2002 that led to the extreme flood event in the Elbe river basin in Germany. Cross-validation is applied to compare the interpolation performance of the KED and IKED methods using different additional information with the univariate reference methods nearest neighbour (NN) or Thiessen polygons, inverse square distance weighting (IDW), ordinary kriging (OK) and ordinary indicator kriging (IK). Special attention is given to the analysis of the impact of the semivariogram estimation on the interpolation performance. Hourly and average semivariograms are inferred from daily, hourly and radar data considering either isotropic or anisotropic behaviour using automatic and manual fitting procedures. The multivariate methods KED and IKED clearly outperform the univariate ones, with the most important additional information being radar, followed by precipitation from the daily network and elevation, which plays only a secondary role here. The best performance is achieved when all additional information is used simultaneously with KED. The indicator-based kriging methods provide, in some cases, smaller root mean square errors than the methods that use the original data, but at the expense of a significant loss of variance. The impact of the semivariogram on interpolation performance is not very high. The best results are obtained using an automatic fitting procedure with isotropic variograms either from hourly or radar data.
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
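The WCSS criterion itself takes only a few lines to evaluate. In the sketch below, sklearn's KMeans (restarted Lloyd iterations) stands in for the heuristics compared in the paper; its inertia_ attribute is exactly the within-cluster sum of squared deviations from centroids.

```python
# Minimize WCSS with a common heuristic (k-means with random restarts).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(c, 0.4, size=(50, 2)) for c in (0, 3, 6)])

km = KMeans(n_clusters=3, n_init=20, random_state=0).fit(X)
print("WCSS (inertia):", km.inertia_)
```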
NASA Technical Reports Server (NTRS)
Carrier, Alain C.; Aubrun, Jean-Noel
1993-01-01
New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, as a novel feature, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.
Retransformation bias in a stem profile model
Raymond L. Czaplewski; David Bruce
1990-01-01
An unbiased profile model, fit to diameter divided by diameter at breast height, overestimated volume of 5.3-m log sections by 0.5 to 3.5%. Another unbiased profile model, fit to squared diameter divided by squared diameter at breast height, underestimated bole diameters by 0.2 to 2.1%. These biases are caused by retransformation of the predicted dependent variable;...
A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.
2014-01-01
A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separate from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo
2015-02-01
In recent years, the technology of laser induced breakdown spectroscopy has developed rapidly. As a new material composition detection technology, laser induced breakdown spectroscopy can simultaneously detect multiple elements quickly and simply, without any complex sample preparation, and can realize field, in-situ composition detection of the sample to be tested. This technology is very promising in many fields. Separating, fitting and extracting spectral feature lines is very important in laser induced breakdown spectroscopy, as it is the cornerstone of spectral feature recognition and subsequent element concentration inversion research. In order to realize effective separation, fitting and extraction of spectral feature lines, the original parameters for spectral line fitting before iteration were analyzed and determined. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which was overlapped with another line (Fe I: 427.176 nm), was separated and extracted using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback information after each iteration, in order to prevent the iteration from diverging and to ensure fast convergence. The damped least squares method gives better results for separating, fitting and extracting spectral feature lines and more accurate intensity values for these lines. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the intensity values of the corresponding spectral lines were obtained using the damped least squares method and the ordinary least squares method separately. Calibration curves were plotted, showing the relationship between spectral line intensity values and chromium concentrations in the different samples, and their respective linear correlations were compared. The experimental results showed that the linear correlation between the intensity values of the spectral feature lines and the chromium concentrations obtained by the damped least squares method was better than that obtained by the least squares method. The damped least squares method is therefore stable, reliable and suitable for separating, fitting and extracting spectral feature lines in laser induced breakdown spectroscopy.
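A sketch of separating two overlapped lines by damped least squares follows: scipy's 'lm' method is a Levenberg-Marquardt (damped Gauss-Newton) implementation. The wavelengths are the Cr I / Fe I lines named in the abstract, but the line shapes, intensities and widths are invented.

```python
# Fit two overlapped Gaussian lines with a damped least-squares (LM) routine.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, c1, w1, a2, c2, w2):
    g = lambda a, c, w: a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return g(a1, c1, w1) + g(a2, c2, w2)

x = np.linspace(426.8, 427.9, 400)                 # wavelength, nm
rng = np.random.default_rng(5)
y = two_gauss(x, 1.0, 427.480, 0.05, 0.7, 427.176, 0.05) \
    + 0.02 * rng.standard_normal(x.size)

p0 = (0.8, 427.45, 0.04, 0.5, 427.20, 0.04)        # initial guesses near each line
popt, _ = curve_fit(two_gauss, x, y, p0=p0, method="lm")
print("Cr I intensity:", popt[0], " Fe I intensity:", popt[3])
```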
The AME2016 atomic mass evaluation (I). Evaluation of input data; and adjustment procedures
NASA Astrophysics Data System (ADS)
Huang, W. J.; Audi, G.; Wang, Meng; Kondev, F. G.; Naimi, S.; Xu, Xing
2017-03-01
This paper is the first of two articles (Part I and Part II) that present the results of the new atomic mass evaluation, AME2016. It includes complete information on the experimental input data (including unused and rejected ones), as well as details on the evaluation procedures used to derive the tables of recommended values given in the second part. This article describes the evaluation philosophy and procedures that were implemented in the selection of specific nuclear reaction, decay and mass-spectrometric results. These input values were entered in the least-squares adjustment for determining the best values for the atomic masses and their uncertainties. Details of the calculation and particularities of the AME are then described. All accepted and rejected data, including outweighted ones, are presented in a tabular format and compared with the adjusted values obtained using the least-squares fit analysis. Differences with the previous AME2012 evaluation are discussed and specific information is presented for several cases that may be of interest to AME users. The second AME2016 article gives a table with the recommended values of atomic masses, as well as tables and graphs of derived quantities, along with the list of references used in both the AME2016 and the NUBASE2016 evaluations (the first paper in this issue). AMDC: http://amdc.impcas.ac.cn/
NASA Astrophysics Data System (ADS)
Sein, Lawrence T.
2011-08-01
Hammett parameters σ' were determined from vertical ionization potentials, vertical electron affinities, adiabatic ionization potentials, adiabatic electron affinities, and HOMO and LUMO energies of a series of N,N'-bis(3',4'-substituted-phenyl)-1,4-quinonediimines computed at the B3LYP/6-311+G(2d,p) level on B3LYP/6-31G* molecular geometries. These parameters were then least-squares fitted as a function of literature Hammett parameters. For N,N'-bis(4'-substituted-phenyl)-1,4-quinonediimines, the least-squares fits demonstrated excellent linearity, with the square of Pearson's correlation coefficient (r2) greater than 0.98 for all isomers. For N,N'-bis(3'-substituted-3'-aminophenyl)-1,4-quinonediimines, the least-squares fits were less nearly linear, with r2 approximately 0.70 for all isomers when derived from calculated vertical ionization potentials, but those from calculated vertical electron affinities were usually greater than 0.90.
Stochastic approach to data analysis in fluorescence correlation spectroscopy.
Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo
2006-09-21
Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts arise. For known fit models, and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions via global sampling. It is flexible and at the same time computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component-with-triplet fits. The statistical study and the goodness-of-fit criterion for PGSL are also presented. The robustness of PGSL for parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
NASA Technical Reports Server (NTRS)
Byrne, K. P.; Marshall, S. E.
1983-01-01
A procedure for experimentally determining, in terms of the particle motions, the shapes of the low order acoustic modes in enclosures is described. The procedure is based on finding differentiable functions which approximate the shape functions of the low order acoustic modes when these modes are defined in terms of the acoustic pressure. The differentiable approximating functions are formed from polynomials which are fitted by a least squares procedure to experimentally determined values which define the shapes of the low order acoustic modes in terms of the acoustic pressure. These experimentally determined values are found by a conventional technique in which the transfer functions, which relate the acoustic pressures at an array of points in the enclosure to the volume velocity of a fixed point source, are measured. The gradient of the function which approximates the shape of a particular mode in terms of the acoustic pressure is evaluated to give the mode shape in terms of the particle motion. The procedure was tested by using it to experimentally determine the shapes of the low order acoustic modes in a small rectangular enclosure.
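A one-dimensional sketch of the procedure is given below: a least-squares polynomial fit to measured acoustic-pressure mode-shape values is differentiated analytically to obtain the particle-motion mode shape (for a standing mode, particle motion is proportional to -dp/dx). The mode, noise level, and polynomial degree are illustrative choices.

```python
# Fit the pressure mode shape, then differentiate the fit for particle motion.
import numpy as np
from numpy.polynomial import polynomial as P

L = 1.0
x = np.linspace(0, L, 15)                          # measurement positions
rng = np.random.default_rng(11)
p = np.cos(np.pi * x / L) + 0.02 * rng.standard_normal(x.size)  # first axial mode

coef = P.polyfit(x, p, deg=6)                      # least-squares polynomial fit
dcoef = P.polyder(coef)                            # analytic gradient of the fit
u = -P.polyval(x, dcoef)                           # particle-motion shape (unnormalized)
print(u.round(2))
```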
New analysis strategies for micro aspheric lens metrology
NASA Astrophysics Data System (ADS)
Gugsa, Solomon Abebe
Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of the base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface, or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and to compare the individual contributions to the overall uncertainty.
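The best-fit-conic-plus-Monte-Carlo idea can be sketched as follows: fit the conic constant k and base radius R to sag data by least squares, then propagate measurement noise by refitting perturbed copies. All numbers are illustrative, not the thesis data.

```python
# Conic sag z(r) = r^2 / (R * (1 + sqrt(1 - (1+k) r^2 / R^2))).
import numpy as np
from scipy.optimize import curve_fit

def sag(r, R, k):
    return r**2 / (R * (1 + np.sqrt(1 - (1 + k) * r**2 / R**2)))

r = np.linspace(0, 0.4, 50)                        # semi-aperture samples, mm
z_true = sag(r, R=1.0, k=-0.8)
rng = np.random.default_rng(2)
sigma = 5e-6                                       # assumed measurement noise, mm

ks = []
for _ in range(500):                               # Monte Carlo over noise draws
    z = z_true + sigma * rng.standard_normal(r.size)
    (R_fit, k_fit), _ = curve_fit(sag, r, z, p0=(1.1, -0.5))
    ks.append(k_fit)
print(f"k = {np.mean(ks):.4f} +/- {np.std(ks):.4f}")
```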
Ching, Teresa Y C; Quar, Tian Kar; Johnson, Earl E; Newall, Philip; Sharma, Mridula
2015-03-01
An important goal of providing amplification to children with hearing loss is to ensure that hearing aids are adjusted to match targets of prescriptive procedures as closely as possible. The Desired Sensation Level (DSL) v5 and the National Acoustic Laboratories' prescription for nonlinear hearing aids, version 1 (NAL-NL1) procedures are widely used in fitting hearing aids to children. Little is known about hearing aid fitting outcomes for children with severe or profound hearing loss. The purpose of this study was to investigate the prescribed and measured gain of hearing aids fit according to the NAL-NL1 and the DSL v5 procedures for children with moderately severe to profound hearing loss, and to examine the impact of choice of prescription on predicted speech intelligibility and loudness. Participants were fit with Phonak Naida V SP hearing aids according to the NAL-NL1 and DSL v5 procedures. The Speech Intelligibility Index (SII) and estimated loudness were calculated using published models. The sample consisted of 16 children (30 ears) aged between 7 and 17 yr. The measured hearing aid gains were compared with the prescribed gains at 50 (low), 65 (medium), and 80 dB SPL (high) input levels. The goodness of fit-to-targets was quantified by calculating the average root-mean-square (RMS) error of the measured gain compared with prescriptive gain targets for 0.5, 1, 2, and 4 kHz. The significance of differences between prescriptions for hearing aid gains, SII, and loudness was examined by performing analyses of variance. Correlation analyses were used to examine the relationship between measures. The DSL v5 prescribed significantly higher overall gain than the NAL-NL1 procedure for the same audiograms. For low and medium input levels, the hearing aids of all children fit with NAL-NL1 were within 5 dB RMS of prescribed targets, but 33% (10 ears) deviated from the DSL v5 targets by more than 5 dB RMS on average. For high input level, the hearing aid fittings of 60% and 43% of ears deviated by more than 5 dB RMS from targets of NAL-NL1 and DSL v5, respectively. Greater deviations from targets were associated with more severe hearing loss. On average, the SII was higher for DSL v5 than for NAL-NL1 at low input level. No significant difference in SII was found between prescriptions at medium or high input level, despite greater loudness for DSL v5 than for NAL-NL1. Although targets between 0.25 and 2 kHz were well matched for both prescriptions in commercial hearing aids, gain targets at 4 kHz were matched for NAL-NL1 only. Although the two prescriptions differ markedly in estimated loudness, they resulted in comparable predicted speech intelligibility for medium and high input levels. American Academy of Audiology.
Oscillator Strengths and Predissociation Widths for Rydberg Transitions in Carbon Monoxide
NASA Technical Reports Server (NTRS)
Federman, Steven R.; Sheffer, Y.; Eidelsberg, Michele; Lemaire, Jean-Louis; Fillion, Jean-Hugues; Rostas, Francois; Ruiz, J.
2006-01-01
CO is used as a probe of astronomical environments ranging from planetary atmospheres and comets to interstellar clouds and the envelopes surrounding stars near the end of their lives. One of the processes controlling the CO abundance and the ratio of its isotopomers is photodissociation. Accurate oscillator strengths for Rydberg transitions are needed for modeling this process. Absorption bands were analyzed by synthesizing the profiles with codes developed independently in Meudon and Toledo. Each synthetic spectrum was adjusted to match the experimental one in a non-linear least-squares fitting procedure with the band oscillator strength and the line width (instrumental and predissociation) as adjustable parameters.
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
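The weighting idea described above can be shown in miniature (this is not the paddy data): points with smaller error standard deviation get weight w_i = 1/sigma_i, so precise observations influence the polynomial fit more than noisy ones.

```python
# Ordinary vs. weighted least-squares polynomial fit under heteroscedastic noise.
import numpy as np

rng = np.random.default_rng(9)
x = np.linspace(0, 10, 40)
sigma = np.where(x < 5, 0.2, 1.5)                  # noise grows for x >= 5
y = 1.0 + 0.5 * x - 0.03 * x**2 + sigma * rng.standard_normal(x.size)

coef_ols = np.polyfit(x, y, 2)                     # ordinary least squares
coef_wls = np.polyfit(x, y, 2, w=1.0 / sigma)      # weights ~ 1/sigma
print("OLS coefficients:", coef_ols.round(3))
print("WLS coefficients:", coef_wls.round(3))
```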
Clark, D Angus; Bowles, Ryan P
2018-04-23
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
NASA Astrophysics Data System (ADS)
Kanisch, G.
2017-05-01
The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by the weighted linear least-squares method, with an additional step in which uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow resolving interferences between radionuclide activities also in the case of calculating detection limits, where they can improve the latter by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.
1965-01-01
1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the 'library' of spectra used to fit the experimental curves, have been computed for a number of 'libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
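The core of the multicomponent least-squares analysis is a linear unmixing step, sketched below. Measured extinctions at many wavelengths are a linear mix of library spectra; lstsq recovers the concentrations, and the rows of the pseudoinverse play the role of the error coefficients (the sensitivity of each concentration to extinction error). The library spectra here are synthetic Gaussians, not real nucleoside spectra.

```python
# Multicomponent analysis: E = lib @ c, solved by linear least squares.
import numpy as np

wl = np.linspace(220, 300, 81)                     # wavelengths, nm
lib = np.stack([np.exp(-0.5 * ((wl - c) / 12) ** 2)  # columns: component spectra
                for c in (245, 260, 275)], axis=1)
c_true = np.array([0.3, 0.5, 0.2])
rng = np.random.default_rng(4)
E = lib @ c_true + 0.002 * rng.standard_normal(wl.size)

c_fit, *_ = np.linalg.lstsq(lib, E, rcond=None)
err_coeff = np.linalg.pinv(lib)                    # maps extinction error -> conc. error
print("concentrations:", c_fit.round(3))
print("max |error coefficient| per component:", np.abs(err_coeff).max(axis=1).round(3))
```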
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization and least squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. The residual background intensity error and the remaining phase-shift correction error can then be compensated by the least squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm copes with two or three interferograms affected by environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
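A minimal algebraic least-squares ellipse fit, the kind of step the method above relies on, is sketched below: points (x, y) from two phase-shifted fringe signals trace an ellipse a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1, which is solved directly by lstsq. The offsets, amplitudes and phase shift are invented.

```python
# Direct algebraic ellipse fit to a noisy Lissajous figure from two fringe signals.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 200)
delta = 1.2                                        # unknown phase shift (not 90 deg)
rng = np.random.default_rng(6)
x = 1.0 + 0.8 * np.cos(theta) + 0.01 * rng.standard_normal(theta.size)
y = 0.5 + 0.6 * np.cos(theta + delta) + 0.01 * rng.standard_normal(theta.size)

D = np.column_stack([x**2, x * y, y**2, x, y])     # design matrix for the conic
coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
a, b, c, d, e = coef
print("discriminant b^2 - 4ac (negative for an ellipse):", b * b - 4 * a * c)
```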
Code of Federal Regulations, 2012 CFR
2012-10-01
..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...
Code of Federal Regulations, 2013 CFR
2013-10-01
..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...
Code of Federal Regulations, 2011 CFR
2011-10-01
..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...
Interpretation of the Coefficients in the Fit y = at + bx + c
ERIC Educational Resources Information Center
Farnsworth, David L.
2006-01-01
The goals of this note are to derive formulas for the coefficients a and b in the least-squares regression plane y = at + bx + c for observations (t[subscript]i,x[subscript]i,y[subscript]i), i = 1, 2, ..., n, and to present meanings for the coefficients a and b. In this note, formulas for the coefficients a and b in the least-squares fit are…
Three Perspectives on Teaching Least Squares
ERIC Educational Resources Information Center
Scariano, Stephen M.; Calzada, Maria
2004-01-01
The method of Least Squares is the most widely used technique for fitting a straight line to data, and it is typically discussed in several undergraduate courses. This article focuses on three developmentally different approaches for solving the Least Squares problem that are suitable for classroom exposition.
Subsite mapping of enzymes. Depolymerase computer modelling.
Allen, J D; Thoma, J A
1976-01-01
We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
Bootstrapping Least Squares Estimates in Biochemical Reaction Networks
Linder, Daniel F.
2015-01-01
The paper proposes new computational methods for computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network's partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated, since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
Phasing via pure crystallographic least squares: an unexpected feature.
Burla, Maria Cristina; Carrozzini, Benedetta; Cascarano, Giovanni Luca; Giacovazzo, Carmelo; Polidori, Giampiero
2018-03-01
Crystallographic least-squares techniques, the main tool for crystal structure refinement of small and medium-size molecules, are for the first time used for ab initio phasing. It is shown that the chief obstacle to such use, the least-squares severe convergence limits, may be overcome by a multi-solution procedure able to progressively recognize and discard model atoms in false positions and to include in the current model new atoms sufficiently close to correct positions. The applications show that the least-squares procedure is able to solve many small structures without the use of important ancillary tools: e.g. no electron-density map is calculated as a support for the least-squares procedure.
Kumar, Keshav; Mishra, Ashok Kumar
2015-07-01
The fluorescence characteristics of 8-anilinonaphthalene-1-sulfonic acid (ANS) in ethanol-water mixtures, in combination with partial least squares (PLS) analysis, were used to propose a simple and sensitive analytical procedure for monitoring the adulteration of ethanol by water. The proposed analytical procedure was found to be capable of detecting even small levels of adulteration of ethanol by water. The robustness of the procedure is evident from statistical parameters such as the squared correlation coefficient (R(2)), the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP), which were found to be well within acceptable limits.
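The calibration idea can be sketched with sklearn's PLSRegression standing in for the paper's PLS step. The rows of X below are synthetic fluorescence spectra whose peak position shifts with water fraction y, which is an assumption made purely for illustration; RMSEC and RMSEP are the root-mean-square errors on the calibration and prediction sets.

```python
# PLS calibration of water fraction from (synthetic) fluorescence spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
y = rng.uniform(0, 0.2, 60)                        # water fraction per sample
wl = np.linspace(400, 600, 150)                    # emission wavelengths, nm
peak = 470 + 200 * y[:, None]                      # assumed peak shift with water
X = np.exp(-0.5 * ((wl - peak) / 30) ** 2) + 0.01 * rng.standard_normal((60, wl.size))

Xc, Xp, yc, yp = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(Xc, yc)
rmsec = np.sqrt(np.mean((pls.predict(Xc).ravel() - yc) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(Xp).ravel() - yp) ** 2))
print(f"RMSEC={rmsec:.4f}  RMSEP={rmsep:.4f}")
```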
Asteroid orbit fitting with radar and angular observations
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2013-12-01
The asteroid orbit fitting problem using radar and angular observations has been considered. The problem was solved in a standard way by minimizing the weighted sum of squares of residuals. In the orbit fitting, both kinds of radar observations have been used: observations of time delays and of Doppler frequency shifts. The weight for angular observations has been set the same for all of them and determined as the inverse mean-square residual obtained in an orbit fitting using just angular observations. The weights of radar observations have been set as the inverse squared errors of these observations, published together with them in the Minor Planet Center electronic circulars (MPECs). Five asteroids have been taken from these circulars for the orbit fitting, chosen to fulfill the requirement that more than six radar observations of each be available. The asteroids are 1950 DA, 1999 RQ36, 2002 NY40, 2004 DC and 2005 EU2. Several orbit fittings have been done for these asteroids: with just angular observations; with just radar observations; and with both angular and radar observations. The obtained results are quite acceptable because, in the last case, the mean-square angular residuals are approximately equal to those obtained in the fitting with just angular observations. As for the mean-square radar residuals, the time delay residuals for three asteroids do not exceed 1 μs, for the two others ~10 μs, and the Doppler shift residuals for three asteroids do not exceed 1 Hz, for the two others ~10 Hz. The motion equations included perturbations from 9 planets and the Moon using the DE422 ephemerides. The numerical integration has been performed with the 27-order Everhart method with variable step. All calculations have been executed to a 34-digit decimal precision (i.e., using 128-bit floating-point numbers). Further, the sizes of the confidence ellipsoids of the improved orbit parameters have been compared, taking as an indicator of ellipsoid size the geometric mean of its six semi-axes. A comparison has shown that the confidence ellipsoids obtained in orbit fitting with both angular and radar observations are several times smaller than the ellipsoids obtained with just angular observations.
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
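The scheme the report describes reduces, per iteration, to solving the simultaneous linear (normal) equations J^T J dp = J^T r that arise from the second-order expansion of chi-square. The sketch below shows this Gauss-Newton style step with a finite-difference Jacobian; the exponential model and data are illustrative, not the report's examples.

```python
# One quadratic-expansion (Gauss-Newton) iteration per loop pass.
import numpy as np

def model(x, p):                                   # example model: a * exp(-b * x)
    return p[0] * np.exp(-p[1] * x)

def jacobian(x, p, h=1e-7):                        # central-difference Jacobian
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p); dp[j] = h
        J[:, j] = (model(x, p + dp) - model(x, p - dp)) / (2 * h)
    return J

x = np.linspace(0, 4, 30)
rng = np.random.default_rng(10)
y = model(x, np.array([2.0, 0.7])) + 0.02 * rng.standard_normal(x.size)

p = np.array([1.0, 1.0])                           # initial guess
for _ in range(8):
    r = y - model(x, p)                            # residuals
    J = jacobian(x, p)
    p = p + np.linalg.solve(J.T @ J, J.T @ r)      # solve the normal equations
print("fitted parameters:", p)
```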
Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants
USDA-ARS?s Scientific Manuscript database
One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
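A hedged illustration of fitting the Langmuir isotherm S = Smax*K*C/(1 + K*C) with weights follows: passing sigma to curve_fit performs the weighted fit, so high-variance points count less. The data and the assumed noise model (noise growing with sorbed amount) are synthetic.

```python
# Weighted least-squares fit of the Langmuir sorption isotherm.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, Smax, K):
    return Smax * K * C / (1.0 + K * C)

C = np.array([0.5, 1, 2, 5, 10, 20, 50.0])         # solution P concentration
rng = np.random.default_rng(12)
sigma = 0.02 * langmuir(C, 120.0, 0.15)            # assumed: noise ~ sorbed amount
S = langmuir(C, 120.0, 0.15) + sigma * rng.standard_normal(C.size)

popt, pcov = curve_fit(langmuir, C, S, p0=(100.0, 0.1),
                       sigma=sigma, absolute_sigma=True)
print("Smax, K:", popt.round(3), " std errors:", np.sqrt(np.diag(pcov)).round(3))
```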
Computing Robust, Bootstrap-Adjusted Fit Indices for Use with Nonnormal Data
ERIC Educational Resources Information Center
Walker, David A.; Smith, Thomas J.
2017-01-01
Nonnormality of data presents unique challenges for researchers who wish to carry out structural equation modeling. The subsequent SPSS syntax program computes bootstrap-adjusted fit indices (comparative fit index, Tucker-Lewis index, incremental fit index, and root mean square error of approximation) that adjust for nonnormality, along with the…
ISOFIT - A PROGRAM FOR FITTING SORPTION ISOTHERMS TO EXPERIMENTAL DATA
Isotherm expressions are important for describing the partitioning of contaminants in environmental systems. ISOFIT (ISOtherm FItting Tool) is a software program that fits isotherm parameters to experimental data via the minimization of a weighted sum of squared error (WSSE) obje...
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
Transient finite element modeling of functional electrical stimulation.
Filipovic, Nenad D; Peulic, Aleksandar S; Zdravkovic, Nebojsa D; Grbovic-Markovic, Vesna M; Jurisic-Skevin, Aleksandra J
2011-03-01
Transcutaneous functional electrical stimulation is commonly used for strengthening muscle. However, transient effects during stimulation are not yet well explored. The effect of an amplitude change of the stimulation can be described by a static model, but such a model does not distinguish between different pulse durations. The aim of this study is to present a finite element (FE) model of transient electrical stimulation of the forearm. Discrete FE equations were derived using a standard Galerkin procedure. The different tissue conductive and dielectric properties were fitted to experimental measurements using the least squares method and trial-and-error analysis. This study showed that FE modeling of electrical stimulation can give the spatial-temporal distribution of the applied current in the forearm. Three different cases were modeled with the same geometry but different input current pulses, in order to fit the tissue properties using transient FE analysis. All three cases were compared with experimental measurements of intramuscular voltage in one volunteer.
A model of the instantaneous pressure-velocity relationships of the neonatal cerebral circulation.
Panerai, R B; Coughtrey, H; Rennie, J M; Evans, D H
1993-11-01
The instantaneous relationship between arterial blood pressure (BP) and cerebral blood flow velocity (CBFV), measured with Doppler ultrasound in the anterior cerebral artery, is represented by a vascular waterfall model comprising vascular resistance, compliance, and critical closing pressure. One-minute recordings obtained from 61 low birth weight newborns were fitted to the model using a least-squares procedure with correction for the time delay between the BP and CBFV signals. A sensitivity analysis was performed to study the effects of low-pass filtering (LPF), cutoff frequency, and noise on the estimated parameters of the model. Results indicate excellent fitting of the model (F-test, p < 0.0001) when the BP and CBFV signals are low-pass filtered at 7.5 Hz. Reconstructed CBFV waveforms using the BP signal and the model parameters have a mean correlation coefficient of 0.94 with the measured flow velocity tracing (N = 232 epochs). The model developed can be useful for interpreting clinical findings and as a framework for research into cerebral autoregulation.
XAFS Debye-Waller Factors Temperature-Dependent Expressions for Fe+2-Porphyrin Complexes
NASA Astrophysics Data System (ADS)
Dimakis, Nicholas; Bunker, Grant
2007-02-01
We present an efficient and accurate method for directly calculating single and multiple scattering X-ray absorption fine structure (XAFS) thermal Debye-Waller factors for Fe+2-porphyrin complexes. The number of multiple scattering Debye-Waller factors on metal porphyrin centers exceeds the number of available parameters that XAFS experimental data can support during fitting with simulated spectra. Using Density Functional Theory (DFT) with the hybrid functional X3LYP, phonon normal mode spectrum properties are used to express the mean square variation of the half-scattering path length for a Fe+2-porphyrin complex as a function of temperature for the most important single and multiple scattering paths of the complex, thus virtually eliminating them from the fitting procedure. Modeled calculations are compared with corresponding values obtained from a DFT-built and optimized Fe+2-porphyrin bis-histidine structure as well as from experimental XAFS spectra previously reported. Excellent agreement between calculated and reference Debye-Waller factors for Fe+2-porphyrins is obtained.
NASA Technical Reports Server (NTRS)
Chang, T. S.
1974-01-01
A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
Measurements and modeling of long-path 12CH4 spectra in the 4800-5300 cm-1 region
NASA Astrophysics Data System (ADS)
Nikitin, A. V.; Thomas, X.; Régalia, L.; Daumont, L.; Rey, M.; Tashkun, S. A.; Tyuterev, Vl. G.; Brown, L. R.
2014-05-01
A new study of 12CH4 line positions and intensities was performed for the lower portion of the Tetradecad region between 4800 and 5300 cm-1 using long path (1603 m) spectra of a normal sample of CH4 at three pressures recorded with the Fourier transform spectrometer in Reims, France. Line positions and intensities were retrieved by least-squares curve-fitting procedures and analyzed using the effective Hamiltonian and the effective dipole moment expressed in terms of irreducible tensor operators adapted to spherical top molecules. An existing spectrum of enriched 13CH4 was used to discern the isotopic lines. A new measured linelist produced positions and intensities for 5851 features (a factor of two more than prior work). Assignments were made for 46% of these; 2725 experimental line positions and 1764 selected line intensities were fitted with RMS standard deviations of 0.004 cm-1 and 7.3%, respectively. The RMS of prior intensity fits of the lower Tetradecad was a factor of two worse. The sum of observed intensities between 4800 and 5300 cm-1 fell within 5% of the value predicted from variational calculations.
Accurate reconstruction of the thermal conductivity depth profile in case hardened steel
NASA Astrophysics Data System (ADS)
Celorrio, Ricardo; Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Mandelis, Andreas
2010-04-01
The problem of retrieving a nonhomogeneous thermal conductivity profile from photothermal radiometry data is addressed from the perspective of a stabilized least squares fitting algorithm. We have implemented an inversion method with several improvements: (a) a renormalization of the experimental data which removes not only the instrumental factor but also the constants affecting the amplitude and the phase, (b) the introduction of a frequency weighting factor to balance the contributions of high and low frequencies in the inversion algorithm, (c) the simultaneous fitting of amplitude and phase data, balanced according to their experimental noises, (d) a modified Tikhonov regularization procedure to stabilize the inversion, and (e) the Morozov discrepancy principle to stop the iterative process automatically, according to the experimental noise, and avoid "overfitting" of the experimental data. We have tested this improved method by fitting theoretical data generated from a known conductivity profile. Finally, we have applied our method to real data obtained on a hardened stainless steel plate. The reconstructed in-depth thermal conductivity profile exhibits low dispersion, even at the deepest locations, and is in good anticorrelation with the hardness indentation test.
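The stabilization step can be shown in miniature: an ill-conditioned linear inversion A x = b is solved by Tikhonov-regularized least squares, x = argmin ||A x - b||^2 + lam ||x||^2, via the augmented normal equations. The smoothing kernel and regularization parameter below are illustrative, and no discrepancy-principle stopping rule is implemented here.

```python
# Naive vs. Tikhonov-regularized solution of an ill-conditioned inversion.
import numpy as np

rng = np.random.default_rng(13)
t = np.linspace(0.1, 1.0, 40)
grid = np.linspace(1, 20, 30)
A = np.exp(-np.outer(t, grid))                     # smoothing kernel: ill-conditioned
x_true = np.exp(-0.5 * ((grid - 8) / 2) ** 2)
b = A @ x_true + 1e-4 * rng.standard_normal(t.size)

lam = 1e-3                                         # regularization parameter
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
x_tikh = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
print("naive solution norm:      ", np.linalg.norm(x_naive).round(1))
print("regularized solution norm:", np.linalg.norm(x_tikh).round(3))
```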
Analysis of the Zeeman effect on D α spectra on the EAST tokamak
NASA Astrophysics Data System (ADS)
Gao, Wei; Huang, Juan; Wu, Chengrui; Xu, Zong; Hou, Yumei; Jin, Zhao; Chen, Yingjie; Zhang, Pengfei; Zhang, Ling; Wu, Zhenwei; EAST Team
2017-04-01
Based on passive spectroscopy, the Dα atomic emission spectra in the boundary region of the plasma have been measured by a high resolution optical spectroscopic multichannel analysis (OSMA) system on the EAST tokamak. The Zeeman splitting of the Dα spectral lines has been observed. A fitting procedure using a nonlinear least squares method was applied to fit and analyze all polarization π and ±σ components of the Dα atomic spectra to acquire information on the local plasma. The spectral line shape was investigated according to emission spectra from different regions (e.g., low-field side and high-field side) along the viewing chords. Each polarization component was fitted and classified into three energy categories (the cold, warm, and hot components) based on different atomic production processes, consistent with the transition energy distribution obtained by calculating the gradient of the Dα spectral profile. The emission position, magnetic field intensity, and flow velocity of the deuterium atoms are also discussed. Project supported by the National Natural Science Foundation of China (Grant Nos. 11275231 and 11575249) and the National Magnetic Confinement Fusion Energy Research Program of China (Grant No. 2015GB110005).
dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely, the 'Hannover Polynomial' (or "X-expansion") function, and the 'Tang-Toennies' and Scoles-Aziz 'HFD', exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling, or 2Σ splitting radial strength functions for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
...") U.S. affiliated importer FitMAX Inc. ("FitMAX") on June 2, 2010 and June 16, 2010. FitMAX... carbon-quality light-walled steel pipe and tube, of rectangular (including square) cross section, having...
A Graphic Chi-Square Test For Two-Class Genetic Segregation Ratios
A.E. Squillace; D.J. Squillace
1970-01-01
A chart is presented for testing the goodness of fit of observed two-class genetic segregation ratios against hypothetical ratios, eliminating the need of computing chi-square. Although designed mainly for genetic studies, the chart can also be used for other types of studies involving two-class chi-square tests.
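What the chart replaces can be computed in two lines today: a chi-square goodness-of-fit test of an observed two-class segregation count against a hypothetical ratio (a 3:1 ratio and invented counts are used below).

```python
# Chi-square test of a two-class segregation ratio against a 3:1 hypothesis.
from scipy.stats import chisquare

observed = [290, 110]                              # e.g. dominant : recessive counts
n = sum(observed)
chi2, p = chisquare(observed, f_exp=[0.75 * n, 0.25 * n])
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```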
Code of Federal Regulations, 2012 CFR
2012-10-01
... fittings shall have a bursting pressure of not less than 6,000 pounds per square inch. (b) All piping, in...,800 pounds per square inch shall be installed in the distribution manifold or such other location as... stop valves in the manifold shall be subjected to a pressure of 1,000 pounds per square inch. With no...
Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H
2012-01-01
MR imaging of hepatic iron overload can be achieved by estimating T2 values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least-squares algorithm based on the T2 IDEAL (Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation) technique for volumetric estimation of hepatic T2 in the setting of iron overload. The weighted least-squares T2 IDEAL technique improves T2 estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T2 assessment with nonlinear exponential fitting, and (ii) a 3D T2 IDEAL technique, with and without a weighted least-squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T2 IDEAL estimation. In cases of severe iron overload, T2 IDEAL without weighted least-squares reconstruction resulted in a relative overestimation of T2 compared with weighted least squares. Copyright © 2011 Wiley-Liss, Inc.
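The down-weighting idea generalizes beyond IDEAL: in any least-squares decay fit, echoes whose expected signal has fallen into the noise floor should carry less weight. Below is a minimal weighted nonlinear least-squares sketch of that concept; it is not the IDEAL water-fat reconstruction, and the echo times, noise level, and weighting rule are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(te, s0, r2):
    """Monoexponential decay, S(te) = S0 * exp(-R2 * te)."""
    return s0 * np.exp(-r2 * te)

rng = np.random.default_rng(0)
te = np.linspace(1.0, 12.0, 8)                       # echo times in ms (hypothetical)
signal = decay(te, 100.0, 0.5) + rng.normal(0.0, 2.0, te.size)

# Unweighted fit: late, noise-dominated echoes count as much as early ones.
p_plain, _ = curve_fit(decay, te, signal, p0=(90.0, 0.3))

# Weighted fit: sigma grows as the expected signal shrinks, so late echoes
# are automatically down-weighted.
sigma = 1.0 / np.clip(decay(te, *p_plain), 2.0, None)
p_wls, _ = curve_fit(decay, te, signal, p0=p_plain, sigma=sigma)
print("R2 plain vs weighted:", p_plain[1], p_wls[1])
```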
49 CFR 385.119 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Mexico-Domiciled Carriers § 385.119 Applicability of safety fitness and enforcement procedures. At all times during which a Mexico-domiciled motor...
49 CFR 385.717 - Applicability of safety fitness and enforcement procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Applicability of safety fitness and enforcement... REGULATIONS SAFETY FITNESS PROCEDURES Safety Monitoring System for Non-North American Carriers § 385.717 Applicability of safety fitness and enforcement procedures. At all times during which a non-North America...
NASA Technical Reports Server (NTRS)
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
On Nonequivalence of Several Procedures of Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2005-01-01
The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…
Tensor hypercontraction. II. Least-squares renormalization
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David
2012-12-01
The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
How many spectral lines are statistically significant?
NASA Astrophysics Data System (ADS)
Freund, J.
When experimental line spectra are fitted with least squares techniques one frequently does not know whether n or n + 1 lines may be fitted safely. This paper shows how an F-test can be applied in order to determine the statistical significance of including an extra line into the fitting routine.
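The abstract does not give the test's algebra, but the standard nested-model F-test it describes compares the drop in residual sum of squares against the residual variance of the larger fit. A hedged sketch, with all numbers hypothetical (assuming, e.g., three parameters per fitted line):

```python
from scipy.stats import f as f_dist

def extra_line_significance(rss_n, rss_n1, n_points, k_per_line, n_lines):
    """F-test for whether fitting n+1 lines instead of n significantly
    reduces the residual sum of squares (nested least-squares models)."""
    p_n = k_per_line * n_lines          # parameters in the n-line fit
    p_n1 = p_n + k_per_line             # parameters after adding one line
    df1 = p_n1 - p_n
    df2 = n_points - p_n1
    F = ((rss_n - rss_n1) / df1) / (rss_n1 / df2)
    return F, f_dist.sf(F, df1, df2)    # survival function gives the p-value

# Hypothetical numbers: 3 parameters per line, 200 spectral channels.
F, p = extra_line_significance(rss_n=12.4, rss_n1=9.8, n_points=200,
                               k_per_line=3, n_lines=3)
print(f"F = {F:.2f}, p = {p:.4f}")      # a small p favors keeping the extra line
```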
[Analysis of the impact of job characteristics and organizational support for workplace violence].
Li, M L; Chen, P; Zeng, F H; Cui, Q L; Zeng, J; Zhao, X S; Li, Z N
2017-12-20
Objective: To analyze the effects of job characteristics and organizational support on workplace violence, explore the influence path and the theoretical model, and provide a theoretical basis for reducing workplace violence. Methods: Stratified random sampling was used to select 813 medical staff, conductors and bus drivers in Chongqing, who completed a self-made questionnaire covering job characteristics, organizational attitude toward workplace violence, experience of workplace violence, fear of violence, etc., from February to October 2014. Amos 21.0 was used for path analysis and to establish a theoretical model of workplace violence. Results: The odds ratios of job characteristics and organizational attitude for workplace violence were 6.033 and 0.669, respectively, and the path coefficients were 0.41 and -0.14, respectively (P < 0.05). The fit indices of the model were: chi-square (χ²) = 67.835, chi-square to degrees of freedom ratio (χ²/df) = 5.112, goodness-of-fit index (GFI) = 0.970, adjusted goodness-of-fit index (AGFI) = 0.945, normed fit index (NFI) = 0.923, root mean square error of approximation (RMSEA) = 0.071, and fit criterion (Fmin) = 0.092, so the model fit the data well. Conclusion: Job characteristics are a risk factor for workplace violence, while organizational attitude is a protective factor; changing job characteristics and increasing the organization's willingness to deal with workplace violence are conducive to reducing workplace violence and increasing loyalty to the unit.
NASA Astrophysics Data System (ADS)
Espinoza, Néstor; Jordán, Andrés
2016-04-01
Very precise measurements of exoplanet transit light curves, from both ground- and space-based observatories, now make it possible to fit the limb-darkening coefficients in the transit-fitting procedure rather than fix them to theoretical values. This strategy has been shown to give better results, as fixing the coefficients to theoretical values can give rise to important systematic errors that directly bias the physical properties of the system derived from such light curves, such as the planetary radius. However, studies of the effect of limb-darkening assumptions on the retrieved parameters have mostly focused on the widely used quadratic limb-darkening law, leaving out other proposed laws that are either simpler or better descriptions of model intensity profiles. In this work, we show that laws such as the logarithmic, square-root and three-parameter laws do a better job than the quadratic and linear laws when deriving parameters from transit light curves, both in terms of bias and precision, for a wide range of situations. We therefore recommend deciding which law to use on a case-by-case basis. We provide code to guide this decision and to select the optimal law in a mean-square-error sense, which we note depends on both stellar and transit parameters. Finally, we demonstrate that the so-called exponential law is non-physical, as it typically produces negative intensities close to the limb, and should therefore not be used.
Reconnaissance On Chi-Square Test Procedure For Determining Two Species Association
NASA Astrophysics Data System (ADS)
Marisa, Hanifa
2008-01-01
Determining the association of two species by using the chi-square test has been published. Applying this procedure to plant species at a certain location shows that the procedure can report associations that are not ecologically meaningful. Tens of sampling units were made to record some weed species in Indralaya, South Sumatera. The chi-square test with continuity correction, X² = N(|ad - bc| - N/2)² / (mnrs), applied to two weed species (Cleome sp. and Eleusine indica) shows a positive association, while ecologically, in nature, there is no relationship between them. Some alternatives to this problem are proposed: simplify the chi-square test steps, carry out further study to identify true ecological association, or, as a last resort, ignore the result.
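The statistic quoted above is the Yates-corrected chi-square for a 2x2 presence/absence table, with a, b, c, d the cell counts and m, n, r, s the marginal totals. A small self-contained sketch, with hypothetical counts:

```python
def yates_chi_square(a, b, c, d):
    """Chi-square with Yates continuity correction for a 2x2 association table:
    X^2 = N(|ad - bc| - N/2)^2 / (m*n*r*s), with m,n row totals and r,s column totals."""
    N = a + b + c + d
    m, n = a + b, c + d
    r, s = a + c, b + d
    return N * (abs(a * d - b * c) - N / 2.0) ** 2 / (m * n * r * s)

# Hypothetical joint-occurrence counts for two weed species in 30 sampling units.
x2 = yates_chi_square(a=12, b=3, c=2, d=13)
print(x2, x2 > 3.841)   # exceeds the 5% critical value with 1 df, i.e. a
                        # "significant" association regardless of its ecological cause
```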
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
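As an illustration of the general procedure (not the authors' dataset or software), the sketch below fits one of the six functions, the Gompertz, to synthetic cumulative-yield data and recovers daily yield and peak day from its first derivative; all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    """Cumulative yield: upper asymptote a, displacement b, rate k."""
    return a * np.exp(-b * np.exp(-k * t))

rng = np.random.default_rng(0)
dim = np.linspace(5.0, 305.0, 61)                       # days in milk
y = gompertz(dim, 9000.0, 3.0, 0.02) + rng.normal(0.0, 40.0, dim.size)

(a, b, k), _ = curve_fit(gompertz, dim, y, p0=(8000.0, 2.0, 0.01))

daily = a * b * k * np.exp(-k * dim) * np.exp(-b * np.exp(-k * dim))  # dY/dt
t_peak = np.log(b) / k                                  # Gompertz inflection point
print(f"peak {daily.max():.1f} kg/d near day {t_peak:.0f}; "
      f"305-d total {gompertz(305.0, a, b, k):.0f} kg")
```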
NASA Astrophysics Data System (ADS)
Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano
2014-08-01
Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, both by increasing acquisition speed and by improving accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, a sphere and a cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature to construct a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the optimization of the non-linear function fitted to the three geometries. The results show that, with this combination, a higher-quality fit (a smaller norm of the residuals) can be obtained while preserving the computational cost. Fitting an incomplete point cloud, i.e. one that does not cover the complete feature (e.g. only half of the total part surface), is also investigated. Finally, a case study of fitting a hemisphere is presented.
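The role played by the initial guess can be illustrated with a common two-stage circle fit: an algebraic (Kasa) linear solve supplies the starting point and Levenberg-Marquardt refines the geometric distances. This is only a sketch of the initialization-plus-LM pattern; the paper's chaos-optimization initial guess is replaced here by the Kasa step, and the partial-arc data mimic an incomplete point cloud.

```python
import numpy as np
from scipy.optimize import least_squares

def kasa_initial_guess(x, y):
    """Algebraic (Kasa) circle fit: linear least squares on
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2."""
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    cx, cy, t = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return cx, cy, np.sqrt(t + cx**2 + cy**2)

def geometric_residuals(p, x, y):
    cx, cy, r = p
    return np.hypot(x - cx, y - cy) - r        # distance of each point to the circle

rng = np.random.default_rng(1)
theta = np.linspace(0.0, 0.8 * np.pi, 80)      # partial arc: an incomplete point cloud
x = 3.0 + 2.0 * np.cos(theta) + rng.normal(0.0, 0.01, theta.size)
y = -1.0 + 2.0 * np.sin(theta) + rng.normal(0.0, 0.01, theta.size)

fit = least_squares(geometric_residuals, kasa_initial_guess(x, y),
                    args=(x, y), method="lm")
print("center and radius:", fit.x)             # approximately (3, -1, 2)
```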
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective of further validating the model. The a priori criteria for validation were (1) that the resulting model can simulate the measured data correctly (i.e., goodness of fit), and (2) that this is achieved without needing extreme, probably non-physiological parameter values. We used a least-squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for the regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
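A toy two-stage sketch of the OCLO idea for two predictors: first find coefficient directions that maximize Kendall's tau (scale does not affect tau), then rescale the tau-maximizing direction by ordinary least squares. The coarse grid search below stands in for the paper's actual estimator and is for illustration only.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=60)
y[0] += 20.0                                    # one extreme score (fat tail)

# Stage 1: maximize the ordinal fit. Tau is scale-invariant, so it suffices to
# search coefficient directions on the unit circle.
angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])
taus = np.array([kendalltau(X @ w, y)[0] for w in dirs])
candidates = dirs[np.isclose(taus, taus.max())]

# Stage 2: among tau-maximizing directions, choose the least-squares-best rescaling.
best = (np.inf, None, None)
for w in candidates:
    A = np.column_stack([X @ w, np.ones(len(y))])   # fit y ~ c*(Xw) + d in closed form
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = np.sum((y - A @ coef) ** 2)
    if sse < best[0]:
        best = (sse, coef[0] * w, coef[1])
print("slopes:", best[1], "intercept:", best[2])
```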
Single-level resonance parameters fit nuclear cross-sections
NASA Technical Reports Server (NTRS)
Drawbaugh, D. W.; Gibson, G.; Miller, M.; Page, S. L.
1970-01-01
Least-squares analyses of experimental differential cross-section data for the U-235 nucleus have yielded single-level Breit-Wigner resonance parameters that simultaneously fit three nuclear cross sections: capture, fission, and total.
Fitting Orbits to Jupiter's Moons with a Spreadsheet.
ERIC Educational Resources Information Center
Bridges, Richard
1995-01-01
Describes how a spreadsheet is used to fit a circular orbit model to observations of Jupiter's moons made with a small telescope. Kepler's Third Law and the inverse square law of gravity are observed. (AIM)
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. Typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and empirical models for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model combined with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion of a 5-log10 reduction (5D); the desired 5D reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
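A minimal sketch of the primary-model step, assuming the Mafart parameterization log10(N/N0) = -b*t^n and hypothetical survival data; the time to a 5-log10 reduction then follows by inverting the fitted model, as in the t_d calculation described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log10_survival(t, b, n):
    """Mafart-parameterized Weibull: log10(N/N0) = -b * t**n (n < 1 gives tailing)."""
    return -b * t**n

# Hypothetical log10 survival ratios versus minutes at a fixed pressure.
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
log_s = np.array([-0.9, -1.5, -2.3, -2.9, -3.4, -4.2, -4.8])

(b, n), _ = curve_fit(weibull_log10_survival, t, log_s, p0=(1.0, 0.5))

# Time to a 5-log10 reduction: solve b * t**n = 5 for t.
t5 = (5.0 / b) ** (1.0 / n)
print(f"b = {b:.3f}, n = {n:.3f}, t5 = {t5:.2f} min")
```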
Johnson square procedure for lentigo maligna and lentigo maligna melanoma.
Patel, A N; Perkins, W; Leach, I H; Varma, S
2014-07-01
Lentigo maligna (LM) and lentigo maligna melanoma (LMM) can be difficult to manage surgically. Predetermined margins can be inadequate because of subclinical spread, or can affect function when margins are adjacent to the eye or mouth. To describe our 5-year experience in Nottingham of using the staged square procedure (Johnson square) in excising difficult facial LM and LMM. The square procedure is a staged technique useful for ill-defined lesions and for lesions that have a high recurrence rate due to subclinical spread. It uses paraffin wax-embedded peripheral vertical sections for margin control, ensuring complete clearance as the surgical margins are usually examined at distances of 2-5 mm from the periphery of the lesion. We treated 21 patients with LM or LMM with the staged square procedure over a 5-year period. Of the 21 patients, 10 needed only one stage of surgery, 6 needed two stages, 3 needed three stages and 2 needed four stages. To date, there has been only one recurrence, which was of an extensive lesion that crossed the medial canthus, making margin control impossible because of the anatomical limitations. The staged square procedure is an effective treatment for LM and LMM. It attempts to conserve tissue while ensuring a higher clearance rate. This offers favourable cosmetic outcomes and better prognosis, especially for facial LM and LMM. © 2014 British Association of Dermatologists.
A Generic Nonlinear Aerodynamic Model for Aircraft
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2014-01-01
A generic model of the aerodynamic coefficients was developed using wind tunnel databases for eight different aircraft and multivariate orthogonal functions. For each database and each coefficient, models were determined using polynomials expanded about the state and control variables, and an orthogonalization procedure. A predicted squared-error criterion was used to automatically select the model terms. Modeling terms picked in at least half of the analyses, which totalled 45 terms, were retained to form the generic nonlinear aerodynamic (GNA) model. Least squares was then used to estimate the model parameters and associated uncertainty that best fit the GNA model to each database. Nonlinear flight simulations were used to demonstrate that the GNA model produces accurate trim solutions, local behavior (modal frequencies and damping ratios), and global dynamic behavior (91% accurate state histories and 80% accurate aerodynamic coefficient histories) under large-amplitude excitation. This compact aerodynamics model can be used to decrease on-board memory storage requirements, quickly change conceptual aircraft models, provide smooth analytical functions for control and optimization applications, and facilitate real-time parametric system identification.
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
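The abstract reduces shape-factor estimation to root finding on f(q) = 0. The real f(q) is built from normalized residual anomaly values at the origin and at chosen profile points, so the sketch below substitutes a hypothetical stand-in function and only illustrates the bracketing-and-root-finding step over the physically meaningful range of shape factors (roughly 0.5 for a semi-infinite vertical cylinder up to 1.5 for a sphere).

```python
from scipy.optimize import brentq

# Placeholder f(q): in the paper this function is constructed from the measured
# gravity data, so a hypothetical monotone stand-in is used here for illustration.
def f(q):
    return q**2 - 0.8 * q - 0.9

# Bracket the root within the physically meaningful range of shape factors.
q_hat = brentq(f, 0.4, 1.6)
print(f"estimated shape factor q = {q_hat:.3f}")
```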
An Application of M2 Statistic to Evaluate the Fit of Cognitive Diagnostic Models
ERIC Educational Resources Information Center
Liu, Yanlou; Tian, Wei; Xin, Tao
2016-01-01
The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M2 and the associated root mean square error of approximation (RMSEA2) in item factor analysis were extended to evaluate the fit of…
Application of least-squares fitting of ellipse and hyperbola for two dimensional data
NASA Astrophysics Data System (ADS)
Lawiyuniarti, M. P.; Rahmadiantri, E.; Alamsyah, I. M.; Rachmaputri, G.
2018-01-01
Application of the least-squares method for ellipses and hyperbolas to two-dimensional data has been used to analyze the spatial continuity of coal deposits in a mining field, using the fitting method introduced by Fitzgibbon, Pilu, and Fisher in 1996. This method uses 4a0a2 - a1^2 = 1 as a constraint function. Meanwhile, in 1994, Gander, Golub and Strebel introduced ellipse and hyperbola fitting methods using a singular value decomposition (SVD) approach. This SVD approach can be generalized to three-dimensional fitting. In this research, we discuss these two fitting methods and apply them to four coal-content variables (ash, calorific value, sulfur, and seam thickness) so as to produce an ellipse or hyperbola. In addition, we compute the error difference resulting from each method and, from that calculation, conclude that although the errors are not much different, the error of the method introduced by Fitzgibbon et al. is smaller than that of the fitting method introduced by Gander, Golub and Strebel.
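A compact version of the Fitzgibbon-Pilu-Fisher direct fit can be written as a generalized eigenproblem; the sketch below follows the standard published recipe (scatter matrix, constraint matrix, eigenvector with the single positive eigenvalue) on synthetic noisy ellipse points, not the coal data of the study.

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Direct least-squares ellipse fit with the constraint 4*a0*a2 - a1^2 = 1
    on the conic a0*x^2 + a1*x*y + a2*y^2 + a3*x + a4*y + a5 = 0."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # constraint matrix: a.T @ C @ a = 4*a0*a2 - a1^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = lambda C a; the ellipse solution is the
    # eigenvector associated with the single positive eigenvalue.
    w, v = np.linalg.eig(np.linalg.inv(S) @ C)
    a = v[:, np.argmax(w.real)].real
    return a / np.sqrt(a @ C @ a)     # rescale so the constraint equals 1

rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0 * np.pi, 100)
x = 2.0 + 3.0 * np.cos(t) + rng.normal(0.0, 0.02, t.size)
y = -1.0 + 1.5 * np.sin(t) + rng.normal(0.0, 0.02, t.size)
a = fit_ellipse_direct(x, y)
print("constraint check:", 4.0 * a[0] * a[2] - a[1] ** 2)   # ~1 by construction
```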
Relative crater production rates on planets
NASA Technical Reports Server (NTRS)
Hartmann, W. K.
1977-01-01
The relative numbers of impacts on different planets, estimated from the dynamical histories of planetesimals in specified orbits (Wetherill, 1975), are converted by a described procedure to crater production rates. Conversions depend on impact velocity and surface gravity. Crater retention ages can then be derived from the ratio of the crater density to the crater production rate. The data indicate that the terrestrial planets have crater production rates within a factor of ten of each other. As an example, for the case of Mars, least-squares fits to crater-count data suggest an average age of 0.3 to 3 billion years for two types of channels. The age of Olympus Mons is discussed, and the effect of Tharsis volcanism on channel formation is considered.
Diffusion in liquid metal systems. [information on electrical resistivity and thermal conductivity
NASA Technical Reports Server (NTRS)
Ukanwa, A. O.
1975-01-01
Physical properties of twenty liquid metals are reported; some of the data on such liquid metal properties as density, electrical resistivity, thermal conductivity, and heat capacity are summarized in graphical form. Data on laboratory handling and safety procedures are summarized for each metal; heat-transfer correlations for liquid metals under various conditions of laminar and turbulent flow are included. Where sufficient data were available, temperature equations for the properties were obtained by the method of least-squares fit. All property values given are valid in the stated liquid-phase ranges only. Additional tabular data on some 40 metals are reported in the appendix. Included is a brief description of experiments that were performed to investigate diffusion in liquid indium-gallium systems.
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. The approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed in two steps. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as the sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error of 0.9844 inches for the starting configuration is reduced to 0.00367 inch by the end of the third optimization run.
Analysis of positron lifetime spectra in polymers
NASA Technical Reports Server (NTRS)
Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.
1988-01-01
A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and the intensities of various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis to compute the estimates that conform to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to the previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal ion containing Epon-828 samples. The results are described.
An efficient parallel-processing method for transposing large matrices in place.
Portnoff, M R
1999-01-01
We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
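A single-threaded sketch of the blocked swap at the heart of such algorithms (not Portnoff's parallel implementation): off-diagonal tiles are exchanged through a tile-sized buffer and diagonal tiles are transposed in place, so only one small scratch block is ever allocated.

```python
import numpy as np

def transpose_square_inplace(a, block=64):
    """Blocked in-place transpose of a square matrix. Off-diagonal tiles (i,j)
    and (j,i) are swapped through a tile-sized buffer; diagonal tiles are
    transposed in place. Tiles small enough to fit in cache keep accesses local."""
    n = a.shape[0]
    for i in range(0, n, block):
        for j in range(i, n, block):
            upper = a[i:i + block, j:j + block]
            lower = a[j:j + block, i:i + block]
            if i == j:
                upper[...] = upper.T.copy()
            else:
                tmp = upper.T.copy()       # tile-sized scratch, not an n*n copy
                upper[...] = lower.T
                lower[...] = tmp
    return a

a = np.arange(25.0).reshape(5, 5)
expected = a.T.copy()
assert np.array_equal(transpose_square_inplace(a, block=2), expected)
```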
NASA Technical Reports Server (NTRS)
Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.
1987-01-01
The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about ±20 deg.
Characterizing the 21-cm absorption trough with pattern recognition and a numerical sampler
NASA Astrophysics Data System (ADS)
Tauscher, Keith A.; Rapetti, David; Burns, Jack O.; Monsalve, Raul A.; Bowman, Judd D.
2018-06-01
The highly redshifted sky-averaged 21-cm spectrum from neutral hydrogen is a key probe of a period of the Universe never before studied. Recent experimental advances have led to increasingly tight constraints, and the Experiment to Detect the Global EoR Signature (EDGES) has presented evidence for a detection of this global signal. In order to glean scientifically valuable information from these new measurements in a consistent manner, sophisticated fitting procedures must be applied. Here, I present a pipeline known as pylinex which uses Singular Value Decomposition (SVD), a pattern recognition tool, to leverage structure in the data induced by the design of an experiment, fitting for signals in the experiment's data in the presence of large systematics (such as the beam-weighted foregrounds), especially those without parametric forms. This method requires training sets for each component of the data. Once the desired signal is extracted in SVD eigenmode coefficient space, the posterior distribution must be consistently transformed into a physical parameter space. This is done with the combination of a numerical least-squares fitter and a Markov Chain Monte Carlo (MCMC) distribution sampler. After describing the pipeline's procedures and techniques, I present preliminary results of applying it to the EDGES low-band data used for their detection. The results include estimates of the signal in frequency space with errors and relevant parameter distributions.
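A minimal illustration of the training-set SVD plus linear-coefficient fit that such a pipeline builds on (this is not the pylinex API; the power-law "foreground" training set and band are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
freq = np.linspace(50.0, 100.0, 101)          # MHz, hypothetical low band

# Training set: smooth power-law-like curves standing in for beam-weighted foregrounds.
amps = rng.uniform(1e3, 2e3, size=500)
idx = rng.uniform(-2.6, -2.4, size=500)
training = amps[:, None] * (freq / 75.0) ** idx[:, None]

# Leading right-singular vectors of the centered training set are the eigenmodes.
mean = training.mean(axis=0)
_, _, Vt = np.linalg.svd(training - mean, full_matrices=False)
modes = Vt[:5]                                # keep the first 5 modes

# Fit a "measured" spectrum by linear least squares in eigenmode space.
data = 1.5e3 * (freq / 75.0) ** -2.5 + rng.normal(0.0, 0.1, freq.size)
coeff, *_ = np.linalg.lstsq(modes.T, data - mean, rcond=None)
print("rms residual:", (data - mean - modes.T @ coeff).std())
```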
NASA Astrophysics Data System (ADS)
Sturrock, P. A.
2008-01-01
Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.
VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T
2017-06-01
The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface, and to compare the radius of curvature in the horizontal and vertical meridians and test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine the foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. The radius of curvature was compared across the different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radii of curvature of the horizontal and vertical meridians differed statistically only at the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. The foveal contour can be modeled as a sphere with low curve fitting error over a limited distance, and the technique is capable of detecting subtle foveal contour differences between meridians.
Wolfe, Edward W; McGill, Michael T
2011-01-01
This article summarizes a simulation study of the performance of five item quality indicators (the weighted and unweighted versions of the mean square and standardized mean square fit indices and the point-measure correlation) under conditions of relatively high and low amounts of missing data under both random and conditional patterns of missing data for testing contexts such as those encountered in operational administrations of a computerized adaptive certification or licensure examination. The results suggest that weighted fit indices, particularly the standardized mean square index, and the point-measure correlation provide the most consistent information between random and conditional missing data patterns and that these indices perform more comparably for items near the passing score than for items with extreme difficulty values.
ERIC Educational Resources Information Center
Cai, Li; Lee, Taehun
2009-01-01
We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
ERIC Educational Resources Information Center
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador
NASA Astrophysics Data System (ADS)
Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.
2017-06-01
Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of the town of Guayaquil, Ecuador. The geostatistical approach was chosen because it provides the estimation error of the prediction map. The open-source statistical software R was used, mainly with the geoR, gstat and RGeostats libraries. Exploratory data analysis (EDA) and trend and structural analyses were carried out. Automatic model fitting by iterative least squares and other fitting procedures were employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as external drift, as well as universal kriging, were used to produce a detailed map of geoid undulation. The estimation uncertainty lay in the interval [-0.5, +0.5] m for the errors, with a maximum estimation standard deviation of 2 mm for the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than publicly available Earth gravitational models for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of combining geoid undulations from Global Navigation Satellite Systems and leveling field measurements with geostatistical techniques for use in high-accuracy engineering projects.
NASA Astrophysics Data System (ADS)
Trivedi, C. M.; Rana, V. A.; Hudge, P. G.; Kumbharkhane, A. C.
2016-08-01
Complex permittivity spectra of binary mixtures of varying concentrations of β-picoline and methanol (MeOH) were obtained using the time domain reflectometry (TDR) technique over the frequency range 10 MHz to 25 GHz at temperatures of 283.15, 288.15, 293.15 and 298.15 K. The dielectric relaxation parameters, namely the static permittivity (ε0), the high-frequency-limit permittivity (ε∞) and the relaxation time (τ), were determined by fitting the complex permittivity data to the single Debye/Cole-Davidson model. The complex nonlinear least-squares (CNLS) fitting procedure was carried out using the LEVMW software. The excess permittivity (ε0E) and the excess inverse relaxation time (1/τ)E, which contain information regarding molecular structure and interactions between polar liquids, were also determined. From the experimental data, parameters such as the effective Kirkwood correlation factor (geff), the Bruggeman factor (fB) and some thermodynamical parameters were calculated. The excess parameters were fitted to the Redlich-Kister polynomial equation. The values of the static permittivity and relaxation time increase nonlinearly with increasing mole fraction of MeOH at all temperatures. The values of the excess static permittivity (ε0E) and the excess inverse relaxation time (1/τ)E are negative for the β-picoline-MeOH system at all temperatures.
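A sketch of the complex nonlinear least-squares step for the simplest case, a single Debye relaxation, stacking real and imaginary residuals; LEVMW is not used here, and the spectrum, noise, and starting values are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def debye(omega, eps_s, eps_inf, tau_ps):
    """Single Debye relaxation: eps*(w) = eps_inf + (eps_s - eps_inf)/(1 + j*w*tau)."""
    return eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau_ps * 1e-12)

freq = np.logspace(7, 10.4, 40)                      # ~10 MHz to ~25 GHz
omega = 2.0 * np.pi * freq
rng = np.random.default_rng(5)
data = (debye(omega, 32.0, 4.5, 55.0)
        + rng.normal(0.0, 0.2, omega.size) + 1j * rng.normal(0.0, 0.2, omega.size))

def residuals(p):
    diff = data - debye(omega, *p)
    return np.concatenate([diff.real, diff.imag])    # stack real and imaginary parts

fit = least_squares(residuals, x0=(30.0, 3.0, 100.0))   # eps_s, eps_inf, tau in ps
print("eps0, eps_inf, tau(ps):", fit.x)
```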
A COUPLED 2 × 2D BABCOCK–LEIGHTON SOLAR DYNAMO MODEL. I. SURFACE MAGNETIC FLUX EVOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemerle, Alexandre; Charbonneau, Paul; Carignan-Dugas, Arnaud, E-mail: lemerle@astro.umontreal.ca, E-mail: paulchar@astro.umontreal.ca
The need for reliable predictions of the solar activity cycle motivates the development of dynamo models incorporating a representation of surface processes sufficiently detailed to allow assimilation of magnetographic data. In this series of papers we present one such dynamo model, and document its behavior and properties. This first paper focuses on one of the model's key components, namely surface magnetic flux evolution. Using a genetic algorithm, we obtain best-fit parameters of the transport model by least-squares minimization of the differences between the associated synthetic synoptic magnetogram and real magnetographic data for activity cycle 21. Our fitting procedure also returns Monte Carlo-like error estimates. We show that the range of acceptable surface meridional flow profiles is in good agreement with Doppler measurements, even though the latter are not used in the fitting process. Using a synthetic database of bipolar magnetic region (BMR) emergences reproducing the statistical properties of observed emergences, we also ascertain the sensitivity of global cycle properties, such as the strength of the dipole moment and timing of polarity reversal, to distinct realizations of BMR emergence, and on this basis argue that this stochasticity represents a primary source of uncertainty for predicting solar cycle characteristics.
Squared eigenfunctions for the Sasa-Satsuma equation
NASA Astrophysics Data System (ADS)
Yang, Jianke; Kaup, D. J.
2009-02-01
Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions which satisfy the linearized equation of an integrable equation. They are needed for various studies related to integrable equations, such as the development of its soliton perturbation theory. In this article, squared eigenfunctions are derived for the Sasa-Satsuma equation whose spectral operator is a 3×3 system, while its linearized operator is a 2×2 system. It is shown that these squared eigenfunctions are sums of two terms, where each term is a product of a Jost function and an adjoint Jost function. The procedure of this derivation consists of two steps: First is to calculate the variations of the potentials via variations of the scattering data by the Riemann-Hilbert method. The second one is to calculate the variations of the scattering data via the variations of the potentials through elementary calculations. While this procedure has been used before on other integrable equations, it is shown here, for the first time, that for a general integrable equation, the functions appearing in these variation relations are precisely the squared eigenfunctions and adjoint squared eigenfunctions satisfying, respectively, the linearized equation and the adjoint linearized equation of the integrable system. This proof clarifies this procedure and provides a unified explanation for previous results of squared eigenfunctions on individual integrable equations. This procedure uses primarily the spectral operator of the Lax pair. Thus two equations in the same integrable hierarchy will share the same squared eigenfunctions (except for a time-dependent factor). In the Appendix, the squared eigenfunctions are presented for the Manakov equations whose spectral operator is closely related to that of the Sasa-Satsuma equation.
He, Jinbo; Zhu, Hong; Luo, Xingwei; Cai, Taisheng; Wu, Siyao; Lu, Yao
2016-06-01
The Impact of Weight on Quality of Life for Kids (IWQOL-Kids) is the first self-report questionnaire for assessing weight-related quality of life for youth. However, there is no Chinese version of IWQOL-Kids. Thus, the objective of this research was to translate IWQOL-Kids into Mandarin and evaluate its psychometric properties in a large school-based sample. The total sample included 2282 participants aged 11-18 years old, including 1703 non-overweight, 386 overweight and 193 obese students. IWQOL-Kids was translated and culturally adapted by following the international guidelines for instrument linguistic validation procedures. The psychometric evaluation included internal consistency, test-retest reliability, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), convergent validity and discriminant validity. Cronbach's α for the Chinese version of IWQOL-Kids (IWQOL-Kids-C) was 0.956 and ranged from 0.891 to 0.927 for subscales. IWQOL-Kids-C showed a test-retest coefficient of 0.937 after 2 weeks, ranging from 0.847 to 0.903 for subscales. The original four-factor model was reproduced by EFA after seven iterations, accounting for 69.28% of the total variance. CFA demonstrated that the four-factor model had good fit indices, with comparative fit index = 0.92, normed fit index = 0.91, goodness of fit index = 0.86, root mean square error of approximation = 0.07 and root mean square residual = 0.03. Convergent validity and discriminant validity were demonstrated by higher correlations between similar constructs and lower correlations between dissimilar constructs of IWQOL-Kids-C and PedsQL™ 4.0. Significant differences were found across the body mass index groups, and IWQOL-Kids-C had higher effect sizes than PedsQL™ 4.0 when comparing non-overweight and obese groups, supporting the sensitivity of IWQOL-Kids-C. IWQOL-Kids-C is a satisfactory, valid and reliable instrument for assessing weight-related quality of life for Chinese children and adolescents aged 11-18 years old. © The Author 2015. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L
2009-04-14
We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis-->trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.
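The fitting engine underneath is interpolating moving least squares: a weighted local polynomial fit whose weights diverge at the data points, so the surface interpolates them. A one-dimensional sketch (the weight exponent and basis degree are illustrative choices, not those of the paper):

```python
import numpy as np

def imls_eval(x_data, y_data, x0, degree=2, p=4, tiny=1e-12):
    """Evaluate a 1D interpolating moving least-squares surface at x0: a weighted
    polynomial fit whose weights diverge as x0 approaches a data point."""
    dx = x_data - x0
    w = 1.0 / (np.abs(dx) ** p + tiny)
    V = np.vander(dx, degree + 1, increasing=True)   # columns: 1, dx, dx^2, ...
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * y_data, rcond=None)
    return coef[0]                                   # local polynomial value at x0

xs = np.linspace(-1.0, 1.0, 9)
ys = np.cos(2.0 * xs)
print(imls_eval(xs, ys, 0.3), np.cos(0.6))           # fitted versus true value
```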
Faraday rotation data analysis with least-squares elliptical fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Adam D.; McHale, G. Brent; Goerz, David A.
2010-10-15
A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
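To make the conic-fitting step concrete, here is a minimal numerical sketch: a direct least-squares fit of a general conic to noisy Lissajous-like data via SVD. This is a simplified stand-in for the paper's method (which constrains the fit to an ellipse); all data and values are illustrative.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a general conic ax^2+bxy+cy^2+dx+ey+f=0.

    The coefficient vector is the right singular vector of the design
    matrix with the smallest singular value (unit-norm normalization).
    """
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]  # (a, b, c, d, e, f), determined up to scale

# Synthetic Lissajous-like data: an ellipse with additive noise
rng = np.random.default_rng(0)
t = np.linspace(0, 2*np.pi, 400)
x = 3.0*np.cos(t) + 0.01*rng.standard_normal(t.size)
y = 1.5*np.sin(t + 0.4) + 0.01*rng.standard_normal(t.size)
print("conic coefficients:", fit_conic(x, y))
```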
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
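The core of the whole-field speedup is that a linear model shares one design matrix across all voxels, so a single least-squares solve replaces thousands of voxel-wise nonlinear fits. A minimal sketch of that idea, using a generic linear model rather than the actual dual-input compartment model:

```python
import numpy as np

# Sketch: a linear model y = A @ p with the same design matrix A for
# every voxel lets numpy solve all voxels in one lstsq call.
rng = np.random.default_rng(0)
n_time, n_voxels, n_params = 120, 50000, 3
A = rng.standard_normal((n_time, n_params))          # shared design matrix
P_true = rng.standard_normal((n_params, n_voxels))   # per-voxel parameters
Y = A @ P_true + 0.01*rng.standard_normal((n_time, n_voxels))

P_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)        # all voxels at once
print("max parameter error:", np.abs(P_hat - P_true).max())
```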
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, V.P.; Kobayashi, R.
1988-02-01
Infinite-dilution fugacity coefficients were obtained for the system fluorene/phenanthrene at thirteen temperatures by fitting total pressure across the entire mole fraction range by a computer routine. A thermodynamically consistent routine, that allowed for both positive and negative pressure deviations from the ideal values, was used to correlate data over the full mole fraction range from 0 to 1. The four-suffix Margules activity coefficient model without modification essentially served this purpose since total pressures and total pressure derivatives with respect to mole fraction were negligible compared to pressure measurement precision. The water/ethanol system and binary systems comprised of aniline, chlorobenzene, acetonitrile and other polar compounds were fit for total pressure across the entire mole fraction range for binary vapor-liquid equilibria (VLE) using the rigorous, thermodynamically consistent Gibbs-Duhem relation derived by Ibl and Dodge. Data correlation was performed using a computer least squares procedure. Infinite-dilution fugacity coefficients were obtained using a modified Margules activity coefficient model.
Criterion Predictability: Identifying Differences Between r-squares
ERIC Educational Resources Information Center
Malgady, Robert G.
1976-01-01
An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)
Nørrelykke, Simon F; Flyvbjerg, Henrik
2010-07-01
Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
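The sign and size of the weight-dependent bias can be seen in a toy version of the problem: values of a periodogram averaged over n spectra are gamma distributed, and fitting a constant level with data-based weights 1/y² reproduces the −2/n bias named above. A hedged Monte Carlo sketch (a constant-level fit, not the full Lorentzian; values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, reps = 10, 1.0, 2000        # spectra averaged, true level, trials
est_data_w, est_unw = [], []
for _ in range(reps):
    y = rng.gamma(shape=n, scale=m/n, size=200)   # averaged periodogram
    # weighted LS fit of a constant a: minimize sum w*(y - a)^2
    w = 1.0/y**2                                  # data-based weights
    est_data_w.append(np.sum(w*y)/np.sum(w))
    est_unw.append(y.mean())                      # unweighted fit
print("data-weighted bias:", np.mean(est_data_w) - m)   # ~ -2m/n = -0.2
print("unweighted bias:  ", np.mean(est_unw) - m)       # ~ 0
```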
Navy Fuel Composition and Screening Tool (FCAST) v2.8
2016-05-10
allowed us to develop partial least squares (PLS) models based on gas chromatography–mass spectrometry (GC-MS) data that predict fuel properties. [Truncated report front matter; recoverable keywords: chemometric property modeling; partial least squares (PLS); compositional profiler; Naval Air Systems Command Air-4.4.5; Patuxent River Naval Air Station; cumulative predicted residual error sum of squares; DiEGME (diethylene glycol monomethyl ether); FCAST (Fuel Composition and Screening Tool); FFP (Fit for…)]
Kim, Sun Jung; Yoo, Il Young
2016-03-01
The purpose of this study was to explain the health promotion behavior of Chinese international students in Korea using a structural equation model including acculturation factors. A survey using self-administered questionnaires was employed. Data were collected from 272 Chinese students who had resided in Korea for longer than 6 months. The data were analyzed using structural equation modeling. The p value of the final model is .31. The fit indices of the final model, such as the goodness of fit index, adjusted goodness of fit index, normed fit index, non-normed fit index, and comparative fit index, were all greater than .95. The root mean square residual and root mean square error of approximation also met the criteria. Self-esteem, perceived health status, acculturative stress and acculturation level had direct effects on the health promotion behavior of the participants, and the model explained 30.0% of the variance. Chinese students in Korea with higher self-esteem, perceived health status, and acculturation level, and lower acculturative stress, reported more health promotion behavior. The findings can be applied to develop health promotion strategies for this population. Copyright © 2016. Published by Elsevier B.V.
Determining a Prony Series for a Viscoelastic Material From Time Varying Strain Data
NASA Technical Reports Server (NTRS)
Tzikang, Chen
2000-01-01
In this study a method of determining the coefficients in a Prony series representation of a viscoelastic modulus from rate-dependent data is presented. Load versus time test data for a sequence of different rate loading segments are least-squares fitted to a Prony series hereditary integral model of the material tested. A nonlinear least squares regression algorithm is employed. The measured data include ramp loading, relaxation, and unloading stress-strain data. The resulting Prony series, which captures strain-rate loading and unloading effects, produces an excellent fit to the complex loading sequence.
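A minimal sketch of this kind of fit, assuming a two-term Prony series relaxation modulus and synthetic data (the paper fits a hereditary integral model to ramp/relaxation/unloading segments, which is more involved):

```python
import numpy as np
from scipy.optimize import least_squares

def prony_modulus(t, e_inf, e, tau):
    """Prony series modulus E(t) = E_inf + sum_i e_i * exp(-t/tau_i)."""
    return e_inf + np.sum(e[:, None]*np.exp(-t[None, :]/tau[:, None]), axis=0)

def residuals(p, t, data, n_terms):
    e_inf, e, tau = p[0], p[1:1 + n_terms], p[1 + n_terms:]
    return prony_modulus(t, e_inf, e, tau) - data

# Synthetic relaxation data (hypothetical units)
t = np.logspace(-2, 2, 100)
data = prony_modulus(t, 1.0, np.array([2.0, 0.5]), np.array([0.1, 10.0]))
data += 0.01*np.random.default_rng(2).standard_normal(t.size)

p0 = np.array([0.5, 1.0, 1.0, 0.05, 5.0])   # [E_inf, e1, e2, tau1, tau2]
fit = least_squares(residuals, p0, args=(t, data, 2), bounds=(0, np.inf))
print("fitted parameters:", fit.x)
```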
NASA Astrophysics Data System (ADS)
Wolkenberg, Andrzej; Przeslawski, Tomasz
1996-04-01
Galvanomagnetic measurements were performed on square-shaped samples using the Van der Pauw method and on a Hall bar, at low electric fields (approximately 1.5 V/cm) and magnetic induction (approximately 6 kG), in order to compare theoretical and experimental results for the temperature dependence of mobility and resistivity from 70 K to 300 K. A calculation method for the drift mobility and the Hall mobility was developed that accounts for scattering by ionized impurities, polar optical phonons, acoustic phonons (deformation potential), acoustic phonons (piezoelectric potential), and dislocations. The method, implemented as a computer program, allows experimental values of the resistivity and the Hall mobility to be fitted to the calculated ones. The fitting procedure makes it possible to characterize the quality of an n-type GaAs MBE layer, i.e., the net electron concentration, the total ionized impurity concentration, and the dislocation density according to the Read space-charge cylinder model. The calculations, together with the measurements, also yield the compensation ratio of the layer. The influence of epitaxial layer thickness on measurement accuracy in the case of the Van der Pauw square probe was investigated. It was found that in layers thinner than 3 micrometers the bulk properties are strongly influenced by both surfaces. The results of measurements of the same layer using the Van der Pauw and Hall bar structures were compared, and it was found that only the Hall bar structure yields proper measurement results.
NASA Astrophysics Data System (ADS)
Lu-Lu, Zhang; Yu-Zhi, Song; Shou-Bao, Gao; Yuan, Zhang; Qing-Tian, Meng
2016-05-01
A globally accurate single-sheeted double many-body expansion potential energy surface is reported for the first excited state of HS2 by fitting the accurate ab initio energies, which are calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set. By using the double many-body expansion-scaled external correlation method, such calculated ab initio energies are then slightly corrected by scaling their dynamical correlation. A grid of 2767 ab initio energies is used in the least-squares fitting procedure, with the total root-mean-square deviation being 1.406 kcal·mol-1. The topographical features of the HS2(A2A‧) global potential energy surface are examined in detail. The attributes of the stationary points are presented and compared with the corresponding ab initio results as well as experimental and other theoretical data, showing good agreement. The resulting potential energy surface of HS2(A2A‧) can be used as a building block for constructing the global potential energy surfaces of larger S/H molecular systems and is recommended for dynamics studies on the title molecular system. Project supported by the National Natural Science Foundation of China (Grant No. 11304185), the Taishan Scholar Project of Shandong Province, China, the Shandong Provincial Natural Science Foundation, China (Grant No. ZR2014AM022), the Shandong Province Higher Educational Science and Technology Program, China (Grant No. J15LJ03), the China Postdoctoral Science Foundation (Grant No. 2014M561957), and the Post-doctoral Innovation Project of Shandong Province, China (Grant No. 201402013).
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-06-01
This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
14 CFR 302.211 - Procedures in certificate cases involving initial or continuing fitness.
Code of Federal Regulations, 2012 CFR
2012-01-01
... initial or continuing fitness. 302.211 Section 302.211 Aeronautics and Space OFFICE OF THE SECRETARY... Disposition of Applications § 302.211 Procedures in certificate cases involving initial or continuing fitness... applicant's fitness to operate. Where such applications propose the operation of scheduled service in...
Parameterization of cloud lidar backscattering profiles by means of asymmetrical Gaussians
NASA Astrophysics Data System (ADS)
del Guasta, Massimo; Morandi, Marco; Stefanutti, Leopoldo
1995-06-01
A fitting procedure for cloud lidar data processing is shown that is based on the computation of the first three moments of the vertical-backscattering (or -extinction) profile. Single-peak clouds or single cloud layers are approximated by asymmetrical Gaussians. The algorithm is particularly stable with respect to noise and processing errors, and it is much faster than the equivalent least-squares approach. Multilayer clouds can easily be treated as a sum of single asymmetrical Gaussian peaks. The method is suitable for cloud-shape parameterization in noisy lidar signatures (like those expected from satellite lidars). It also permits an improvement of cloud radiative-property computations based on huge lidar data sets, for which storage and careful examination of single lidar profiles cannot be carried out.
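A sketch of the two ingredients named above: computing the first three moments of a profile, and evaluating an asymmetrical (split) Gaussian. The paper's exact moment-to-parameter mapping is not reproduced here; this only illustrates the machinery:

```python
import numpy as np

def profile_moments(z, beta):
    """First three moments of a lidar backscatter profile beta(z)."""
    w = beta/np.trapz(beta, z)                   # normalize like a PDF
    mu = np.trapz(w*z, z)                        # 1st moment: centroid
    var = np.trapz(w*(z - mu)**2, z)             # 2nd central moment
    skew = np.trapz(w*(z - mu)**3, z)/var**1.5   # normalized 3rd moment
    return mu, np.sqrt(var), skew

def split_gaussian(z, a, mu, sig_lo, sig_hi):
    """Asymmetrical Gaussian: different widths below/above the peak."""
    sig = np.where(z < mu, sig_lo, sig_hi)
    return a*np.exp(-0.5*((z - mu)/sig)**2)

# Illustrative profile: positive skew since the upper width is larger
z = np.linspace(0, 10, 500)
beta = split_gaussian(z, 1.0, 4.0, 0.6, 1.2)
print(profile_moments(z, beta))
```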
Curve fitting methods for solar radiation data modeling
NASA Astrophysics Data System (ADS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted by a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the R2 value. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
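A hedged sketch of one such fit: a one-term Gaussian model (the MATLAB-style 'gauss1' form) fitted to synthetic irradiance-like data, with RMSE and R2 computed as goodness-of-fit statistics. Data and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss1(t, a, b, c):
    """One-term Gaussian, a*exp(-((t-b)/c)^2) (MATLAB 'gauss1' form)."""
    return a*np.exp(-((t - b)/c)**2)

t = np.linspace(0, 24, 97)                        # hours (hypothetical)
rng = np.random.default_rng(3)
y = 600*np.exp(-0.5*((t - 13)/3.0)**2) + 20*rng.standard_normal(t.size)

popt, _ = curve_fit(gauss1, t, y, p0=[500, 12, 4])
resid = y - gauss1(t, *popt)
rmse = np.sqrt(np.mean(resid**2))
r2 = 1 - np.sum(resid**2)/np.sum((y - y.mean())**2)
print(f"RMSE = {rmse:.1f}, R^2 = {r2:.3f}")
```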
Polyomino Problems to Confuse Computers
ERIC Educational Resources Information Center
Coffin, Stewart
2009-01-01
Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author--using a computer,…
NASA Astrophysics Data System (ADS)
Fu, Zewei; Hu, Juntao; Hu, Wenlong; Yang, Shiyu; Luo, Yunfeng
2018-05-01
Quantitative analysis of Ni2+/Ni3+ using X-ray photoelectron spectroscopy (XPS) is important for evaluating the crystal structure and electrochemical performance of lithium nickel cobalt manganese oxide (Li[NixMnyCoz]O2, NMC). However, quantitative analysis based on Gaussian/Lorentzian (G/L) peak fitting suffers from challenges of reproducibility and effectiveness. In this study, Ni2+ and Ni3+ standard samples and a series of NMC samples with different Ni doping levels were synthesized. The Ni2+/Ni3+ ratios in NMC were quantitatively analyzed by non-linear least-squares fitting (NLLSF). Two Ni 2p overall spectra of synthesized Li[Ni0.33Mn0.33Co0.33]O2 (NMC111) and bulk LiNiO2 were used as the Ni2+ and Ni3+ reference standards. Compared to G/L peak fitting, the fitting parameters required no adjustment, meaning that the spectral fitting process was free from operator dependence and the reproducibility was improved. Comparison of the residual standard deviation (STD) showed that the fitting quality of NLLSF was superior to that of G/L peak fitting. Overall, these findings confirmed the reproducibility and effectiveness of the NLLSF method in XPS quantitative analysis of the Ni2+/Ni3+ ratio in Li[NixMnyCoz]O2 cathode materials.
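The essence of NLLSF with measured standards is a (nonnegative) linear least-squares combination of reference spectra. A toy sketch with synthetic Gaussian "standards" standing in for the measured NMC111 and LiNiO2 references; a real fit would also include a background term:

```python
import numpy as np
from scipy.optimize import nnls

# Toy reference spectra on a binding-energy grid (eV); peak positions
# and widths are illustrative, not measured values.
e = np.linspace(850, 885, 500)
ref_ni2 = np.exp(-0.5*((e - 854.5)/1.2)**2)   # stand-in Ni2+ standard
ref_ni3 = np.exp(-0.5*((e - 856.0)/1.4)**2)   # stand-in Ni3+ standard
rng = np.random.default_rng(4)
measured = 0.6*ref_ni2 + 0.4*ref_ni3 + 0.01*rng.standard_normal(e.size)

# Nonnegative least squares: measured ~ c1*ref_ni2 + c2*ref_ni3
A = np.column_stack([ref_ni2, ref_ni3])
coef, _ = nnls(A, measured)
print("Ni2+/Ni3+ ratio:", coef[0]/coef[1])
```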
2009-07-16
Statistical analysis: coefficient of correlation. $R^2 = \mathrm{SSR}/\mathrm{SSTO} = 1 - \mathrm{SSE}/\mathrm{SSTO}$, where $\mathrm{SSR} = \sum_i (\hat{Y}_i - \bar{Y})^2$ is the regression sum of squares ($\bar{Y}$: mean value; $\hat{Y}_i$: value from the fitted line), $\mathrm{SSE} = \sum_i (Y_i - \hat{Y}_i)^2$ is the error sum of squares, and $\mathrm{SSTO} = \mathrm{SSE} + \mathrm{SSR}$ is the total sum of squares.
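These quantities are straightforward to compute; a short sketch with illustrative numbers:

```python
import numpy as np

# Illustrative observed values and values from a fitted line
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
yhat = np.array([2.0, 3.0, 4.0, 5.0, 6.0])

ssr = np.sum((yhat - y.mean())**2)   # regression sum of squares
sse = np.sum((y - yhat)**2)          # error sum of squares
ssto = ssr + sse                     # total sum of squares (OLS identity)
print("R^2 =", ssr/ssto, "=", 1 - sse/ssto)
```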
ERIC Educational Resources Information Center
Pye, Cory C.; Mercer, Colin J.
2012-01-01
The symbolic algebra program Maple and the spreadsheet Microsoft Excel were used in an attempt to reproduce the Gaussian fits to a Slater-type orbital, required to construct the popular STO-NG basis sets. The successes and pitfalls encountered in such an approach are chronicled. (Contains 1 table and 3 figures.)
Confirmatory factor analysis of the female sexual function index.
Opperman, Emily A; Benson, Lindsay E; Milhausen, Robin R
2013-01-01
The Female Sexual Functioning Index (Rosen et al., 2000) was designed to assess the key dimensions of female sexual functioning using six domains: desire, arousal, lubrication, orgasm, satisfaction, and pain. A full-scale score was proposed to represent women's overall sexual function. The fifth revision of the Diagnostic and Statistical Manual (DSM) is currently underway and includes a proposal to combine desire and arousal problems. The objective of this article was to evaluate and compare four models of the Female Sexual Functioning Index: (a) a single-factor model, (b) a six-factor model, (c) a second-order factor model, and (d) a five-factor model combining the desire and arousal subscales. Cross-sectional and observational data from 85 women were used to conduct a confirmatory factor analysis on the Female Sexual Functioning Index. Local and global goodness-of-fit measures, the chi-square test of differences, squared multiple correlations, and regression weights were used. The single-factor model fit was not acceptable. The original six-factor model was confirmed, and good model fit was found for the second-order and five-factor models. Delta chi-square tests of differences supported best fit for the six-factor model, validating usage of the six domains. However, when revisions are made to the DSM-5, the Female Sexual Functioning Index can adapt to reflect these changes and remain a valid assessment tool for women's sexual functioning, as the five-factor structure was also supported.
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
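A minimal sketch of the recommended approach's nonlinear least-squares variant: fitting a lognormal CDF to cumulative recovery fractions at the sampling-interval bounds. Counts and times are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

# Cumulative propagule recoveries by the end of each sampling interval
t_upper = np.array([0.5, 1, 2, 4, 8, 16, 32])       # interval bounds, h
counts = np.array([2, 11, 38, 95, 152, 190, 198])   # cumulative counts
cum_frac = counts/200.0                             # 200 propagules fed

def lognorm_cdf(t, s, scale):
    """Lognormal CDF with shape s and scale = exp(mu) (median)."""
    return lognorm.cdf(t, s, scale=scale)

(s_hat, scale_hat), _ = curve_fit(lognorm_cdf, t_upper, cum_frac, p0=[1.0, 4.0])
print(f"median retention ~ {scale_hat:.2f} h, shape = {s_hat:.2f}")
```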
Lamontagne, Jonathan R.; Stedinger, Jery R.; Berenbrock, Charles; Veilleux, Andrea G.; Ferris, Justin C.; Knifong, Donna L.
2012-01-01
Flood-frequency information is important in the Central Valley region of California because of the high risk of catastrophic flooding. Most traditional flood-frequency studies focus on peak flows, but for the assessment of the adequacy of reservoirs, levees, other flood control structures, sustained flood flow (flood duration) frequency data are needed. This study focuses on rainfall or rain-on-snow floods, rather than the annual maximum, because rain events produce the largest floods in the region. A key to estimating flood-duration frequency is determining the regional skew for such data. Of the 50 sites used in this study to determine regional skew, 28 sites were considered to have little to no significant regulated flows, and for the 22 sites considered significantly regulated, unregulated daily flow data were synthesized by using reservoir storage changes and diversion records. The unregulated, annual maximum rainfall flood flows for selected durations (1-day, 3-day, 7-day, 15-day, and 30-day) for all 50 sites were furnished by the U.S. Army Corps of Engineers. Station skew was determined by using the expected moments algorithm program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual flood-duration data. Bayesian generalized least squares regression procedures used in earlier studies were modified to address problems caused by large cross correlations among concurrent rainfall floods in California and to address the extensive censoring of low outliers at some sites, by using the new expected moments algorithm for fitting the LP3 distribution to rainfall flood-duration data. To properly account for these problems and to develop suitable regional-skew regression models and regression diagnostics, a combination of ordinary least squares, weighted least squares, and Bayesian generalized least squares regressions were adopted. This new methodology determined that a nonlinear model relating regional skew to mean basin elevation was the best model for each flood duration. The regional-skew values ranged from -0.74 for a flood duration of 1-day and a mean basin elevation less than 2,500 feet to values near 0 for a flood duration of 7-days and a mean basin elevation greater than 4,500 feet. This relation between skew and elevation reflects the interaction of snow and rain, which increases with increased elevation. The regional skews are more accurate, and the mean squared errors are less than in the Interagency Advisory Committee on Water Data's National skew map of Bulletin 17B.
Three-dimensional analysis of surface crack-Hertzian stress field interaction
NASA Technical Reports Server (NTRS)
Ballarini, R.; Hsu, Y.
1989-01-01
The results are presented of a stress intensity factor analysis of semicircular surface cracks in the inner raceway of an engine bearing. The loading consists of a moving spherical Hertzian contact load and an axial stress due to rotation and shrink fit. A 3-D linear elastic Boundary Element Method code was developed to perform the stress analysis. The element library includes linear and quadratic isoparametric surface elements. Singular quarter point elements were employed to capture the square root displacement variation and the inverse square root stress singularity along the crack front. The program also possesses the capability to separate the whole domain into two subregions. This procedure enables one to solve nonsymmetric fracture mechanics problems without having to separate the crack surfaces a priori. A wide range of configuration parameters was investigated. The ratio of crack depth to bearing thickness was varied from one-sixtieth to one-fifth for several different locations of the Hertzian load. The stress intensity factors for several crack inclinations were also investigated. The results demonstrate the efficiency and accuracy of the Boundary Element Method. Moreover, the results can provide the basis for crack growth calculations and fatigue life prediction.
NASA Astrophysics Data System (ADS)
De Beuckeleer, Liene I.; Herrebout, Wouter A.
2016-02-01
To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
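A hedged sketch of the model-selection step: least-squares polynomial fits of absorbance against monomer concentration for increasing degree, compared by a Gaussian-error AIC. Data are synthetic and illustrative:

```python
import numpy as np

def aic_ls(y, yhat, k):
    """Gaussian-error AIC (up to a constant) for an LS fit with k parameters."""
    n = len(y)
    rss = np.sum((y - yhat)**2)
    return n*np.log(rss/n) + 2*k

c = np.linspace(0.1, 1.0, 12)    # monomer concentrations (arbitrary units)
rng = np.random.default_rng(5)
a = 0.8*c + 0.3*c**2 + 0.002*rng.standard_normal(c.size)   # absorbances

# AIC should stop decreasing once the polynomial degree is sufficient
for deg in (1, 2, 3, 4):
    coef = np.polyfit(c, a, deg)
    print(f"degree {deg}: AIC = {aic_ls(a, np.polyval(coef, c), deg + 1):.1f}")
```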
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
Volta, Carlo A; Marangoni, Elisabetta; Alvisi, Valentina; Capuzzo, Maurizia; Ragazzi, Riccardo; Pavanelli, Lina; Alvisi, Raffaele
2002-01-01
Although computerized methods of analyzing respiratory system mechanics such as the least squares fitting method have been used in various patient populations, no conclusive data are available in patients with chronic obstructive pulmonary disease (COPD), probably because they may develop expiratory flow limitation (EFL). This suggests that respiratory mechanics be determined only during inspiration. Eight-bed multidisciplinary ICU of a teaching hospital. Eight non-flow-limited postvascular surgery patients and eight flow-limited COPD patients. Patients were sedated, paralyzed for diagnostic purposes, and ventilated in volume control ventilation with constant inspiratory flow rate. Data on resistance, compliance, and dynamic intrinsic positive end-expiratory pressure (PEEPi,dyn) obtained by applying the least squares fitting method during inspiration, expiration, and the overall breathing cycle were compared with those obtained by the traditional method (constant flow, end-inspiratory occlusion method). Our results indicate that (a) the presence of EFL markedly decreases the precision of resistance and compliance values measured by the LSF method, (b) the determination of respiratory variables during inspiration allows the calculation of respiratory mechanics in flow limited COPD patients, and (c) the LSF method is able to detect the presence of PEEPi,dyn if only inspiratory data are used.
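For reference, the least squares fitting (LSF) method amounts to a linear least-squares fit of the single-compartment equation of motion, Paw = R·V̇ + V/C + P0, to sampled inspiratory data. A sketch with synthetic, hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(6)
n, dt = 200, 0.005                           # samples, s per sample
flow = np.linspace(0.8, 0.2, n)              # L/s, decelerating inspiration
vol = np.cumsum(flow)*dt                     # L, integrated volume
R_true, C_true, P0_true = 12.0, 0.05, 6.0    # cmH2O/(L/s), L/cmH2O, cmH2O
paw = R_true*flow + vol/C_true + P0_true + 0.2*rng.standard_normal(n)

# Linear least squares for [R, 1/C, P0]
A = np.column_stack([flow, vol, np.ones(n)])
(r_hat, e_hat, p0_hat), *_ = np.linalg.lstsq(A, paw, rcond=None)
print(f"R = {r_hat:.1f}, C = {1/e_hat:.3f}, P0 (PEEPi) = {p0_hat:.1f}")
```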
Computer-assisted map projection research
Snyder, John Parr
1985-01-01
Computers have opened up areas of map projection research which were previously too complicated to utilize, for example, using a least-squares fit to a very large number of points. One application has been in the efficient transfer of data between maps on different projections. While the transfer of moderate amounts of data is satisfactorily accomplished using the analytical map projection formulas, polynomials are more efficient for massive transfers. Suitable coefficients for the polynomials may be determined more easily for general cases using least squares instead of Taylor series. A second area of research is in the determination of a map projection fitting an unlabeled map, so that accurate data transfer can take place. The computer can test one projection after another, and include iteration where required. A third area is in the use of least squares to fit a map projection with optimum parameters to the region being mapped, so that distortion is minimized. This can be accomplished for standard conformal, equalarea, or other types of projections. Even less distortion can result if complex transformations of conformal projections are utilized. This bulletin describes several recent applications of these principles, as well as historical usage and background.
Predicting First Traversal Times for Virions and Nanoparticles in Mucus with Slowed Diffusion
Erickson, Austen M.; Henry, Bruce I.; Murray, John M.; Klasse, Per Johan; Angstmann, Christopher N.
2015-01-01
Particle-tracking experiments focusing on virions or nanoparticles in mucus have measured mean-square displacements and reported diffusion coefficients that are orders of magnitude smaller than the diffusion coefficients of such particles in water. Accurate description of this subdiffusion is important to properly estimate the likelihood of virions traversing the mucus boundary layer and infecting cells in the epithelium. However, there are several candidate models for diffusion that can fit experimental measurements of mean-square displacements. We show that these models yield very different estimates for the time taken for subdiffusive virions to traverse through a mucus layer. We explain why fits of subdiffusive mean-square displacements to standard diffusion models may be misleading. Relevant to human immunodeficiency virus infection, using computational methods for fractional subdiffusion, we show that subdiffusion in normal acidic mucus provides a more effective barrier against infection than previously thought. By contrast, the neutralization of the mucus by alkaline semen, after sexual intercourse, allows virions to cross the mucus layer and reach the epithelium in a short timeframe. The computed barrier protection from fractional subdiffusion is some orders of magnitude greater than that derived by fitting standard models of diffusion to subdiffusive data. PMID:26153713
Using Least Squares to Solve Systems of Equations
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2016-01-01
The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values n equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.
ERIC Educational Resources Information Center
Knol, Dirk L.; ten Berge, Jos M. F.
1989-01-01
An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…
An Investigation of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee
2009-01-01
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.
Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef
2016-11-01
In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
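The core geometric step is a sphere fit, which becomes linear after expanding |p − c|² = R². A sketch that fits synthetic cap points and extracts a contact angle (illustrative geometry only; the paper's full procedure also handles image segmentation and truncated caps):

```python
import numpy as np

def fit_sphere(pts):
    """Linear LS sphere fit: |p - c|^2 = R^2 rearranges to
    2*p.c + (R^2 - |c|^2) = |p|^2, linear in (c, R^2 - |c|^2)."""
    A = np.column_stack([2*pts, np.ones(len(pts))])
    b = np.sum(pts**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic AFM-like cap: points on the upper part of a sphere
rng = np.random.default_rng(11)
theta = rng.uniform(0, 0.5, 1000)       # shallow polar angles -> a cap
phi = rng.uniform(0, 2*np.pi, 1000)
R, zc = 2.0, -1.5                       # sphere radius, center below z=0
pts = np.column_stack([R*np.sin(theta)*np.cos(phi),
                       R*np.sin(theta)*np.sin(phi),
                       zc + R*np.cos(theta)])
c_fit, r_fit = fit_sphere(pts + 0.002*rng.standard_normal(pts.shape))
# Contact angle where the fitted sphere meets the substrate plane z = 0
print("contact angle (deg):", np.degrees(np.arccos(-c_fit[2]/r_fit)))
```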
The Evaluation and Selection of Adequate Causal Models: A Compensatory Education Example.
ERIC Educational Resources Information Center
Tanaka, Jeffrey S.
1982-01-01
Implications of model evaluation (using traditional chi square goodness of fit statistics, incremental fit indices for covariance structure models, and latent variable coefficients of determination) on substantive conclusions are illustrated with an example examining the effects of participation in a compensatory education program on posttreatment…
40 CFR 89.319 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.320 - Carbon monoxide analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...
Electrostatic point charge fitting as an inverse problem: Revealing the underlying ill-conditioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Maxim V.; Talipov, Marat R.; Timerghazin, Qadir K., E-mail: qadir.timerghazin@marquette.edu
2015-10-07
Atom-centered point charge (PC) model of the molecular electrostatics—a major workhorse of the atomistic biomolecular simulations—is usually parameterized by least-squares (LS) fitting of the point charge values to a reference electrostatic potential, a procedure that suffers from numerical instabilities due to the ill-conditioned nature of the LS problem. To reveal the origins of this ill-conditioning, we start with a general treatment of the point charge fitting problem as an inverse problem and construct an analytical model with the point charges spherically arranged according to Lebedev quadrature which is naturally suited for the inverse electrostatic problem. This analytical model is contrasted to the atom-centered point-charge model that can be viewed as an irregular quadrature poorly suited for the problem. This analysis shows that the numerical problems of the point charge fitting are due to the decay of the curvatures corresponding to the eigenvectors of LS sum Hessian matrix. In part, this ill-conditioning is intrinsic to the problem and is related to decreasing electrostatic contribution of the higher multipole moments, that are, in the case of Lebedev grid model, directly associated with the Hessian eigenvectors. For the atom-centered model, this association breaks down beyond the first few eigenvectors related to the high-curvature monopole and dipole terms; this leads to even wider spread-out of the Hessian curvature values. Using these insights, it is possible to alleviate the ill-conditioning of the LS point-charge fitting without introducing external restraints and/or constraints. Also, as the analytical Lebedev grid PC model proposed here can reproduce multipole moments up to a given rank, it may provide a promising alternative to including explicit multipole terms in a force field.
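The singular-value decay behind this ill-conditioning is easy to exhibit: the least-squares design matrix maps charges at fixed sites to 1/r potentials on a grid of reference points. A toy sketch with random site and grid geometry (all positions illustrative):

```python
import numpy as np

rng = np.random.default_rng(12)
sites = 0.5*rng.standard_normal((10, 3))          # "atom" positions
grid = 4.0*rng.standard_normal((500, 3))          # ESP reference points
grid = grid[np.linalg.norm(grid, axis=1) > 2.0]   # keep points off the sites

# Design matrix: potential at each grid point from a unit charge at each site
A = 1.0/np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)
sv = np.linalg.svd(A, compute_uv=False)
print("condition number:", sv[0]/sv[-1])
print("singular values:", sv)                     # note the rapid decay
```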
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchhoff, William H.
2012-09-15
The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
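A hedged sketch of fitting such a profile, assuming one common asymmetry parameterization (a logistic step whose width varies with distance from the midpoint); the exact ASTM E1636 form may differ in detail:

```python
import numpy as np
from scipy.optimize import least_squares

def ext_logistic(x, y_pre, y_post, x0, w, asym):
    """Logistic step between y_pre and y_post at x0; the width w is
    modulated by exp(asym*(x - x0)) to introduce asymmetry (one common
    parameterization; not necessarily the exact ASTM E1636 form)."""
    z = (x - x0)/(w*np.exp(asym*(x - x0)))
    return y_pre + (y_post - y_pre)/(1.0 + np.exp(-z))

x = np.linspace(0, 100, 300)                 # depth, nm (hypothetical)
rng = np.random.default_rng(7)
y = ext_logistic(x, 1.0, 0.05, 55.0, 4.0, 0.01) + 0.01*rng.standard_normal(x.size)

p0 = [0.9, 0.1, 50.0, 5.0, 0.0]              # [y_pre, y_post, x0, w, asym]
fit = least_squares(lambda p: ext_logistic(x, *p) - y, p0)
print("fitted [y_pre, y_post, x0, w, asym]:", fit.x)
```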
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way that the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
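A minimal sketch of the procedure: extend the Gaussian (least-squares) likelihood with a PDF for a common multiplicative correction factor, then profile the nuisance parameter out. All names and values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Model y = a*x with known Type A sigma; a common multiplicative
# correction factor k carries the Type B uncertainty (e.g. calibration).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.05, 3.9, 6.1, 8.0, 9.95])
sig_y, sig_k = 0.1, 0.02          # Type A on y, Type B on k

def neg_log_like(a, k):
    chi2 = np.sum(((y - k*a*x)/sig_y)**2)   # original least-squares term
    prior = ((k - 1.0)/sig_k)**2            # Gaussian PDF for nuisance k
    return 0.5*(chi2 + prior)

def profile_nll(a):
    """Profile likelihood: minimize over the nuisance parameter k."""
    return minimize(lambda k: neg_log_like(a, k[0]), [1.0]).fun

a_grid = np.linspace(1.9, 2.1, 81)
nll = np.array([profile_nll(a) for a in a_grid])
inside = nll <= nll.min() + 0.5             # ~1-sigma profile interval
print("a_hat =", a_grid[np.argmin(nll)],
      "interval:", a_grid[inside][0], "to", a_grid[inside][-1])
```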
Defining surfaces for skewed, highly variable data
Helsel, D.R.; Ryker, S.J.
2002-01-01
Skewness of environmental data is often caused by more than simply a handful of outliers in an otherwise normal distribution. Statistical procedures for such datasets must be sufficiently robust to deal with distributions that are strongly non-normal, containing both a large proportion of outliers and a skewed main body of data. In the field of water quality, skewness is commonly associated with large variation over short distances. Spatial analysis of such data generally requires either considerable effort at modeling or the use of robust procedures not strongly affected by skewness and local variability. Using a skewed dataset of 675 nitrate measurements in ground water, commonly used methods for defining a surface (least-squares regression and kriging) are compared to a more robust method (loess). Three choices are critical in defining a surface: (i) is the surface to be a central mean or median surface? (ii) is either a well-fitting transformation or a robust and scale-independent measure of center used? (iii) does local spatial autocorrelation assist in or detract from addressing objectives? Published in 2002 by John Wiley & Sons, Ltd.
Semisupervised Clustering by Iterative Partition and Regression with Neuroscience Applications
Qian, Guoqi; Wu, Yuehua; Ferrari, Davide; Qiao, Puxue; Hollande, Frédéric
2016-01-01
Regression clustering is a mixture of unsupervised and supervised statistical learning and data mining method which is found in a wide range of applications including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes. The method also performs supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to the least squares and robust statistical methods. We also provide a model selection based technique to determine the number of regression clusters underlying the data. We further develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented for assessing the procedure, together with analyzing a real data set on RGB cell marking in neuroscience to illustrate and interpret the method. PMID:27212939
Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio
NASA Astrophysics Data System (ADS)
Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.
2017-12-01
Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results showing that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian, and in particular lognormal, distribution. Previously, statistical testing procedures including the Jarque-Bera test, the Shapiro-Wilk test, the chi-squared goodness-of-fit test, and a composite test incorporating the results of the former tests were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now applied in a "sliding window" fashion in order to extend the testing procedure to near real-time. The analyzed 1-degree resolution data come from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six-hour forecast from the 0Z analysis. These results indicate that a data assimilation (DA) system must be able to properly use the lognormally-distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
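A sketch of the sliding-window idea using two of the named tests from scipy.stats on synthetic lognormal data; window length and significance threshold are illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
series = rng.lognormal(mean=1.5, sigma=0.6, size=500)   # e.g. mixing ratio

win = 120
for start in range(0, series.size - win + 1, win//2):   # 50% overlap
    w = series[start:start + win]
    jb_p = stats.jarque_bera(w).pvalue
    sw_p = stats.shapiro(w).pvalue
    flag = "non-Gaussian" if min(jb_p, sw_p) < 0.05 else "Gaussian-like"
    print(f"window {start:3d}-{start + win:3d}: "
          f"JB p={jb_p:.3f}, SW p={sw_p:.3f} -> {flag}")
```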
Optimization of pencil beam f-theta lens for high-accuracy metrology
NASA Astrophysics Data System (ADS)
Peng, Chuanqian; He, Yumei; Wang, Jie
2018-01-01
Pencil beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of the deflectometric profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly related to the angle-to-position conversion relation and are performed with stops of large size and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-sized pencil beam within a working distance range for ultra-high-accuracy metrology. If an f-theta system is not well designed, its aberrations will introduce many systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.
Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.
2014-01-01
Robot-assisted surgical procedures, taking advantage of the high soft tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in a 3T MRI scanner. The opto-electronic sensing system design was miniaturized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism was designed that integrates the FPI sensor fiber for measuring needle insertion force, and finite element analysis was performed to optimize the force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and a calibration was conducted. The root mean square (RMS) error of the calibration over the range of 0-10 N was 0.318 N relative to the theoretical model, which has been proven sufficient for robot control and teleoperation. PMID:25126153
Weighted spline based integration for reconstruction of freeform wavefront.
Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra
2018-02-10
In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from slope data has been implemented. The slope data of a freeform surface contain noise due to the machining process, and that introduces reconstruction error. We have proposed a weighted cubic-spline-based least-squares integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted with a piecewise polynomial. The fitted coefficients are determined by using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least-squares technique to reconstruct the freeform wavefront. Simulation studies show improved results using the proposed technique as compared to the existing cubic-spline-based integration (CSLI) and Southwell methods. The proposed reconstruction method has been experimentally applied to a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI, which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology applications.
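A one-dimensional sketch of the WCSLI idea: smooth noisy slope data with a weighted smoothing cubic spline, then integrate the spline to recover the profile up to a constant. The paper works on 2-D Shack-Hartmann slope fields; this is only the 1-D analogue, with illustrative values:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 10, 200)                  # pupil coordinate (hypothetical)
rng = np.random.default_rng(8)
true_w = 0.02*x**2 + 0.1*np.sin(x)           # "freeform wavefront"
slope = np.gradient(true_w, x) + 0.01*rng.standard_normal(x.size)

# Weighted smoothing cubic spline fit to the slopes (weights = 1/sigma),
# then analytic integration of the spline.
weights = np.full(x.size, 1/0.01)
spl = UnivariateSpline(x, slope, w=weights, k=3, s=x.size)
w_rec = spl.antiderivative()(x)              # wavefront up to a constant
w_rec += true_w[0] - w_rec[0]                # fix the free piston term
print("max reconstruction error:", np.max(np.abs(w_rec - true_w)))
```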
NASA Astrophysics Data System (ADS)
Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.
2014-12-01
This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. These methods were evaluated using Malaysian mortality data, measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best for in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. In the case of out-of-sample forecast accuracy, however, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately for males and females, LCnone performed better for the male population and the LM method was better for the female population.
A method for cone fitting based on certain sampling strategy in CMM metrology
NASA Astrophysics Data System (ADS)
Zhang, Li; Guo, Chaopeng
2018-04-01
A method of cone fitting for engineering use is explored and implemented to overcome shortcomings of the current fitting method, in which the calculation of the initial geometric parameters is imprecise, causing poor accuracy in surface fitting. A geometric distance function of the cone is constructed first, then a certain sampling strategy is defined to calculate the initial geometric parameters, and afterwards a nonlinear least-squares method is used to fit the surface. An experiment was designed to verify the accuracy of the method. The experimental data show that the proposed method obtains the initial geometric parameters simply and efficiently, fits the surface precisely, and provides a new, accurate approach to cone fitting in coordinate measurement.
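A generic sketch of the final fitting step (the paper's specific distance function and sampling-based initialization are not reproduced): a signed orthogonal distance from each point to a cone parameterized by apex, axis, and half-angle is minimized with scipy's nonlinear least squares. The data and starting values are invented.

    # Generic nonlinear least-squares cone fit (illustrative only).
    import numpy as np
    from scipy.optimize import least_squares

    def cone_residuals(p, pts):
        apex, axis, alpha = p[0:3], p[3:6], p[6]
        axis = axis / np.linalg.norm(axis)
        v = pts - apex
        h = v @ axis                                       # height along axis
        r = np.linalg.norm(v - np.outer(h, axis), axis=1)  # radial distance
        # signed orthogonal distance from each point to the cone surface
        return r * np.cos(alpha) - h * np.sin(alpha)

    # synthetic measured points on a cone (apex at origin, axis +z, alpha = 20 deg)
    rng = np.random.default_rng(0)
    h = rng.uniform(5, 30, 500)
    theta = rng.uniform(0, 2 * np.pi, 500)
    r = h * np.tan(np.deg2rad(20))
    pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), h])
    pts += rng.normal(0, 0.02, pts.shape)

    p0 = np.array([0.1, -0.1, 0.5, 0.0, 0.1, 1.0, np.deg2rad(15)])  # crude start
    fit = least_squares(cone_residuals, p0, args=(pts,))
    print("fitted half-angle (deg):", np.rad2deg(fit.x[6]))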
Makkai, Géza; Buzády, Andrea; Erostyák, János
2010-01-01
Determination of the concentrations of spectrally overlapping compounds presents special difficulties. Several methods are available to calculate the constituents' concentrations in moderately complex mixtures, and a method that can provide information about spectrally hidden components in mixtures is very useful. Two methods powerful in resolving spectral components are compared in this paper. The first method tested is Derivative Matrix Isopotential Synchronous Fluorimetry (DMISF). It is based on derivative analysis of MISF spectra, which are constructed using isopotential trajectories in the Excitation-Emission Matrix (EEM) of the background solution. For the DMISF method, a mathematical routine fitting the 3D data of EEMs was developed. The other method tested uses a classical Least Squares Fitting (LSF) algorithm, wherein Rayleigh- and Raman-scattering bands may lead to complications. Both methods give excellent sensitivity, and each has advantages over the other. Detection limits of DMISF and LSF have been determined at very different concentration and noise levels.
Lang, M; Vain, A; Bunce, R G H; Jongman, R H G; Raet, J; Sepp, K; Kuusemets, V; Kikas, T; Liba, N
2015-03-01
Habitat surveillance and subsequent monitoring at a national level is usually carried out by recording data from in situ sample sites located according to predefined strata. This paper describes the application of remote sensing to the extension of such field data recorded in 1-km squares to adjacent squares, in order to increase sample number without further field visits. Habitats were mapped in eight central squares in northeast Estonia in 2010 using a standardized recording procedure. Around one of the squares, a special study site was established which consisted of the central square and eight surrounding squares. A Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image was used for correlation with in situ data. An airborne light detection and ranging (lidar) vegetation height map was also included in the classification. A series of tests were carried out by including the lidar data and contrasting analytical techniques, which are described in detail in the paper. Training accuracy in the central square varied from 75 to 100 %. In the extrapolation procedure to the surrounding squares, accuracy varied from 53.1 to 63.1 %, which improved by 10 % with the inclusion of lidar data. The reasons for this relatively low classification accuracy were mainly inherent variability in the spectral signatures of habitats but also differences between the dates of imagery acquisition and field sampling. Improvements could therefore be made by better synchronization of the field survey and image acquisition as well as by dividing general habitat categories (GHCs) into units which are more likely to have similar spectral signatures. However, the increase in the number of sample kilometre squares compensates for the loss of accuracy in the measurements of individual squares. The methodology can be applied in other studies as the procedures used are readily available.
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and inflates their variances. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed, its efficiency, and the fact that its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set by comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value, and the Robust Component Selection (RCS) statistic.
ERIC Educational Resources Information Center
Savalei, Victoria
2012-01-01
The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vila, Fernando D.; Rehr, John J.; Nuzzo, Ralph G.
Supported Pt nanocatalysts generally exhibit anomalous behavior, including negative thermal expansion and large structural disorder. Finite temperature DFT/MD simulations reproduce these properties, showing that they are largely explained by a combination of thermal vibrations and low-frequency disorder. We show in this paper that a full interpretation is more complex and that the DFT/MD mean-square relative displacements (MSRD) can be further separated into vibrational disorder, "dynamic structural disorder" (DSD), and long-time equilibrium fluctuations of the structure dubbed "anomalous structural disorder" (ASD). We find that the vibrational and DSD components behave normally, increasing linearly with temperature, while the ASD decreases, reflecting the evolution of mean nanoparticle geometry. Finally, as a consequence, the usual procedure of fitting the MSRD to normal vibrations plus temperature-independent static disorder results in unphysical bond strengths and Grüneisen parameters.
AAA gunner model based on observer theory [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller, and a remnant element. An important feature of the model is that its structure is simple, so a computer simulation requires only a short execution time. A parameter identification program based on the least-squares curve fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
Hot, cold, and annual reference atmospheres for Edwards Air Force Base, California (1975 version)
NASA Technical Reports Server (NTRS)
Johnson, D. L.
1975-01-01
Reference atmospheres pertaining to summer (hot), winter (cold), and mean annual conditions for Edwards Air Force Base, California, are presented from the surface to 90 km altitude (700 km for the annual model). Computed values of pressure, kinetic temperature, virtual temperature, and density, together with the relative differences (percentage departures from the Edwards Reference Atmospheres, 1975 (ERA-75)) of the atmospheric parameters versus altitude, are tabulated in 250 m increments. Hydrostatic and gas law equations were used in conjunction with radiosonde and rocketsonde thermodynamic data in determining the vertical structure of these atmospheric models. The thermodynamic parameters were all subjected to a fifth-degree least-squares curve-fit procedure, and the resulting coefficients were incorporated into Univac 1108 computer subroutines so that any quantity may be recomputed at any desired altitude using these subroutines.
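As an illustration of the curve-fit step (not the original Univac code), a fifth-degree least-squares polynomial can be fitted to a profile and re-evaluated at any altitude; the temperature profile below is synthetic.

    # Illustrative fifth-degree least-squares curve fit of temperature versus
    # altitude (synthetic profile; not the original Univac 1108 subroutines).
    import numpy as np

    rng = np.random.default_rng(1)
    alt_km = np.arange(0.0, 90.25, 0.25)               # 250 m increments
    temp_K = 288.0 - 6.5 * np.clip(alt_km, 0, 11) + rng.normal(0, 0.5, alt_km.size)

    coeffs = np.polyfit(alt_km, temp_K, deg=5)         # least-squares coefficients
    print(f"T(42.7 km) ~ {np.polyval(coeffs, 42.7):.1f} K")  # recompute anywhere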
Correction factors for on-line microprobe analysis of multielement alloy systems
NASA Technical Reports Server (NTRS)
Unnam, J.; Tenney, D. R.; Brewer, W. D.
1977-01-01
An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
Vergara-Romero, Manuel; Morales-Asencio, José Miguel; Morales-Fernández, Angelines; Canca-Sanchez, Jose Carlos; Rivas-Ruiz, Francisco; Reinaldo-Lapuerta, Jose Antonio
2017-06-07
Preoperative anxiety is a frequent and challenging problem with deleterious effects on the development of surgical procedures and postoperative outcomes. To prevent and treat preoperative anxiety effectively, the level of anxiety of patients needs to be assessed through valid and reliable measuring instruments. One such measurement tool is the Amsterdam Preoperative Anxiety and Information Scale (APAIS), of which a Spanish version has not yet been validated. The aim was to perform a Spanish cultural adaptation and empirical validation of the APAIS for assessing preoperative anxiety in the Spanish population. A two-step forward/back translation of the APAIS scale was performed to ensure a reliable Spanish cultural adaptation. The final Spanish version of the APAIS questionnaire was administered to 529 patients between the ages of 18 and 70 undergoing elective surgery at hospitals of the Agencia Sanitaria Costa del Sol (Spain). Cronbach's alpha, the homogeneity index, the intra-class correlation coefficient (ICC), and confirmatory factor analysis were used to assess internal consistency as well as criterion and construct validity. Confirmatory factor analysis showed that a one-factor model fitted better than a two-factor model, with good fit indices (root mean square error of approximation: 0.05, normed fit index: 0.99, goodness-of-fit statistic: 0.99). The questionnaire showed high internal consistency (Cronbach's alpha: 0.84) and a good correlation with the Goldberg Anxiety Scale (ICC: 0.62; 95% CI: 0.55 to 0.68). The Spanish version of the APAIS is a valid and reliable preoperative anxiety measurement tool and shows psychometric properties similar to those obtained in previous studies.
Seward, Kirsty; Wolfenden, Luke; Wiggers, John; Finch, Meghan; Wyse, Rebecca; Oldmeadow, Christopher; Presseau, Justin; Clinton-McHarg, Tara; Yoong, Sze Lin
2017-04-04
While there are a number of frameworks that focus on supporting the implementation of evidence-based approaches, few psychometrically valid measures exist to assess constructs within these frameworks. This study aimed to develop and psychometrically assess a scale measuring each domain of the Theoretical Domains Framework for use in assessing the implementation of dietary guidelines within a non-health-care setting (childcare services). A 75-item, 14-domain Theoretical Domains Framework Questionnaire (TDFQ) was developed and administered via telephone interview to 202 centre-based childcare service cooks who had a role in planning the service menu. Confirmatory factor analysis (CFA) was undertaken to assess the reliability, discriminant validity, and goodness of fit of the 14-domain measure. For the CFA, five iterative processes of adjustment were undertaken in which 14 items were removed, resulting in a final measure consisting of 14 domains and 61 items. For the final measure, the chi-square goodness-of-fit statistic was 3447.19; the Standardized Root Mean Square Residual (SRMR) was 0.070; the Root Mean Square Error of Approximation (RMSEA) was 0.072; and the Comparative Fit Index (CFI) was 0.78. While only one of the three indices supports goodness of fit of the measurement model tested, the 14-domain model with 61 items showed good discriminant validity and internally consistent items. Future research should aim to assess the psychometric properties of the developed TDFQ in other community-based settings.
Motulsky, Harvey J; Brown, Ronald E
2006-01-01
Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation, and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
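A hedged sketch of the ROUT idea (not GraphPad's exact implementation, and with a simple residual cutoff in place of the paper's false-discovery-rate rule): scipy's "cauchy" loss corresponds to a Lorentzian error model, so the first pass is robust; flagged points are then dropped and an ordinary least-squares refit is run.

    # Sketch of the ROUT spirit: robust (Lorentzian/Cauchy-loss) fit, crude
    # outlier flagging, then an ordinary least-squares refit of the clean data.
    import numpy as np
    from scipy.optimize import least_squares

    def model(p, x):
        return p[0] * np.exp(-p[1] * x)                # one-phase decay example

    rng = np.random.default_rng(2)
    x = np.linspace(0, 5, 60)
    y = model([10.0, 0.8], x) + rng.normal(0, 0.3, x.size)
    y[[7, 30]] += 5.0                                  # inject two outliers

    res = lambda p: model(p, x) - y
    robust = least_squares(res, x0=[5.0, 0.5], loss="cauchy", f_scale=0.3)

    r = model(robust.x, x) - y
    scatter = 1.4826 * np.median(np.abs(r))            # robust scatter estimate
    keep = np.abs(r) < 3 * scatter                     # crude rule, not the FDR test
    res2 = lambda p: model(p, x[keep]) - y[keep]
    final = least_squares(res2, x0=robust.x)           # ordinary least squares
    print("fitted parameters:", final.x)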
A simple method for the computation of first neighbour frequencies of DNAs from CD spectra
Marck, Christian; Guschlbauer, Wilhelm
1978-01-01
A procedure for the computation of the first-neighbour frequencies of DNAs is presented. This procedure is based on the first-neighbour approximation of Gray and Tinoco. We show that knowledge of all ten elementary CD signals attached to the ten double-stranded first-neighbour configurations is not necessary. One can obtain the ten frequencies of an unknown DNA with the use of eight elementary CD signals corresponding to eight linearly independent polymer sequences. These signals can be extracted very simply from any eight or more CD spectra of double-stranded DNAs of known frequencies. The ten frequencies of a DNA are obtained by a least-squares fit of its CD spectrum with these elementary signals. One advantage of this procedure is that it does not require linear programming; it can be used with CD data digitized at a large number of wavelengths, thus permitting an accurate resolution of the CD spectra. In favorable cases, the ten frequencies of a DNA (not used as input data) can be determined with an average absolute error < 2%. We have also observed that certain satellite DNAs, those of Drosophila virilis and Callinectes sapidus, have CD spectra compatible with those of DNAs of quasi-random sequence; these satellite DNAs should therefore also adopt the B-form in solution. PMID:673843
Statistical analysis of particle trajectories in living cells
NASA Astrophysics Data System (ADS)
Briane, Vincent; Kervrann, Charles; Vimond, Myriam
2018-06-01
Recent advances in molecular biology and fluorescence microscopy imaging have made possible the inference of the dynamics of molecules in living cells. Such inference allows us to understand and determine the organization and function of the cell. The trajectories of particles (e.g., biomolecules) in living cells, computed with the help of object tracking methods, can be modeled with diffusion processes. Three types of diffusion are considered: (i) free diffusion, (ii) subdiffusion, and (iii) superdiffusion. The mean-square displacement (MSD) is generally used to discriminate the three types of particle dynamics. We propose here a nonparametric three-decision test as an alternative to the MSD method. The rejection of the null hypothesis, i.e., free diffusion, is accompanied by claims of the direction of the alternative (subdiffusion or superdiffusion). We study the asymptotic behavior of the test statistic under the null hypothesis and under parametric alternatives which are currently considered in the biophysics literature. In addition, we adapt the multiple-testing procedure of Benjamini and Hochberg to fit with the three-decision-test setting, in order to apply the test procedure to a collection of independent trajectories. The performance of our procedure is much better than the MSD method as confirmed by Monte Carlo experiments. The method is demonstrated on real data sets corresponding to protein dynamics observed in fluorescence microscopy.
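For comparison, a minimal sketch of the MSD baseline that the proposed test is evaluated against: the time-averaged MSD is computed over a range of lags and a power-law exponent alpha is estimated by a log-log linear fit. The trajectory and the decision thresholds below are illustrative.

    # MSD baseline sketch (the paper proposes a statistical test instead; this
    # only illustrates the MSD method it is compared against).
    import numpy as np

    def msd(traj, max_lag):
        """traj: (T, d) positions; time-averaged MSD at lags 1..max_lag."""
        return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                         for lag in range(1, max_lag + 1)])

    rng = np.random.default_rng(3)
    traj = np.cumsum(rng.normal(0, 1, (1000, 2)), axis=0)   # free 2D diffusion

    lags = np.arange(1, 51)
    m = msd(traj, 50)
    alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]       # MSD ~ t^alpha
    label = "sub" if alpha < 0.9 else "super" if alpha > 1.1 else "free "
    print(f"alpha = {alpha:.2f} -> {label}diffusion")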
A Method For Modeling Discontinuities In A Microwave Coaxial Transmission Line
NASA Technical Reports Server (NTRS)
Otoshi, Tom Y.
1994-01-01
A methodology for modeling discontinuities in a coaxial transmission line is presented. The method uses a nonlinear least-squares fit program to optimize the fit between a theoretical model and experimental data. When the method was applied to modeling discontinuities in a damaged S-band antenna cable, excellent agreement was obtained.
Using Fit Indexes to Select a Covariance Model for Longitudinal Data
ERIC Educational Resources Information Center
Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.
2012-01-01
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...
40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...
40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...
40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...
40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...
40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...
40 CFR 90.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...
40 CFR 91.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...
40 CFR 90.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...
40 CFR 91.318 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...
40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...
40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...
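A small sketch of the decision rule these calibration regulations describe, under the assumption that the curve maps analyzer response back to concentration: accept the least-squares straight line if every point agrees within 2 percent, otherwise fall back to a best-fit non-linear (here polynomial) curve. The data and the fallback degree are invented.

    # Sketch of the analyzer calibration rule: linear fit if within 2 percent
    # at every point, otherwise a best-fit non-linear calibration curve.
    import numpy as np

    def calibration_curve(span_conc, response, tol=0.02, fallback_deg=4):
        line = np.polynomial.Polynomial.fit(response, span_conc, 1)
        dev = np.abs(line(response) - span_conc) / span_conc
        if np.all(dev <= tol):
            return line, "linear"
        return np.polynomial.Polynomial.fit(response, span_conc, fallback_deg), "non-linear"

    # hypothetical span-gas concentrations and analyzer readings
    conc = np.array([10.0, 20, 30, 45, 60, 75, 90])        # ppm
    resp = conc + 0.002 * conc**2                          # mildly non-linear
    curve, kind = calibration_curve(conc, resp)
    print(kind, "calibration; reading 50 ->", round(curve(50.0), 2), "ppm")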
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on the forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts with tighter confidence intervals than those from weighted least squares. From extrapolation of the maximum-likelihood fits, a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT.
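A hedged sketch of the maximum-likelihood step (not the authors' code): scipy's log-normal MLE is applied to synthetic −Dst maxima, and the Carrington-level exceedance probability is bootstrapped for a confidence interval.

    # Log-normal MLE fit to storm maxima with a bootstrap confidence interval
    # (illustrative; the -Dst values below are simulated, not the 1957-2012 data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    dst_max = rng.lognormal(mean=4.5, sigma=0.6, size=180)   # synthetic -Dst maxima

    shape, loc, scale = stats.lognorm.fit(dst_max, floc=0)   # MLE, location fixed
    p_carrington = stats.lognorm.sf(850, shape, loc, scale)  # P(-Dst >= 850 nT)

    # bootstrap the exceedance probability
    boot = [stats.lognorm.sf(850, *stats.lognorm.fit(
                rng.choice(dst_max, dst_max.size), floc=0))
            for _ in range(500)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"P(-Dst>=850 nT) = {p_carrington:.2e}, 95% CI [{lo:.2e}, {hi:.2e}]")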
A study of data analysis techniques for the multi-needle Langmuir probe
NASA Astrophysics Data System (ADS)
Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.
2018-06-01
In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least-squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique compares favorably with measurements from incoherent scatter radar and other in situ instruments, m-NLPs can be made longer and can be cleaned during operation to improve instrument performance. The non-linear least-squares fitting technique would be more reliable provided that a higher number of probes are deployed.
Noakes, Kimberley F.; Bissett, Ian P.; Pullan, Andrew J.; Cheng, Leo K.
2014-01-01
Three anatomically realistic meshes, suitable for finite element analysis, of the pelvic floor and anal canal regions have been developed to provide a framework with which to examine the mechanics, via finite element analysis of normal function within the pelvic floor. Two cadaver-based meshes were produced using the Visible Human Project (male and female) cryosection data sets, and a third mesh was produced based on MR image data from a live subject. The Visible Man (VM) mesh included 10 different pelvic structures while the Visible Woman and MRI meshes contained 14 and 13 structures respectively. Each image set was digitized and then finite element meshes were created using an iterative fitting procedure with smoothing constraints calculated from ‘L’-curves. These weights produced accurate geometric meshes of each pelvic structure with average Root Mean Square (RMS) fitting errors of less than 1.15 mm. The Visible Human cadaveric data provided high resolution images, however, the cadaveric meshes lacked the normal dynamic form of living tissue and suffered from artifacts related to postmortem changes. The lower resolution MRI mesh was able to accurately portray structure of the living subject and paves the way for dynamic, functional modeling. PMID:18317929
NASA Astrophysics Data System (ADS)
Tiwari, Sumit; Tiwari, G. N.
2018-06-01
In the present research paper, a semi-transparent photovoltaic module (SPVM) integrated greenhouse solar drying system has been used for drying grapes (Vitis vinifera). Based on hourly experimental data, namely solar intensity, moisture evaporated, ambient air temperature, grape surface temperature, relative humidity, and greenhouse air temperature, the heat and mass transfer coefficients for the SPVM drying system have been evaluated. The convective heat transfer coefficients for grapes were found to lie between 0.84 and 3.1 W/m2 K. There is also fair agreement between theoretical and measured mass transfer (moisture evaporated) during drying of the grapes, with a correlation coefficient (r) of 0.88 and a root mean square percentage deviation (e) of 11.56. Further, a nonlinear regression procedure has been used to fit various drying models, namely the Henderson and Pabis model, Newton's model, and Page's model. From the analysis, it was found that Page's model best fits grape drying in the SPV greenhouse as well as in open sun drying. Net electrical energy, thermal energy, and equivalent thermal energy were found to be 3.61, 17.66, and 27.15 kWh, respectively, during six days of drying.
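A minimal sketch of fitting Page's model MR(t) = exp(-k * t^n) by nonlinear regression; the moisture-ratio values below are invented for illustration, not the paper's measurements.

    # Nonlinear least-squares fit of Page's thin-layer drying model.
    import numpy as np
    from scipy.optimize import curve_fit

    def page(t, k, n):
        return np.exp(-k * t**n)                   # moisture ratio MR(t)

    t_h = np.array([0.5, 1, 2, 4, 8, 16, 24, 36, 48])    # drying time (h)
    mr = np.array([0.97, 0.93, 0.86, 0.74, 0.55, 0.31, 0.20, 0.10, 0.05])

    (k, n), _ = curve_fit(page, t_h, mr, p0=[0.05, 1.0])
    rmse = np.sqrt(np.mean((page(t_h, k, n) - mr) ** 2))
    print(f"k = {k:.4f}, n = {n:.3f}, RMSE = {rmse:.4f}")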
Hofmann, Matthias J.; Koelsch, Patrick
2015-01-01
Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation such as the absolute square of the second-order susceptibility, |χ(2)|2. A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase sensitive experimental data shows excellent agreement between the two methods. PMID:26450297
Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management.
Barbarella, Maurizio; D'Amico, Fabrizio; De Blasiis, Maria Rosaria; Di Benedetto, Alessandro; Fiani, Margherita
2017-12-26
The evaluation of the structural efficiency of airport infrastructure is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Currently, the assessment of faulting relies on laborious and time-consuming measurements that strongly hinder aircraft traffic. We propose a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart to identify and quantify the fault size at each joint of the apron slabs. The total point cloud was used to compute the least-squares plane fitting those points, and the best-fit plane for each slab was computed as well. The attitude of each slab plane with respect to both the adjacent slabs and the apron reference plane was determined from the normal vectors to the surfaces. Faulting was evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we then considered a few strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities was computed as well.
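A standard sketch of the plane-fitting step (not the authors' processing chain): the least-squares plane through a slab's points is obtained from the SVD of the centered coordinates, and slab attitude follows from the normal vectors. The point cloud below is synthetic.

    # Least-squares plane fit via SVD (illustrative sketch).
    import numpy as np

    def fit_plane(pts):
        """Return (centroid, unit normal) of the least-squares plane."""
        c = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - c)
        return c, Vt[-1]                   # normal = direction of least variance

    rng = np.random.default_rng(5)
    xy = rng.uniform(0, 5, (2000, 2))
    z = 0.002 * xy[:, 0] + 0.03 + rng.normal(0, 0.001, 2000)  # slightly tilted slab
    c, n = fit_plane(np.column_stack([xy, z]))

    # slab attitude = angle between its normal and the apron reference normal
    tilt_deg = np.degrees(np.arccos(abs(n @ np.array([0, 0, 1.0]))))
    print(f"slab tilt vs reference plane: {tilt_deg:.3f} deg")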
Relation between the Surface Friction of Plates and their Statistical Microgeometry
1980-01-01
Calibration data are taken for each of the unit exponent values, and best-fit lines are fitted by least squares through each set of ... the equilibrium parameter ... (2-43) (Clauser 1954, 1956). Data from near-equilibrium flows (Coles & Hurst 1968) were plotted along with some typical non-equilibrium ... not too bad a fit even for the non-equilibrium flows. Coles and Hurst (1968) recommended that the fit of the law of the wake to velocity profiles should be ...
Effect of noise on defect chaos in a reaction-diffusion model.
Wang, Hongli; Ouyang, Qi
2005-06-01
The influence of noise on defect chaos due to breakup of spiral waves through Doppler and Eckhaus instabilities is investigated numerically with a modified FitzHugh-Nagumo model. By numerical simulations we show that noise can drastically enhance the creation and annihilation rates of topological defects. The noise-free probability distribution function for defects in this model is found not to fit the previously reported squared-Poisson distribution. Under the influence of noise, the distributions are flattened and can fit either the squared-Poisson or the modified-Poisson distribution. The defect lifetime and the diffusive properties of defects under the influence of noise are also examined in this model.
Unsteady convection in tin in a Bridgman configuration
NASA Technical Reports Server (NTRS)
Knuteson, David J.; Fripp, Archibald L.; Woodell, Glenn A.; Debnam, William J., Jr.; Narayanan, Ranga
1991-01-01
When a quiescent fluid is heated sufficiently from below, steady convection will begin. Further heating will cause oscillatory and then turbulent flow. Theoretical results predict that the frequency of oscillation will depend on the square root of the Rayleigh number in the fluid. In the current work, liquid tin was heated from below for three aspect ratios, h/R = 3.4, 5.3, and 7.0. The experimental results are curve-fit for the square-root relation and also for a linear relation. The fit of the expression is evaluated using a correlation coefficient. An estimate for the first critical Rayleigh number (onset of steady convection) is obtained for both expressions. These values are compared to previous experimental results.
The Apollo 16 regolith - A petrographically-constrained chemical mixing model
NASA Technical Reports Server (NTRS)
Kempa, M. J.; Papike, J. J.; White, C.
1980-01-01
A mixing model for Apollo 16 regolith samples has been developed, which differs from other A-16 mixing models in that it is both petrographically constrained and statistically sound. The model was developed using three components representative of rock types present at the A-16 site, plus a representative mare basalt. A linear least-squares fitting program employing the chi-squared test and sum of components was used to determine goodness of fit. Results for surface soils indicate that either there are no significant differences between Cayley and Descartes material at the A-16 site or, if differences do exist, they have been obscured by meteoritic reworking and mixing of the lithologies.
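A minimal sketch of such a constrained mixing model, with invented oxide numbers: nonnegative component fractions are solved by nonnegative least squares, with a heavily weighted extra row enforcing the sum-of-components constraint, and a chi-squared-style statistic measures goodness of fit.

    # Constrained linear mixing model sketch (illustrative compositions only).
    import numpy as np
    from scipy.optimize import nnls

    # rows: oxides (e.g., SiO2, Al2O3, FeO, MgO); columns: end-member components
    A = np.array([[45.0, 44.5, 48.0, 45.5],
                  [27.0, 26.0,  9.0, 12.0],
                  [ 4.5,  5.5, 18.0, 20.0],
                  [ 4.0,  6.0, 11.0, 10.0]])
    soil = np.array([45.2, 24.5, 6.5, 5.8])            # regolith composition

    w = 1e3                                            # weight on sum-to-one row
    A_aug = np.vstack([A, w * np.ones(4)])
    b_aug = np.append(soil, w * 1.0)
    x, _ = nnls(A_aug, b_aug)                          # fractions, x >= 0
    chi2 = np.sum((A @ x - soil) ** 2)                 # goodness-of-fit measure
    print("fractions:", x.round(3), "sum:", round(x.sum(), 3), "chi2:", round(chi2, 3))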
10 CFR 26.27 - Written policy and procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Program Elements § 26.27 Written policy and... respond to an emergency, the procedure must— (A) Require a determination of fitness by breath alcohol... require him or her to be subject to this subpart, if the results of the determination of fitness indicate...
10 CFR 26.27 - Written policy and procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Program Elements § 26.27 Written policy and... respond to an emergency, the procedure must— (A) Require a determination of fitness by breath alcohol... require him or her to be subject to this subpart, if the results of the determination of fitness indicate...
Making a georeferenced mosaic of historical map series using constrained polynomial fit
NASA Astrophysics Data System (ADS)
Molnár, G.
2009-04-01
Present-day GIS software packages make it possible to handle several hundred rasterised map sheets. For proper usage of such datasets we usually have two requirements: first, the map sheets should be georeferenced; second, the georeferenced maps should fit together properly, without overlaps or gaps. Both requirements can be fulfilled easily if the geodetic background of the map series is accurate and the projection of the map series is known. In this case the individual map sheets should be georeferenced in the projected coordinate system of the map series, meaning every map sheet is georeferenced using the overprinted coordinate grid or the projected coordinates of the image corners as ground control points (GCPs). If after this georeferencing procedure the map sheets do not fit together (for example because a different projection was used for every map sheet, as in the case of the Third Military Survey), a common projection can be chosen and all the georeferenced maps transformed to this common projection using a map-to-map transformation. If the geodetic background is not so strong, i.e. there are distortions inside the map sheets, a polynomial (linear, quadratic, or cubic) fit can be used for georeferencing the map sheets. Finding identical surface objects (as GCPs) on the historical map and on a present-day cartographic map lets us determine a transformation between raw image coordinates (x,y) and the projected coordinates (Easting, Northing; E,N). This means that several GCPs must be found for each map sheet (at least 3, 5, or 10 for linear, quadratic, or cubic transformations, respectively) and every map sheet transformed to a present-day coordinate system individually using these GCPs. The disadvantage of this method is that, after the transformation, the individual transformed map sheets no longer necessarily fit together properly. Reversing the order of the procedure does not overcome this problem either: if we make the mosaic first (e.g. graphically) and attempt a polynomial fit of this mosaic afterwards, we still cannot reduce the error caused by the internal inaccuracy of the map sheets. We can overcome this problem by calculating the transformation parameters of the polynomial fit with constraints (Mikhail, 1976). The constraint is that the common edge of two neighboring map sheets should be transformed identically, i.e. the right edge of the left image and the left edge of the right image should fit together after the transformation. This condition must hold for all the internal edges of the mosaic, not only the vertical but also the horizontal ones. Constraints are expressed as relationships between parameters: the parameters of the polynomial transformation must satisfy not only the least-squares adjustment criteria but also the constraint that the transformed coordinates be identical along the image edges. (In the example above, for image points of the rightmost column of the left image the transformed coordinates should be the same as for the image points of the leftmost column of the right image, and these transformed coordinates may depend on the line-number image coordinate of the raster point.) The normal equation system can be solved with Lagrange multipliers. The resulting set of parameters for all map sheets should then be applied in the transformation of the images. This parameter set cannot be applied directly in GIS software for the transformation.
The simplest solution for applying these parameters is to 'simulate' GCPs for every image and use these simulated GCPs for georeferencing the individual map sheets. This method was applied to a set of map sheets of the First Military Survey of the Habsburg Empire with acceptable results. Reference: Mikhail, E. M.: Observations and Least Squares. IEP—A Dun-Donnelley Publisher, New York, 1976. 497 pp.
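A compact sketch of the equality-constrained least-squares adjustment described above, solved through the Lagrange-multiplier normal equations: minimize ||A p - b||^2 subject to C p = d. The two-sheet, one-dimensional example and all numbers are invented, and building the design and constraint matrices from real sheet edges is not shown.

    # Equality-constrained least squares via Lagrange multipliers (sketch).
    import numpy as np

    def constrained_lsq(A, b, C, d):
        n, m = A.shape[1], C.shape[0]
        K = np.block([[A.T @ A, C.T],
                      [C, np.zeros((m, m))]])         # bordered normal equations
        sol = np.linalg.solve(K, np.concatenate([A.T @ b, d]))
        return sol[:n]                                # sol[n:] are the multipliers

    # toy example: two 1D linear transforms E = a + b*x for adjacent sheets,
    # constrained to give the same coordinate at the shared edge x = 10.
    A = np.block([[np.column_stack([np.ones(4), np.array([1.0, 3, 5, 8])]),
                   np.zeros((4, 2))],
                  [np.zeros((4, 2)),
                   np.column_stack([np.ones(4), np.array([11.0, 14, 16, 19])])]])
    b = np.array([2.1, 6.0, 10.2, 16.1, 22.0, 28.1, 31.9, 38.2])
    C = np.array([[1.0, 10.0, -1.0, -10.0]])          # left(10) == right(10)
    d = np.array([0.0])
    print(constrained_lsq(A, b, C, d))                # [a1, b1, a2, b2]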
Use of inequality constrained least squares estimation in small area estimation
NASA Astrophysics Data System (ADS)
Abeygunawardana, R. A. B.; Wickremasinghe, W. N.
2017-05-01
Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variances for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates of population characteristics of interest for such domains. SAE usually uses least-squares or maximum-likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form, but there are practical situations where certain inequality restrictions on the model parameters are more realistic; when the method used is least squares, these lead to Inequality Constrained Least Squares (ICLS) estimates. In this study the ICLS estimation procedure is applied to several proposed small area estimates.
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool for investigating both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements were obtained at the ELF (Extremely Low Frequency) station Juan Antonio Morente, located in the Sierra Nevada national park. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated using Bartlett averaging. The central frequencies and amplitudes of the SRs were then obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as an optimization process, is the procedure followed to obtain the Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton, and Quasi-Newton. The function fitted to the data by the different methods is three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are as follows: (i) natural electromagnetic noise is better fitted using Lorentzian functions; (ii) the measurement bandwidth can accelerate the convergence of the optimization method; (iii) the Gradient method converges most poorly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient, and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); (iv) there are differences in the MSE among the parameters that define the fit function, and an interval from 1% to 5% has been found.
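A sketch of the fitting step consistent with these conclusions: three Lorentzians plus a linear background fitted with scipy's Levenberg-Marquardt implementation. The spectrum below is simulated, not station data.

    # Three Lorentzians plus a straight line, fitted with Levenberg-Marquardt.
    import numpy as np
    from scipy.optimize import curve_fit

    def three_lorentz(f, *p):
        y = p[0] + p[1] * f                            # linear background
        for A, f0, g in zip(p[2::3], p[3::3], p[4::3]):
            y = y + A * g**2 / ((f - f0) ** 2 + g**2)  # Lorentzian peaks
        return y

    rng = np.random.default_rng(6)
    f = np.linspace(6, 25, 400)
    true = three_lorentz(f, 0.1, 0.0, 1.0, 7.8, 1.5, 0.7, 14.1, 1.8, 0.5, 20.3, 2.0)
    spec = true + rng.normal(0, 0.03, f.size)

    p0 = [0.1, 0.0, 1.0, 8.0, 2.0, 0.6, 14.0, 2.0, 0.5, 20.0, 2.0]
    popt, _ = curve_fit(three_lorentz, f, spec, p0=p0, method="lm")
    print("fitted mode frequencies (Hz):", popt[3::3].round(2))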
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
An improved method of fitting Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques. The method yields closer correlation with the data than the traditional method. It involves no assumptions regarding the gamma_i terms and higher-order terms, and it provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially-constrained-optimization techniques.
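A hedged sketch of the general approach (not the NTRS program itself): a Prony series G(t) = G_inf + sum_i g_i * exp(-t/tau_i) is fitted by bounded nonlinear least squares, with positivity bounds standing in for the partial constraints. Data are synthetic.

    # Bounded nonlinear least-squares fit of a two-term Prony series.
    import numpy as np
    from scipy.optimize import least_squares

    def prony(p, t, n_terms):
        g_inf, g, tau = p[0], p[1:1 + n_terms], p[1 + n_terms:]
        return g_inf + np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

    rng = np.random.default_rng(7)
    t = np.logspace(-2, 3, 80)
    data = 1.0 + 2.0 * np.exp(-t / 0.5) + 1.5 * np.exp(-t / 50.0)
    data *= 1 + rng.normal(0, 0.01, t.size)            # relaxation data + noise

    n = 2
    p0 = np.concatenate([[0.5], np.ones(n), np.logspace(-1, 2, n)])
    fit = least_squares(lambda p: prony(p, t, n) - data, p0,
                        bounds=(0, np.inf))            # keep moduli and taus positive
    print("G_inf, g_i, tau_i:", fit.x.round(3))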
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
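For reference, a plain maximum-likelihood fit of a cumulative-Gaussian psychometric function, the baseline whose spread parameter is biased under adaptive sampling; the bias-reduction term itself is not shown, and the data are simulated constant-stimuli responses.

    # Maximum-likelihood fit of a cumulative-Gaussian psychometric function.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_log_lik(params, x, resp):
        mu, sigma = params
        p = norm.cdf(x, mu, sigma).clip(1e-9, 1 - 1e-9)
        return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

    rng = np.random.default_rng(8)
    x = rng.choice([-4, -2, -1, -0.5, 0.5, 1, 2, 4], size=300)    # stimulus levels
    resp = (rng.random(300) < norm.cdf(x, 0.2, 1.5)).astype(int)  # simulated answers

    fit = minimize(neg_log_lik, x0=[0.0, 1.0], args=(x, resp), method="Nelder-Mead")
    print("mu, sigma:", fit.x.round(3))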
The self-transcendence scale: an investigation of the factor structure among nursing home patients.
Haugan, Gørill; Rannestad, Toril; Garåsen, Helge; Hammervold, Randi; Espnes, Geir Arild
2012-09-01
Self-transcendence, the ability to expand personal boundaries in multiple ways, has been found to promote well-being. The purpose of this study was to examine the dimensionality of the Norwegian version of the Self-Transcendence Scale, which comprises 15 items. Reed's empirical nursing theory of self-transcendence provided the theoretical framework; self-transcendence includes interpersonal, intrapersonal, transpersonal, and temporal dimensions. Cross-sectional data were obtained from a sample of 202 cognitively intact elderly patients in 44 Norwegian nursing homes. Exploratory factor analysis revealed two and four internally consistent dimensions of self-transcendence, explaining 35.3% (two factors) and 50.7% (four factors) of the variance, respectively. Confirmatory factor analysis indicated that the hypothesized two- and four-factor models fitted better than the one-factor model (chi-square, root mean square error of approximation, standardized root mean square residual, normed fit index, nonnormed fit index, comparative fit index, goodness-of-fit index, and adjusted goodness-of-fit index). The findings indicate that self-transcendence is a multifactorial construct; at present, we conclude that the two-factor model might be the most accurate and reasonable measure of self-transcendence. This research generates insights into the application of the widely used Self-Transcendence Scale by investigating its psychometric properties through confirmatory factor analysis. It also generates new research questions on the associations between self-transcendence and well-being.
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2011 CFR
2011-07-01
... insulation. The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2Cut samples using one square foot (or multiples of one square foot...
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2012 CFR
2012-07-01
.... The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2Cut samples using one square foot (or multiples of one square foot...
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2010 CFR
2010-07-01
.... The method is applicable to all cured board and blanket products. 2. Equipment One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for use in cutting... procedure for the designated product. 3.2Cut samples using one square foot (or multiples of one square foot...
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
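A minimal GLUE sketch for a constant-rate (first-order) inactivation model, with invented data and an illustrative acceptance threshold: candidate rates are sampled by Monte Carlo, an informal likelihood is computed from the fit to the observations, and the behavioral set yields an uncertainty band.

    # GLUE sketch for log10 C(t) = log10 C0 - k*t (illustrative only).
    import numpy as np

    rng = np.random.default_rng(9)
    t = np.array([0, 7, 14, 28, 42, 56])                  # days
    logC = 6.0 - 0.05 * t + rng.normal(0, 0.15, t.size)   # observed log10 titers

    k_samples = rng.uniform(0.0, 0.2, 50000)              # Monte Carlo candidates
    pred = 6.0 - np.outer(k_samples, t)
    sse = ((pred - logC) ** 2).sum(axis=1)
    likelihood = np.exp(-sse / (2 * 0.15**2))             # informal GLUE likelihood
    behavioral = likelihood > 0.01 * likelihood.max()     # acceptance threshold

    lo, hi = np.percentile(k_samples[behavioral], [2.5, 97.5])
    print(f"k (1/day) 95% GLUE band: [{lo:.4f}, {hi:.4f}]")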
De Beuckeleer, Liene I; Herrebout, Wouter A
2016-02-05
To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before. Copyright © 2015 Elsevier B.V. All rights reserved.
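A small sketch of degree selection with an information criterion (the paper also uses BIC and nonnegativity constraints, which are not shown): the Gaussian-error AIC is computed for increasing polynomial degree and the minimum is taken. Data are invented.

    # Choosing the polynomial degree with AIC: increase the degree only while
    # the information criterion keeps improving (illustrative data).
    import numpy as np

    def aic_for_degree(x, y, deg):
        resid = y - np.polyval(np.polyfit(x, y, deg), x)
        n, k = len(y), deg + 1
        return n * np.log(np.sum(resid**2) / n) + 2 * k   # Gaussian-error AIC

    rng = np.random.default_rng(10)
    conc = np.linspace(0.1, 1.0, 40)                      # monomer concentration
    absorb = 0.8 * conc + 0.3 * conc**2 + rng.normal(0, 0.01, conc.size)

    aics = {d: aic_for_degree(conc, absorb, d) for d in range(1, 6)}
    best = min(aics, key=aics.get)
    print("AIC by degree:", {d: round(a, 1) for d, a in aics.items()}, "-> best:", best)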
Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes
ERIC Educational Resources Information Center
Leite, Walter L.; Stapleton, Laura M.
2011-01-01
In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…
HOLEGAGE 1.0 - Strain-Gauge Drilling Analysis Program
NASA Technical Reports Server (NTRS)
Hampton, Roy V.
1992-01-01
Interior stresses inferred from changes in surface strains as hole is drilled. Computes stresses using strain data from each drilled-hole depth layer. Planar stresses computed in three ways: least-squares fit for linear variation with depth, integral method to give incremental stress data for each layer, and/or linear fit to integral data. Written in FORTRAN 77.
An Investigation of the Sample Performance of Two Nonnormality Corrections for RMSEA
ERIC Educational Resources Information Center
Brosseau-Liard, Patricia E.; Savalei, Victoria; Li, Libo
2012-01-01
The root mean square error of approximation (RMSEA) is a popular fit index in structural equation modeling (SEM). Typically, RMSEA is computed using the normal theory maximum likelihood (ML) fit function. Under nonnormality, the uncorrected sample estimate of the ML RMSEA tends to be inflated. Two robust corrections to the sample ML RMSEA have…
ERIC Educational Resources Information Center
Hubert, Lawrence; Arabie, Phipps; Meulman, Jacqueline
1998-01-01
Introduces a method for fitting order-constrained matrices that satisfy the strongly anti-Robinson restrictions (SAR). The method permits a representation of the fitted values in a (least-squares) SAR approximating matrix as lengths of paths in a graph. The approach is illustrated with a published proximity matrix. (SLD)
Deriving the Regression Equation without Using Calculus
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
2004-01-01
Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…
Code of Federal Regulations, 2010 CFR
2010-10-01
... boundary in which the doors are fitted; (5) Door frames must be of rigid construction and provide at least... inches) square. A self-closing hinged or pivoted steel or equivalent material cover must be fitted in the...) A door in a bulkhead required to be A-60, A-30, or A-15 Class must be of hollow steel or equivalent...
Code of Federal Regulations, 2011 CFR
2011-10-01
... boundary in which the doors are fitted; (5) Door frames must be of rigid construction and provide at least... inches) square. A self-closing hinged or pivoted steel or equivalent material cover must be fitted in the...) A door in a bulkhead required to be A-60, A-30, or A-15 Class must be of hollow steel or equivalent...
Development and evaluation of social cognitive measures related to adolescent physical activity.
Dewar, Deborah L; Lubans, David Revalds; Morgan, Philip James; Plotnikoff, Ronald C
2013-05-01
This study aimed to develop and evaluate the construct validity and reliability of modernized social cognitive measures relating to physical activity behaviors in adolescents. An instrument was developed based on constructs from Bandura's Social Cognitive Theory and included the following scales: self-efficacy, situation (perceived physical environment), social support, behavioral strategies, and outcome expectations and expectancies. The questionnaire was administered in a sample of 171 adolescents (age = 13.6 ± 1.2 years, females = 61%). Confirmatory factor analysis was employed to examine model-fit for each scale using multiple indices, including chi-square index, comparative-fit index (CFI), goodness-of-fit index (GFI), and the root mean square error of approximation (RMSEA). Reliability properties were also examined (ICC and Cronbach's alpha). Each scale represented a statistically sound measure: fit indices indicated each model to be an adequate-to-exact fit to the data; internal consistency was acceptable to good (α = 0.63-0.79); rank order repeatability was strong (ICC = 0.82-0.91). Results support the validity and reliability of social cognitive scales relating to physical activity among adolescents. As such, the developed scales have utility for the identification of potential social cognitive correlates of youth physical activity, mediators of physical activity behavior changes and the testing of theoretical models based on Social Cognitive Theory.
NASA Astrophysics Data System (ADS)
Bainbridge, Matthew B.; Webb, John K.
2017-06-01
A new and automated method is presented for the analysis of high-resolution absorption spectra. Three established numerical methods are unified into one `artificial intelligence' process: a genetic algorithm (Genetic Voigt Profile FIT, gvpfit); non-linear least-squares with parameter constraints (vpfit); and Bayesian model averaging (BMA). The method has broad application but here we apply it specifically to the problem of measuring the fine structure constant at high redshift. For this we need objectivity and reproducibility. gvpfit is also motivated by the importance of obtaining a large statistical sample of measurements of Δα/α. Interactive analyses are both time consuming and complex and automation makes obtaining a large sample feasible. In contrast to previous methodologies, we use BMA to derive results using a large set of models and show that this procedure is more robust than a human picking a single preferred model since BMA avoids the systematic uncertainties associated with model choice. Numerical simulations provide stringent tests of the whole process and we show using both real and simulated spectra that the unified automated fitting procedure outperforms a human interactive analysis. The method should be invaluable in the context of future instrumentation like ESPRESSO on the VLT and indeed future ELTs. We apply the method to the z_abs = 1.8389 absorber towards the z_em = 2.145 quasar J110325-264515. The derived constraint of Δα/α = 3.3 ± 2.9 × 10⁻⁶ is consistent with no variation and also consistent with the tentative spatial variation reported in Webb et al. and King et al.
Generalized adjustment by least squares ( GALS).
Elassal, A.A.
1983-01-01
The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity.
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2013 CFR
2013-07-01
... One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for... to the plant's written procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot) template. 3.3 Weigh product and obtain area weight (lb/ft²). 3.4 Measure sample...
40 CFR Appendix C to Subpart Nnn... - Method for the Determination of Product Density
Code of Federal Regulations, 2014 CFR
2014-07-01
... One square foot (12 in. by 12 in.) template, or templates that are multiples of one square foot, for... to the plant's written procedure for the designated product. 3.2 Cut samples using one square foot (or multiples of one square foot) template. 3.3 Weigh product and obtain area weight (lb/ft²). 3.4 Measure sample...
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.
1985-01-01
A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, are included.
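As a rough illustration of the core computation such a program performs (not the Langley code itself), a surface fit in two independent variables reduces to ordinary least squares over a design matrix of monomials; the data below are synthetic.

```python
# Minimal sketch of a least-squares polynomial surface fit z = f(x, y).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z = 1.0 + 2.0 * x - 0.5 * y + 0.8 * x * y + 0.3 * y**2 + rng.normal(0, 0.02, 200)

def design_matrix(x, y, degree):
    # All monomials x**i * y**j with i + j <= degree.
    cols = [x**i * y**j for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
    return np.column_stack(cols)

A = design_matrix(x, y, degree=2)
coeffs, residuals, rank, _ = np.linalg.lstsq(A, z, rcond=None)
print("fitted coefficients:", np.round(coeffs, 3))
```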
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
NASA Astrophysics Data System (ADS)
Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.
2009-09-01
To determine the celestial coordinates of stellar positions, consecutive observational images are laid overlapping each other, with stars belonging to multiple plates serving as ties. In the analysis, one has to estimate not only the coordinates of individual plates, but also the possible expansion and distortion of the frame. This problem reduces to a least-squares fit that can in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate the errors associated with this iterative method and suggest a design of overlapping plates that minimizes the error.
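The equivalence the authors exploit can be sketched compactly: for a static parameter vector, Kalman measurement updates reproduce the batch least-squares solution one observation at a time, avoiding the huge matrix inversion. The dimensions, noise level, and random design rows below are illustrative assumptions.

```python
# Sketch: sequential (Kalman-style) updates for a static parameter vector
# are algebraically equivalent to batch least squares, processed row by row.
import numpy as np

rng = np.random.default_rng(2)
n_params = 3
x_true = np.array([0.5, -1.2, 2.0])

x_hat = np.zeros(n_params)        # state estimate
P = np.eye(n_params) * 1e6        # large (diffuse) prior covariance
r = 0.01                          # measurement noise variance

for _ in range(500):
    h = rng.normal(size=n_params)             # one design (measurement) row
    z = h @ x_true + rng.normal(0, np.sqrt(r))
    # Kalman measurement update; the innovation is scalar, so no inversion.
    s = h @ P @ h + r
    k_gain = (P @ h) / s
    x_hat = x_hat + k_gain * (z - h @ x_hat)
    P = P - np.outer(k_gain, h @ P)

print("estimate:", np.round(x_hat, 3), "true:", x_true)
```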
Satorra, Albert; Neudecker, Heinz
2015-12-01
This paper develops a theorem that facilitates computing the degrees of freedom of Wald-type chi-square tests for moment restrictions when there is rank deficiency of key matrices involved in the definition of the test. An if and only if (iff) condition is developed for a simple rule of difference of ranks to be used when computing the desired degrees of freedom of the test. The theorem is developed exploiting basic tools of matrix algebra. The theorem is shown to play a key role in proving the asymptotic chi-squaredness of a goodness of fit test in moment structure analysis, and in finding the degrees of freedom of this chi-square statistic.
Pigeon visual short-term memory directly compared to primates.
Wright, Anthony A; Elmore, L Caitlin
2016-02-01
Three pigeons were trained to remember arrays of 2-6 colored squares and detect which of two squares had changed color to test their visual short-term memory. Procedures (e.g., stimuli, displays, viewing times, delays) were similar to those used to test monkeys and humans. Following extensive training, pigeons performed slightly better than similarly trained monkeys, but both animal species were considerably less accurate than humans with the same array sizes (2, 4 and 6 items). Pigeons and monkeys showed calculated memory capacities of one item or less, whereas humans showed a memory capacity of 2.5 items. Despite the differences in calculated memory capacities, the pigeons' memory results, like those from monkeys and humans, were all well characterized by an inverse power-law function fit to d' values for the five display sizes. This characterization provides a simple, straightforward summary of the fundamental processing of visual short-term memory (how visual short-term memory declines with memory load) that emphasizes species similarities based upon similar functional relationships. By closely matching pigeon testing parameters to those of monkeys and humans, these similar functional relationships suggest similar underlying processes of visual short-term memory in pigeons, monkeys and humans. Copyright © 2015 Elsevier B.V. All rights reserved.
Benzeval, Ian; Bowyer, Adrian; Hubble, John
2012-01-01
The interactions of a number of commercially available dextran preparations with the lectin Concanavalin A (ConA) have been investigated. Dextrans over the molecular mass range 6 × 10³-2 × 10⁶ g mol⁻¹ were initially characterised in terms of their branching and hence terminal ligand density, using NMR. This showed a range of branching ratios between 3% and 5%, but no clear correlation with molecular mass. The bio-specific interaction of these materials with ConA was investigated using microcalorimetry. The data obtained were interpreted using a number of possible binding models reflecting the known structure of both dextran and the lectin. The results of this analysis suggest that the interaction is most appropriately described in terms of a two-site model. This offers the best compromise for the observed relationship between data and model predictions and the number of parameters used based on the chi-squared values obtained from a nonlinear least-squares fitting procedure. A two-site model is also supported by analysis of the respective sizes of the dextrans and the ConA tetramer. Using this model, the relationship between association constants, binding energy and molecular mass was determined. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Mooijaart, Ab; Satorra, Albert
2009-01-01
In this paper, we show that for some structural equation models (SEM), the classical chi-square goodness-of-fit test is unable to detect the presence of nonlinear terms in the model. As an example, we consider a regression model with latent variables and interactions terms. Not only the model test has zero power against that type of…
The chipping headrig: A major invention of the 20th century
P. Koch
1973-01-01
A square peg won't fit in a round hole but a square timber can be chipped out of a round log. It's simple, fast and efficient, with a chipping headrig. Virtually every significant southern pine sawmill uses one of these amazing machines, busily making useful most of the wood in the tree, chipping some for paper and uncovering sawtimber where before there was...
ERIC Educational Resources Information Center
Li, Libo; Bentler, Peter M.
2011-01-01
MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…
Gifted Homeschooling: Our Journey with a Square Peg. A Mother's Perspective
ERIC Educational Resources Information Center
Olmstead, Gwen
2015-01-01
The author shares that her journey with gifted homeschooling was filled with folly and a slow learning curve. By sharing some of the struggles and insights she faced, the author hopes others will benefit or find solace in knowing they are not alone when their square peg children do not fit into round holes. In this article the author discusses:…
Code of Federal Regulations, 2013 CFR
2013-10-01
... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...
49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.
Code of Federal Regulations, 2013 CFR
2013-10-01
... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...
49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.
Code of Federal Regulations, 2010 CFR
2010-10-01
... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...
Code of Federal Regulations, 2012 CFR
2012-10-01
... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...
49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.
Code of Federal Regulations, 2011 CFR
2011-10-01
... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...
Code of Federal Regulations, 2014 CFR
2014-10-01
... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...
49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.
Code of Federal Regulations, 2012 CFR
2012-10-01
... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...
Code of Federal Regulations, 2011 CFR
2011-10-01
... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...
Code of Federal Regulations, 2010 CFR
2010-10-01
... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...
49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.
Code of Federal Regulations, 2014 CFR
2014-10-01
... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...
Does positivity mediate the relation of extraversion and neuroticism with subjective happiness?
Lauriola, Marco; Iani, Luca
2015-01-01
Recent theories suggest an important role of neuroticism, extraversion, attitudes, and global positive orientations as predictors of subjective happiness. We examined whether positivity mediates the hypothesized relations in a community sample of 504 adults between the ages of 20 and 60 years old (females = 50%). A model with significant paths from neuroticism to subjective happiness, from extraversion and neuroticism to positivity, and from positivity to subjective happiness fitted the data (Satorra-Bentler scaled chi-square (38) = 105.91; Comparative Fit Index = .96; Non-Normed Fit Index = .95; Root Mean Square Error of Approximation = .060; 90% confidence interval = .046, .073). The percentage of subjective happiness variance accounted for by personality traits was only about 48%, whereas adding positivity as a mediating factor increased the explained amount of subjective happiness to 78%. The mediation model was invariant by age and gender. The results show that the effect of extraversion on happiness was fully mediated by positivity, whereas the effect of neuroticism was only partially mediated. Implications for happiness studies are also discussed.
Magis, David; Beland, Sebastien; Raiche, Gilles
2014-01-01
The Infit mean square W and the Outfit mean square U are commonly used person fit indexes under Rasch measurement. However, they suffer from two major weaknesses. First, their asymptotic distribution is usually derived by assuming that the true ability levels are known. Second, such distributions are not even clearly stated for indexes U and W. Both issues can seriously affect the selection of an appropriate cut-score for person fit identification. Snijders (2001) proposed a general approach to correct some person fit indexes when specific ability estimators are used. The purpose of this paper is to adapt this approach to the U and W indexes. First, a brief sketch of the methodology and its application to U and W is proposed. Then, the corrected indexes are compared to their classical versions through a simulation study. The suggested correction keeps Type I error rates under control, avoiding both conservatism and inflation, while significantly increasing the power to detect specific misfitting response patterns.
NASA Astrophysics Data System (ADS)
Campbell, John L.; Ganly, Brianna; Heirwegh, Christopher M.; Maxwell, John A.
2018-01-01
Multiple ionization satellites are prominent features in X-ray spectra induced by MeV energy alpha particles. It follows that the accuracy of PIXE analysis using alpha particles can be improved if these features are explicitly incorporated in the peak model description when fitting the spectra with GUPIX or other codes for least-squares fitting PIXE spectra and extracting element concentrations. A method for this incorporation is described and is tested using spectra recorded on Mars by the Curiosity rover's alpha particle X-ray spectrometer. These spectra are induced by both PIXE and X-ray fluorescence, resulting in a spectral energy range from ∼1 to ∼25 keV. This range is valuable in determining the energy-channel calibration, which departs from linearity at low X-ray energies. It makes it possible to separate the effects of the satellites from an instrumental non-linearity component. The quality of least-squares spectrum fits is significantly improved, raising the level of confidence in analytical results from alpha-induced PIXE.
Laser transit anemometer software development program
NASA Technical Reports Server (NTRS)
Abbiss, John B.
1989-01-01
Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
Anomalous Structural Disorder in Supported Pt Nanoparticles
Vila, Fernando D.; Rehr, John J.; Nuzzo, Ralph G.; ...
2017-07-02
Supported Pt nanocatalysts generally exhibit anomalous behavior, including negative thermal expansion and large structural disorder. Finite temperature DFT/MD simulations reproduce these properties, showing that they are largely explained by a combination of thermal vibrations and low-frequency disorder. We show in this paper that a full interpretation is more complex and that the DFT/MD mean-square relative displacements (MSRD) can be further separated into vibrational disorder, "dynamic structural disorder" (DSD), and long-time equilibrium fluctuations of the structure dubbed "anomalous structural disorder" (ASD). We find that the vibrational and DSD components behave normally, increasing linearly with temperature while the ASD decreases, reflecting the evolution of mean nanoparticle geometry. Finally, as a consequence, the usual procedure of fitting the MSRD to normal vibrations plus temperature-independent static disorder results in unphysical bond strengths and Grüneisen parameters.
Stream-temperature characteristics in Georgia
Dyar, T.R.; Alhadeff, S. Jack
1997-01-01
Stream-temperature measurements for 198 periodic and 22 daily record stations were analyzed using a harmonic curve-fitting procedure. Statistics of data from 78 selected stations were used to compute a statewide stream-temperature harmonic equation, derived using latitude, drainage area, and altitude for natural streams having drainage areas greater than about 40 square miles. Based on the 1955-84 reference period, the equation may be used to compute long-term natural harmonic stream-temperature coefficients to within about 0.4 °C, on average. Basin-by-basin summaries of observed long-term stream-temperature characteristics are included for selected stations and river reaches, particularly along Georgia's mainstem streams. Changes in the stream-temperature regimen caused by the effects of development, principally impoundments and thermal power plants, are shown by comparing harmonic curves and coefficients from the estimated natural values to the observed modified-condition values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lessa, L. L.; Martins, A. S.; Fellows, C. E., E-mail: fellows@if.uff.br
2015-10-28
In this note, three vibrational bands of the electronic transition A²Σ⁺-X²Π of the N₂O⁺ radical (000-100, 100-100, and 001-101) were theoretically analysed. Starting from Hamiltonian models proposed for this kind of molecule, their parameters were calculated using a Levenberg-Marquardt fit procedure in order to reduce the root mean square deviation from the experimental transitions below 0.01 cm⁻¹. The main objective of this work is to obtain new and reliable values for the rotational constant B″ and the spin-orbit interaction parameter A of the analysed vibrational levels of the X²Π electronic state of this molecule.
NASA Astrophysics Data System (ADS)
Harris, V. G.; Oliver, S. A.; Ayers, J. D.; Das, B. N.; Koon, N. C.
1996-04-01
The evolution of the local atomic environment around Fe atoms in very thin (15 nm), amorphous, partially crystallized and fully crystallized films of Fe80B20 was studied using extended x-ray absorption fine structure (EXAFS) measurements. The relative atomic fraction of each crystalline phase present in the annealed samples was extracted from the Fe EXAFS data by a least-squares fitting procedure, using data collected from t-Fe3B, t-Fe2B, and α-Fe standards. The type and relative fraction of the crystallization products follows the trends previously measured in Fe80B20 melt-spun ribbons, except for the fact that crystallization temperatures are ≊200 K lower than those measured in bulk equivalents. This greatly reduced crystallization temperature may arise from the dominant role of surface nucleation sites in the crystallization of very thin amorphous films.
NASA Astrophysics Data System (ADS)
Tanaka, Masaomi; Fukuda, Mitsunori; Nishimura, Daiki; Suzuki, Shinji; Takechi, Maya; Mihara, Mototsugu; Matsuta, Kensaku; Morita, Yusuke; Kamisho, Yasuto; Ohno, Junichi; Kanbe, Ryosuke; Yamaoka, Shintaro; Watanabe, Kota; Ohtsubo, Takashi; Izumikawa, Takuji; Nagashima, Masayuki; Honma, Akira; Murooka, Daiki; Suzuki, Takashi; Yamaguchi, Takayuki; Kohno, Junpei; Yamaki, Sayaka; Matsunaga, Satoshi; Kinno, Shunpei; Taguchi, Yoshimasa; Kitagawa, Atsushi; Fukuda, Shigekazu; Sato, Shinji
We utilized the proton-neutron asymmetry of nucleon-nucleon total cross sections in the intermediate energy region (σ_pn ≠ σ_pp(nn)) to obtain information on the proton and neutron distributions separately. We have measured reaction cross sections (σR) for 14B and 8He on proton targets as isospin-asymmetric targets in addition to symmetric ones. Proton and neutron density distributions were derived through the χ²-fitting procedure with the modified Glauber calculation. The results suggest the necessity of a long tail for 14B and of a neutron tail for 8He. Root-mean-square proton, neutron, and matter radii for 14B and 8He are also derived. Each radius is consistent with some of the other experimental values and also with some of the several theoretical values.
NASA Technical Reports Server (NTRS)
Arcella, F. G.
1974-01-01
Arc cast W, CVD W, CVD Re, and powder metallurgy Re materials were hot isostatically pressure welded to ten different refractory metals and alloys and thermally aged at 10⁻⁸ torr at 1200 C, 1500 C, 1630 C, 1800 C, and 2000 C for 100 hours to 2000 hours. Electron beam microprobe analysis was used to characterize the interdiffusion zone width of each couple system as a function of age time and temperature. Each system was least-squares fitted to the equation ln(ΔX²/t) = B/T + A, where ΔX is the net interdiffusion zone width, t is age time, and T is age temperature. Detailed descriptions of the experimental and analytical procedures utilized in conducting the experimental program are provided. For Vol. 1, see N74-34046.
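Once the data are cast in the form ln(ΔX²/t) = B/T + A, the fit is ordinary linear least squares in 1/T. A minimal sketch with invented zone widths (the report's actual values are not reproduced here):

```python
# Linear least-squares fit of ln(dx^2/t) = B/T + A in the variable 1/T.
# Temperatures match the report's range; zone widths are illustrative.
import numpy as np

T = np.array([1473.0, 1773.0, 1903.0, 2073.0, 2273.0])  # age temperature, K
dx = np.array([2.1, 6.8, 11.5, 22.0, 48.0]) * 1e-6      # zone width, m
t = 100.0 * 3600.0                                      # age time, s

yvals = np.log(dx**2 / t)
A_mat = np.column_stack([1.0 / T, np.ones_like(T)])
(B, A), *_ = np.linalg.lstsq(A_mat, yvals, rcond=None)
print(f"B = {B:.1f} K, A = {A:.2f}")
```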
Warren, Sean C; Kim, Youngchan; Stone, James M; Mitchell, Claire; Knight, Jonathan C; Neil, Mark A A; Paterson, Carl; French, Paul M W; Dunsby, Chris
2016-09-19
This paper demonstrates multiphoton excited fluorescence imaging through a polarisation maintaining multicore fiber (PM-MCF) while the fiber is dynamically deformed using all-proximal detection. Single-shot proximal measurement of the relative optical path lengths of all the cores of the PM-MCF in double pass is achieved using a Mach-Zehnder interferometer read out by a scientific CMOS camera operating at 416 Hz. A non-linear least squares fitting procedure is then employed to determine the deformation-induced lateral shift of the excitation spot at the distal tip of the PM-MCF. An experimental validation of this approach is presented that compares the proximally measured deformation-induced lateral shift in focal spot position to an independent distally measured ground truth. The proximal measurement of deformation-induced shift in focal spot position is applied to correct for deformation-induced shifts in focal spot position during raster-scanning multiphoton excited fluorescence imaging.
Modelling lifetime data with multivariate Tweedie distribution
NASA Astrophysics Data System (ADS)
Nor, Siti Rohani Mohd; Yusof, Fadhilah; Bahar, Arifah
2017-05-01
This study aims to measure the dependence between individual lifetimes by applying the multivariate Tweedie distribution to lifetime data. Incorporating dependence between lifetimes in the mortality model is a relatively new idea that has a significant impact on the risk of an annuity portfolio, and it runs counter to standard actuarial methods, which assume independence between lifetimes. Hence, this paper applies the Tweedie family of distributions to a portfolio of lifetimes to induce dependence between lives. The Tweedie distribution is chosen since it encompasses symmetric and non-symmetric, as well as light-tailed and heavy-tailed, distributions. Parameter estimation is modified in order to fit the Tweedie distribution to the data; this procedure is developed using the method of moments. In addition, a comparison stage checks the adequacy of fit between observed and expected mortality. Finally, the importance of including systematic mortality risk in the model is justified by Pearson's chi-squared test.
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discussed how to formulate informative priors for the independent variables and evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches of developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior with a better model fit and it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. Different types of informative priors' effects on the model estimations and goodness-of-fit have been compared and concluded. Finally, based on the results, recommendations for future research topics and study applications have been made. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2009-06-01
Spectroscopists have long attempted to summarize what they know about small molecules in terms of a knowledge of potential energy curves or surfaces. For most of the past century, this involved deducing polynomial-expansion force-field coefficients from energy level expressions fitted to experimental data, or for diatomic molecules, by generating tables of many-digit RKR turning points from such expressions. In recent years, however, it has become increasingly common either to use high-level ab initio calculations to compute the desired potentials, or to determine parametrized global analytic potential functions from direct fits to spectroscopic data. In the former case, this created a need for robust, flexible, compact, and `portable' analytic potentials for summarizing the information contained in the (sometimes very large numbers of) ab initio points, and making them `user friendly'. In the latter case, the same properties are required for potentials used in the least-squares fitting procedure. In both cases, there is also a cardinal need for potential function forms that extrapolate sensibly beyond the range of the experimental data or ab initio points. This talk will describe some recent developments in this area, and make a case for what is arguably the `best' general-purpose analytic potential function form now available. Applications to both diatomic molecules and simple polyatomic molecules will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-09-01
Nmrfit reads the output from a nuclear magnetic resonance (NMR) experiment and, through a number of intuitive API calls, produces a least-squares fit of Voigt-function approximations via particle swarm optimization.
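As a hedged sketch of the technique named in Nmrfit's description (not its actual API), the following fits a single Voigt peak by minimizing the sum of squared residuals with a bare-bones particle swarm; the peak parameters, bounds, and swarm settings are all illustrative.

```python
# Least-squares fit of one Voigt peak via a minimal particle swarm optimizer.
import numpy as np
from scipy.special import voigt_profile

rng = np.random.default_rng(3)
x = np.linspace(-5, 5, 400)
true = (2.0, 0.4, 0.6, 0.5)  # amplitude, center, sigma, gamma (synthetic)
y = true[0] * voigt_profile(x - true[1], true[2], true[3]) \
    + rng.normal(0, 0.01, x.size)

def sse(p):
    amp, mu, sigma, gamma = p
    return np.sum((amp * voigt_profile(x - mu, sigma, gamma) - y) ** 2)

# Global-best PSO: inertia term plus cognitive and social pulls.
lo = np.array([0.1, -2.0, 0.05, 0.05])
hi = np.array([5.0, 2.0, 2.0, 2.0])
pos = rng.uniform(lo, hi, size=(40, 4))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.random((2, 40, 4))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("fitted (amp, center, sigma, gamma):", np.round(gbest, 3))
```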
Jerosch-Herold, Christina; Chester, Rachel; Shepstone, Lee; Vincent, Joshua I; MacDermid, Joy C
2018-02-01
The shoulder pain and disability index (SPADI) has been extensively evaluated for its psychometric properties using classical test theory (CTT). The purpose of this study was to evaluate its structural validity using Rasch model analysis. Responses to the SPADI from 1030 patients referred for physiotherapy with shoulder pain and enrolled in a prospective cohort study were available for Rasch model analysis. Overall fit, individual person and item fit, response format, dependence, unidimensionality, targeting, reliability and differential item functioning (DIF) were examined. The SPADI pain subscale initially demonstrated a misfit due to DIF by age and gender. After iterative analysis it showed good fit to the Rasch model with acceptable targeting and unidimensionality (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.19 (1.5) and mean person fit residual 0.44 (1.1); person separation index (PSI) of 0.83). The disability subscale, however, showed significant misfit due to uniform DIF even after iterative analyses were used to explore different solutions to the sources of misfit (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.54 (1.26) and mean person fit residual 0.38 (1.0); PSI 0.84). Rasch model analysis of the SPADI has identified some strengths and limitations not previously observed using CTT methods. The SPADI should be treated as two separate subscales. The SPADI is a widely used outcome measure in clinical practice and research; however, the scores derived from it must be interpreted with caution. The pain subscale fits the Rasch model expectations well. The disability subscale does not fit the Rasch model and its current format does not meet the criteria for true interval-level measurement required for use as a primary endpoint in clinical trials. Clinicians should therefore exercise caution when interpreting score changes on the disability subscale and attempt to compare their scores to age- and sex-stratified data.
An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.
2014-01-01
As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Quantum algorithm for linear regression
NASA Astrophysics Data System (ADS)
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country rich in natural resources, one of which is gold. Gold has become an important national commodity. This study is conducted to determine a model that fits the gold production in Malaysia from 1995 to 2010 well. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richard, Weibull, and Chapman-Richard models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by the sum of squared errors, root mean square error, coefficient of determination, mean relative error, mean absolute error, and mean absolute percentage error. This study finds that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data are fitted to the model; once again, the Weibull model gives the lowest values for all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
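A minimal sketch of the winning step, fitting a cumulative Weibull growth curve by nonlinear least squares, is shown below; the production numbers are synthetic stand-ins, not the study's data.

```python
# Nonlinear least-squares fit of a cumulative Weibull growth model.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
years = np.arange(1995, 2011)
t = (years - 1994).astype(float)            # time index 1..16
# Synthetic cumulative production (arbitrary units), not the study's data.
cum = 40.0 * (1.0 - np.exp(-(t / 8.0) ** 1.6)) + rng.normal(0, 0.3, t.size)

def weibull(t, a, b, c):
    # Cumulative Weibull growth curve; a is the asymptote.
    return a * (1.0 - np.exp(-(t / b) ** c))

popt, pcov = curve_fit(weibull, t, cum, p0=[50.0, 5.0, 1.0])
rmse = np.sqrt(np.mean((cum - weibull(t, *popt)) ** 2))
print("a, b, c =", np.round(popt, 2), "RMSE =", round(float(rmse), 3))
```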
Uncertainty propagation in the calibration equations for NTC thermistors
NASA Astrophysics Data System (ADS)
Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen
2018-06-01
The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of uncertainty that develop when equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be well kept below the uncertainty of the calibration points.
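To make the propagation mechanics concrete, the sketch below fits one common NTC calibration equation (the Steinhart-Hart form 1/T = a + b ln R + c (ln R)³) by least squares and pushes an assumed calibration uncertainty through the fit covariance; the resistance values and uncertainty figure are illustrative, not from the paper.

```python
# Least-squares Steinhart-Hart calibration with simple uncertainty propagation.
import numpy as np

# Illustrative calibration points (T in kelvin, R in ohms).
T = np.array([273.15, 283.15, 293.15, 303.15, 313.15, 323.15])
R = np.array([32650.0, 19900.0, 12490.0, 8057.0, 5327.0, 3603.0])
u_y = 1e-7                      # assumed uncertainty of the 1/T values

lnR = np.log(R)
X = np.column_stack([np.ones_like(lnR), lnR, lnR**3])
y = 1.0 / T

# With equal weights, cov(beta) = u_y^2 (X^T X)^{-1}.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
cov = u_y**2 * np.linalg.inv(X.T @ X)

def predict_T(r):
    x = np.array([1.0, np.log(r), np.log(r) ** 3])
    inv_T = x @ beta
    u_invT = np.sqrt(x @ cov @ x)     # propagated uncertainty of 1/T
    T_hat = 1.0 / inv_T
    return T_hat, T_hat**2 * u_invT   # |dT/d(1/T)| = T^2

T_hat, u_T = predict_T(10000.0)
print(f"T = {T_hat:.3f} K ± {u_T:.4f} K")
```

Consistent with the paper's finding, evaluating `predict_T` inside the calibration range gives a flat uncertainty profile, while extrapolating beyond it inflates the propagated term rapidly.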
Zeb, Salman; Yousaf, Muhammad
2017-01-01
In this article, we present a QR updating procedure as a solution approach for the linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated, and we then apply updating techniques to its upper triangular factor R to obtain the solution. We carry out an error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments, with particular emphasis on dense problems.
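The reduction the authors describe can be sketched with the classical null-space method: a QR factorization of Cᵀ splits the unknowns into a part fixed by the constraints and a smaller unconstrained least-squares subproblem. This is a standard textbook construction, not the paper's updating algorithm, and the random test matrices are illustrative.

```python
# Null-space method for min ||Ax - b|| subject to Cx = d.
import numpy as np
from scipy.linalg import qr, solve_triangular, lstsq

rng = np.random.default_rng(5)
m, n, p = 20, 6, 2
A, b = rng.normal(size=(m, n)), rng.normal(size=m)
C, d = rng.normal(size=(p, n)), rng.normal(size=p)

# Full QR of C^T: C = [R1^T 0] Q^T with Q = [Q1 | Q2].
Q, R = qr(C.T)                   # Q is n x n, R is n x p
Q1, Q2 = Q[:, :p], Q[:, p:]
R1 = R[:p, :]

# Satisfy the constraint exactly, then solve the smaller unconstrained LS.
y1 = solve_triangular(R1.T, d, lower=True)
y2, *_ = lstsq(A @ Q2, b - A @ (Q1 @ y1))
x = Q1 @ y1 + Q2 @ y2

print("constraint residual:", np.linalg.norm(C @ x - d))
```

Working in the orthogonal null-space basis keeps the subproblem well conditioned, which is one reason QR-based schemes are the usual starting point for stable constrained least squares.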
Achievable accuracy of hip screw holding power estimation by insertion torque measurement.
Erani, Paolo; Baleani, Massimiliano
2018-02-01
To ensure stability of proximal femoral fractures, the hip screw must firmly engage the femoral head. Some studies suggested that screw holding power in trabecular bone could be evaluated, intraoperatively, through measurement of screw insertion torque. However, those studies used synthetic bone, instead of trabecular bone, as host material, or they did not evaluate the accuracy of predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. We measured, under highly repeatable experimental conditions, disregarding clinical procedure complexities, insertion torque and pullout strength of four screw designs, in 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean-absolute-percentage error of predictions based on the best fitting model of torque-pullout data, in both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material affects prediction accuracy: replacing synthetic with trabecular bone decreased both root-mean-square errors, from 0.54-0.76 kN to 0.21-0.40 kN, and mean-absolute-percentage errors, from 14-21% to 10-12%. However, holding power predicted from low insertion torque remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents
ERIC Educational Resources Information Center
Gwanyama, Philip Wagala
2005-01-01
The Kolmogorov-Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
Petrowski, Katja; Kliem, Sören; Sadler, Michael; Meuret, Alicia E; Ritz, Thomas; Brähler, Elmar
2018-02-06
Demands placed on individuals in occupational and social settings, as well as imbalances in personal traits and resources, can lead to chronic stress. The Trier Inventory for Chronic Stress (TICS) measures chronic stress while incorporating domain-specific aspects, and has been found to be a highly reliable and valid research tool. The aims of the present study were to confirm the factorial structure of the German TICS in an English translation of the instrument (TICS-E) and to report its psychometric properties. A random route sample of healthy participants (N = 483) aged 18-30 years completed the TICS-E. Robust maximum likelihood estimation with a mean-adjusted chi-square test statistic was applied due to the sample's significant deviation from the multivariate normal distribution. Goodness of fit, absolute model fit, and relative model fit were assessed by means of the root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMR), the Comparative Fit Index (CFI), and the Tucker Lewis Index (TLI). Reliability estimates (Cronbach's α and adjusted split-half reliability) ranged from .84 to .92. Item-scale correlations ranged from .50 to .85. Measures of fit showed values of .052 for RMSEA (CI = .050-.054) and .067 for SRMR for absolute model fit, and values of .846 (TLI) and .855 (CFI) for relative model fit. Factor loadings ranged from .55 to .91. The psychometric properties and factor structure of the TICS-E are comparable to the German version of the TICS. The instrument therefore meets quality standards for an adequate measurement of chronic stress.
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Schrijer, F. F. J.; van Oudheusden, B. W.
2018-04-01
Pseudo-tracking refers to the construction of imaginary particle paths from PIV velocity fields and the subsequent estimation of the particle (material) acceleration. In view of the variety of existing and possible alternative ways to perform the pseudo-tracking method, it is not straightforward to select a suitable combination of numerical procedures for its implementation. To address this situation, this paper extends the theoretical framework for the approach. The developed theory is verified by applying various implementations of pseudo-tracking to a simulated PIV experiment. The findings of the investigations allow us to formulate the following insights and practical recommendations: (1) the velocity errors along the imaginary particle track are primarily a function of velocity measurement errors and spatial velocity gradients; (2) the particle path may best be calculated with second-order accurate numerical procedures while ensuring that the CFL condition is met; (3) least-square fitting of a first-order polynomial is a suitable method to estimate the material acceleration from the track; and (4) a suitable track length may be selected on the basis of the variation in material acceleration with track length.
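Recommendations (2) and (3) above can be combined in a toy sketch: integrate the imaginary particle path with a second-order midpoint scheme and estimate the material acceleration as the slope of a first-order polynomial fitted to the velocity samples along the track. The analytic velocity field below is a stand-in for gridded PIV data.

```python
# Pseudo-tracking sketch on a synthetic 1D time-resolved velocity field.
import numpy as np

def u(x, t):
    # Synthetic velocity field (stand-in for interpolated PIV data).
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * x) + 0.1 * t

dt, n_steps = 1e-3, 21
x, t = 0.2, 0.0
track_t, track_u = [], []
for _ in range(n_steps):
    track_t.append(t)
    track_u.append(u(x, t))
    # Midpoint (RK2) step: second-order accurate, as recommended.
    x_mid = x + 0.5 * dt * u(x, t)
    x += dt * u(x_mid, t + 0.5 * dt)
    t += dt

# Least-squares fit of a first-order polynomial; its slope estimates Du/Dt.
accel, _ = np.polyfit(track_t, track_u, 1)
print(f"material acceleration estimate: {accel:.3f}")
```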
NASA Technical Reports Server (NTRS)
Lanyi, Gabor E.; Roth, Titus
1988-01-01
Total ionospheric electron contents (TEC) were measured by global positioning system (GPS) dual-frequency receivers developed by the Jet Propulsion Laboratory. The measurements included P-code (precise ranging code) and carrier phase data for six GPS satellites during multiple five-hour observing sessions. A set of these GPS TEC measurements was mapped from the GPS lines of sight to the line of sight of a Faraday beacon satellite by statistically fitting the TEC data to a simple model of the ionosphere. The mapped GPS TEC values were compared with the Faraday rotation measurements. Because GPS transmitter offsets are different for each satellite and because some GPS receiver offsets were uncalibrated, the sums of the satellite and receiver offsets were estimated simultaneously with the TEC in a least squares procedure. The accuracy of this estimation procedure is evaluated, indicating that the error of the GPS-determined line of sight TEC can be at or below 1 × 10¹⁶ el/cm². Consequently, the current level of accuracy is comparable to the Faraday rotation technique; however, GPS provides superior sky coverage.
Violato, Claudio; Gao, Hong; O'Brien, Mary Claire; Grier, David; Shen, E
2018-05-01
The distinction between basic sciences and clinical knowledge, which has led to a theoretical debate on how medical expertise develops, has implications for medical school and lifelong medical education. This longitudinal, population-based observational study was conducted to test the fit of three theories of the development of medical expertise (knowledge encapsulation, independent influence, and distinct domains) employing structural equation modelling. Data were collected from 548 physicians (292 men, 53.3%; 256 women, 46.7%; mean age = 24.2 years on admission) who graduated from medical school in 2009-2014. They included (1) admissions data, comprising undergraduate grade point average and Medical College Admission Test sub-test scores, (2) course performance data from years 1, 2, and 3 of medical school, and (3) performance on the NBME exams (i.e., Step 1, Step 2 CK, and Step 3). Statistical fit indices (Goodness of Fit Index, GFI; standardized root mean squared residual, SRMR; root mean squared error of approximation, RMSEA) and comparative fit statistics for the three theories of the cognitive development of medical expertise were used to assess model fit. There is support for the knowledge encapsulation three-factor model of clinical competency (GFI = 0.973, SRMR = 0.043, RMSEA = 0.063), which had superior fit indices to both the independent influence and distinct domains theories.
A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction
NASA Astrophysics Data System (ADS)
Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.
2017-03-01
There are two problems with the LS (Least Squares) + AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor under extrapolation; and the LS fitting residual sequence is non-linear, so it is unsuitable to establish an AR model for the residual sequence to be forecast from the residual sequence before the forecast epoch. In this paper, we solve these two problems in two steps. First, restrictions are added at the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Secondly, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the modeling object of the AR residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparison with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirms the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for polar motion forecasts at 1-10 days, show that the forecast accuracy of the proposed model can reach the world level.
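For orientation, a minimal unmodified LS+AR sketch (without the endpoint restrictions proposed in the paper) looks as follows; the synthetic series, harmonic period, and AR order are illustrative choices.

```python
# LS+AR sketch: least-squares trend-plus-harmonic fit, then an AR model on
# the fitting residuals via Yule-Walker, combined for the forecast.
import numpy as np

rng = np.random.default_rng(6)
n, period = 400, 50.0
t = np.arange(n, dtype=float)
series = 0.01 * t + 2.0 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.2, n)

# 1. LS part: linear trend + one harmonic.
X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
beta, *_ = np.linalg.lstsq(X, series, rcond=None)
resid = series - X @ beta

# 2. AR(p) on the residuals via the Yule-Walker equations.
p = 3
r = np.array([np.dot(resid[:n - k], resid[k:]) / n for k in range(p + 1)])
phi = np.linalg.solve(np.array([[r[abs(i - j)] for j in range(p)]
                                for i in range(p)]), r[1:])

# 3. Forecast: LS extrapolation plus recursive AR residual prediction.
horizon = 10
hist = list(resid[-p:])
forecast = []
for h in range(1, horizon + 1):
    r_hat = float(np.dot(phi, hist[::-1][:p]))   # most recent values first
    hist.append(r_hat)
    tf = n - 1 + h
    ls_part = beta @ np.array([1.0, tf,
                               np.sin(2 * np.pi * tf / period),
                               np.cos(2 * np.pi * tf / period)])
    forecast.append(ls_part + r_hat)
print(np.round(forecast, 3))
```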
Application of recursive approaches to differential orbit correction of near Earth asteroids
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria
2016-10-01
Comparison of three approaches to the differential orbit correction of celestial bodies was performed: batch least squares fitting, Kalman filtering, and recursive least squares filtering. The first two techniques are well known and widely used (Montenbruck & Gill, 2000). Most attention is paid to the algorithm and the details of the program realization of the recursive least squares filter. The filter's algorithm was derived based on recursive least squares techniques that are widely used in data processing applications (Simon, 2006). Using a recursive least squares filter makes it possible to process a new set of observational data without reprocessing data that has already been processed. A specific feature of this approach is that the number of observations in each data set may be variable. This makes the recursive least squares filter more flexible than batch least squares (which processes the complete set of observations in each iteration) and Kalman filtering (which assumes the state vector is updated at each epoch with new measurements). The advantages of the proposed approach are demonstrated by processing real astrometric observations of near Earth asteroids. The case of 2008 TC3, which was discovered just before its impact with Earth, was studied. There are many closely spaced observations of 2008 TC3 in the interval between discovery and impact, which creates favorable conditions for the use of recursive approaches. All three approaches achieve very similar precision in the case of 2008 TC3; at the same time, the recursive least squares approach has much higher performance, making it more favorable for orbit fitting of a celestial body detected shortly before a collision or close approach to the Earth. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, Project no. 14-22-00197. References: O. Montenbruck and E. Gill, "Satellite Orbits: Models, Methods and Applications," Springer-Verlag, 2000, pp. 1-369. D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," 1st edition. Hoboken, NJ: Wiley-Interscience, 2006.
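The batch-recursive idea can be sketched in information form: each new observation set, whatever its size, is folded into the accumulated normal equations, so earlier data never needs to be reprocessed. The problem dimensions, batch sizes, and noise level below are illustrative assumptions, not the paper's implementation.

```python
# Batch-recursive least squares in information (normal-equation) form.
import numpy as np

rng = np.random.default_rng(7)
n = 4
x_true = rng.normal(size=n)

N = 1e-9 * np.eye(n)    # tiny prior keeps N invertible before enough data
y = np.zeros(n)         # accumulated right-hand side H^T z

for batch in range(5):
    m = int(rng.integers(3, 30))        # variable number of observations
    H = rng.normal(size=(m, n))
    z = H @ x_true + rng.normal(0, 0.01, m)
    N += H.T @ H                        # information update only
    y += H.T @ z
    x_hat = np.linalg.solve(N, y)       # current best estimate
    err = np.linalg.norm(x_hat - x_true)
    print(f"batch {batch}: m = {m:2d}, error = {err:.5f}")
```

Keeping the running pair (N, y) instead of a covariance matrix is what lets each batch be absorbed in one rank-m update, which matches the variable-batch flexibility emphasized in the abstract.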
NASA Astrophysics Data System (ADS)
Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin
2014-06-01
This article uses methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. This new algorithm was developed to overcome the subjectivity of the manual categorization of scale data for landslide conditioning factors, and to predict a rainfall-induced susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to find the best classification fit for each conditioning factor and then to combine it with logistic regression (LR). The LR model was used to find the coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations had been extracted in a previous study using the nearest neighbour index (NNI), and was used to identify the range of clustered landslide locations. The clustered locations were used as model training data together with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between the conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model's reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping and provides a valuable scientific basis for spatial decision making in planning and urban management studies.
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.
Analysis of Learning Curve Fitting Techniques.
1987-09-01
[Only extraction fragments of this report survive: reference-list entries (15. Neter, J., et al., Applied Linear Regression Models, Homewood, IL: Irwin; 16. SAS User's Guide: Basics, Version 5 Edition, SAS Institute) and text noting that random errors are assumed to be normally distributed in ordinary least squares (citing Johnston), that lot costs are estimated by the improvement-curve formula, and that Neter et al. give a more detailed explanation of the ordinary least-squares technique.]
ERIC Educational Resources Information Center
Knol, Dirk L.; ten Berge, Jos M. F.
An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977). It is shown that the minimization problem…
Nutritional Status of Rural Older Adults Is Linked to Physical and Emotional Health.
Jung, Seung Eun; Bishop, Alex J; Kim, Minjung; Hermann, Janice; Kim, Giyeon; Lawrence, Jeannine
2017-06-01
Although nutritional status is influenced by multidimensional aspects encompassing physical and emotional well-being, there is limited research on this complex relationship. The purpose of this study was to examine the interplay between indicators of physical health (perceived health status and self-care capacity) and emotional well-being (depressive affect and loneliness) on rural older adults' nutritional status. The cross-sectional study was conducted from June 1, 2007, to June 1, 2008. A total of 171 community-dwelling older adults, aged 65 years and older, residing within nonmetro rural communities in the United States participated in this study. Participants completed validated instruments measuring self-care capacity, perceived health status, loneliness, depressive affect, and nutritional status. Structural equation modeling was employed to investigate the complex interplay of physical and emotional health status with nutritional status among rural older adults. The χ² test, comparative fit index, root mean square error of approximation, and standardized root mean square residual were used to assess model fit. The χ² test and the other model fit indexes showed the hypothesized structural equation model provided a good fit to the data (χ²(2)=2.15; P=0.34; comparative fit index=1.00; root mean square error of approximation=0.02; and standardized root mean square residual=0.03). Self-care capacity was significantly related with depressive affect (γ=-0.11; P=0.03), whereas self-care capacity was not significantly related with loneliness. Perceived health status had a significant negative relationship with both loneliness (γ=-0.16; P=0.03) and depressive affect (γ=-0.22; P=0.03). Although loneliness showed no significant direct relationship with nutritional status, it showed a significant direct relationship with depressive affect (β=0.40; P<0.01). Finally, the results demonstrated that depressive affect had a significant negative relationship with nutritional status (β=-0.30; P<0.01). The results indicated physical health and emotional indicators have significant multidimensional associations with nutritional status among rural older adults. The present study provides insights into the importance of addressing both physical and emotional well-being together to reduce potential effects of poor emotional well-being on nutritional status, particularly among rural older adults with impaired physical health and self-care capacity. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Yu, D; Beitler, J
Purpose: Xerostomia (dry mouth), secondary to parotid-gland injury, is a distressing side effect in head-and-neck radiotherapy (RT). This study's purpose is to develop a novel ultrasound technique to quantitatively evaluate post-RT parotid-gland injury. Methods: Recent ultrasound studies have shown that healthy parotid glands exhibit homogeneous echotexture, whereas post-RT parotid glands are often heterogeneous, with multiple hypoechoic (inflammation) or hyperechoic (fibrosis) regions. We propose to use a Gaussian mixture model to analyze the ultrasonic echo-histogram of the parotid glands. An IRB-approved clinical study was conducted with (1) a control group of 13 healthy volunteers; (2) an acute-toxicity group of 20 patients (mean age: 62.5 ± 8.9 years; follow-up: 2.0 ± 0.8 months); and (3) a late-toxicity group of 18 patients (mean age: 60.7 ± 7.3 years; follow-up: 20.1 ± 10.4 months). All patients experienced RTOG grade 1 or 2 salivary-gland toxicity. Each participant underwent an ultrasound scan (10 MHz) of the bilateral parotid glands. An echo-intensity histogram was derived for each parotid gland, and a Gaussian mixture model was fitted to the histogram using the expectation-maximization (EM) algorithm. The quality of the fit was evaluated with the R-squared value. Results: (1) Control group: all parotid glands fitted well with one Gaussian component, with a mean intensity of 79.8 ± 4.9 (R-squared > 0.96). (2) Acute-toxicity group: 37 of the 40 post-RT parotid glands fitted well with two Gaussian components, with mean intensities of 42.9 ± 7.4 and 73.3 ± 12.2 (R-squared > 0.95). (3) Late-toxicity group: 32 of the 36 post-RT parotid glands fitted well with three Gaussian components, with mean intensities of 49.7 ± 7.6, 77.2 ± 8.7, and 118.6 ± 11.8 (R-squared > 0.98). Conclusion: RT-associated parotid-gland injury is common in head-and-neck RT but challenging to assess. This work demonstrates that a Gaussian mixture model of the echo-histogram can quantify acute and late toxicity of the parotid glands, providing meaningful preliminary data for future observational and interventional clinical research.
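A minimal expectation-maximization sketch for a one-dimensional Gaussian mixture of the kind described, fit to synthetic intensity samples rather than clinical echo data:

```python
import numpy as np

def fit_gmm_1d(x, k=2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k)                 # initial means
    sigma = np.full(k, x.std())                # initial standard deviations
    w = np.full(k, 1.0 / k)                    # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = w * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sigma

# toy data mimicking a heterogeneous (two-population) echo-intensity sample
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(43, 7, 4000), rng.normal(73, 12, 6000)])
print(fit_gmm_1d(x))  # weights, means, sigmas near (0.4, 0.6), (43, 73), (7, 12)
```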
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
Liu, Zifeng; Yuan, Lianxiong; Huang, Yixiang; Zhang, Lingling; Luo, Futian
2016-01-01
Objective: We aimed to develop a questionnaire for quantitative evaluation of the autonomy of public hospitals in China. Methods: An extensive literature review was conducted to select possible items for inclusion in the questionnaire, which was then reviewed by 5 experts. After a two-round Delphi process, we distributed the questionnaire to 404 secondary and tertiary hospitals in Guangdong Province, China, and 379 completed questionnaires were collected. The final questionnaire was then developed on the basis of the results of exploratory and confirmatory factor analysis. Results: Analysis suggested that all internal consistency reliabilities exceeded the minimum reliability standard of 0.70 for the α coefficient. The overall scale coefficient was 0.87, and the 6 subscale coefficients were 0.92 (strategic management), 0.81 (budget and expenditure), 0.85 (financing), 0.75 (medical management), 0.86 (human resources), and 0.86 (accountability). Correlation coefficients between and among items and their hypothesised subscales were higher than those with other subscales. The average variance extracted (AVE) was higher than 0.5, the construct reliability (CR) was higher than 0.7, and the square roots of the AVE of each subscale were larger than the correlations of that subscale with the other subscales, supporting the convergent and discriminant validity of the Chinese version of the Hospital Autonomy Questionnaire (CVHAQ). The model fit indices were all acceptable: χ²/df = 1.73; Goodness of Fit Index (GFI) = 0.93; Adjusted Goodness of Fit Index (AGFI) = 0.91; Non-Normed Fit Index (NNFI) = 0.96; Comparative Fit Index (CFI) = 0.97; Root Mean Square Error of Approximation (RMSEA) = 0.04; Standardised Root Mean Square Residual (SRMR) = 0.07. Conclusions: This study demonstrated the reliability and validity of the CVHAQ and provides a quantitative method for the assessment of hospital autonomy. PMID:26911587
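A small sketch of the internal-consistency statistic reported above, Cronbach's α, computed from an illustrative respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.standard_normal((379, 1))                # shared trait
items = latent + 0.8 * rng.standard_normal((379, 6))  # 6 noisy items on that trait
print(round(cronbach_alpha(items), 2))                # well above 0.70 here
```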
The Effect of a History-Fitness Updating Rule on Evolutionary Games
NASA Astrophysics Data System (ADS)
Du, Wen-Bo; Cao, Xian-Bin; Liu, Run-Ran; Jia, Chun-Xiao
In this paper, we introduce a history-fitness-based updating rule into the evolutionary prisoner's dilemma game (PDG) on square lattices and study how it affects the evolution of the cooperation level. Under this updating rule, player i first selects a player j from its direct neighbors at random and then compares their fitness, which is determined by the current payoff and the history fitness. If player i's fitness is larger than that of j, player i will be more likely to keep its own strategy. Numerical results show that the cooperation level is remarkably promoted by the history-fitness-based updating rule. Moreover, there exists a moderate mixing proportion of current payoff and history fitness that induces the optimal fitness, where the highest cooperation level is obtained. Our work may shed some new light on the ubiquitous cooperative behaviors in nature and society induced by the history factor.
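A toy Python sketch of this updating rule on a periodic square lattice; the mixing of current payoff and history fitness follows the abstract, while the Fermi-type adoption probability, the weak-PDG payoffs, and all parameter values are illustrative assumptions:

```python
import numpy as np

L, w, K = 50, 0.5, 0.1                  # lattice size, history weight, noise
b = 1.05                                # temptation payoff of the weak PDG
rng = np.random.default_rng(0)
strat = rng.integers(0, 2, (L, L))      # 1 = cooperate, 0 = defect
fitness = np.zeros((L, L))              # accumulated "history fitness"

def payoff(s):
    """Total PDG payoff against the four von Neumann neighbours."""
    p = np.zeros((L, L))
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(s, shift, axis=(0, 1))
        p += np.where(s == 1, np.where(nb == 1, 1.0, 0.0),
                      np.where(nb == 1, b, 0.0))
    return p

for step in range(200):
    fitness = (1 - w) * payoff(strat) + w * fitness   # mix payoff and history
    i, j = rng.integers(0, L, 2)
    di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    ni, nj = (i + di) % L, (j + dj) % L               # random neighbour
    # player (i, j) adopts the neighbour's strategy with Fermi probability
    if rng.random() < 1 / (1 + np.exp((fitness[i, j] - fitness[ni, nj]) / K)):
        strat[i, j] = strat[ni, nj]
print("cooperation level:", strat.mean())
```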
Gambarota, Giulio; Hitti, Eric; Leporq, Benjamin; Saint-Jalmes, Hervé; Beuf, Olivier
2017-01-01
Tissue perfusion measurements using intravoxel incoherent motion (IVIM) diffusion-MRI are of interest for investigations of liver pathologies. A confounding factor in the perfusion quantification is the partial volume between liver tissue and large blood vessels. The aim of this study was to assess and correct for this partial volume effect in the estimation of the perfusion fraction. MRI experiments were performed at 3 Tesla with a diffusion-MRI sequence at 12 b-values. Diffusion signal decays in liver were analyzed using the non-negative least squares (NNLS) method and the biexponential fitting approach. In some voxels, the NNLS analysis yielded a very fast-decaying component that was assigned to partial volume with the blood flowing in large vessels. Partial volume correction was performed by biexponential curve fitting in which the first data point (b = 0 s/mm²) was eliminated in voxels with a very fast-decaying component. Biexponential fitting with partial volume correction yielded parametric maps with smaller perfusion fraction values than biexponential fitting without partial volume correction. The results of the current study indicate that the NNLS analysis, in combination with biexponential curve fitting, makes it possible to correct for partial volume effects originating from blood flow in IVIM perfusion fraction measurements. Magn Reson Med 77:310-317, 2017. © 2016 Wiley Periodicals, Inc.
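A hedged sketch of the partial-volume-corrected biexponential fit described above, using a standard IVIM signal model and synthetic data (the b-values, bounds, and noise level are assumptions, not the paper's protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    """Biexponential IVIM signal: perfusion fraction f, pseudo-diffusion d_star."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b = np.array([0, 10, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000.0])  # s/mm^2
rng = np.random.default_rng(0)
signal = ivim(b, 1.0, 0.25, 0.05, 0.001) * (1 + 0.01 * rng.standard_normal(b.size))

p0 = (1.0, 0.2, 0.05, 0.001)
bounds = ([0, 0, 0.003, 0], [2, 1, 1, 0.003])   # keep d_star and d on separate scales
# partial-volume correction: exclude the first point (b = 0) from the fit
popt, _ = curve_fit(ivim, b[1:], signal[1:], p0=p0, bounds=bounds)
print("perfusion fraction f =", round(popt[1], 3))
```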
McHugh Power, Joanna; Carney, Sile; Hannigan, Caoimhe; Brennan, Sabina; Wolfe, Hannah; Lynch, Marina; Kee, Frank; Lawlor, Brian
2016-11-01
Potential associations between systemic inflammation and social support received by a sample of 120 older adults were examined here. Inflammatory markers, cognitive function, social support, and psychosocial wellbeing were evaluated. A structural equation modelling approach was used to analyse the data. The model was a good fit ([Formula: see text], p < 0.001; comparative fit index = 0.973; Tucker-Lewis Index = 0.962; root mean square error of approximation = 0.021; standardised root mean-square residual = 0.074). Chemokine levels were associated with increased age (β = 0.276), receipt of less social support from friends (β = -0.256), and body mass index (β = -0.256). Results are discussed in relation to social signal transduction theory.
NASA Astrophysics Data System (ADS)
Wang, Yan-Jun; Liu, Qun
1999-03-01
Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distances (LMD), which is insensitive to abnormal values in the dependent and independent variables of a regression analysis. Outliers that have a significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with the identified outliers down-weighted. The application of LMD and the LMD-based reweighted least squares (RLS) method to simulated and real fisheries SR data is explored.
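A compact sketch of a least-median-of-squares line fit by random pair sampling, a common way to approximate such robust estimators; the paper's LMD variant minimizes the median squared orthogonal distance rather than the vertical-residual version shown here:

```python
import numpy as np

def lms_line(x, y, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    best = (None, None, np.inf)
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        med = np.median((y - slope * x - intercept) ** 2)  # median squared residual
        if med < best[2]:
            best = (slope, intercept, med)
    return best[:2]

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = 2.0 * x + 1.0 + 0.3 * rng.standard_normal(60)
y[:6] += 15                      # gross outliers that would bias ordinary LS
print(lms_line(x, y))            # close to (2.0, 1.0) despite the outliers
```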
ERIC Educational Resources Information Center
Roberts, James S.
Stone and colleagues (C. Stone, R. Ankenman, S. Lane, and M. Liu, 1993; C. Stone, R. Mislevy and J. Mazzeo, 1994; C. Stone, 2000) have proposed a fit index that explicitly accounts for the measurement error inherent in an estimated theta value, here called χ²ᵢ*. The elements of this statistic are natural…
Macro-microscopic mass formulae and nuclear mass predictions
NASA Astrophysics Data System (ADS)
Royer, G.; Guilbaud, M.; Onillon, A.
2010-12-01
Different mass formulae derived from the liquid drop model and the pairing and shell energies of the Thomas-Fermi model have been studied and compared. They include or exclude the diffuseness correction to the Coulomb energy, the charge exchange correction term, the curvature energy, different forms of the Wigner term, and powers of the relative neutron excess I = (N-Z)/A. Their coefficients have been determined by a least-squares fit to 2027 experimental atomic masses (G. Audi et al. (2003) [1]). The Coulomb diffuseness correction term or the charge exchange correction term plays the main role in improving the accuracy of the mass formula. The Wigner term and the curvature energy can also be used separately, but their coefficients are very unstable. The different fits lead to a surface energy coefficient of around 17-18 MeV. A large equivalent rms radius (r = 1.22-1.24 fm) or a shorter central radius may be used. An rms deviation of 0.54 MeV between the experimental and theoretical masses can be reached. The remaining differences probably come mainly from the determination of the shell and pairing energies. The mass predictions of selected expressions have been compared with 161 new experimental masses, and the good agreement allows extrapolations to the masses of 656 selected exotic nuclei to be provided.
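Because a liquid-drop-type binding energy is linear in its unknown coefficients, the fit reduces to ordinary linear least squares. A schematic Python version with a simplified four-term formula and synthetic data (the terms and coefficient values are textbook-like illustrations, not the authors' full model):

```python
import numpy as np

def design_matrix(Z, N):
    A = Z + N
    I = (N - Z) / A
    return np.column_stack([
        A,                                # volume term
        A ** (2.0 / 3.0),                 # surface term
        Z * (Z - 1) / A ** (1.0 / 3.0),   # Coulomb term
        I ** 2 * A,                       # asymmetry term
    ])

# toy "experimental" binding energies generated from assumed coefficients
rng = np.random.default_rng(0)
Z = rng.integers(20, 90, 300)
N = Z + rng.integers(0, 40, 300)
true_coef = np.array([15.8, -18.3, -0.71, -23.2])   # MeV, textbook-like values
B = design_matrix(Z, N) @ true_coef + 0.5 * rng.standard_normal(300)

coef, res, *_ = np.linalg.lstsq(design_matrix(Z, N), B, rcond=None)
rms = np.sqrt(np.mean((design_matrix(Z, N) @ coef - B) ** 2))
print(coef, "rms deviation (MeV):", round(rms, 2))
```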
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boersma, C.; Mattioda, A. L.; Allamandola, L. J.
A significantly updated version of the NASA Ames PAH IR Spectroscopic Database, the first major revision since its release in 2010, is presented. The current version, version 2.00, contains 700 computational and 75 experimental spectra compared, respectively, with 583 and 60 in the initial release. The spectra span the 2.5-4000 μm (4000-2.5 cm⁻¹) range. New tools are available on the site that allow one to analyze spectra in the database and compare them with imported astronomical spectra, as well as a suite of IDL object classes (a collection of programs utilizing IDL's object-oriented programming capabilities) that permit offline analysis, called the AmesPAHdbIDLSuite. Most noteworthy among the additions are the extension of the computational spectroscopic database to include a number of significantly larger polycyclic aromatic hydrocarbons (PAHs), the ability to visualize the molecular atomic motions corresponding to each vibrational mode, and a new tool that allows one to perform a non-negative least-squares fit of an imported astronomical spectrum with PAH spectra in the computational database. Finally, a methodology is described in the Appendix, and implemented using the AmesPAHdbIDLSuite, that allows the user to enforce charge balance during the fitting procedure.
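A hedged sketch of the kind of non-negative least-squares decomposition such a tool performs, with a synthetic spectral database standing in for the real one:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_wavelengths, n_species = 500, 20
database = np.abs(rng.standard_normal((n_wavelengths, n_species)))  # columns = spectra

true_weights = np.zeros(n_species)
true_weights[[2, 7, 11]] = [0.5, 1.2, 0.3]          # only a few species present
observed = database @ true_weights + 0.01 * rng.standard_normal(n_wavelengths)

weights, residual_norm = nnls(database, observed)    # enforce weights >= 0
print(np.nonzero(weights > 0.05)[0], round(residual_norm, 3))
```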
Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management
Di Benedetto, Alessandro; Fiani, Margherita
2017-01-01
The evaluation of the structural efficiency of airport infrastructures is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Nowadays, the assessment of faulting is performed with laborious and time-consuming measurements that strongly hinder aircraft traffic. We propose a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart to identify and quantify the fault size at each joint of apron slabs. The total point cloud was used to compute the least-squares plane fitting those points, and the best-fit plane for each slab was computed as well. The attitude of each slab plane with respect to both the adjacent ones and the apron reference plane was determined from the normal vectors to the surfaces. Faulting was evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we then considered a few strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities was computed as well. PMID:29278386
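A standard way to compute such a best-fit plane is via the point-cloud centroid and the singular value decomposition; a short sketch with synthetic points (the slab geometry and noise are illustrative):

```python
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the least-squares plane through points (N x 3)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return centroid, normal

rng = np.random.default_rng(0)
xy = rng.uniform(0, 5, (1000, 2))
z = 0.02 * xy[:, 0] - 0.01 * xy[:, 1] + 0.001 * rng.standard_normal(1000)
cloud = np.column_stack([xy, z])

c, n = fit_plane(cloud)
# faulting between adjacent slabs could then be read off as the elevation
# difference of their fitted planes evaluated along the shared joint line
print("normal:", np.round(n, 4))
```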
Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada
NASA Technical Reports Server (NTRS)
Swayze, Gregg; Clark, Roger N.; Kruse, Fred; Sutley, Steve; Gallagher, Andrea
1992-01-01
Mineral abundance maps of 18 minerals were made of the Cuprite Mining District using 1990 AVIRIS data and the Multiple Spectral Feature Mapping Algorithm (MSFMA) as discussed in Clark et al. This technique uses least-squares fitting between a scaled laboratory reference spectrum and ground calibrated AVIRIS data for each pixel. Multiple spectral features can be fitted for each mineral and an unlimited number of minerals can be mapped simultaneously. Quality of fit and depth from continuum numbers for each mineral are calculated for each pixel and the results displayed as a multicolor image.
NASA Astrophysics Data System (ADS)
Lockwood, M.; Owens, M. J.; Barnard, L.; Usoskin, I. G.
2016-11-01
We use sunspot-group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups [RB] above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of RB are then re-scaled to the full observed RGO group number [RA] using a variety of regression techniques. It is found that a very high correlation between RA and RB (r_{AB} > 0.98) does not prevent large errors in the intercalibration (for example, sunspot-maximum values can be over 30% too large even for such levels of r_{AB}). In generating the backbone sunspot number [R_{BB}], Svalgaard and Schatten (Solar Phys., 2016) force regression fits to pass through the scatter-plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot-cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of quantile-quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least-squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method is shown to be different when matching peak and average sunspot-group numbers). However, other fits are reliable only if non-linear regression is used. From these results it is entirely possible that the inflation of solar-cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is entirely caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
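A small synthetic demonstration of the central point, that forcing a regression through the origin can bias a calibration, with a Q-Q-style normality check on the residuals (the data and the true offset are illustrative assumptions, not the RGO series):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r_b = rng.uniform(1, 12, 80)                             # simulated low-acuity counts
r_a = 1.2 * r_b + 1.5 + 0.4 * rng.standard_normal(80)    # true relation has an offset

# fit through the origin: slope = sum(xy) / sum(x^2)
slope0 = np.sum(r_a * r_b) / np.sum(r_b ** 2)
# ordinary least squares with an intercept
slope, intercept = np.polyfit(r_b, r_a, 1)

resid0 = r_a - slope0 * r_b
resid = r_a - (slope * r_b + intercept)
# probplot's third fit statistic is near 1 when residuals are near-normal
print("forced-origin residual normality r:", round(stats.probplot(resid0)[1][2], 3))
print("with-intercept residual normality r:", round(stats.probplot(resid)[1][2], 3))
```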
ERIC Educational Resources Information Center
Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.
In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
[Development of a Questionnaire Measuring Sexual Mental Health of Tibetan University Students].
Chen, Jun-cheng; Yan, Yu-ruo; Ai, Li; Guo, Xue-hua; He, Jian-xiu; Yuan, Ping
2016-05-01
To develop a questionnaire measuring the sexual mental health of Tibetan university students, a draft questionnaire was developed with reference to the Sexual Civilization Survey for University Students of New Century and other published literature, and in consultation with experts. The questionnaire was tested in 230 students. Exploratory factor analyses with principal components and varimax orthogonal rotation were performed. Common factors with eigenvalues > 1 and ≥ 3 loaded items (factor loading ≥ 0.4) were retained. Items with a factor loading < 0.4, a commonality < 0.2, or falling into a common factor with < 3 items were excluded. The revised questionnaire was administered to another sample of 481 university students. Cronbach's α and split-half reliabilities were estimated, and confirmatory factor analyses were performed to test the construct validity of the questionnaire. Four rounds of exploratory factor analyses reduced the draft questionnaire from 39 items to 34, with a 7-factor structure. The questionnaire had Cronbach's α values of 0.920, 0.898, 0.812, 0.844, 0.787, 0.684, 0.703, and 0.608, and Spearman-Brown coefficients of 0.763, 0.867, 0.742, 0.838, 0.746, 0.822, 0.677, and 0.564 for the overall questionnaire and its 7 domains, respectively, suggesting good internal reliability. The structural equation model of the confirmatory factor analysis fitted the raw data well: χ²/df = 3.736; root mean square residual (RMR) = 0.081; root mean square error of approximation (RMSEA) = 0.076; goodness of fit index (GFI) = 0.805; adjusted goodness of fit index (AGFI) = 0.770; normed fit index (NFI) = 0.774; relative fit index (RFI) = 0.749; incremental fit index (IFI) = 0.824; non-normed fit index (NNFI) = 0.803; comparative fit index (CFI) = 0.823; parsimony goodness of fit index (PGFI) = 0.684; parsimony normed fit index (PNFI) = 0.698; parsimony comparative fit index (PCFI) = 0.742, suggesting good construct validity. The Sexual Mental Health Questionnaire for Tibetan University Students has demonstrated good reliability and validity.
From direct-space discrepancy functions to crystallographic least squares.
Giacovazzo, Carmelo
2015-01-01
Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach also leads to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of the unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach described here includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.
Castine Report S-15 Project: Shipbuilding Standards
1976-01-01
[Only extraction fragments of this shipbuilding-standards report survive: a list of standard ship outfitting items, including fixed square windows, extruded aluminium alloy square windows, foot steps, wooden hand rails, pilot ladders, Panama Canal pilot platforms, aluminium alloy accommodation ladders, mouthpieces for voice tubes, chain-drive type telegraphs, fittings for steam whistles, radial-type lifeboats, and cast steel angle valves for compressed air, together with standard numbers in the range F 8001-1957 through F 8401-1970.]
NASA Astrophysics Data System (ADS)
Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.
2005-02-01
A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance-sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation, or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and is relatively easy to implement.
Uncertainty quantification for constitutive model calibration of brain tissue.
Brewick, Patrick T; Teferra, Kirubel
2018-05-31
The results of a study comparing model calibration techniques for Ogden's constitutive model, which describes the hyperelastic behavior of brain tissue, are presented. One- and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least-squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures that the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding a term to the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
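A hedged sketch of the least-squares side of such a calibration for a one-term incompressible Ogden model; the nominal-stress expression is the standard textbook form, and the data and parameter values are synthetic illustrations, not the study's brain-tissue measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def ogden_uniaxial(stretch, mu, alpha):
    """One-term Ogden nominal stress for incompressible uniaxial loading."""
    return (2 * mu / alpha) * (stretch ** (alpha - 1) - stretch ** (-alpha / 2 - 1))

stretch = np.linspace(1.02, 1.4, 25)
rng = np.random.default_rng(0)
stress = ogden_uniaxial(stretch, 1.0e3, -4.7) * (1 + 0.05 * rng.standard_normal(25))

popt, pcov = curve_fit(ogden_uniaxial, stretch, stress, p0=(500.0, -2.0))
perr = np.sqrt(np.diag(pcov))        # 1-sigma uncertainties on (mu, alpha)
print("mu, alpha =", popt, "+/-", perr)
```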
Bagheri Hosseinabadi, Majid; Etemadinezhad, Siavash; Khanjani, Narges; Ahmadi, Omran; Gholinia, Hemat; Galeshi, Mina; Samaei, Seyed Ehsan
2018-01-01
Background: This study was designed to investigate job satisfaction and its relation to perceived job stress among hospital nurses in Babol County, Iran. Methods: This cross-sectional study was conducted on 406 female nurses in 6 Babol hospitals. Respondents completed the Minnesota Satisfaction Questionnaire (MSQ), the Health and Safety Executive (HSE) indicator tool, and a demographic questionnaire. Descriptive, analytical, and structural equation modeling (SEM) analyses were carried out using SPSS v. 22 and AMOS v. 22. Results: The Normed Fit Index (NFI), Non-normed Fit Index (NNFI), Incremental Fit Index (IFI), and Comparative Fit Index (CFI) were greater than 0.9. The goodness of fit index (GFI = 0.99) and adjusted goodness of fit index (AGFI) were greater than 0.8, and the root mean square error of approximation (RMSEA) was 0.04, so the model was found to have an appropriate fit. The R-squared was 0.42 for job satisfaction, and all its dimensions were related to job stress; the dimensions of job stress explained 42% of the variance of job satisfaction. There were significant relationships between job satisfaction and the job-stress dimensions of demand (β = 0.173, CI = 0.095-0.365, P ≤ 0.001), control (β = 0.135, CI = 0.062-0.404, P = 0.008), relationships (β = -0.208, CI = -0.637 to -0.209, P ≤ 0.001), and changes (β = 0.247, CI = 0.360-1.026, P ≤ 0.001). Conclusion: One important intervention to increase job satisfaction among nurses may be improvement of the workplace. Reducing workload in order to improve job demand and minimizing role conflict by reducing conflicting demands are recommended. PMID:29744305
Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F
2003-03-20
Evaluation schemes, e.g., least-squares fitting, are not generally applicable to all types of experiments. If an evaluation scheme is not derived from a measurement model that properly describes the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data can result. We outline ways in which statistical data evaluation schemes should be derived for all types of experiment, and we demonstrate them for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation comprises the following steps. First, one has to provide a measurement model that considers the statistical variation of all enclosed variables. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that its accuracy and precision be as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared with well-established schemes in terms of fitting power and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.
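An illustrative case where a scheme derived from the measurement model differs from least squares: Poisson-distributed counts fitted by direct minimization of the negative log-likelihood (the single-exponential model and all values are assumptions for demonstration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
counts = rng.poisson(200 * np.exp(-t / 2.5))     # histogram-like count data

def neg_log_likelihood(params):
    amp, tau = params
    mu = amp * np.exp(-t / tau)                  # model mean of each bin
    return np.sum(mu - counts * np.log(mu))      # Poisson NLL up to a constant

result = minimize(neg_log_likelihood, x0=(100.0, 1.0),
                  bounds=((1e-6, None), (1e-6, None)))
print("amplitude, lifetime =", result.x)
```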
Mathcad in the Chemistry Curriculum (Symbolic Software in the Chemistry Curriculum)
NASA Astrophysics Data System (ADS)
Zielinski, Theresa Julia
2000-05-01
Physical chemistry is such a broad discipline that the topics we expect average students to complete in two semesters usually exceed their ability for meaningful learning. Consequently, the number and kind of topics and the efficiency with which students can learn them are important concerns. What topics are essential and what can we do to provide efficient and effective access to those topics? How do we accommodate the fact that students come to upper-division chemistry courses with a variety of nonuniformly distributed skills, a bit of calculus, and some physics studied one or more years before physical chemistry? The critical balance between depth and breadth of learning in courses and curricula may be achieved through appropriate use of technology and especially through the use of symbolic mathematics software. Software programs such as Mathcad, Mathematica, and Maple, however, have learning curves that diminish their effectiveness for novices. There are several ways to address the learning curve conundrum. First, basic instruction in the software provided during laboratory sessions should be followed by requiring laboratory reports that use the software. Second, one should assign weekly homework that requires the software and builds student skills within the discipline and with the software. Third, a complementary method, supported by this column, is to provide students with Mathcad worksheets or templates that focus on one set of related concepts and incorporate a variety of features of the software that they are to use to learn chemistry. In this column we focus on two significant topics for young chemists. The first is curve-fitting and the statistical analysis of the fitting parameters. The second is the analysis of the rotation/vibration spectrum of a diatomic molecule, HCl. A broad spectrum of Mathcad documents exists for teaching chemistry. One collection of 50 documents can be found at http://www.monmouth.edu/~tzielins/mathcad/Lists/index.htm. Another collection of peer-reviewed documents is developing through this column at the JCE Internet Web site, http://jchemed.chem.wisc.edu/JCEWWW/Features/McadInChem/index.html. With this column we add three peer-reviewed and tested Mathcad documents to the JCE site. In Linear Least-Squares Regression, Sidney H. Young and Andrzej Wierzbicki demonstrate various implicit and explicit methods for determining the slope and intercept of the regression line for experimental data. The document shows how to determine the standard deviation for the slope, the intercept, and the standard deviation of the overall fit. Students are next given the opportunity to examine the confidence level for the fit through the Student's t-test. Examination of the residuals of the fit leads students to explore the possibility of rejecting points in a set of data. The document concludes with a discussion of and practice with adding a quadratic term to create a polynomial fit to a set of data and how to determine if the quadratic term is statistically significant. There is full documentation of the various steps used throughout the exposition of the statistical concepts. Although the statistical methods presented in this worksheet are generally accessible to average physical chemistry students, an instructor would be needed to explain the finer points of the matrix methods used in some sections of the worksheet. The worksheet is accompanied by a set of data for students to use to practice the techniques presented.
It would be worthwhile for students to spend one or two laboratory periods learning to use the concepts presented and then to apply them to experimental data they have collected for themselves. Any linear or linearizable data set would be appropriate for use with this Mathcad worksheet. Alternatively, instructors may select sections of the document suited to the skill level of their students and the laboratory tasks at hand. In a second Mathcad document, Non-Linear Least-Squares Regression, Young and Wierzbicki introduce the basic concepts of nonlinear curve-fitting and develop the techniques needed to fit a variety of mathematical functions to experimental data. This approach is especially important when mathematical models for chemical processes cannot be linearized. In Mathcad the Levenberg-Marquardt algorithm is used to determine the best fitting parameters for a particular mathematical model. As in linear least-squares, the goal of the fitting process is to find the values for the fitting parameters that minimize the sum of the squares of the deviations between the data and the mathematical model. Students are asked to determine the fitting parameters, use the Hessian matrix to compute the standard deviation of the fitting parameters, test for the significance of the parameters using Student's t-test, use residual analysis to test for data points to remove, and repeat the calculations for another set of data. The nonlinear least-squares procedure follows closely on the pattern set up for linear least-squares by the same authors (see above). If students master the linear least-squares worksheet content they will be able to master the nonlinear least-squares technique (see also refs 1, 2). In the third document, The Analysis of the Vibrational Spectrum of a Linear Molecule by Richard Schwenz, William Polik, and Sidney Young, the authors build on the concepts presented in the curve fitting worksheets described above. This vibrational analysis document, which supports a classic experiment performed in the physical chemistry laboratory, shows how a Mathcad worksheet can increase the efficiency by which a set of complicated manipulations for data reduction can be made more accessible for students. The increase in efficiency frees up time for students to develop a fuller understanding of the physical chemistry concepts important to the interpretation of spectra and understanding of bond vibrations in general. The analysis of the vibration/rotation spectrum for a linear molecule worksheet builds on the rich literature for this topic (3). Before analyzing their own spectral data, students practice and learn the concepts and methods of the HCl spectral analysis by using the fundamental and first harmonic vibrational frequencies provided by the authors. This approach has a fundamental pedagogical advantage. Most explanations in laboratory texts are very concise and lack mathematical details required by average students. This Mathcad worksheet acts as a tutor; it guides students through the essential concepts for data reduction and lets them focus on learning important spectroscopic concepts. The Mathcad worksheet is amply annotated. Students who have moderate skill with the software and have learned about regression analysis from the curve-fitting worksheets described in this column will be able to complete and understand their analysis of the IR spectrum of HCl. 
The three Mathcad worksheets described here stretch the physical chemistry curriculum by presenting important topics in forms that students can use with only moderate Mathcad skills. The documents facilitate learning by giving students opportunities to interact with the material in meaningful ways in addition to using the documents as sources of techniques for building their own data-reduction worksheets. However, working through these Mathcad worksheets is not a trivial task for the average student. Support needs to be provided by the instructor to ease students through more advanced mathematical and Mathcad processes. These worksheets raise the question of how much we can ask diligent students to do in one course and how much time they need to spend to master the essential concepts of that course. The Mathcad documents and associated PDF versions are available at the JCE Internet WWW site. The Mathcad documents require Mathcad version 6.0 or higher and the PDF files require Adobe Acrobat. Every effort has been made to make the documents fully compatible across the various Mathcad versions. Users may need to refer to Mathcad manuals for functions that vary with the Mathcad version number. Literature Cited 1. Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill: New York, 1969. 2. Zielinski, T. J.; Allendoerfer, R. D. J. Chem. Educ. 1997, 74, 1001. 3. Schwenz, R. W.; Polik, W. F. J. Chem. Educ. 1999, 76, 1302.
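For readers without Mathcad, the core of the linear least-squares worksheet described above translates directly into a few lines of Python (illustrative data; the matrix formulas are the standard ones the worksheet documents):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 3.0 * x + 2.0 + 0.5 * rng.standard_normal(20)

# straight-line fit with parameter standard deviations
X = np.column_stack([x, np.ones_like(x)])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = len(x) - 2
s2 = res[0] / dof                                   # residual variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))  # std devs of slope, intercept
print("slope, intercept:", beta, "+/-", se)

# is a quadratic term statistically significant? (Student's t-test)
Xq = np.column_stack([x ** 2, x, np.ones_like(x)])
bq, rq, *_ = np.linalg.lstsq(Xq, y, rcond=None)
seq = np.sqrt((rq[0] / (len(x) - 3)) * np.diag(np.linalg.inv(Xq.T @ Xq)))
t_stat = bq[0] / seq[0]
p = 2 * stats.t.sf(abs(t_stat), len(x) - 3)
print("quadratic term t =", round(t_stat, 2), "p =", round(p, 3))
```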
Improvements in prevalence trend fitting and incidence estimation in EPP 2013
Brown, Tim; Bao, Le; Eaton, Jeffrey W.; Hogan, Daniel R.; Mahy, Mary; Marsh, Kimberly; Mathers, Bradley M.; Puckett, Robert
2014-01-01
Objective: Describe modifications to the latest version of the Joint United Nations Programme on AIDS (UNAIDS) Estimation and Projection Package component of Spectrum (EPP 2013) to improve prevalence fitting and incidence trend estimation in national epidemics and global estimates of HIV burden. Methods: Key changes made under the guidance of the UNAIDS Reference Group on Estimates, Modelling and Projections include: availability of a range of incidence calculation models and guidance for selecting a model; a shift to reporting the Bayesian median instead of the maximum likelihood estimate; procedures for comparison and validation against reported HIV and AIDS data; incorporation of national surveys as an integral part of the fitting and calibration procedure, allowing survey trends to inform the fit; improved antenatal clinic calibration procedures in countries without surveys; adjustment of national antiretroviral therapy reports used in the fitting to include only those aged 15–49 years; better estimates of mortality among people who inject drugs; and enhancements to speed fitting. Results: The revised models in EPP 2013 allow closer fits to observed prevalence trend data and reflect improving understanding of HIV epidemics and associated data. Conclusion: Spectrum and EPP continue to adapt to make better use of the existing data sources, incorporate new sources of information in their fitting and validation procedures, and correct for quantifiable biases in inputs as they are identified and understood. These adaptations provide countries with better calibrated estimates of incidence and prevalence, which increase epidemic understanding and provide a solid base for program and policy planning. PMID:25406747
A User’s Manual for Fiber Diffraction: The Automated Picker and Huber Diffractometers
1990-07-01
[Only extraction fragments of this manual survive: a figure caption, "Layer line scan of degummed silk (Bombyx mori) showing layers 0 through 6" (intensity in arbitrary units); a note that if a fit is rejected, new values are supplied; and a description of smoothing and interpolation performed by a least-squares polynomial fit to segments of the data, for scans originally made at intervals larger than 0.010.]
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
NASA Astrophysics Data System (ADS)
Pronyaev, Vladimir G.; Capote, Roberto; Trkov, Andrej; Noguere, Gilles; Wallner, Anton
2017-09-01
An IAEA project to update the Neutron Standards is near completion. Traditionally, the Thermal Neutron Constants (TNC) evaluated by Axton for thermal-neutron scattering, capture, and fission on four fissile nuclei, together with the total nu-bar of 252Cf(sf), are used as input in the combined least-squares fit with the neutron cross section standards. The evaluation by Axton (1986) was based on a least-squares fit of both thermal-spectrum-averaged cross sections (Maxwellian data) and microscopic cross sections at 2200 m/s. There is a second Axton evaluation based exclusively on measured microscopic cross sections at 2200 m/s (excluding Maxwellian data). The two evaluations disagree within quoted uncertainties for the fission and capture cross sections and the total multiplicities of the uranium isotopes. Two factors may lead to this difference: the Westcott g-factors, with estimated 0.2% uncertainties, used in Axton's fit, and the deviation of the thermal spectra from a Maxwellian shape. To exclude or mitigate the impact of these factors, a new combined GMA fit of the standards was undertaken with Axton's TNC evaluation based on 2200 m/s data used as a prior. New microscopic data at the thermal point, available since 1986, were added to the combined fit. Additionally, an independent evaluation of the TNC was undertaken using the CONRAD code. The GMA and CONRAD results are consistent within quoted uncertainties. The new evaluation shows a small increase of the fission and capture thermal cross sections, and a corresponding decrease in the evaluated thermal nu-bar for the uranium isotopes and 239Pu.
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
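A hedged sketch of fitting a double-logistic incidence trend by least squares; the parameterization below is one common choice and is not necessarily the FPD tool's exact functional form, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, a1, b1, t1, a2, b2, t2):
    """Rise-and-decline incidence curve: difference of two logistic transitions."""
    return a1 / (1 + np.exp(-b1 * (t - t1))) - a2 / (1 + np.exp(-b2 * (t - t2)))

years = np.arange(1985, 2016, dtype=float)
rng = np.random.default_rng(0)
true = double_logistic(years, 0.02, 0.6, 1995, 0.012, 0.4, 2005)
cases = true * (1 + 0.05 * rng.standard_normal(years.size))

p0 = (0.01, 0.5, 1993.0, 0.005, 0.5, 2003.0)
popt, _ = curve_fit(double_logistic, years, cases, p0=p0, maxfev=20000)
print(np.round(popt, 4))
```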
Stodden, David F; True, Larissa K; Langendorfer, Stephen J; Gao, Zan
2013-09-01
This exploratory study examined the notion of Seefeldt's (1980) hypothesized motor skill "proficiency barrier" related to composite levels of health-related physical fitness (HRF) in young adults. A motor skill competence (MSC) index composed of maximum throwing and kicking speed and jumping distance in 187 young adults aged 18 to 25 years was evaluated against a composite index of 5 health-related fitness (HRF) test scores. MSC (high, moderate, and low) and HRF (good, fair, and poor) indexes were categorized according to normative fitness percentile ranges. Two separate 3-way chi-square analyses were conducted to determine the probabilities of skill predicting fitness and fitness predicting skill. Most correlations among HRF and MSC variables by gender demonstrated low-to-moderate positive correlations in both men (12/15; r = .23-.58) and women (14/15; r = .21-.53). Chi-square analyses for the total sample, using composite indexes, demonstrated statistically significant predictive models, χ²(1, N = 187) = 66.99, p < .001, Cramer's V = .42. Only 3.1% of low-skilled individuals (2 of 65) were classified as having "good" HRF, and only 1 participant (out of 65) who demonstrated high MSC was classified as having "poor" HRF (1.5%). Although individual correlations among MSC and HRF measures were low to moderate, these data provide indirect evidence for the possibility of a motor skill "proficiency barrier" as indicated by low composite HRF levels. This study may generate future research to address the proficiency barrier hypothesis in youth as well as adults.
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012, and bootstrap analysis is used to establish confidence limits on the forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; and a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
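A short sketch of the log-normal maximum-likelihood fit and the implied exceedance rate (the sample is simulated rather than the actual -Dst record, and the event-rate bookkeeping is an illustrative assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
storm_maxima = rng.lognormal(mean=4.6, sigma=0.55, size=300)  # fake -Dst maxima, nT

# MLE fit of a two-parameter log-normal (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(storm_maxima, floc=0)

# probability that one storm exceeds 850 nT, and the implied rate per century
p_exceed = stats.lognorm.sf(850, shape, loc, scale)
storms_per_century = 300 / 56 * 100        # assume the sample spans ~56 years
print("expected >850 nT storms per century:", round(storms_per_century * p_exceed, 3))
```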
Power and sensitivity of alternative fit indices in tests of measurement invariance.
Meade, Adam W; Johnson, Emily C; Braddy, Phillip W
2008-05-01
Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known to not be invariant. The results indicate that AFIs are much less sensitive to sample size and are more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general value of change in CFI (.002) seemed to perform well in the analyses, condition specific change in McDonald's NCI values exhibited better performance than a single change in McDonald's NCI value. Tables of these values are provided as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.
Perception of competence in middle school physical education: instrument development and validation.
Scrabis-Fletcher, Kristin; Silverman, Stephen
2010-03-01
Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and its scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N=1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, and .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.
Does Positivity Mediate the Relation of Extraversion and Neuroticism with Subjective Happiness?
Lauriola, Marco; Iani, Luca
2015-01-01
Recent theories suggest an important role of neuroticism, extraversion, attitudes, and global positive orientations as predictors of subjective happiness. We examined whether positivity mediates the hypothesized relations in a community sample of 504 adults between the ages of 20 and 60 years (females = 50%). A model with significant paths from neuroticism to subjective happiness, from extraversion and neuroticism to positivity, and from positivity to subjective happiness fitted the data (Satorra–Bentler scaled chi-square (38) = 105.91; Comparative Fit Index = .96; Non-Normed Fit Index = .95; Root Mean Square Error of Approximation = .060; 90% confidence interval = .046, .073). Personality traits alone accounted for only about 48% of the variance in subjective happiness, whereas adding positivity as a mediating factor increased the explained variance to 78%. The mediation model was invariant across age and gender. The results show that the effect of extraversion on happiness was fully mediated by positivity, whereas the effect of neuroticism was only partially mediated. Implications for happiness studies are also discussed. PMID:25781887
49 CFR 385.5 - Safety fitness standard.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 385.5 Safety fitness standard (Federal Motor Carrier Safety Administration, Department of Transportation; Federal Motor Carrier Safety Regulations; Safety Fitness Procedures — General). A motor carrier must meet the safety fitness standard set forth...
49 CFR 385.5 - Safety fitness standard.
Code of Federal Regulations, 2011 CFR
2011-10-01
§ 385.5 Safety fitness standard (Federal Motor Carrier Safety Administration, Department of Transportation; Federal Motor Carrier Safety Regulations; Safety Fitness Procedures — General). A motor carrier must meet the safety fitness standard set forth...
Sea surface mean square slope from Ku-band backscatter data
NASA Technical Reports Server (NTRS)
Jackson, F. C.; Walton, W. T.; Hines, D. E.; Walter, B. A.; Peng, C. Y.
1992-01-01
A surface mean-square-slope parameter analysis is conducted for 14-GHz airborne radar altimeter near-nadir, quasi-specular backscatter data; the slope parameters, obtained by least-squares fitting of an optical scattering model to the return waveform, show an approximately linear dependence on wind speed over the 7-15 m/sec range. The slope data are used to draw inferences on the structure of the high-wavenumber portion of the spectrum. A directionally integrated model height spectrum that encompasses a wind-speed-dependent k^(-5/2) subrange and a classical Phillips k^(-3) power-law subrange in the gravity-wave range is supported by the data.
Analysis of the multigroup model for muon tomography based threat detection
NASA Astrophysics Data System (ADS)
Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.
2014-02-01
We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
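The comparison hinges on building receiver operating characteristic curves by sweeping a detection threshold over a per-scene density signal. Below is a hedged sketch with synthetic gamma-distributed signals standing in for the scattering-angle statistics:

```python
# Illustrative sketch: ROC curve for a scalar density signal computed on
# scenes with and without a high-Z object; thresholding sweeps out the curve.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-scene statistics: higher values when tungsten is present.
signal_absent  = rng.gamma(shape=4.0, scale=1.0, size=500)
signal_present = rng.gamma(shape=6.0, scale=1.0, size=500)

def roc(neg, pos, n_thresholds=200):
    thresholds = np.linspace(min(neg.min(), pos.min()),
                             max(neg.max(), pos.max()), n_thresholds)
    fpr = np.array([(neg > t).mean() for t in thresholds])
    tpr = np.array([(pos > t).mean() for t in thresholds])
    return fpr, tpr

fpr, tpr = roc(signal_absent, signal_present)
auc = -np.trapz(tpr, fpr)   # fpr decreases with threshold, hence the sign
print(f"area under ROC curve: {auc:.3f}")
```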
Effect of scrape-off-layer current on reconstructed tokamak equilibrium
King, J. R.; Kruger, S. E.; Groebner, R. J.; ...
2017-01-13
Methods are described that extend fields from reconstructed equilibria to include scrape-off-layer current through extrapolated parametrized and experimental fits. The extrapolation includes the effects of both the toroidal-field and pressure gradients, which produce scrape-off-layer current after recomputation of the Grad-Shafranov solution. To quantify the degree to which inclusion of scrape-off-layer current modifies the equilibrium, the χ-squared goodness-of-fit parameter is calculated for cases with and without scrape-off-layer current. The change in χ-squared is found to be minor when scrape-off-layer current is included; however, flux surfaces are shifted by up to 3 cm. The impact of these scrape-off-layer modifications on edge modes is also found to be small, and the importance of these methods to nonlinear computation is discussed.
The rotational elements of Mars and its satellites
NASA Astrophysics Data System (ADS)
Jacobson, R. A.; Konopliv, A. S.; Park, R. S.; Folkner, W. M.
2018-03-01
The International Astronomical Union (IAU) defines planet and satellite coordinate systems relative to their axis of rotation and the angle about that axis. The rotational elements of the bodies are the right ascension and declination of the rotation axis in the International Celestial Reference Frame and the rotation angle, W, measured easterly along the body's equator. The IAU specifies the location of the body's prime meridian by providing a value for W at epoch J2000. We provide new trigonometric series representations of the rotational elements of Mars and its satellites, Phobos and Deimos. The series for Mars are from a least squares fit to the rotation model used to orient the Martian gravity field. The series for the satellites are from a least squares fit to rotation models developed in accordance with IAU conventions from recent ephemerides.
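For readers unfamiliar with the IAU convention, a rotational element such as W is a low-order polynomial in time plus a trigonometric series. The sketch below evaluates such a series with placeholder coefficients, not the fitted Mars values from this paper:

```python
# Sketch of evaluating an IAU-style rotation angle:
#   W (deg) = W0 + Wdot*d + sum_i a_i * sin(f_i*d + p_i),  d = days since J2000.
import numpy as np

def rotation_angle_W(days_since_J2000, W0, Wdot, amps, freqs, phases):
    d = days_since_J2000
    series = sum(a * np.sin(np.radians(f * d + p))
                 for a, f, p in zip(amps, freqs, phases))
    return (W0 + Wdot * d + series) % 360.0

# Placeholder coefficients for illustration only (deg, deg/day).
W = rotation_angle_W(8000.0, W0=176.6, Wdot=350.892,
                     amps=[0.01], freqs=[0.2], phases=[30.0])
print(f"prime-meridian angle W = {W:.4f} deg")
```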
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo
2009-10-01
Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration is also introduced into the object wavefront, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least-squares surface fitting, using fewer points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated with samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double-exposure method.
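A minimal sketch of the compensation idea, assuming a quadratic aberration model and a subsampled fitting grid; the authors' actual surface model and point selection may differ:

```python
# Fit a low-order polynomial surface to the unwrapped phase by least squares
# on a subsampled grid, then subtract it to remove the objective's curvature.
import numpy as np

def remove_phase_curvature(phase, step=8):
    ny, nx = phase.shape
    ys, xs = np.mgrid[0:ny:step, 0:nx:step]          # subsampled fit points
    x, y, z = xs.ravel(), ys.ravel(), phase[ys, xs].ravel()
    # Quadratic surface: z = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    Y, X = np.mgrid[0:ny, 0:nx]
    full = np.column_stack([np.ones(X.size), X.ravel(), Y.ravel(),
                            X.ravel()**2, Y.ravel()**2, (X * Y).ravel()])
    return phase - (full @ coeffs).reshape(ny, nx)

# Synthetic test: a purely quadratic aberration is removed almost exactly.
Y, X = np.mgrid[0:128, 0:128]
aberration = 1e-4 * ((X - 64.0)**2 + (Y - 64.0)**2)
flattened = remove_phase_curvature(aberration)
print(f"residual RMS: {flattened.std():.2e} rad")
```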
Full two-dimensional transient solutions of electrothermal aircraft blade deicing
NASA Technical Reports Server (NTRS)
Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.; Leffel, K. L.
1985-01-01
Two finite difference methods are presented for the analysis of transient, two-dimensional responses of an electrothermal de-icer pad of an aircraft wing or blade with an attached ice layer of variable thickness. Both models employ a Crank-Nicolson iterative scheme and use an enthalpy formulation to handle the phase change in the ice layer. The first technique makes use of a 'staircase' approach, fitting the irregular ice boundary with square computational cells. The second technique uses a body-fitted coordinate transform and maps the exact shape of the irregular boundary into a rectangular body with uniformly square computational cells. The numerical solution takes place in the transformed plane. Initial results accounting for variable ice layer thickness are presented. Details of planned de-icing tests at NASA-Lewis, which will provide empirical verification for the above two methods, are also presented.
NASA Technical Reports Server (NTRS)
Whitlock, C. H., III
1977-01-01
Constituents whose radiance varies linearly with concentration may be quantified from signals that contain nonlinear atmospheric and surface-reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used that give an indication of bias as well as total squared error, to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan
2013-11-01
Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose a single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves. Furthermore, those methods do not pay much attention to the estimation error of the key points in the pulse waveform, although the estimation of human vascular condition depends on the positions of those key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0%, and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
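The following sketch illustrates the weighted least-squares step with a sum-of-Gaussians model; the hand-picked weight vector stands in for the MCDM-optimized weights, and the waveform is synthetic:

```python
# Weighted least-squares fit of a sum of Gaussian waves to a pulse waveform.
import numpy as np
from scipy.optimize import least_squares

def multi_gaussian(t, params):
    # params: flattened (amplitude, center, width) triplets
    p = np.asarray(params).reshape(-1, 3)
    return sum(a * np.exp(-0.5 * ((t - c) / w) ** 2) for a, c, w in p)

def fit_pulse(t, pulse, init, weights):
    resid = lambda p: np.sqrt(weights) * (multi_gaussian(t, p) - pulse)
    return least_squares(resid, init).x

t = np.linspace(0.0, 1.0, 200)
true = [(1.0, 0.25, 0.06), (0.45, 0.48, 0.08), (0.25, 0.70, 0.10)]
pulse = multi_gaussian(t, np.ravel(true)) \
        + 0.01 * np.random.default_rng(2).normal(size=t.size)

weights = np.where(t < 0.4, 2.0, 1.0)        # emphasize the systolic upstroke
init = np.ravel([(0.8, 0.2, 0.1), (0.5, 0.5, 0.1), (0.3, 0.75, 0.1)])
fitted = fit_pulse(t, pulse, init, weights)
nrmse = np.sqrt(np.mean((multi_gaussian(t, fitted) - pulse) ** 2)) \
        / (pulse.max() - pulse.min())
print(f"NRMSE: {100 * nrmse:.2f}%")
```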
NASA Astrophysics Data System (ADS)
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2018-04-01
This paper describes the merits and demerits of different sensors for measuring propellant gas pressure, the applicable range of the frequently used dynamic pressure calibration methods, and the working principle of absolute quasi-static pressure calibration based on the drop-weight device. The main factors affecting the accuracy of pressure calibration are analyzed from the two aspects of the force sensor and the piston area. To calculate the effective area of the piston rod and to evaluate the uncertainty of the relationship between the measured peak force and the corresponding peak pressure in the absolute quasi-static pressure calibration process, a method based on the least squares principle is proposed. From the relevant quasi-static pressure calibration experimental data, the least squares fitting model between the peak force and the peak pressure, the effective area of the piston rod, and its measurement uncertainty are obtained. The fitting model is tested against an additional group of experiments, with the peak pressure obtained by the existing high-precision comparison calibration method taken as the reference value. The test results show that the peak pressure obtained by the least squares fitting model is closer to the reference value than the one calculated directly from the cross-sectional area of the piston rod. When the peak pressure is higher than 150 MPa, the percentage difference is less than 0.71%, which meets the requirements of practical application.
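The core computation is an ordinary least-squares line relating peak force to peak pressure, whose slope estimates the effective piston area. A hedged sketch with synthetic values (using the identity that 1 MPa acting on 1 mm^2 gives 1 N):

```python
# Least-squares line F = A_eff * P + b; the slope A_eff estimates the
# effective piston area. All numbers below are illustrative.
import numpy as np

pressure_peak = np.array([50., 80., 110., 140., 170., 200.])           # MPa
true_area_mm2 = 78.5                                                    # ~10 mm piston
rng = np.random.default_rng(3)
force_peak = true_area_mm2 * pressure_peak + rng.normal(0.0, 40.0, 6)   # N

(A_eff, b), cov = np.polyfit(pressure_peak, force_peak, 1, cov=True)
u_A = np.sqrt(cov[0, 0])    # standard uncertainty of the slope
print(f"effective area: {A_eff:.2f} mm^2 (u = {u_A:.2f} mm^2), intercept {b:.1f} N")
```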
Random analysis of bearing capacity of square footing using the LAS procedure
NASA Astrophysics Data System (ADS)
Kawa, Marek; Puła, Wojciech; Suska, Michał
2016-09-01
In the present paper, a three-dimensional problem of the bearing capacity of a square footing on a random soil medium is analyzed. The random fields of the strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision, Fenton and Vanmarcke 1990). The procedure was re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of the soil are random in the vertical direction only. Individual realizations of the bearing-capacity boundary problem, with the strength parameters of the medium defined by the above procedure, are solved using FLAC3D software. The analysis is performed for two qualitatively different cases, namely for purely cohesive and for cohesive-frictional soils. For the latter case, the friction angle and cohesion have been assumed to be independent random variables. For these two cases, bearing capacity results for the random square footing have been obtained for fluctuation scales ranging from 0.5 m to 10 m, with 1000 Monte Carlo realizations performed in each case. The obtained results allow not only the mean and variance but also the probability density function to be estimated. An example of the application of this function to a reliability calculation is presented in the final part of the paper.
Development of a digital impression procedure using photogrammetry for complete denture fabrication.
Matsuda, Takashi; Goto, Takaharu; Kurahashi, Kosuke; Kashiwabara, Toshiya; Ichikawa, Tetsuo
We developed an innovative procedure for digitizing maxillary edentulous residual ridges with a photogrammetric system capable of estimating three-dimensional (3D) digital forms from multiple two-dimensional (2D) digital images. The aim of this study was to validate the effectiveness of the photogrammetric system. Impressions of the maxillary residual ridges of five edentulous patients were taken with four kinds of procedures: three conventional impression procedures and the photogrammetric system. Plaster models were fabricated from conventional impressions and digitized with a 3D scanner. Two 3D forms out of four forms were superimposed with 3D inspection software, and differences were evaluated using a least squares best fit algorithm. The in vitro experiment suggested that better imaging conditions were in the horizontal range of ± 15 degrees and at a vertical angle of 45 degrees. The mean difference between the photogrammetric image (Form A) and the image taken from conventional preliminarily impression (Form C) was 0.52 ± 0.22 mm. The mean difference between the image taken of final impression through a special tray (Form B) and Form C was 0.26 ± 0.06 mm. The mean difference between the image taken from conventional final impression (Form D) and Form C was 0.25 ± 0.07 mm. The difference between Forms A and C was significantly larger than the differences between Forms B and C and between Forms D and C. The results of this study suggest that obtaining digital impressions of edentulous residual ridges using a photogrammetric system is feasible and available for clinical use.
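The superimposition step, a least squares best fit of one 3D form onto another, can be illustrated with the Kabsch/Procrustes algorithm; the inspection software used in the study is commercial, so this sketch only shows the principle on synthetic point sets:

```python
# Least-squares rigid superimposition of two 3D point sets (Kabsch algorithm).
import numpy as np

def best_fit_transform(P, Q):
    """Rotation R and translation t minimizing sum ||R@P_i + t - Q_i||^2."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(4)
P = rng.normal(size=(500, 3))                        # points sampled from one form
angle = np.radians(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 0.1])        # same points on the other form

R, t = best_fit_transform(P, Q)
residual = np.linalg.norm(P @ R.T + t - Q, axis=1)
print(f"mean deviation after superimposition: {residual.mean():.2e}")
```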
Eigen model with general fitness functions and degradation rates
NASA Astrophysics Data System (ADS)
Hu, Chin-Kun; Saakian, David B.
2006-03-01
We present an exact solution of Eigen's quasispecies model with a general degradation rate and general fitness functions, including a square-root decrease of fitness with increasing Hamming distance from the wild type. The behavior found for the model with a degradation rate is analogous to that of a viral quasispecies under attack by the immune system of the host. Our exact solutions also revise the known results for neutral networks in quasispecies theory. To explain the existence of mutants with large Hamming distances from the wild type, we propose three different modifications of the Eigen model: mutation landscape, multiple adjacent mutations, and frequency-dependent fitness, in which the steady-state solution shows a multi-center behavior.
mBEEF-vdW: Robust fitting of error estimation density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
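The benefit of a robust loss can be seen in a toy fit. The sketch below uses scipy's Huber-type loss as a stand-in for the paper's MM-estimator, which is a different and more elaborate robust estimator:

```python
# A robust loss keeps a fit close to the bulk of the data despite outliers.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 60)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)
y[::10] += 8.0                                      # a few gross outliers

resid = lambda p: p[0] * x + p[1] - y
plain = least_squares(resid, [1.0, 0.0]).x                       # least squares
robust = least_squares(resid, [1.0, 0.0], loss='huber', f_scale=0.5).x

print(f"least-squares slope: {plain[0]:.3f}, robust slope: {robust[0]:.3f} (true 2.0)")
```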
Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M
2006-09-01
The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.
SUPPLEMENTARY COMPARISON: EUROMET.L-S10 Comparison of squareness measurements
NASA Astrophysics Data System (ADS)
Mokros, Jiri
2005-01-01
The idea of performing a comparison of squareness measurements resulted from the need to review MRA Appendix C, category 90° square. At its meeting in October 1999 (in Prague), it was decided to carry out a first comparison of squareness measurements in the framework of EUROMET, numbered #570, starting in 2000, with the Slovak Institute of Metrology (SMU) as the pilot laboratory. During the preparation stage of the project, it was agreed that it should be submitted as a EUROMET supplementary comparison in the framework of the Mutual Recognition Arrangement (MRA) of the Metre Convention and would boost confidence in calibration and measurement certificates issued by the participating national metrology institutes. The aim of the comparison was to compare and verify the declared calibration and measurement capabilities of the participating laboratories and to investigate the effect of systematic influences in the measurement process and their elimination. Eleven NMIs from the EUROMET region carried out this project. Two standards were calibrated: a granite squareness standard of rectangular shape and a cylindrical squareness standard of steel with marked positions for the profile lines. The following parameters had to be calibrated: for the granite squareness standard, the interior angle γB between two lines AB and AC (envelope - LS regression) fitted through the measured profiles, and/or the interior angle γLS between two LS regression lines AB and AC fitted through the measured profiles; for the cylindrical squareness standard, the interior angles γ0°, γ90°, γ180°, γ270° between the LS regression line fitted through the measurement profiles at 0°, 90°, 180°, 270° and the envelope plane of the basis (resting on a surface plate); and the local LS straightness deviation for all measured profiles (2 and 4) of both standards. The results of the comparison are the deviations of the profiles and angles measured by the individual NMIs from the reference values. These resulted from the weighted mean of data from the participating laboratories, with some excluded on the basis of statistical evaluation. Graphical interpretations of all deviations are contained in the Final Report. In order to compare the individual deviations mutually (25 profiles for the granite square and 44 profiles for the cylinder), graphical illustrations of 'standard deviations' and of both extreme values (max. and min.) of the deviations were created. This regional supplementary comparison has provided independent information about the metrological properties of the measuring equipment and methods used by the participating NMIs. The Final Report does not contain En values: participants could not estimate some contributions in the uncertainty budget on the basis of previous comparisons, since no comparison of this kind had ever been organized, so the En value would not reflect the actual state of a given NMI. Instead of En, an analysis was performed by means of the Grubbs test according to ISO 5725-2. This comparison provided information about the state of provision of metrological services in the field of measurement of large squares. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by EUROMET, according to the provisions of the Mutual Recognition Arrangement (MRA).
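The basic evaluation step, fitting least-squares lines through two measured profiles and computing the interior angle between them, can be sketched as follows. The profiles are synthetic, with a 5-arcsecond squareness error, and an orthogonal least-squares line stands in for the report's LS regression:

```python
# Interior angle between least-squares lines fitted to two face profiles.
import numpy as np

def ls_line_direction(points):
    """Unit direction of the orthogonal least-squares line through 2D points."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[0]

rng = np.random.default_rng(6)
s = np.linspace(0.0, 100.0, 50)                      # mm along each face
noise = 0.0005                                       # mm, form/measurement noise
profile_AB = np.column_stack([s, noise * rng.normal(size=s.size)])
err = np.radians(5.0 / 3600.0)                       # 5 arcsec squareness error
dir_AC = np.array([-np.sin(err), np.cos(err)])       # nominally the +y direction
profile_AC = s[:, None] * dir_AC + noise * rng.normal(size=(s.size, 2))

u = ls_line_direction(profile_AB)
u = u if u[0] > 0 else -u                            # orient along +x
v = ls_line_direction(profile_AC)
v = v if v[1] > 0 else -v                            # orient along +y
gamma = np.degrees(np.arccos(np.clip(u @ v, -1.0, 1.0)))
print(f"interior angle: {gamma:.6f} deg "
      f"({3600.0 * (gamma - 90.0):+.2f} arcsec from 90 deg)")
```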
Spectral performance of Square Kilometre Array Antennas - II. Calibration performance
NASA Astrophysics Data System (ADS)
Trott, Cathryn M.; de Lera Acedo, Eloy; Wayth, Randall B.; Fagnoni, Nicolas; Sutinjo, Adrian T.; Wakley, Brett; Punzalan, Chris Ivan B.
2017-09-01
We test the bandpass smoothness performance of two prototype Square Kilometre Array (SKA) SKA1-Low log-periodic dipole antennas, SKALA2 and SKALA3 ('SKA Log-periodic Antenna'), and of the current dipole from the Murchison Widefield Array (MWA) precursor telescope. Throughout this paper, we refer to the output complex-valued voltage response of an antenna connected to a low-noise amplifier as the dipole bandpass. In Paper I, the bandpass spectral response of the log-periodic antenna being developed for SKA1-Low was estimated using numerical electromagnetic simulations, analysed using low-order polynomial fittings, and compared with the HERA antenna against the delay-spectrum metric. In this work, realistic simulations of the SKA1-Low instrument, including frequency-dependent primary beam shapes and the array configuration, are used with a weighted least-squares polynomial estimator to assess the ability of a given prototype antenna to perform the SKA Epoch of Reionisation (EoR) statistical experiments. This work complements the ideal estimator tolerances computed for the proposed EoR science experiments in Trott & Wayth with the realized performance of an optimal, standard estimation (calibration) procedure. With a sufficient sky calibration model at higher frequencies, all antennas have bandpasses that are sufficiently smooth to meet the tolerances described in Trott & Wayth for the EoR statistical experiments; performance is primarily limited by the adequacy of the sky calibration model and the thermal noise level in the calibration data. At frequencies of the Cosmic Dawn, which is of principal interest to the SKA as one of the first next-generation telescopes capable of accessing higher redshifts, the MWA dipole and the SKALA3 antenna have adequate performance, while the SKALA2 design will impede the ability to explore this era.
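A weighted least-squares polynomial estimator of the kind used here can be sketched in a few lines; the bandpass shape, noise levels, and polynomial order below are placeholders, and the real bandpass is complex-valued rather than real:

```python
# Weighted least-squares polynomial fit of a (real-valued toy) bandpass,
# with weights w_i = 1/sigma_i from the per-channel thermal noise.
import numpy as np

freqs = np.linspace(50e6, 350e6, 256)                       # Hz
x = (freqs - freqs.mean()) / (np.ptp(freqs) / 2)            # scaled to [-1, 1]
rng = np.random.default_rng(7)
sigma = 0.01 * (1 + (freqs / 350e6) ** 2)                   # per-channel noise
true_bandpass = 1.0 + 0.1 * x - 0.05 * x**2
measured = true_bandpass + sigma * rng.normal(size=x.size)

coeffs = np.polynomial.polynomial.polyfit(x, measured, 3, w=1.0 / sigma)
smooth = np.polynomial.polynomial.polyval(x, coeffs)
rms = np.sqrt(np.mean((smooth - true_bandpass) ** 2))
print(f"rms calibration residual: {rms:.2e}")
```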
Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David
2013-05-21
We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
Spreadsheet for designing valid least-squares calibrations: A tutorial.
Bettencourt da Silva, Ricardo J N
2016-02-01
Instrumental methods of analysis are used to define the price of goods, the compliance of products with regulations, and the outcome of fundamental or applied research. These methods can only play their role properly if the reported information is objective and its quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The evaluation of measurement uncertainty can be performed by the bottom-up approach, which involves a detailed description of the measurement process, or by a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is used less frequently because of the need to master the quantification of the individual components responsible for random and systematic effects on measurement results. This work presents a tutorial that can be easily used by non-experts for the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regression. The tutorial covers the definition of the calibration interval, the assessment of the homoscedasticity of the instrumental response, the definition of the calibrator preparation procedure required for application of the least-squares regression model, the assessment of the linearity of the instrumental response, and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where the signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful in cases where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented. Copyright © 2015 Elsevier B.V. All rights reserved.
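The sketch below shows the central calculation such a tutorial builds on: an unweighted calibration line and the classical standard-uncertainty formula for an interpolated concentration. The calibration data are synthetic, and the spreadsheet itself covers much more:

```python
# Least-squares calibration line and standard uncertainty of a predicted
# concentration, s_x0 = (s_yx/b)*sqrt(1/m + 1/n + (y0-ybar)^2/(b^2*Sxx)).
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])       # calibrators (mg/L)
signal = np.array([0.052, 0.103, 0.206, 0.398, 0.601, 0.795])

n = conc.size
b, a = np.polyfit(conc, signal, 1)                    # slope, intercept
resid = signal - (a + b * conc)
s_yx = np.sqrt(np.sum(resid**2) / (n - 2))            # residual standard deviation

def predicted_concentration(y0_mean, m_replicates=2):
    x0 = (y0_mean - a) / b
    s_x0 = (s_yx / b) * np.sqrt(1 / m_replicates + 1 / n
                                + (y0_mean - signal.mean())**2
                                / (b**2 * np.sum((conc - conc.mean())**2)))
    return x0, s_x0

x0, s_x0 = predicted_concentration(0.300)
print(f"concentration: {x0:.3f} mg/L, standard uncertainty: {s_x0:.3f} mg/L")
```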
Fitting of hearing aids with different technical parameters to a patient with dead regions
NASA Astrophysics Data System (ADS)
Hojan-Jezierska, Dorota; Skrodzka, Ewa
2009-01-01
The purpose of the study was to determine an optimal hearing aid fitting procedure for a patient with well-diagnosed high-frequency 'dead regions' in both cochleas. The patient reported non-symmetrical hearing problems of sensorineural origin. For binaural amplification, two similar independent hearing aids were used, as well as a pair of dependent devices with an ear-to-ear function. Two fitting methods were used, DSLi/o and NAL-NL1, and four different fitting strategies were tested: an initial fitting based on the DSLi/o or NAL-NL1 method with the necessary loudness corrections; a second fitting taking into account all the available functions of the hearing instruments; a third fitting (based on the second one) with significantly reduced amplification well above one octave of frequency inside the dead region; and a final fitting with significantly reduced gain slightly below one octave inside the dead regions. The results of the hearing aid fittings were assessed using the APHAB procedure.
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
Keywords: inventory management improvement plan, mean of absolute scaled error, lead-time adjusted squared error, forecast accuracy, benchmarking, naïve method. Abbreviations: JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE ... Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number.
Carle, Adam C; Riley, William; Hays, Ron D; Cella, David
2015-10-01
To guide measure development, National Institutes of Health-supported Patient-Reported Outcomes Measurement Information System (PROMIS) investigators developed a hierarchical domain framework. The framework specifies health domains at multiple levels. The initial PROMIS domain framework specified that physical function and symptoms such as Pain and Fatigue indicate Physical Health (PH); Depression, Anxiety, and Anger indicate Mental Health (MH); and Social Role Performance and Social Satisfaction indicate Social Health (SH). We used confirmatory factor analyses to evaluate the fit of the hypothesized framework to data collected from a large sample. We used data (n=14,098) from PROMIS's wave 1 field test and estimated domain scores using the PROMIS item response theory parameters. We then used confirmatory factor analyses to test whether the domains corresponded to the PROMIS domain framework as expected. A model corresponding to the domain framework did not provide ideal fit [root mean square error of approximation (RMSEA)=0.13; comparative fit index (CFI)=0.92; Tucker-Lewis Index (TLI)=0.88; standardized root mean square residual (SRMR)=0.09]. On the basis of modification indices and exploratory factor analyses, we allowed Fatigue to load on both PH and MH. This model fit the data acceptably (RMSEA=0.08; CFI=0.97; TLI=0.96; SRMR=0.03). Our findings generally support the PROMIS domain framework. Allowing Fatigue to load on both PH and MH improved fit considerably.
The long-solved problem of the best-fit straight line: application to isotopic mixing lines
NASA Astrophysics Data System (ADS)
Wehr, Richard; Saleska, Scott R.
2017-01-01
It has been almost 50 years since York published an exact and general solution for the best-fit straight line to independent points with normally distributed errors in both x and y. York's solution is highly cited in the geophysical literature but almost unknown outside of it, so that there has been no ebb in the tide of books and papers wrestling with the problem. Much of the post-1969 literature on straight-line fitting has sown confusion not merely by its content but by its very existence. The optimal least-squares fit is already known; the problem is already solved. Here we introduce the non-specialist reader to York's solution and demonstrate its application in the interesting case of the isotopic mixing line, an analytical tool widely used to determine the isotopic signature of trace gas sources for the study of biogeochemical cycles. The most commonly known linear regression methods - ordinary least-squares regression (OLS), geometric mean regression (GMR), and orthogonal distance regression (ODR) - have each been recommended as the best method for fitting isotopic mixing lines. In fact, OLS, GMR, and ODR are all special cases of York's solution that are valid only under particular measurement conditions, and those conditions do not hold in general for isotopic mixing lines. Using Monte Carlo simulations, we quantify the biases in OLS, GMR, and ODR under various conditions and show that York's general - and convenient - solution is always the least biased.
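For reference, York's iterative solution is compact enough to state in full. The following implementation follows the unified equations of York et al. (2004) and was written for this note, so treat it as a sketch rather than the paper's code; the example dataset is synthetic:

```python
# York's best-fit straight line with normally distributed errors in x and y.
import numpy as np

def york_fit(x, y, sx, sy, r=0.0, tol=1e-12, max_iter=100):
    """Slope b and intercept a minimizing York's weighted sum of squares."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2
    alpha = np.sqrt(wx * wy)
    b = np.polyfit(x, y, 1)[0]                       # OLS slope as a start
    for _ in range(max_iter):
        W = wx * wy / (wx + b**2 * wy - 2.0 * b * r * alpha)
        xbar, ybar = np.average(x, weights=W), np.average(y, weights=W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx - (b * U + V) * r / alpha)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol * abs(b_new):
            b = b_new
            break
        b = b_new
    return b, ybar - b * xbar

# Synthetic mixing-line-like data with errors in both coordinates.
rng = np.random.default_rng(8)
x_true = np.linspace(1.0, 3.0, 20)
y_true = -8.0 + 4.0 * x_true
sx, sy = np.full(20, 0.05), np.full(20, 0.2)
x = x_true + sx * rng.normal(size=20)
y = y_true + sy * rng.normal(size=20)
b, a = york_fit(x, y, sx, sy)
print(f"York fit: slope {b:.3f}, intercept {a:.3f} (true 4, -8)")
```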
Health-Related Measures of Children's Physical Fitness.
ERIC Educational Resources Information Center
Pate, Russell R.
1991-01-01
Summarizes health-related physical fitness measurement procedures for children, emphasizing field measures. Health-related physical fitness encompasses cardiorespiratory endurance, body composition, muscular strength and endurance, and flexibility. The article presents several issues pertinent to research on health-related fitness testing. (SM)
Zeng, Chengbo; Li, Linghua; Hong, Yan Alicia; Zhang, Hanxi; Babbitt, Andrew Walker; Liu, Cong; Li, Lixia; Qiao, Jiaying; Guo, Yan; Cai, Weiping
2018-01-15
Previous studies have shown a positive association between HIV-related stigma and depression, suicidal ideation, and suicide attempts among people living with HIV/AIDS (PLWH), but few studies have examined the mechanisms linking HIV-related stigma, depression, and suicidal status (suicidal ideation and/or suicide attempt) in PLWH. The current study examined the relationships among perceived and internalized stigma (PIS), depression, and suicidal status among PLWH in Guangzhou, China, using structural equation modeling. A cross-sectional study with convenience sampling was conducted, and 411 PLWH were recruited from the Number Eight People's Hospital in Guangzhou, China, from March to June 2013. Participants were interviewed about their PIS, depressive symptoms, suicidal status, and socio-demographic characteristics. PLWH who had experienced suicidal ideation or attempted suicide since HIV diagnosis were considered suicidal. A structural equation model was used to examine the direct and indirect associations between PIS and suicidal status. Indicators used to evaluate the goodness of fit of the structural equation model included the chi-square statistic, Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA), Standardized Root Mean Square Residual (SRMR), and Weighted Root Mean Square Residual (WRMR). More than one-third (38.4%) of the PLWH had depressive symptoms, and 32.4% reported suicidal ideation and/or attempt since HIV diagnosis. The global model showed good fit (chi-square value = 34.42, CFI = 0.98, RMSEA = 0.03, WRMR = 0.73). The structural equation model revealed that the direct pathway from PIS to suicidal status was significant (standardized path coefficient = 0.21) and that the indirect pathway from PIS to suicidal status via depression was also significant (standardized path coefficient = 0.24); that is, depression partially mediated the association between PIS and suicidal status. Our findings suggest that PIS is associated with increased depression and a higher likelihood of suicidal status, and that depression is in turn positively associated with suicidal status and mediates the association between PIS and suicidal status. Therefore, to reduce suicidal ideation and attempts in PLWH, targeted interventions to reduce PIS and improve the mental health status of PLWH are warranted.
Performance Prediction of Constrained Waveform Design for Adaptive Radar
2016-11-01
Kullback-Leibler divergence. χ² goodness-of-fit test: we compute the estimated CDF for both models with 10000 MC trials. For Model 1 we observed a p-value of ... was clearly similar in its physical attributes, but the measures used (Kullback-Leibler divergence, chi-square test, and the trace of the covariance) showed ... models' goodness of fit we look at three measures: (1) χ² test, (2) trace of the inverse ...
40 CFR 86.125-94 - Methane analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Additional calibration points may be generated. For each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values...
40 CFR 86.125-94 - Methane analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... Additional calibration points may be generated. For each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values...
40 CFR 86.125-94 - Methane analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... Additional calibration points may be generated. For each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values...
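The criterion quoted in these records translates directly into code: fit a least-squares line to the calibration points and verify that every deviation is within 2 percent of the point's value. A hedged sketch with illustrative numbers (the regulation's exact treatment of the zero point is not reproduced here):

```python
# Linearity check: deviations from the least-squares best-fit straight line
# must be 2 percent or less of the value at each data point.
import numpy as np

span_concentration = np.array([0., 10., 20., 30., 40., 50.])    # ppm, illustrative
analyzer_response = np.array([0.1, 10.2, 19.9, 30.3, 39.8, 50.1])

slope, intercept = np.polyfit(span_concentration, analyzer_response, 1)
fitted = slope * span_concentration + intercept
deviation_pct = 100.0 * np.abs(analyzer_response - fitted) \
                / np.maximum(analyzer_response, 1e-9)

# The zero point is excluded here; how it is handled in practice is an
# assumption, not something this sketch takes from the regulation text.
print("within 2% criterion:", bool(np.all(deviation_pct[1:] <= 2.0)))
```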
49 CFR 385.11 - Notification of safety rating and safety fitness determination.
Code of Federal Regulations, 2011 CFR
2011-10-01
§ 385.11 Notification of safety rating and safety fitness determination (Federal Motor Carrier Safety Regulations; Safety Fitness Procedures — General). ... The notice of remedial directive will constitute the notice of safety fitness determination. If FMCSA has not...
49 CFR 385.11 - Notification of safety rating and safety fitness determination.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 385.11 Notification of safety rating and safety fitness determination (Federal Motor Carrier Safety Regulations; Safety Fitness Procedures — General). ... The notice of remedial directive will constitute the notice of safety fitness determination. If FMCSA has not...
NASA Technical Reports Server (NTRS)
Pratt, Randy
1993-01-01
The Ames Fitness Program services 5,000 civil servants and contractors working at Ames Research Center. A 3,000 square foot fitness center, equipped with cardiovascular machines, weight training machines, and free weight equipment is on site. Thirty exercise classes are held each week at the Center. A weight loss program is offered, including individual exercise prescriptions, fitness testing, and organized monthly runs. The Fitness Center is staffed by one full-time program coordinator and 15 hours per week of part-time help. Membership is available to all employees at Ames at no charge, and there are no fees for participation in any of the program activities. Prior to using the Center, employees must obtain a physical examination and complete a membership package. Funding for the Ames Fitness Program was in jeopardy in December 1992; however, the employees circulated a petition in support of the program and collected more than 1500 signatures in only three days. Funding has been approved through October 1993.
Influence of fitness and physical activity on cardiovascular reactivity to musical performance.
Wasley, David; Taylor, Adrian; Backx, Karianne; Williamon, Aaron
2012-01-01
The current study examines the relationships between physical activity and fitness and reactivity to a musical performance stressor (MPS). Numerous studies suggest that being fitter and more physically active has a beneficial effect on individuals' cardiovascular responses to laboratory-based mental challenges; the results are equivocal regarding the transfer of such benefits to real-world contexts such as musical performance. Forty-six advanced music students completed this assessment. All participants completed a 20-min pre-performance assessment of heart rate (HR), HR variability (HRV) and blood pressure, as well as baseline measures and a sub-maximal fitness assessment on a separate day. A positive association between fitness and HR before the MPS was found. Fitness was also positively associated with the root mean square of successive differences of the RR interval before the MPS. Higher fitness was related to lower state anxiety after the MPS. Implications of the findings are discussed in relation to classical musicians' day-to-day work and performance.
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large-magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problems associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology achieves high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of combinations of image-similarity block-match metrics and physical models. PMID:24694135
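The framework's final step, a least squares fit of a smooth displacement field to block-match estimates, can be sketched with a low-order polynomial field standing in for the B-spline parameterization (synthetic correspondences; the paper's l1 basis-pursuit selection step is omitted):

```python
# Least-squares fit of a smooth 2D displacement field to noisy block matches.
import numpy as np

rng = np.random.default_rng(9)
pts = rng.uniform(0.0, 100.0, size=(300, 2))                 # block centers (x, y)

def true_displacement(p):
    return np.column_stack([0.02 * p[:, 0] + 1.0, -0.01 * p[:, 1] + 0.5])

matches = true_displacement(pts) + rng.normal(0.0, 0.3, size=(300, 2))

# Fit u(x, y) and v(x, y) as quadratic polynomials by least squares.
x, y = pts[:, 0], pts[:, 1]
A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
coeff_u, *_ = np.linalg.lstsq(A, matches[:, 0], rcond=None)
coeff_v, *_ = np.linalg.lstsq(A, matches[:, 1], rcond=None)

fit = np.column_stack([A @ coeff_u, A @ coeff_v])
rms = np.sqrt(np.mean(np.sum((fit - true_displacement(pts))**2, axis=1)))
print(f"rms displacement-field error: {rms:.3f} voxels")
```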
49 CFR 385.11 - Notification of safety fitness determination.
Code of Federal Regulations, 2012 CFR
2012-10-01
§ 385.11 Notification of safety fitness determination (Other Regulations Relating to Transportation (Continued); Federal Motor Carrier Safety Regulations; Safety Fitness Procedures — General). (a) The...
49 CFR 385.11 - Notification of safety fitness determination.
Code of Federal Regulations, 2013 CFR
2013-10-01
§ 385.11 Notification of safety fitness determination (Other Regulations Relating to Transportation (Continued); Federal Motor Carrier Safety Regulations; Safety Fitness Procedures — General). (a) The...