Sample records for least-squares fitting techniques

  1. A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique.

    DTIC Science & Technology

    1980-09-01

    Hastrup, Ole F.; Akal, Tuncay. SACLANTCEN Memorandum SM-139, SACLANT ASW Research Centre, September 1980 (unclassified). A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique. This memorandum has been prepared within the...

  2. Comments on Different techniques for finding best-fit parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.; Triplett, Laurie A.

    2014-07-01

    A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often-used method that depends on gradients and converges when successive iterations do not change chi-square by more than a specified amount. We point out that in cases where the sought-after parameter weakly affects the fit, and in cases where the overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is within a specified range, and that range can be made arbitrarily small. It does not depend on the value of chi-square.

  3. Chi-squared and C statistic minimization for low count per bin data

    NASA Astrophysics Data System (ADS)

    Nousek, John A.; Shue, David R.

    1989-07-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
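
The low-count advantage of the C statistic can be illustrated with a one-parameter toy model (a constant rate fitted to Poisson counts); the counts below are invented for illustration, not from the paper's simulation. Minimizing Neyman's chi-squared gives the harmonic mean of the counts, which is biased low at small counts, while minimizing Cash's C gives the arithmetic mean, the Poisson maximum-likelihood answer.

```python
counts = [1, 2, 1, 4]  # low counts per bin (Poisson data)

# Best-fit constant rate under Neyman chi-squared, sum((n - m)^2 / n):
# setting d(chi2)/dm = 0 gives the harmonic mean, biased low for small counts.
m_chi2 = len(counts) / sum(1.0 / n for n in counts)

# Best-fit constant rate under Cash's C statistic, 2 * sum(m - n * ln m):
# setting dC/dm = 0 gives the arithmetic mean, the Poisson ML estimate.
m_cash = sum(counts) / len(counts)
```

Here `m_chi2` ≈ 1.45 while `m_cash` = 2.0, the true mean of the data: the chi-squared fit systematically underestimates the rate in the low-count regime.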

  4. Chi-squared and C statistic minimization for low count per bin data [sampling in X-ray astronomy]

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Shue, David R.

    1989-01-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.

  5. Simple and Reliable Determination of Intravoxel Incoherent Motion Parameters for the Differential Diagnosis of Head and Neck Tumors

    PubMed Central

    Sasaki, Miho; Sumi, Misa; Eida, Sato; Katayama, Ikuo; Hotokezaka, Yuka; Nakamura, Takashi

    2014-01-01

    Intravoxel incoherent motion (IVIM) imaging can characterize diffusion and perfusion of normal and diseased tissues, and IVIM parameters are conventionally determined using a cumbersome least-squares method. We evaluated a simple technique for determining IVIM parameters using geometric analysis of the multiexponential signal decay curve as an alternative to the least-squares method for the diagnosis of head and neck tumors. Pure diffusion coefficients (D), microvascular volume fraction (f), perfusion-related incoherent microcirculation (D*), and a perfusion parameter that is heavily weighted towards extravascular space (P) were determined geometrically (Geo D, Geo f, and Geo P) or by the least-squares method (Fit D, Fit f, and Fit D*) in normal structures and 105 head and neck tumors. The IVIM parameters were compared between the 2 techniques for their levels and diagnostic abilities. The IVIM parameters could not be determined in 14 tumors with the least-squares method alone and in 4 tumors with both the geometric and least-squares methods. The geometric IVIM values were significantly different (p<0.001) from the Fit values (+2±4% and −7±24% for D and f values, respectively). Geo D and Fit D differentiated between lymphomas and SCCs with similar efficacy (78% and 80% accuracy, respectively). Stepwise approaches using combinations of Geo D and Geo P, Geo D and Geo f, or Fit D and Fit D* differentiated between pleomorphic adenomas, Warthin tumors, and malignant salivary gland tumors with the same efficacy (91% accuracy = 21/23). However, a stepwise differentiation using Fit D and Fit f was less effective (83% accuracy = 19/23). Considering the cumbersome procedures required by the least-squares method compared with the geometric method, we conclude that geometric determination of IVIM parameters can be an alternative to the least-squares method in the diagnosis of head and neck tumors. PMID:25402436
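
One way to read "geometric analysis of the multiexponential decay curve" is the asymptote construction: fit the high-b tail of ln S(b) with a straight line to get D, and take the perfusion fraction f from the extrapolated zero-b intercept. The sketch below implements that idea on synthetic, noise-free data; the b-value threshold and tissue parameters are illustrative assumptions, not the paper's protocol.

```python
import math

def ivim_geometric(bvals, signal, b_thresh=200.0):
    """Estimate IVIM D and f from the high-b tail of the decay curve.

    Above b_thresh the perfusion compartment has (mostly) decayed, so
    ln(S) is nearly linear in b with slope -D; the zero-b intercept of
    that line gives the perfusion fraction f = 1 - exp(intercept).
    """
    xs = [b for b in bvals if b >= b_thresh]
    ys = [math.log(s) for b, s in zip(bvals, signal) if b >= b_thresh]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    D = -slope                      # pure diffusion coefficient
    f = 1.0 - math.exp(intercept)   # perfusion fraction
    return D, f

# Synthetic noise-free biexponential decay: D = 1e-3 mm^2/s, D* = 10e-3, f = 0.1
bvals = [0, 50, 100, 200, 400, 600, 800]
true_D, true_Dstar, true_f = 1e-3, 10e-3, 0.1
signal = [(1 - true_f) * math.exp(-b * true_D) + true_f * math.exp(-b * true_Dstar)
          for b in bvals]
D, f = ivim_geometric(bvals, signal)
```

The recovered D and f are close to the true values but carry a small bias from residual perfusion signal at b = 200, which is why the choice of threshold matters in practice.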

  6. Estimation of liver T₂ in transfusion-related iron overload in patients with weighted least squares T₂ IDEAL.

    PubMed

    Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H

    2012-01-01

    MR imaging of hepatic iron overload can be achieved by estimating T₂ values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T₂ Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T₂ in the setting of iron overload. The weighted least squares T₂ IDEAL technique improves T₂ estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T₂ assessment with nonlinear exponential fitting, and (ii) a 3D T₂ IDEAL technique, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T₂ IDEAL estimation. In cases of severe iron overload, T₂ IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T₂ compared with weighted least squares. Copyright © 2011 Wiley-Liss, Inc.
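
Down-weighting late, noise-dominated echoes can be illustrated with a log-linearized monoexponential fit in which each echo is weighted by its squared signal, so echoes that have decayed into the noise floor contribute little. This is a hedged sketch of the general idea, not the T₂ IDEAL algorithm itself; the echo times and T₂ value are invented.

```python
import math

def weighted_t2_fit(echo_times, signal):
    """Weighted least-squares fit of ln S = ln S0 - t/T2.

    Weighting each echo by S^2 (a standard trick when log-linearizing an
    exponential) automatically de-emphasizes late, low-signal echoes,
    in the spirit of the weighted least squares reconstruction above.
    """
    w = [s * s for s in signal]
    x = echo_times
    y = [math.log(s) for s in signal]
    W = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / W
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / W
    slope = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y)) \
          / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return -1.0 / slope  # T2 in the same units as echo_times

# Noise-free decay with T2 = 5 ms sampled at 8 echoes (1..8 ms)
tes = [1.0 * k for k in range(1, 9)]
sig = [math.exp(-t / 5.0) for t in tes]
t2 = weighted_t2_fit(tes, sig)
```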

  7. CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Magnotti, Gaetano

    2010-01-01

    The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with noise and without it, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least squares fitting of signal, as opposed to least squares fitting signal or square-root signal, was shown to produce the least random error and minimize bias error in the fitted parameters.

  8. A study of data analysis techniques for the multi-needle Langmuir probe

    NASA Astrophysics Data System (ADS)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.

    2018-06-01

    In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique compares well against measurements from incoherent scatter radar and other in situ instruments, the probes could be made longer and cleaned during operation to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes is deployed.

  9. Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting

    NASA Astrophysics Data System (ADS)

    Yan, Y. T.; Cai, Y.

    2006-03-01

    A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components: R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
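
The core numerical step (order the SVD modes of the response matrix and keep only the dominant ones when solving the least-squares system) can be sketched as follows; the 3×2 system is a toy stand-in for the accelerator response matrix.

```python
import numpy as np

def svd_least_squares(A, b, n_modes):
    """Least-squares solve keeping only the n_modes dominant SVD modes.

    Ordering the singular values and truncating the weak modes of the
    response (derivative) matrix suppresses ill-conditioned directions,
    the idea behind SVD-enhanced least-squares fitting.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # numpy returns singular values already sorted in descending order;
    # keep the dominant n_modes and invert only those.
    Uk, sk, Vk = U[:, :n_modes], s[:n_modes], Vt[:n_modes, :]
    return Vk.T @ ((Uk.T @ b) / sk)

# Well-posed toy system: with all modes kept this reproduces the ordinary
# least-squares solution exactly.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true
x = svd_least_squares(A, b, n_modes=2)
```

In a realistic application `n_modes` would be chosen by inspecting the singular-value spectrum and dropping modes below a noise threshold.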

  10. Three Perspectives on Teaching Least Squares

    ERIC Educational Resources Information Center

    Scariano, Stephen M.; Calzada, Maria

    2004-01-01

    The method of Least Squares is the most widely used technique for fitting a straight line to data, and it is typically discussed in several undergraduate courses. This article focuses on three developmentally different approaches for solving the Least Squares problem that are suitable for classroom exposition.
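
The classroom closed-form solution referred to above is short enough to state directly; the four data points are a made-up example lying exactly on y = 1 + 2x.

```python
def least_squares_line(xs, ys):
    """Closed-form ordinary least squares fit of y = a + b*x.

    The standard textbook formulas: b = Sxy / Sxx, a = ybar - b * xbar.
    """
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])  # data on y = 1 + 2x
```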

  11. Fitting Prony Series To Data On Viscoelastic Materials

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1995-01-01

    An improved method of fitting Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques, which yield closer correlation with the data than the traditional method. The method involves no assumptions regarding the γ′ᵢ coefficients and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially constrained optimization techniques.
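
A common simplification when fitting a Prony series is to fix the relaxation times (for example, one per decade), which makes the coefficients enter linearly and reduces the fit to ordinary least squares. The sketch below uses that simplification on synthetic data; it is an illustrative baseline, not the constrained optimization formulation of the record above.

```python
import numpy as np

# Fixed relaxation times, one per decade -- a common Prony-series choice.
# With the taus fixed, the coefficients g_i enter linearly, so they can be
# recovered by a single linear least-squares solve.
taus = np.array([0.1, 1.0, 10.0])
t = np.linspace(0.0, 20.0, 200)

# Synthetic relaxation data G(t) = G_inf + sum_i g_i * exp(-t / tau_i)
g_true = np.array([0.5, 0.3, 0.2])
G_inf = 1.0
G = G_inf + np.exp(-np.outer(t, 1.0 / taus)) @ g_true

# Design matrix: a constant column for G_inf plus one exponential per term
X = np.column_stack([np.ones_like(t), np.exp(-np.outer(t, 1.0 / taus))])
coef, *_ = np.linalg.lstsq(X, G, rcond=None)  # [G_inf, g_1, g_2, g_3]
```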

  12. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

    NASA Technical Reports Server (NTRS)

    Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

    1987-01-01

    The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about + or - 20 deg.

  13. Analysis of Learning Curve Fitting Techniques.

    DTIC Science & Technology

    1987-09-01

    1986. 15. Neter, John, and others. Applied Linear Regression Models. Homewood, IL: Irwin, 19-33. 16. SAS User's Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston... lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter et al., Applied Linear Regression Models.

  14. How many spectral lines are statistically significant?

    NASA Astrophysics Data System (ADS)

    Freund, J.

    When experimental line spectra are fitted with least-squares techniques, one frequently does not know whether n or n + 1 lines may be fitted safely. This paper shows how an F-test can be applied to determine the statistical significance of including an extra line in the fitting routine.
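
The F statistic for one extra line compares the drop in chi-square per added parameter with the reduced chi-square of the larger model. The numbers below are hypothetical; in practice the resulting F would be compared with the F distribution at the chosen significance level.

```python
def f_test_extra_line(chi2_n, dof_n, chi2_n1, dof_n1):
    """F statistic for adding one more line to a least-squares fit.

    F = (drop in chi-square per extra parameter) / (reduced chi-square
    of the larger model); a large F means the extra line is significant.
    """
    num = (chi2_n - chi2_n1) / (dof_n - dof_n1)
    den = chi2_n1 / dof_n1
    return num / den

# Hypothetical numbers: adding a 3-parameter line drops chi-square
# from 180 (dof 97) to 95 (dof 94).
F = f_test_extra_line(180.0, 97, 95.0, 94)
```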

  15. Decomposition of mineral absorption bands using nonlinear least squares curve fitting: Application to Martian meteorites and CRISM data

    NASA Astrophysics Data System (ADS)

    Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.

    2011-04-01

    This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model ( Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions show that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
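
Decomposing a continuum-removed spectrum into Gaussian bands with a Levenberg-Marquardt solver can be sketched with SciPy's `curve_fit`, which uses Levenberg-Marquardt by default for unconstrained problems. The two-band synthetic spectrum and initial guesses below are illustrative, not CRISM data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian absorption bands (depth, center, width)."""
    return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

# Synthetic continuum-removed spectrum with two overlapping bands
# (negative amplitudes: absorptions below the continuum)
x = np.linspace(0.8, 1.2, 400)
true = (-0.3, 0.95, 0.03, -0.2, 1.05, 0.04)
y = two_gaussians(x, *true)

# Levenberg-Marquardt fit, started from rough initial guesses such as an
# automated parameter-initialization step might supply
p0 = (-0.2, 0.94, 0.02, -0.1, 1.06, 0.05)
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
```

With noisy data the quality of `p0` matters far more, which is why the record above emphasizes automated parameter initialization and its sensitivity analysis.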

  16. Weighted spline based integration for reconstruction of freeform wavefront.

    PubMed

    Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra

    2018-02-10

    In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from the slope data has been implemented. The slope data of a freeform surface contain noise due to their machining process and that introduces reconstruction error. We have proposed a weighted cubic spline based least square integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted into a piecewise polynomial. The fitted coefficients are determined by using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least squares technique to reconstruct the freeform wavefront. Simulation studies show the improved result using the proposed technique as compared to the existing cubic spline-based integration (CSLI) and the Southwell methods. The proposed reconstruction method has been experimentally implemented to a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology application.

  17. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  18. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by the accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths is examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique.
This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
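
The graphical inverse-rate technique for α = 2 amounts to fitting a straight line to 1/rate and reading the eruption time where it crosses the time axis. The sketch below does this on exact synthetic data generated from the α = 2 relation, d(rate)/dt = A·rate²; the rate coefficient and onset time are made-up values.

```python
# Inverse-rate graphical technique for alpha = 2: the reciprocal of the
# precursor rate decreases linearly in time, and the eruption onset is read
# off where the fitted line crosses the time axis.
A, t_f = 0.5, 100.0                          # hypothetical coefficient, onset (days)
ts = list(range(0, 90, 5))                   # observation times before onset
rates = [1.0 / (A * (t_f - t)) for t in ts]  # solution of d(rate)/dt = A*rate^2
inv = [1.0 / r for r in rates]               # inverse rate: A*(t_f - t), linear

# Least-squares line through (ts, inv), then extrapolate to zero
n = len(ts)
tbar = sum(ts) / n
ibar = sum(inv) / n
slope = sum((t - tbar) * (y - ibar) for t, y in zip(ts, inv)) \
      / sum((t - tbar) ** 2 for t in ts)
intercept = ibar - slope * tbar
t_predicted = -intercept / slope             # line crosses zero at the onset time
```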

  19. The effects of ionic strength and organic matter on virus inactivation at low temperatures: general likelihood uncertainty estimation (GLUE) as an alternative to least-squares parameter optimization for the fitting of virus inactivation models

    NASA Astrophysics Data System (ADS)

    Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin

    2017-06-01

    This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
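
A minimal GLUE sketch for a constant inactivation model: sample the rate constant from a uniform prior, score each sample with an informal likelihood against the log-concentration data, keep the "behavioural" samples, and summarize them. The prior range, behavioural threshold, and triangular weighting below are illustrative choices, not those of the study.

```python
import random

def glue_fit_k(times, logc, n_samples=20000, sse_max=0.05, seed=1):
    """GLUE-style Monte-Carlo estimate of a constant inactivation rate k.

    Draws k from a uniform prior, computes the sum of squared errors of
    log10 C(t) = log10 C0 - k*t against the data, keeps 'behavioural'
    samples (SSE below a threshold), and returns their weighted mean.
    """
    rng = random.Random(seed)
    ks, ws = [], []
    for _ in range(n_samples):
        k = rng.uniform(0.0, 1.0)
        sse = sum((logc[0] - k * t - c) ** 2 for t, c in zip(times, logc))
        if sse < sse_max:                      # behavioural sample
            ks.append(k)
            ws.append(1.0 - sse / sse_max)     # informal likelihood weight
    mean_k = sum(k * w for k, w in zip(ks, ws)) / sum(ws)
    return mean_k, len(ks)

# Synthetic first-order inactivation: log10 C(t) = 3 - 0.1 t over 60 days
times = [0, 10, 20, 30, 40, 50, 60]
logc = [3.0 - 0.1 * t for t in times]
k_est, n_behavioural = glue_fit_k(times, logc)
```

The spread of the retained samples (not shown) is what GLUE reports as parameter uncertainty, which is how the study compares it against least-squares confidence intervals.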

  20. Full two-dimensional transient solutions of electrothermal aircraft blade deicing

    NASA Technical Reports Server (NTRS)

    Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.; Leffel, K. L.

    1985-01-01

    Two finite difference methods are presented for the analysis of transient, two-dimensional responses of an electrothermal de-icer pad of an aircraft wing or blade with attached variable ice layer thickness. Both models employ a Crank-Nicholson iterative scheme, and use an enthalpy formulation to handle the phase change in the ice layer. The first technique makes use of a 'staircase' approach, fitting the irregular ice boundary with square computational cells. The second technique uses a body fitted coordinate transform, and maps the exact shape of the irregular boundary into a rectangular body, with uniformally square computational cells. The numerical solution takes place in the transformed plane. Initial results accounting for variable ice layer thickness are presented. Details of planned de-icing tests at NASA-Lewis, which will provide empirical verification for the above two methods, are also presented.

  1. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    PubMed

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
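
The dimensionality reduction can be illustrated on a tiny separable model, y = a1·exp(-b·t) + a2: for any candidate value of the nonlinear parameter b, the amplitudes follow from a closed-form linear least-squares solve, so an exhaustive search need only scan the single nonlinear dimension (guaranteeing the global minimum to within the grid spacing). The model and numbers are a toy stand-in for the multi-tracer compartment equations.

```python
import numpy as np

def separable_fit(t, y, b_grid):
    """Separable least-squares fit of y = a1*exp(-b*t) + a2.

    For each candidate nonlinear parameter b, the best linear amplitudes
    (a1, a2) are found by a closed-form least-squares solve, so the
    exhaustive search runs over one dimension instead of three.
    """
    best = None
    for b in b_grid:
        X = np.column_stack([np.exp(-b * t), np.ones_like(t)])
        a = np.linalg.lstsq(X, y, rcond=None)[0]
        r = y - X @ a
        sse = float(r @ r)
        if best is None or sse < best[0]:
            best = (sse, b, a)
    return best[1], best[2]

# Noise-free synthetic curve with b = 0.7, a1 = 2.0, a2 = 0.5
t = np.linspace(0.0, 10.0, 50)
y = 2.0 * np.exp(-0.7 * t) + 0.5
b_hat, a_hat = separable_fit(t, y, np.arange(0.05, 2.0, 0.01))
```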

  2. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  3. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    The Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial providing the least-squares best fit to uniformly spaced data. It enables the user to specify either the tolerable least-squares error in the fit or the degree of the polynomial. AKLSQF returns the polynomial and the actual least-squares-fit error incurred in the operation. Data are supplied to the routine either by direct keyboard entry or via a file. Written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler.

  4. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed Central

    Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888

  5. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The considered research had the objective to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the evolved true statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to delete the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.

  6. Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber

    NASA Astrophysics Data System (ADS)

    Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit

    2007-10-01

    This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated due to background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution caused by a single charged particle over the strips, and classifies the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
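
The flavor of the approach (re-fit with weights that suppress contaminated hits) can be conveyed by a generic iteratively reweighted least-squares line fit; the residual-based weight update below is a simplified stand-in for the paper's cluster-classification weights, and the track data are invented.

```python
def irls_line_fit(xs, ys, n_iter=10, eps=1e-6):
    """Iteratively reweighted least-squares straight-track fit.

    Each pass fits y = a + b*x with per-hit weights, then updates the
    weights from the residuals (w = 1 / (eps + r^2)) so contaminated
    hits are progressively de-emphasized.
    """
    w = [1.0] * len(xs)
    for _ in range(n_iter):
        W = sum(w)
        xbar = sum(wi * x for wi, x in zip(w, xs)) / W
        ybar = sum(wi * y for wi, y in zip(w, ys)) / W
        b = sum(wi * (x - xbar) * (y - ybar) for wi, x, y in zip(w, xs, ys)) \
          / sum(wi * (x - xbar) ** 2 for wi, x in zip(w, xs))
        a = ybar - b * xbar
        w = [1.0 / (eps + (y - (a + b * x)) ** 2) for x, y in zip(xs, ys)]
    return a, b

# Straight track y = 2 + 0.5x with one badly contaminated hit at x = 3
xs = [0, 1, 2, 3, 4, 5]
ys = [2.0, 2.5, 3.0, 9.0, 4.0, 4.5]
a, b = irls_line_fit(xs, ys)
```

A plain unweighted fit to these points would be pulled visibly toward the outlier; the reweighted fit recovers the clean track parameters.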

  7. Comparison of baseline removal methods for laser-induced breakdown spectroscopy of geological samples

    NASA Astrophysics Data System (ADS)

    Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas

    2016-12-01

    This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. 
The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.

  8. Enhanced data reduction of the velocity data on CETA flight experiment. [Crew and Equipment Translation Aid

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.; Wong, Douglas T.; Tripp, John S.

    1993-01-01

    A newly developed technique for enhanced data reduction provides an improved procedure that makes least-squares minimization possible between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. These initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
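    The core of the estimation step can be sketched as follows, assuming a simplified model in which only the initial displacement and velocity are estimated (the actual reduction also estimated calibration constants); function and variable names are illustrative.

```python
import numpy as np

def estimate_initial_conditions(t, a, t_meas, d_meas):
    """Estimate x0 and v0 by least-squares matching of twice-integrated
    acceleration to sparse displacement measurements taken at different
    (and fewer) times than the acceleration samples."""
    # twice-integrated acceleration (trapezoidal), with zero initial conditions
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    d = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))
    d_at_meas = np.interp(t_meas, t, d)
    # model: d_meas = x0 + v0 * t_meas + d_at_meas
    A = np.vstack([np.ones_like(t_meas), t_meas]).T
    x0, v0 = np.linalg.lstsq(A, d_meas - d_at_meas, rcond=None)[0]
    return x0, v0
```

Because the displacement sensor and the accelerometer need not be sampled at the same instants, the interpolation step is what lets the least-squares fit compare data sets of unequal length.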

  9. Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada

    NASA Technical Reports Server (NTRS)

    Swayze, Gregg; Clark, Roger N.; Kruse, Fred; Sutley, Steve; Gallagher, Andrea

    1992-01-01

    Mineral abundance maps of 18 minerals were made of the Cuprite Mining District using 1990 AVIRIS data and the Multiple Spectral Feature Mapping Algorithm (MSFMA) as discussed in Clark et al. This technique uses least-squares fitting between a scaled laboratory reference spectrum and ground calibrated AVIRIS data for each pixel. Multiple spectral features can be fitted for each mineral and an unlimited number of minerals can be mapped simultaneously. Quality of fit and depth from continuum numbers for each mineral are calculated for each pixel and the results displayed as a multicolor image.

  10. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that least-squares fits uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can fit up to a 100-degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
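    A minimal sketch of AKLSQF's error-tolerance strategy, with NumPy's monomial `polyfit` standing in for the orthogonal-polynomial and Stirling-number machinery described above:

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_deg=20):
    """Raise the polynomial degree, starting from 1, until the summed
    squared fitting error meets the user-specified tolerance."""
    for deg in range(1, max_deg + 1):
        coeffs = np.polyfit(x, y, deg)
        err = np.sum((np.polyval(coeffs, x) - y) ** 2)
        if err <= tol:
            return deg, coeffs, err
    return max_deg, coeffs, err
```

For data sampled from a cubic, the loop stops at degree 3, since no lower degree can meet a tight tolerance.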

  11. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles in multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to get a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to get the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of nonlinear regression functions in 2006. In Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".

  12. Computer Calculation of First-Order Rate Constants

    ERIC Educational Resources Information Center

    Williams, Robert C.; Taylor, James W.

    1970-01-01

    Discusses the computer program used to calculate first-order rate constants. Discussion includes data preparation, weighting options, comparison techniques, infinity point adjustment, least-square fit, Guggenheim calculation, and printed outputs. Exemplifies the utility of the computer program by two experiments: (1) the thermal decomposition of…

  13. Interpolating moving least-squares methods for fitting potential energy surfaces: using classical trajectories to explore configuration space.

    PubMed

    Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L

    2009-04-14

    We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis→trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.

  14. Application of recursive approaches to differential orbit correction of near Earth asteroids

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria

    2016-10-01

    Comparison of three approaches to the differential orbit correction of celestial bodies was performed: batch least squares fitting, Kalman filtering, and recursive least squares filtering. The first two techniques are well known and widely used (Montenbruck & Gill, 2000). Most attention is paid to the algorithm and the details of the program realization of the recursive least squares filter. The filter's algorithm was derived based on the recursive least squares technique that is widely used in data processing applications (Simon, 2006). The recursive least squares filter makes it possible to process a new set of observational data without reprocessing the data that has been processed before. A specific feature of this approach is that the number of observations in a data set may be variable. This feature makes the recursive least squares filter more flexible than batch least squares (which processes the complete set of observations in each iteration) and Kalman filtering (which updates the state vector at each epoch with measurements). The advantages of the proposed approach are demonstrated by processing real astrometric observations of near Earth asteroids. The case of 2008 TC3 was studied. 2008 TC3 was discovered just before its impact with Earth. There are many closely spaced observations of 2008 TC3 on the interval between discovery and impact, which creates favorable conditions for the use of recursive approaches. Each of the approaches has very similar precision in the case of 2008 TC3. At the same time, the recursive least squares approaches have much higher performance. Thus, this approach is more favorable for orbit fitting of a celestial body detected shortly before a collision or close approach to the Earth. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, Project no. 14-22-00197. References: O. Montenbruck and E. Gill, "Satellite Orbits, Models, Methods and Applications," Springer-Verlag, 2000, pp. 1-369. D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," 1st edition. Hoboken, N.J.: Wiley-Interscience, 2006.
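    The recursive update underlying such a filter can be sketched in its textbook form (cf. Simon, 2006); this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def rls_update(theta, P, H, z, R=1.0):
    """One recursive least-squares step: fold a new measurement z with
    design row H into the estimate theta (and its covariance P) without
    reprocessing previously handled observations."""
    K = P @ H / (H @ P @ H + R)            # gain vector
    theta = theta + K * (z - H @ theta)    # update estimate
    P = P - np.outer(K, H) @ P             # update covariance
    return theta, P
```

Feeding measurements of a straight line one at a time drives the estimate to the batch least-squares solution, which is the property exploited above for asteroids observed right up to impact.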

  15. Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    1997-01-01

    A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
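    A sketch of this majorization idea for a linear model: each iteration performs only an ordinary least-squares fit, applied to working data in which down-weighted entries are partially replaced by their current fitted values. The weight scaling and update rule follow the standard majorization argument; names are illustrative.

```python
import numpy as np

def wls_via_ols(y, X, w, n_iter=200):
    """Fit the WLS problem min sum w_i (y_i - x_i^T beta)^2 using only
    ordinary least-squares solves on majorization-adjusted data."""
    w = np.asarray(w, float) / np.max(w)           # scale weights into (0, 1]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # plain OLS start
    for _ in range(n_iter):
        y_star = w * y + (1.0 - w) * (X @ beta)    # filled-in working data
        beta = np.linalg.lstsq(X, y_star, rcond=None)[0]
    return beta
```

At the fixed point, the OLS normal equations applied to the working data reduce exactly to the weighted normal equations, so the iterate converges to the direct WLS solution.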

  16. Use of high-order spectral moments in Doppler weather radar

    NASA Astrophysics Data System (ADS)

    di Vito, A.; Galati, G.; Veredice, A.

    Three techniques to estimate the skewness and kurtosis of measured precipitation spectra are evaluated. These are: (1) an extension of the pulse-pair technique, (2) fitting the autocorrelation function with a least-squares polynomial and differentiating it, and (3) autoregressive spectral estimation. The third technique provides the best results but has an exceedingly large computational burden. The first technique does not supply any useful results due to the crude approximation of the derivatives of the ACF. The second technique requires further study to reduce its variance.

  17. Analysis of the multigroup model for muon tomography based threat detection

    NASA Astrophysics Data System (ADS)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-01

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.

  18. Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2011-01-01

    The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…

  19. Quantitative weaknesses of the Marcus-Hush theory of electrode kinetics revealed by Reverse Scan Square Wave Voltammetry: The reduction of 2-methyl-2-nitropropane at mercury microelectrodes

    NASA Astrophysics Data System (ADS)

    Laborda, Eduardo; Wang, Yijun; Henstridge, Martin C.; Martínez-Ortiz, Francisco; Molina, Angela; Compton, Richard G.

    2011-08-01

    The Marcus-Hush and Butler-Volmer kinetic electrode models are compared experimentally by studying the reduction of 2-methyl-2-nitropropane in acetonitrile at mercury microelectrodes using Reverse Scan Square Wave Voltammetry. This technique is found to be very sensitive to the electrode kinetics and to permit critical comparison of the two models. The Butler-Volmer model satisfactorily fits the experimental data whereas Marcus-Hush does not quantitatively describe this redox system.

  20. Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.

    ERIC Educational Resources Information Center

    Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman

    1998-01-01

    Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…

  1. A simple method for processing data with least square method

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning

    2017-08-01

    The least square method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis, and experimental data fitting, and a standard tool for statistical inference. In measurement data analysis, fitting is usually carried out under the least square principle, i.e., matrix methods are used to obtain the final estimate and to improve its accuracy. In this paper, a new method of solution is presented which is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is described by a concrete example.
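    For the straight-line case, the algebraic least-squares solution reduces to a handful of sums with no matrix machinery; a minimal sketch in the spirit of the straightforward approach described above:

```python
def line_fit_algebraic(x, y):
    """Closed-form least-squares line y = slope*x + intercept computed
    from plain sums, without forming or inverting any matrices."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

For data lying exactly on a line, the formulas return the line's parameters exactly.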

  2. Least-Squares Self-Calibration of Imaging Array Data

    NASA Technical Reports Server (NTRS)

    Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.

    2004-01-01

    When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
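    A toy 1-D sketch of solving such a model by alternating least squares (offsets only; the paper's solver also handles pixel gains and poses the problem as one joint least-squares system):

```python
import numpy as np

def self_calibrate(frames, dithers, n_sky, n_iter=500):
    """Alternating least-squares self-calibration for dithered data with
    model frames[f, p] = sky[dithers[f] + p] + offset[p]. Solving
    alternately for offsets and sky drives the model residual to the
    least-squares minimum."""
    n_frames, n_pix = frames.shape
    pix = np.arange(n_pix)
    sky = np.zeros(n_sky)
    off = np.zeros(n_pix)
    for _ in range(n_iter):
        # best offsets, given the current sky estimate
        off = np.mean([frames[f] - sky[dithers[f] + pix]
                       for f in range(n_frames)], axis=0)
        off -= off.mean()                  # fix the arbitrary zero level
        # best sky, given the current offsets
        num, cnt = np.zeros(n_sky), np.zeros(n_sky)
        for f in range(n_frames):
            np.add.at(num, dithers[f] + pix, frames[f] - off)
            np.add.at(cnt, dithers[f] + pix, 1.0)
        sky = num / np.maximum(cnt, 1.0)
    return sky, off
```

The mean-subtraction step fixes the one degenerate mode (a constant can move freely between sky and offsets), which is why dithered self-calibration needs no separate calibration frames.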

  3. Analysis method for Thomson scattering diagnostics in GAMMA 10/PDX.

    PubMed

    Ohta, K; Yoshikawa, M; Yasuhara, R; Chikatsu, M; Shima, Y; Kohagura, J; Sakamoto, M; Nakasima, Y; Imai, T; Ichimura, M; Yamada, I; Funaba, H; Minami, T

    2016-11-01

    We have developed an analysis method to improve the accuracy of electron temperature measurement by employing a fitting technique for the raw Thomson scattering (TS) signals. Least-squares fitting of the raw TS signals enabled reduction of the error in the electron temperature measurement. We applied the analysis method to a multi-pass (MP) TS system. Because the interval between the MPTS signals is very short, it is difficult to analyze each Thomson scattering signal intensity separately by using the raw signals. We used the fitting method to recover the original TS scattering signals from the measured raw MPTS signals and thereby obtain the electron temperatures in each pass.

  4. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.

  5. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects for both homogeneous and non-homogeneous water bodies provided accurate data can be obtained and nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to insure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least square fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.

  6. Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.

    PubMed

    VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T

    2017-06-01

    The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface and to compare the radius of curvature in the horizontal and vertical meridians and test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. Radius of curvature was compared across different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature differed significantly between the horizontal and vertical meridians only for the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. 
The foveal contour can be modeled as a sphere with low curve-fitting error over a limited distance, and the technique is capable of detecting subtle foveal contour differences between meridians.

  7. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and then applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
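    The described cycle can be sketched for a decay model y = A·exp(-k·t): linearize by a first-order Taylor expansion about the current estimates, solve the resulting linear least-squares problem for a correction, and iterate. A fixed iteration count stands in for the predetermined convergence criterion.

```python
import numpy as np

def fit_exponential(t, y, p0, n_iter=25):
    """Gauss-Newton style fit of y = A*exp(-k*t): each pass solves the
    normal equations of the Taylor-linearized problem for a correction
    (dA, dk) and applies it to the nominal estimates."""
    A, k = p0
    for _ in range(n_iter):
        model = A * np.exp(-k * t)
        # Jacobian columns: d(model)/dA and d(model)/dk
        J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
        dA, dk = np.linalg.solve(J.T @ J, J.T @ (y - model))
        A, k = A + dA, k + dk
    return A, k
```

As the abstract notes, the iteration needs meaningful nominal estimates; starting too far from the solution can prevent convergence.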

  8. Fitting aerodynamic forces in the Laplace domain: An application of a nonlinear nongradient technique to multilevel constrained optimization

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.; Adams, W. M., Jr.

    1984-01-01

    A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.

  9. SU-F-BRD-08: A Novel Technique to Derive a Clinically-Acceptable Beam Model for Proton Pencil-Beam Scanning in a Commercial Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholey, J. E.; Lin, L.; Ainsley, C. G.

    2015-06-15

    Purpose: To evaluate the accuracy and limitations of a commercially-available treatment planning system’s (TPS’s) dose calculation algorithm for proton pencil-beam scanning (PBS) and present a novel technique to efficiently derive a clinically-acceptable beam model. Methods: In-air fluence profiles of PBS spots were modeled in the TPS alternately as single-(SG) and double-Gaussian (DG) functions, based on fits to commissioning data. Uniform-fluence, single-energy-layer square fields of various sizes and energies were calculated with both beam models and delivered to water. Dose was measured at several depths. Motivated by observed discrepancies in measured-versus-calculated dose comparisons, a third model was constructed based on double-Gaussian parameters contrived through a novel technique developed to minimize these differences (DGC). Eleven cuboid-dose-distribution-shaped fields with varying range/modulation and field size were subsequently generated in the TPS, using each of the three beam models described, and delivered to water. Dose was measured at the middle of each spread-out Bragg peak. Results: For energies <160 MeV, the DG model fit square-field measurements to <2% at all depths, while the SG model could disagree by >6%. For energies >160 MeV, both SG and DG models fit square-field measurements to <1% at <4 cm depth, but could exceed 6% deeper. By comparison, disagreement with the DGC model was always <3%. For the cuboid plans, calculation-versus-measured percent dose differences exceeded 7% for the SG model, being larger for smaller fields. The DG model showed <3% disagreement for all field sizes in shorter-range beams, although >5% differences for smaller fields persisted in longer-range beams. In contrast, the DGC model predicted measurements to <2% for all beams. Conclusion: Neither the TPS’s SG nor DG models, employed as intended, are ideally suited for routine clinical use. 
However, via a novel technique to be presented, its DG model can be tuned judiciously to yield acceptable results.

  10. Monte Carlo analysis for the determination of the conic constant of an aspheric micro lens based on a scanning white light interferometric measurement

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon A.; Davies, Angela

    2005-08-01

    Characterizing an aspheric micro lens is critical for understanding the performance and providing feedback to the manufacturing. We describe a method to find the best-fit conic of an aspheric micro lens using a least squares minimization and Monte Carlo analysis. Our analysis is based on scanning white light interferometry measurements, and we compare the standard rapid technique where a single measurement is taken of the apex of the lens to the more time-consuming stitching technique where more surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties and a Monte Carlo process is used to iterate the minimization process. Eleven measurements were taken, and data are also chosen randomly from the group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.
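    The Monte Carlo loop can be sketched as follows, perturbing a single uncertain input for brevity (the study varies four parameters and also resamples among repeated measurements); the sag formula is the standard conic expression.

```python
import numpy as np

def conic_sag(r, R, k):
    """Sag of a conic surface with base radius R and conic constant k."""
    return r ** 2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + k) * r ** 2 / R ** 2)))

def monte_carlo_conic(r, z_meas, R0, sigma_R, n_trials=200, seed=0):
    """Each trial draws the uncertain base radius from its distribution and
    redoes a chi-square scan over the conic constant; the mean and spread of
    the best-fit k values give the estimate and its combined uncertainty."""
    rng = np.random.default_rng(seed)
    ks = np.linspace(-2.0, 0.5, 501)
    best = np.empty(n_trials)
    for i in range(n_trials):
        R = rng.normal(R0, sigma_R)
        chi2 = ((z_meas[None, :]
                 - conic_sag(r[None, :], R, ks[:, None])) ** 2).sum(axis=1)
        best[i] = ks[np.argmin(chi2)]
    return best.mean(), best.std()
```

With synthetic data from a known conic, the mean of the distribution recovers the true conic constant and the width reflects how strongly the radius uncertainty propagates into k.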

  11. Meta-Analytic Methods of Pooling Correlation Matrices for Structural Equation Modeling under Different Patterns of Missing Data

    ERIC Educational Resources Information Center

    Furlow, Carolyn F.; Beretvas, S. Natasha

    2005-01-01

    Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for…

  12. A system identification technique based on the random decrement signatures. Part 2: Experimental results

    NASA Technical Reports Server (NTRS)

    Bedewi, Nabih E.; Yang, Jackson C. S.

    1987-01-01

    Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.

  13. Methods of Fitting a Straight Line to Data: Examples in Water Resources

    USGS Publications Warehouse

    Hirsch, Robert M.; Gilroy, Edward J.

    1984-01-01

    Three methods of fitting straight lines to data are described and their purposes are discussed and contrasted in terms of their applicability in various water resources contexts. The three methods are ordinary least squares (OLS), least normal squares (LNS), and the line of organic correlation (OC). In all three methods the parameters are based on moment statistics of the data. When estimation of an individual value is the objective, OLS is the most appropriate. When estimation of many values is the objective and one wants the set of estimates to have the appropriate variance, then OC is most appropriate. When one wishes to describe the relationship between two variables and measurement error is unimportant, then OC is most appropriate. Where the error is important in descriptive problems or in calibration problems, then structural analysis techniques may be most appropriate. Finally, if the problem is one of describing some geographic trajectory, then LNS is most appropriate.
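
    All three slopes are simple functions of the moment statistics of the data; a minimal sketch on synthetic data (true slope, noise level, and sample size hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
y = 2.0 * x + rng.normal(0.0, 0.5, 500)  # true slope 2, noise in y only

sx, sy = x.std(), y.std()
r = np.corrcoef(x, y)[0, 1]

b_ols = r * sy / sx              # OLS: minimizes vertical offsets
b_oc = np.sign(r) * sy / sx      # organic correlation: preserves the variance
# LNS: minimizes perpendicular offsets -> slope of the major axis of the covariance
evals, evecs = np.linalg.eigh(np.cov(x, y))
v = evecs[:, np.argmax(evals)]
b_lns = v[1] / v[0]
```

    Note that the OC slope is the OLS slope divided by |r|, so it is always at least as steep; this is why OC preserves the variance of the estimates while OLS shrinks it.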

  14. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
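
    The same idea, a least-squares cubic spline whose first and second derivatives are continuous, can be sketched with SciPy (data, knot placement, and noise level hypothetical):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)   # noisy data to smooth

# Interior knots play the role of the adjustable "junction values"
knots = np.linspace(x[1], x[-2], 8)
spline = LSQUnivariateSpline(x, y, knots, k=3)  # cubic: continuous 1st/2nd derivatives

rms_error = np.sqrt(np.mean((spline(x) - np.sin(x)) ** 2))
```

    In an interactive setting, moving the knots and refitting is the manipulation the abstract describes.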

  15. Computer program for calculating and fitting thermodynamic functions

    NASA Technical Reports Server (NTRS)

    Mcbride, Bonnie J.; Gordon, Sanford

    1992-01-01

    A computer program is described which (1) calculates thermodynamic functions (heat capacity, enthalpy, entropy, and free energy) for several optional forms of the partition function, (2) fits these functions to empirical equations by means of a least-squares fit, and (3) calculates, as a function of temperature, heats of formation and equilibrium constants. The program provides several methods for calculating ideal gas properties. For monatomic gases, three methods are given which differ in the technique used for truncating the partition function. For diatomic and polyatomic molecules, five methods are given which differ in the corrections to the rigid-rotator harmonic-oscillator approximation. A method for estimating thermodynamic functions for some species is also given.

  16. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…

  17. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
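
    For a model that is linear in its parameters, the WLS estimate and a covariance-aware error formula take a closed ("sandwich") form; a minimal sketch with a hypothetical AR(1)-correlated noise model (the actual WLS-ICE software handles arbitrary fit functions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
t = np.linspace(1.0, 5.0, n)
A = np.column_stack([t, np.ones(n)])  # linear model y = slope*t + intercept

# Correlated noise: growing variance, AR(1)-type correlation (hypothetical)
sigma2 = 0.05 * (1.0 + t)
corr = 0.8 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = np.sqrt(np.outer(sigma2, sigma2)) * corr
y = A @ np.array([2.0, 1.0]) + np.linalg.cholesky(C) @ rng.normal(size=n)

W = np.diag(1.0 / np.diag(C))               # weights from the variances only
H = np.linalg.inv(A.T @ W @ A) @ (A.T @ W)  # weighted least squares "hat" matrix
theta = H @ y                               # WLS point estimate
cov_theta = H @ C @ H.T                     # error estimate using the full covariance
```

    The point estimate ignores the off-diagonal correlations, but the error estimate does not; that separation is the essence of the approach described above.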

  18. Parametrization of electron impact ionization cross sections for CO, CO2, NH3 and SO2

    NASA Technical Reports Server (NTRS)

    Srivastava, Santosh K.; Nguyen, Hung P.

    1987-01-01

    The electron impact ionization and dissociative ionization cross section data of CO, CO2, CH4, NH3, and SO2, measured in the laboratory, were parameterized utilizing an empirical formula based on the Born approximation. For this purpose, a chi-squared minimization technique was employed, which provided an excellent fit to the experimental data.

  19. Numerical Technique for Analyzing Rotating Rake Mode Measurements in a Duct With Passive Treatment and Shear Flow

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sutliff, Daniel L.

    2007-01-01

    A technique is presented for the analysis of measured data obtained from a rotating microphone rake system. The system is designed to measure the interaction modes of ducted fans. A Fourier analysis of the data from the rotating system results in a set of circumferential mode levels at each radial location of a microphone inside the duct. Radial basis functions are then least-squares fit to this data to obtain the radial mode amplitudes. For ducts with soft walls and mean flow, the radial basis functions must be numerically computed. The linear companion matrix method is used to obtain both the eigenvalues of interest, without an initial guess, and the radial basis functions. The governing equations allow for the mean flow to have a boundary layer at the wall. In addition, a nonlinear least-squares method is used to adjust the wall impedance to best fit the data in an attempt to use the rotating system as an in-duct wall impedance measurement tool. Simulated and measured data are used to show the effects of wall impedance and mean flow on the computed results.

  20. Phase-Based Adaptive Estimation of Magnitude-Squared Coherence Between Turbofan Internal Sensors and Far-Field Microphone Signals

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2015-01-01

    A cross-power spectrum phase-based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.

  1. Robust and efficient pharmacokinetic parameter non-linear least squares estimation for dynamic contrast enhanced MRI of the prostate.

    PubMed

    Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J

    2018-05-01

    To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional search to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, convergence robustness, and computation time. The simulation demonstrated that VP and LM were both accurate in that the medians closely matched assumed values across typical signal to noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100%. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being more reliably convergent (100%) and computationally about 3× (TM) and 2× (ETM) faster than the LM-based method. Copyright © 2017 Elsevier Inc. All rights reserved.
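
    The core of Variable Projection is that, for fixed non-linear parameters, the linear parameters have a closed-form solution, so the fit collapses to a low-dimensional search; a minimal sketch on a mono-exponential toy model (not the Tofts model; all values hypothetical):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
t = np.linspace(0.0, 4.0, 60)
y = 3.0 * np.exp(-t / 0.8) + rng.normal(0.0, 0.05, t.size)

def projected_rss(tau):
    # For fixed tau the amplitude enters linearly, so solve it in closed form
    basis = np.exp(-t / tau)
    a = (basis @ y) / (basis @ basis)
    return np.sum((y - a * basis) ** 2)

# The two-parameter fit has been reduced to a 1-D line search over tau
res = minimize_scalar(projected_rss, bounds=(0.1, 5.0), method="bounded")
tau_hat = res.x
```

    Because the search space is one-dimensional, convergence failures of the kind reported for LM above are much harder to trigger.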

  2. Modeling Hurricane Katrina's merchantable timber and wood damage in south Mississippi using remotely sensed and field-measured data

    NASA Astrophysics Data System (ADS)

    Collins, Curtis Andrew

    Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic foot volume (outside bark) and green weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest type model strata. These 36 models were then fit and reported across 10 variable sets and variable set combinations for volume and ton units. Along with large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field-measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data variable types. As part of the modeling process, lone variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model utility is flexible, as not all independent variable data are needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least squares techniques were often employed using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving residual variance homogeneity. Finished model fits, as represented by the coefficient of determination (R2), surpassed 0.5 in numerous models, with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events reoccur.

  3. Dielectric properties of benzylamine in 1,2,6-hexanetriol mixture using time domain reflectometry technique

    NASA Astrophysics Data System (ADS)

    Swami, M. B.; Hudge, P. G.; Pawar, V. P.

    The dielectric properties of binary mixtures of benzylamine and 1,2,6-hexanetriol at different volume fractions of 1,2,6-hexanetriol have been measured using the Time Domain Reflectometry (TDR) technique in the frequency range of 10 MHz to 30 GHz. Complex permittivity spectra were fitted using the Havriliak-Negami equation. Using the least-squares fit method, the dielectric parameters such as the static dielectric constant (ɛ0), the dielectric constant at high frequency (ɛ∞), the relaxation time τ (ps), and the relaxation distribution parameter (β) were extracted from the complex permittivity spectra at 25 °C. The intermolecular interaction of the different molecules has been discussed using the Kirkwood correlation factor and the Bruggeman factor. The Kirkwood correlation factor (gf) and the effective Kirkwood correlation factor (geff) indicate the dipole ordering of the binary mixtures.

  4. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.

  5. A system identification technique based on the random decrement signatures. Part 1: Theory and simulation

    NASA Technical Reports Server (NTRS)

    Bedewi, Nabih E.; Yang, Jackson C. S.

    1987-01-01

    Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The mathematics of the technique is presented in addition to the results of computer simulations conducted to demonstrate the prediction of the response of the system and the random forcing function initially introduced to excite the system.

  6. Two-dimensional wavefront reconstruction based on double-shearing and least squares fitting

    NASA Astrophysics Data System (ADS)

    Liang, Peiying; Ding, Jianping; Zhu, Yangqing; Dong, Qian; Huang, Yuhua; Zhu, Zhen

    2017-06-01

    The two-dimensional wavefront reconstruction method based on double-shearing and least squares fitting is proposed in this paper. Four one-dimensional phase estimates of the measured wavefront, which correspond to the two shears and the two orthogonal directions, can be calculated from the differential phase, which solves the problem of the missing spectrum; the two-dimensional wavefront can then be reconstructed using the least squares method. Numerical simulations of the proposed algorithm are carried out to verify the feasibility of this method. The influence of noise generated from different shear amounts and different intensities on the accuracy of the reconstruction is studied and compared with the results from the algorithm based on single-shearing and least squares fitting. Finally, a two-grating lateral shearing interference experiment is carried out to verify the wavefront reconstruction algorithm based on double-shearing and least squares fitting.

  7. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
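
    The projection-and-truncate idea can be sketched with NumPy's Legendre routines: fitting a low-order Legendre expansion to noisy data acts as a zero-phase-shift filter (signal, noise level, and truncation degree hypothetical):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)
t = np.linspace(-1.0, 1.0, 400)        # natural domain of Legendre polynomials
signal = np.exp(-2.0 * (t + 1.0))      # a single decaying exponential
noisy = signal + rng.normal(0.0, 0.05, t.size)

# Least-squares projection onto the first 9 Legendre polynomials
coef = legendre.legfit(t, noisy, deg=8)
filtered = legendre.legval(t, coef)    # truncation removes noise without phase shift

rms_before = np.sqrt(np.mean((noisy - signal) ** 2))
rms_after = np.sqrt(np.mean((filtered - signal) ** 2))
```

    The low-dimensional coefficient vector is also the natural space in which to fit the exponential parameters, which is the speed advantage the abstract reports.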

  8. An Interactive Computer Package for Use with Simulation Models Which Performs Multidimensional Sensitivity Analysis by Employing the Techniques of Response Surface Methodology.

    DTIC Science & Technology

    1984-12-01

    total sum of squares at the center points minus the correction factor for the mean at the center points (SSpe = Y'Y − n1·Ȳ²), where n1 is the number of ... SSlac = SSres − SSpe). The sum of squares due to pure error estimates σ², and the sum of squares due to lack-of-fit estimates σ² plus a bias term if ... Response Surface Methodology ANOVA (Source, d.f., SS, MS): Regression, n, b'X'Y, b'X'Y/n; Residual, m − n, Y'Y − b'X'Y, (Y'Y − b'X'Y)/(m − n); Pure Error, n1 − 1, Y'Y − n1·Ȳ², SSpe/(n1 ...

  9. Wing Shape Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2015-01-01

    A new two step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, a measured strain is fitted using a piecewise least squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, computed deflection along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept wing model.

  10. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
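
    An orthonormal polynomial basis on uniformly spaced points can be built by QR-factorizing the Vandermonde matrix; its columns behave like the discrete Tchebycheff (Gram) polynomials the abstract describes, and the fit reduces to a simple projection (data and degree hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1.0, 1.0, 101)        # uniformly spaced abscissae
truth = 0.5 - x + 2.0 * x**3
y = truth + rng.normal(0.0, 0.01, x.size)

deg = 5
V = np.vander(x, deg + 1, increasing=True)  # monomial basis (ill-conditioned for high deg)
Q, _ = np.linalg.qr(V)                 # columns: polynomials orthonormal on the grid

coeffs = Q.T @ y                       # least-squares fit by projection, no normal equations
fit = Q @ coeffs
rms = np.sqrt(np.mean((fit - truth) ** 2))
```

    Because the basis is orthonormal on the grid, raising the fit order only appends coefficients instead of re-solving an increasingly ill-conditioned system, which is the roundoff advantage noted above.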

  11. Assessing Fit and Dimensionality in Least Squares Metric Multidimensional Scaling Using Akaike's Information Criterion

    ERIC Educational Resources Information Center

    Ding, Cody S.; Davison, Mark L.

    2010-01-01

    Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…

  12. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904

  13. Automatic techniques for 3D reconstruction of critical workplace body postures from range imaging data

    NASA Astrophysics Data System (ADS)

    Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin

    2013-11-01

    The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.

  14. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  15. A method for modeling discontinuities in a microwave coaxial transmission line

    NASA Technical Reports Server (NTRS)

    Otoshi, T. Y.

    1992-01-01

    A method for modeling discontinuities in a coaxial transmission line is presented. The methodology involves the use of a nonlinear least-squares fit program to optimize the fit between theoretical data (from the model) and experimental data. When this method was applied to modeling discontinuities in a slightly damaged Galileo spacecraft S-band (2.295-GHz) antenna cable, excellent agreement between theory and experiment was obtained over a frequency range of 1.70-2.85 GHz. The same technique can be applied for diagnostics and locating unknown discontinuities in other types of microwave transmission lines, such as rectangular, circular, and beam waveguides.

  16. A method for modeling discontinuities in a microwave coaxial transmission line

    NASA Astrophysics Data System (ADS)

    Otoshi, T. Y.

    1992-08-01

    A method for modeling discontinuities in a coaxial transmission line is presented. The methodology involves the use of a nonlinear least-squares fit program to optimize the fit between theoretical data (from the model) and experimental data. When this method was applied to modeling discontinuities in a slightly damaged Galileo spacecraft S-band (2.295-GHz) antenna cable, excellent agreement between theory and experiment was obtained over a frequency range of 1.70-2.85 GHz. The same technique can be applied for diagnostics and locating unknown discontinuities in other types of microwave transmission lines, such as rectangular, circular, and beam waveguides.

  17. Four-Dimensional Golden Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.

    2015-02-25

    The Golden search technique is a method to search a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets, to within an arbitrarily small distance, the minimum. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness-of-fit parameter such as chi-square, the convergence does not depend on the noise being correctly estimated or the function correctly following the chi-square statistic. And, (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
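
    A one-dimensional version of the interval-subdivision step is easy to state; a minimal golden-section search sketch, with the four-dimensional search understood as nesting this bracketing per parameter (test function hypothetical):

```python
import math

def golden_min(f, a, b, tol=1e-8):
    # Repeatedly shrink [a, b] by the golden ratio until the minimum of a
    # unimodal f is bracketed to within tol; no derivatives needed, and
    # convergence depends on neither the magnitude nor the shape of f.
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

x_min = golden_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

    Note the convergence test is on the bracket width, not on the change in f, which is why long shallow surfaces do not cause premature convergence.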

  18. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting.
The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain approximately 10,000 references; this does not include those who use it but do not know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand, the downward gradient methods have a much wider domain of convergence but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on.
Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive non-linear least squares fits to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
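
One way to realize this idea with an off-the-shelf L-M driver is to feed it signed square roots of the per-bin Poisson deviance terms, so the least-squares machinery minimizes the Poisson MLE measure; a minimal sketch (model and values hypothetical, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
t = np.linspace(0.0, 5.0, 100)

def model(p, t):
    return p[0] * np.exp(-t / p[1]) + p[2]  # amplitude, lifetime, background

counts = rng.poisson(model([100.0, 1.5, 5.0], t))  # Poisson-distributed histogram

def deviance_residuals(p):
    # The sum of squares of these residuals equals the Poisson deviance,
    # so a standard Levenberg-Marquardt driver minimizes the MLE measure.
    mu = np.clip(model(p, t), 1e-12, None)
    safe = np.where(counts > 0, counts, 1)  # avoid log(0); the counts=0 term is mu
    term = counts * np.log(safe / mu) - (counts - mu)
    return np.sign(counts - mu) * np.sqrt(2.0 * np.maximum(term, 0.0))

fit = least_squares(deviance_residuals, x0=[50.0, 1.0, 1.0], method="lm")
```

This keeps L-M's wide convergence domain and speed while removing the low-count bias of the plain least-squares measure.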

  19. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. It also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point, depending on a simple count of its intentionally loaded load components or gages: the more intentionally loaded components or gages a data point has, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare-corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input. Machine calibration data of a six-component force balance are used to illustrate the benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper, as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
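The core mechanics — per-point weights in (0, 1] folded into an ordinary least squares solve — can be sketched as follows. The data, the 1/k weighting rule, and the parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical calibration data: rows of X are applied load combinations,
# y is the gage response (all values synthetic).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=40)

# One plausible weighting: w = 1/k for k intentionally loaded components,
# so multi-component points get less influence (assumed rule for the sketch).
n_loaded = rng.integers(1, 4, size=40)
w = 1.0 / n_loaded

# Weighted least squares = ordinary least squares on sqrt(w)-scaled rows.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
```

Scaling each row by the square root of its weight reduces WLS to a standard OLS solve, which is why weighting slots cleanly into either regression method described above.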

  20. Sediment data sources and estimated annual suspended-sediment loads of rivers and streams in Colorado

    USGS Publications Warehouse

    Elliott, J.G.; DeFeyter, K.L.

    1986-01-01

Sources of sediment data collected by several government agencies through water year 1984 are summarized for Colorado. The U.S. Geological Survey has collected suspended-sediment data at 243 sites; these data are stored in the U.S. Geological Survey's water data storage and retrieval system. The U.S. Forest Service has collected suspended-sediment and bedload data at an additional 225 sites, and most of these data are stored in the U.S. Environmental Protection Agency's water-quality-control information system. Additional unpublished sediment data are in the possession of the collecting entities. Annual suspended-sediment loads were computed for 133 U.S. Geological Survey sediment-data-collection sites using the daily mean water-discharge/sediment-transport-curve method. Sediment-transport curves were derived for each site by one of three techniques: (1) least-squares linear regression of all pairs of suspended-sediment and corresponding water-discharge data; (2) least-squares linear regression of data sets subdivided on the basis of hydrograph season; and (3) graphical fit to a logarithm-logarithm plot of the data. The curve-fitting technique used for each site depended on site-specific characteristics. Sediment-data sources and estimates of annual loads of suspended, bed, and total sediment from several other reports also are summarized. (USGS)
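The first curve-fitting technique (least-squares regression of log-transformed discharge/sediment pairs, i.e. a power-law rating curve) can be sketched with synthetic data; the coefficient and exponent below are invented for illustration:

```python
import numpy as np

# Hypothetical power-law sediment rating curve Qs = a * Qw**b, fitted as a
# straight line in log-log space (data are synthetic, not from the report).
rng = np.random.default_rng(2)
qw = rng.uniform(10.0, 1000.0, 50)                        # water discharge
qs = 0.05 * qw**1.5 * np.exp(0.1 * rng.normal(size=50))   # suspended load

slope, intercept = np.polyfit(np.log10(qw), np.log10(qs), 1)
exponent_b = slope               # power-law exponent
coefficient_a = 10.0**intercept  # power-law coefficient
```

Fitting in log-log space turns the multiplicative scatter typical of sediment data into additive residuals, which is what makes ordinary least squares appropriate here.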

  1. AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods

    NASA Technical Reports Server (NTRS)

    Crowley, J. K.; Clark, R. N.

    1992-01-01

Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990, and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are compared over selected wavelength intervals to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times the band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite minerals as well as spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.
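The band-fitting step can be sketched in miniature: scale a continuum-removed library band to an observed spectrum over one wavelength window by least squares, then report fit quality and band depth. The Gaussian "band" shape and all numbers below are illustrative stand-ins, not AVIRIS data:

```python
import numpy as np

# Minimal band-fitting sketch over one wavelength window (values synthetic).
wl = np.linspace(2.0, 2.4, 50)
reference = 1.0 - 0.30 * np.exp(-((wl - 2.2) / 0.03) ** 2)  # library spectrum
observed = 1.0 - 0.18 * np.exp(-((wl - 2.2) / 0.03) ** 2)   # pixel spectrum

# Least-squares depth scaling: observed ≈ 1 - s * (1 - reference).
ref_band = 1.0 - reference                  # continuum-removed library band
s, *_ = np.linalg.lstsq(ref_band[:, None], 1.0 - observed, rcond=None)
band_depth = s[0] * 0.30                    # scaled depth of the fitted band
fit_quality = np.corrcoef(reference, observed)[0, 1]
```

Maps of `fit_quality`, `band_depth`, and their product over every pixel correspond to the three output images the abstract describes.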

  2. The stress intensity factor for the double cantilever beam

    NASA Technical Reports Server (NTRS)

    Fichter, W. B.

    1983-01-01

Fourier transforms and the Wiener-Hopf technique are used in conjunction with plane elastostatics to examine the singular crack tip stress field in the double cantilever beam (DCB) specimen. In place of the Dirac delta function, a family of functions which duplicates the important features of the concentrated forces without introducing unmanageable mathematical complexities is used as a loading function. With terms of order h^2/a^2 retained in the series expansion, the dimensionless stress intensity factor is found to be K h^(1/2)/P = 12^(1/2) (a/h + 0.6728 + 0.0377 h^2/a^2), in which P is the magnitude of the concentrated forces per unit thickness, a is the distance from the crack tip to the points of load application, and h is the height of each cantilever beam. The result is similar to that obtained by Gross and Srawley by fitting a line to discrete results from their boundary collocation analysis.
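The closed-form series result is easy to evaluate directly; a small helper (function name and sample inputs are ours, the formula is the abstract's):

```python
import math

def dcb_stress_intensity(P, a, h):
    """Stress intensity factor for the DCB specimen from the series result
    K = (P / sqrt(h)) * sqrt(12) * (a/h + 0.6728 + 0.0377 * h**2 / a**2),
    with P the force per unit thickness, a the crack length to the load
    points, and h the height of each cantilever beam."""
    return (P / math.sqrt(h)) * math.sqrt(12.0) * (
        a / h + 0.6728 + 0.0377 * h**2 / a**2)
```

The leading a/h term dominates for slender specimens; the 0.6728 and 0.0377 h^2/a^2 terms are the corrections beyond simple beam theory.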

  3. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses; otherwise the fit can produce artifacts. For known fit models and with user experience of the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyzing FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and at the same time computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented, and the robustness of PGSL for parameter estimation on noisy experimental data is verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
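The hybrid global-then-local idea can be sketched with a crude random-sampling stage standing in for PGSL (whose actual sampling scheme is more sophisticated), seeding a Marquardt-Levenberg refinement. The one-component autocorrelation model and all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy one-component FCS autocorrelation model (no triplet term).
def g_fcs(tau, n, tau_d):
    return (1.0 / n) / (1.0 + tau / tau_d)

rng = np.random.default_rng(3)
tau = np.logspace(-6, 0, 100)
data = g_fcs(tau, 2.0, 1e-3) + 1e-4 * rng.normal(size=tau.size)

# Stage 1: global sampling of the parameter box -- no initial guess needed.
cands = rng.uniform([0.1, 1e-5], [10.0, 1e-1], size=(2000, 2))
mse = [np.mean((g_fcs(tau, n, td) - data) ** 2) for n, td in cands]
n0, td0 = cands[int(np.argmin(mse))]

# Stage 2: local refinement (curve_fit defaults to Levenberg-Marquardt here).
popt, _ = curve_fit(g_fcs, tau, data, p0=[n0, td0])
n_fit, tau_d_fit = popt
```

The global stage removes the dependence on user-supplied initial guesses; the local stage recovers the precision of ML once a good basin has been found, which is the logic of the hybrid analysis in the abstract.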

  4. Semivariogram modeling by weighted least squares

    USGS Publications Warehouse

    Jian, X.; Olea, R.A.; Yu, Y.-S.

    1996-01-01

Permissible semivariogram models are fundamental for geostatistical estimation and simulation of attributes having a continuous spatiotemporal variation. The usual practice is to fit those models manually to experimental semivariograms. Fitting by weighted least squares produces comparable results to fitting manually in less time, systematically, and provides an Akaike information criterion for the proper comparison of alternative models. We illustrate the application of a computer program with examples showing the fitting of simple and nested models. Copyright © 1996 Elsevier Science Ltd.
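A minimal sketch of the idea: fit a spherical semivariogram model to experimental points by weighted least squares, weighting each lag by its pair count (one common weighting choice; the lag values, semivariances, and pair counts below are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

# Spherical semivariogram model: rises to nugget + sill at range a.
def spherical(h, nugget, sill, a):
    h = np.asarray(h, dtype=float)
    rising = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, rising, nugget + sill)

lags = np.array([50.0, 100, 150, 200, 250, 300, 350, 400])
gamma = spherical(lags, 0.1, 1.0, 300.0)           # experimental semivariances
npairs = np.array([900.0, 850, 800, 700, 600, 500, 400, 300])

# sigma is treated as per-point standard deviation, so 1/sqrt(pairs)
# gives better-sampled lags more weight in the fit.
popt, _ = curve_fit(spherical, lags, gamma, p0=[0.2, 0.8, 250.0],
                    sigma=1.0 / np.sqrt(npairs))
nugget_fit, sill_fit, range_fit = popt
```

Repeating such a fit for each candidate model and comparing information criteria is the systematic model-selection workflow the abstract describes.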

  5. Recognition of a porphyry system using ASTER data in Bideghan - Qom province (central of Iran)

    NASA Astrophysics Data System (ADS)

    Feizi, F.; Mansouri, E.

    2014-07-01

The Bideghan area is located south of the Qom province (central Iran). The most impressive geological features in the study area are the Eocene sequences, which are intruded by volcanic rocks with basic compositions. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image processing has been used for hydrothermal alteration mapping and lineament identification in the investigated area. In this research, false color composite, band ratio, Principal Component Analysis (PCA), Least Square Fit (LS-Fit) and Spectral Angle Mapper (SAM) techniques were applied to ASTER data, and argillic, phyllic, iron-oxide and propylitic alteration zones were separated. Lineaments were identified with the aid of false color composite, high-pass filter and hill-shade DEM techniques. The results of this study demonstrate the usefulness of remote sensing methods and ASTER multi-spectral data for alteration and lineament mapping. Finally, the results were confirmed by field investigation.

  6. University Capstone Project: Enhanced Initiation Techniques for Thermochemical Energy Conversion

    DTIC Science & Technology

    2013-03-01

technologies such as scramjets, gas turbine engines (relight and afterburner ignition), and pulsed detonation engines (PDEs) because of the limited...events in a flow tube were recorded, and the PDE engine was fired while monitoring ignition time and wave speed throughout the detonation process...long steel tube fitted with a 36" long, 2" x 2" square polycarbonate test section is used in place of the instrumented detonation tube. The PDE

  7. Fit between Africa and Antarctica: A Continental Drift Reconstruction.

    PubMed

    Dietz, R S; Sproll, W P

    1970-03-20

A computerized (smallest average misfit) best fit position is obtained for the juxtaposition of Africa and Antarctica in a continental drift reconstruction. An S-shaped portion of the Weddell and Princess Martha Coast regions of western East Antarctica is fitted into a similar profile along southeastern Africa. The total amount of overlap is 36,300 square kilometers, and the underlap is 23,600 square kilometers; the total mismatch is thus 59,900 square kilometers. The congruency along the 1000-fathom isobath is remarkably good and suggests that this reconstruction is valid within the overall framework of the Gondwana supercontinent.

  8. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration-curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration-curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration-curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error-relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.

  9. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration-curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration-curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration-curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error-relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
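The central idea of these two records — fitting the standard masses as parameters, penalized by their known uncertainty, alongside the calibration-curve parameters — can be sketched with made-up numbers (a linear calibration curve, a 0.5-count system error, and the stated 0.2% mass uncertainty are all assumptions of the sketch):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
m_nom = np.array([0.1, 0.25, 0.5, 0.75, 1.0])        # nominal masses, mg
m_true = m_nom * (1.0 + 0.002 * rng.normal(size=5))  # actual masses
counts = 1000.0 * m_true + 5.0 + 0.5 * rng.normal(size=5)

def residuals(p):
    a, b = p[:2]            # linear calibration-curve parameters
    m = p[2:]               # fitted masses of the standards
    r_sys = (counts - (a * m + b)) / 0.5       # system-error weighting
    r_mass = (m - m_nom) / (0.002 * m_nom)     # mass-error weighting
    return np.concatenate([r_sys, r_mass])

fit = least_squares(residuals, np.concatenate([[900.0, 0.0], m_nom]))
a_hat, b_hat = fit.x[:2]
```

Because both residual groups are scaled by their own standard errors, the two error sources enter the objective "in a consistent way", as the abstract puts it.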

  10. TH-E-BRF-09: Gaussian Mixture Model Analysis of Radiation-Induced Parotid-Gland Injury: An Ultrasound Study of Acute and Late Xerostomia in Head-And-Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Yu, D; Beitler, J

Purpose: Xerostomia (dry mouth), secondary to parotid-gland injury, is a distressing side-effect in head-and-neck radiotherapy (RT). This study's purpose is to develop a novel ultrasound technique to quantitatively evaluate post-RT parotid-gland injury. Methods: Recent ultrasound studies have shown that healthy parotid glands exhibit homogeneous echotexture, whereas post-RT parotid glands are often heterogeneous, with multiple hypoechoic (inflammation) or hyperechoic (fibrosis) regions. We propose to use a Gaussian mixture model to analyze the ultrasonic echo-histogram of the parotid glands. An IRB-approved clinical study was conducted: (1) control group: 13 healthy volunteers; (2) acute-toxicity group: 20 patients (mean age: 62.5 ± 8.9 years, follow-up: 2.0 ± 0.8 months); and (3) late-toxicity group: 18 patients (mean age: 60.7 ± 7.3 years, follow-up: 20.1 ± 10.4 months). All patients experienced RTOG grade 1 or 2 salivary-gland toxicity. Each participant underwent an ultrasound scan (10 MHz) of the bilateral parotid glands. An echo-intensity histogram was derived for each parotid and a Gaussian mixture model was used to fit the histogram using the expectation-maximization (EM) algorithm. The quality of the fitting was evaluated with the R-squared value. Results: (1) Control group: all parotid glands fitted well with one Gaussian component, with a mean intensity of 79.8 ± 4.9 (R-squared > 0.96). (2) Acute-toxicity group: 37 of the 40 post-RT parotid glands fitted well with two Gaussian components, with mean intensities of 42.9 ± 7.4 and 73.3 ± 12.2 (R-squared > 0.95). (3) Late-toxicity group: 32 of the 36 post-RT parotid glands fitted well with three Gaussian components, with mean intensities of 49.7 ± 7.6, 77.2 ± 8.7, and 118.6 ± 11.8 (R-squared > 0.98). Conclusion: RT-associated parotid-gland injury is common in head-and-neck RT, but challenging to assess.
This work has demonstrated that the Gaussian mixture model of the echo-histogram can quantify acute and late toxicity of the parotid glands. This study provides meaningful preliminary data for future observational and interventional clinical research.
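A minimal one-dimensional two-component Gaussian mixture fitted by EM shows the mechanics of the histogram analysis. The intensity samples below are synthetic (loosely patterned on the acute-toxicity means quoted above), not clinical data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic echo-intensity samples from two sub-populations.
x = np.concatenate([rng.normal(43.0, 7.0, 2000), rng.normal(73.0, 12.0, 2000)])

mu = np.array([30.0, 90.0])          # initial component means
sd = np.array([10.0, 10.0])          # initial standard deviations
w = np.array([0.5, 0.5])             # initial mixing weights
for _ in range(200):
    # E-step: responsibility of each component for each sample.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
             / (sd * np.sqrt(2.0 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

In practice one would fit one-, two-, and three-component models and pick the component count by goodness of fit, mirroring the control/acute/late distinction in the study.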

  11. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  12. Uranium, radium and thorium in soils with high-resolution gamma spectroscopy, MCNP-generated efficiencies, and VRF non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Metzger, Robert; Riper, Kenneth Van; Lasche, George

    2017-09-01

    A new method for analysis of uranium and radium in soils by gamma spectroscopy has been developed using VRF ("Visual RobFit") which, unlike traditional peak-search techniques, fits full-spectrum nuclide shapes with non-linear least-squares minimization of the chi-squared statistic. Gamma efficiency curves were developed for a 500 mL Marinelli beaker geometry as a function of soil density using MCNP. Collected spectra were then analyzed using the MCNP-generated efficiency curves and VRF to deconvolute the 90 keV peak complex of uranium and obtain 238U and 235U activities. 226Ra activity was determined either from the radon daughters if the equilibrium status is known, or directly from the deconvoluted 186 keV line. 228Ra values were determined from the 228Ac daughter activity. The method was validated by analysis of radium, thorium and uranium soil standards and by inter-comparison with other methods for radium in soils. The method allows for a rapid determination of whether a sample has been impacted by a man-made activity by comparison of the uranium and radium concentrations to those that would be expected from a natural equilibrium state.

  13. Modelling local GPS/levelling geoid undulations using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Saka, M. H.

    2005-04-01

The use of GPS for establishing height control in an area where levelling data are available can involve the so-called GPS/levelling technique. Modelling of GPS/levelling geoid undulations has usually been carried out using polynomial surface fitting, least-squares collocation (LSC) and finite-element methods. Artificial neural networks (ANNs) have recently been used for many investigations, and have proven to be effective in solving complex problems represented by noisy and missing data. In this study, a feed-forward ANN structure, learning the characteristics of the training data through the back-propagation algorithm, is employed to model the local GPS/levelling geoid surface. The GPS/levelling geoid undulations for Istanbul, Turkey, were estimated from GPS and precise levelling measurements obtained during a field study in the period 1998-99. The results are compared to those produced by two well-known conventional methods, namely polynomial fitting and LSC, in terms of root mean square error (RMSE), which ranged from 3.97 to 5.73 cm. The results show that ANNs can produce results that are comparable to polynomial fitting and LSC. The main advantage of the ANN-based surfaces seems to be the low deviations from the GPS/levelling data surface, which is particularly important for distorted levelling networks.
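The polynomial-surface baseline mentioned above can be sketched as a second-order surface N(x, y) fitted to undulation values by least squares; the coordinates and undulation values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
x, y = rng.uniform(-1.0, 1.0, (2, 100))     # normalized point coordinates
# Synthetic geoid undulations (metres) with small measurement noise.
N = 40.0 + 0.5*x - 0.3*y + 0.1*x*y + 0.05*x**2 + 0.01 * rng.normal(size=100)

# Second-order polynomial surface fitted by ordinary least squares.
A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, N, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - N) ** 2))
```

An ANN plays the same role as this surface — mapping (x, y) to N — but without committing to a fixed polynomial form, which is why it can hug a distorted levelling surface more closely.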

  14. Response Surface Analysis of Experiments with Random Blocks

    DTIC Science & Technology

    1988-09-01

partitioned into a lack-of-fit sum of squares, SSLOF, and a pure-error sum of squares, SSPE. The latter is obtained by pooling the pure-error sums of squares...from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. 3.2...the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SSLOF

  15. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2013-01-01

A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p^2 + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
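The single-pass normal-equations idea can be sketched directly: accumulate X'X and X'y one observation at a time, so the full design matrix is never stored (the regression data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
p = 4
beta_true = np.array([1.0, -2.0, 0.5, 3.0])

# Accumulators: only a p-by-p matrix and a length-p vector are kept.
xtx = np.zeros((p, p))
xty = np.zeros(p)
for _ in range(5000):
    row = rng.normal(size=p)                  # one observation's regressors
    yi = row @ beta_true + 0.1 * rng.normal()
    xtx += np.outer(row, row)                 # rank-1 update of X'X
    xty += row * yi
beta = np.linalg.solve(xtx, xty)              # solve the normal equations
```

Memory use is independent of the number of observations N — only the p×p and p-length accumulators persist, which is the point of the technique.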

  16. Application of a parameter-estimation technique to modeling the regional aquifer underlying the eastern Snake River plain, Idaho

    USGS Publications Warehouse

    Garabedian, Stephen P.

    1986-01-01

A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2x10^-9 to 6.0x10^-8 feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values.
The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.

  17. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and a low mean squared error (r² = 0.975, m.s.e. = 0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559

  18. The organization of irrational beliefs in posttraumatic stress symptomology: testing the predictions of REBT theory using structural equation modelling.

    PubMed

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

This study directly tests a central prediction of rational emotive behaviour therapy (REBT) that has received little empirical attention, regarding the role of core and intermediate beliefs in the development of posttraumatic stress symptoms. A theoretically consistent REBT model of posttraumatic stress disorder (PTSD) was examined using structural equation modelling techniques among a sample of 313 trauma-exposed military and law enforcement personnel. The REBT model of PTSD provided a good fit to the data, χ² = 599.173, df = 356, p < .001; root mean square error of approximation = .05 (confidence interval = .04-.05); standardized root mean square residual = .04; comparative fit index = .95; Tucker-Lewis index = .95. Results demonstrated that demandingness beliefs indirectly affected the various symptom groups of PTSD through a set of secondary irrational beliefs that includes catastrophizing, low frustration tolerance, and depreciation beliefs. Results were consistent with the predictions of REBT theory and provide strong empirical support that the cognitive variables described by REBT theory are critical cognitive constructs in the prediction of PTSD symptomology. © 2013 Wiley Periodicals, Inc.

  19. Scintillation Control for Adaptive Optical Sensors

    DTIC Science & Technology

    1999-09-21

defining where one influence function goes to zero fall directly under the peaks of the adjoining influence functions. These actuators were fit to ...not orthogonal, the influence-function interaction matrix R must be computed with elements given by [3] r_ij = ∫ dx_p W(x_p) e_i(x_p) e_j(x_p). (22) In our...control signals can be found from the wave-front phase by the least-squares phase reconstruction technique [3]. An influence function and the

  20. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using a least-squares surface-fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.

  1. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    ERIC Educational Resources Information Center

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  2. Techniques for estimating selected streamflow characteristics of rural unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Whitehead, Matthew T.

    2002-01-01

This report provides equations for estimating mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and streamflow quartiles (the 25th-, 50th-, and 75th-percentile streamflows) as a function of selected basin characteristics for rural, unregulated streams in Ohio. The equations were developed from streamflow statistics and basin-characteristics data for as many as 219 active or discontinued streamflow-gaging stations on rural, unregulated streams in Ohio with 10 or more years of homogeneous daily streamflow record. Streamflow statistics and basin-characteristics data for the 219 stations are presented in this report. Simple equations (based on drainage area only) and best-fit equations (based on drainage area and at least two other basin characteristics) were developed by means of ordinary least-squares regression techniques. Application of the best-fit equations generally involves quantification of basin characteristics that require or are facilitated by use of a geographic information system. In contrast, the simple equations can be used with information that can be obtained without use of a geographic information system; however, the simple equations have larger prediction errors than the best-fit equations and exhibit geographic biases for most streamflow statistics. The best-fit equations should be used instead of the simple equations whenever possible.

  3. Feasibility study on the least square method for fitting non-Gaussian noise data

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Chen, Wen; Liang, Yingjie

    2018-02-01

This study investigates the feasibility of the least squares method for fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial, and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
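The comparison can be sketched with a straight-line fit under Gaussian versus heavy-tailed noise; a standard Cauchy sample stands in here for a Lévy-type distribution, and the line and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0.0, 1.0, 200)
y_exact = 2.0 * x + 1.0

# Least-squares slope recovery under Gaussian vs heavy-tailed noise.
slope_gauss = np.polyfit(x, y_exact + 0.05 * rng.normal(size=x.size), 1)[0]
slope_heavy = np.polyfit(x, y_exact + 0.05 * rng.standard_cauchy(size=x.size), 1)[0]
err_gauss = abs(slope_gauss - 2.0)
err_heavy = abs(slope_heavy - 2.0)
```

Because a Cauchy (and more generally Lévy-stable) sample has occasional extreme outliers, the squared-error objective lets single points dominate the fit, which is the mechanism behind the degraded accuracy reported in the abstract.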

  4. Why Might Relative Fit Indices Differ between Estimators?

    ERIC Educational Resources Information Center

    Weng, Li-Jen; Cheng, Chung-Ping

    1997-01-01

    Relative fit indices using the null model as the reference point in computation may differ across estimation methods, as this article illustrates by comparing maximum likelihood, ordinary least squares, and generalized least squares estimation in structural equation modeling. The illustration uses a covariance matrix for six observed variables…

  5. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.

  6. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torello, David; Kim, Jin-Yeon; Qu, Jianmin

    2015-03-31

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens and a β₁₁⁷⁰⁷⁵/β₁₁²⁰²⁴ measure of 1.363 agrees well with previous literature and earlier work.

  7. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, automatic fitting, which combines geostatistical principles with optimization techniques, is used in some cases to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. This paper seeks to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a MATLAB program is provided that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, the cross-validation method is used, and the best model is introduced to the user as the output. To check the capability of the proposed objective function and the procedure, 3 case studies are presented.
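A small numpy sketch of the weighted least-squares objective being improved here, using a spherical variogram model and Cressie-style weights N(h)/γ(h)²; the experimental variogram values and the coarse grid search (standing in for the paper's simulated annealing) are illustrative assumptions.

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical variogram: rises from the nugget, levels off at the sill for h >= a."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, g)

def wls_objective(params, h, gamma_exp, npairs):
    """Weighted least squares with weights N(h) / model(h)^2 (Cressie-style)."""
    model = spherical(h, *params)
    w = npairs / np.maximum(model, 1e-12) ** 2
    return float(np.sum(w * (gamma_exp - model) ** 2))

# Hypothetical experimental variogram generated from known parameters
h = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
npairs = np.array([200, 180, 160, 140, 120, 100])
gamma_exp = spherical(h, 0.1, 1.0, 4.0)

# Coarse grid search over (sill, range) with the nugget held at 0.1;
# the paper optimizes such an objective with simulated annealing instead.
best = min(((wls_objective((0.1, s, a), h, gamma_exp, npairs), s, a)
            for s in np.linspace(0.5, 1.5, 21)
            for a in np.linspace(2.0, 6.0, 41)))
_, sill_fit, range_fit = best
```

Because the synthetic variogram was generated from on-grid parameters, the search recovers the sill and range used to build it.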

  8. Measuring molecular motions inside single cells with improved analysis of single-particle trajectories

    NASA Astrophysics Data System (ADS)

    Rowland, David J.; Biteen, Julie S.

    2017-04-01

    Single-molecule super-resolution imaging and tracking can measure molecular motions inside living cells on the scale of the molecules themselves. Diffusion in biological systems commonly exhibits multiple modes of motion, which can be effectively quantified by fitting the cumulative probability distribution of the squared step sizes in a two-step fitting process. Here we combine this two-step fit into a single least-squares minimization; this new method vastly reduces the total number of fitting parameters and increases the precision with which diffusion may be measured. We demonstrate this Global Fit approach on a simulated two-component system as well as on a mixture of diffusing 80 nm and 200 nm gold spheres to show improvements in fitting robustness and localization precision compared to the traditional Local Fit algorithm.
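The cumulative distribution of squared step sizes for two-component 2-D diffusion, and its fit in a single least-squares minimization over all parameters at once, can be sketched as follows (the diffusion coefficients, mixture fraction, and the noise-free data with a coarse grid in place of a proper optimizer are illustrative assumptions, not the paper's Global Fit implementation):

```python
import numpy as np

# CDF of squared step sizes u for two-component 2-D diffusion:
#   P(u) = 1 - f*exp(-u/(4*D1*dt)) - (1-f)*exp(-u/(4*D2*dt))
def cpd(u, f, d1, d2, dt=1.0):
    return (1.0 - f * np.exp(-u / (4.0 * d1 * dt))
            - (1.0 - f) * np.exp(-u / (4.0 * d2 * dt)))

# Noise-free "measured" distribution from hypothetical ground truth
u = np.linspace(0.05, 20.0, 400)
p_meas = cpd(u, f=0.5, d1=0.1, d2=1.0)

# Single least-squares minimization over all three parameters simultaneously
# (a coarse grid search stands in for a proper optimizer in this sketch)
best = min(((np.sum((p_meas - cpd(u, f, d1, d2)) ** 2), f, d1, d2)
            for f in np.linspace(0.1, 0.9, 17)
            for d1 in np.linspace(0.05, 0.3, 11)
            for d2 in np.linspace(0.5, 2.0, 16)))
_, f_fit, d1_fit, d2_fit = best
```

Fitting all components in one objective, rather than in two separate stages, is the key idea the abstract credits with reducing the parameter count and improving precision.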

  9. Wall-wake velocity profile for compressible non-adiabatic flows

    NASA Technical Reports Server (NTRS)

    Sun, C. C.; Childs, M. E.

    1975-01-01

    A form of the wall-wake profile is suggested which is applicable to flows with heat transfer and for which the velocity gradient vanishes at y = delta. The modified profile, which takes into account the effect of turbulent Prandtl number, was found to provide a good representation of experimental data for a wide range of Mach numbers and heat-transfer conditions. The Cf values determined by a least-squares fit of the profile to the data agree well with values measured by the floating-element technique. In addition, the values of delta determined by the fit correspond more closely to the outer edge of the viscous flow region than those obtained with earlier versions of the wall-wake profile.

  10. Vector magnetic fields in sunspots. I - Stokes profile analysis using the Marshall Space Flight Center magnetograph

    NASA Technical Reports Server (NTRS)

    Balasubramaniam, K. S.; West, E. A.

    1991-01-01

    The Marshall Space Flight Center (MSFC) vector magnetograph is a tunable filter magnetograph with a bandpass of 125 mA. Results are presented of the inversion of Stokes polarization profiles, observed with the MSFC vector magnetograph centered on a sunspot, to recover the vector magnetic field and thermodynamic parameters of the spectral-line-forming region from the Fe I 5250.2 A spectral line using a nonlinear least-squares fitting technique. As a preliminary investigation, it is also shown that the recovered thermodynamic parameters could be better understood if fitted parameters like the Doppler width, opacity ratio, and damping constant were broken down into more basic quantities like temperature, microturbulent velocity, or density parameter.

  11. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N₄

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N₄. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
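The one-dimensional local least-squares fitting the authors use to motivate L-IMLS can be sketched as below (the Gaussian weight function, local quadratic basis, and sample data are illustrative assumptions; the paper's method adds permutational invariance and statistical localization on top of this basic idea):

```python
import numpy as np

def moving_least_squares(x_data, y_data, x_query, radius=0.4, degree=2):
    """Interpolating-moving-least-squares-style local fit in one dimension:
    at each query point, fit a low-degree polynomial to the data with
    distance-decaying weights and evaluate it at the query point."""
    values = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-((x_data - xq) / radius) ** 2)   # local weight function
        V = np.vander(x_data - xq, degree + 1)        # shifted monomial basis
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(V * sw[:, None], y_data * sw, rcond=None)
        values.append(coef[-1])                       # polynomial value at xq
    return np.array(values)

# Fit scattered samples of a smooth 1-D function (illustrative)
x_data = np.linspace(0.0, 3.0, 40)
y_data = np.sin(x_data)
y_est = moving_least_squares(x_data, y_data, x_query=[0.7, 1.5, 2.3])
```

Because the local polynomial coefficients are smooth functions of the query point, gradients of such a fit can be evaluated analytically, which is the feature the abstract highlights for molecular dynamics.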

  12. On Least Squares Fitting Nonlinear Submodels.

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.

    Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…

  13. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
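A sketch of the inverse-variance-weighted fit the paper finds optimal, applied to synthetic slope-method data (the range grid, extinction value, and heteroscedastic noise model are illustrative assumptions): in the slope method, the log of the range-corrected signal is linear in range with slope -2σ.

```python
import numpy as np

def weighted_line_fit(x, y, var):
    """Inverse-variance weighted least squares for y = b0 + b1*x."""
    w = 1.0 / var
    A = np.vstack([np.ones_like(x), x]).T
    AtW = A.T * w                         # apply the weights
    beta = np.linalg.solve(AtW @ A, AtW @ y)   # solve (A^T W A) beta = A^T W y
    return beta  # [intercept, slope]

# Hypothetical slope-method data: y = ln(r^2 P(r)) = ln(C) - 2*sigma*r
rng = np.random.default_rng(1)
r = np.linspace(0.5, 5.0, 50)            # range, km
sigma_true = 0.3                          # extinction coefficient, 1/km
y_clean = 2.0 - 2.0 * sigma_true * r
var = 0.01 * (1.0 + r)                    # noise variance grows with range
y = y_clean + rng.standard_normal(r.size) * np.sqrt(var)

b0, b1 = weighted_line_fit(r, y, var)
sigma_hat = -b1 / 2.0                     # retrieved extinction coefficient
```

Weighting each point by the inverse of its noise variance is what makes this fit equivalent to the MLE under Gaussian noise, per the abstract.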

  14. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.
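For the dichotomous Rasch model (a simplification of the polytomous models used in the study), the infit and outfit mean squares discussed above can be sketched as follows; the simulated abilities, item difficulty, and sample size are illustrative assumptions.

```python
import numpy as np

def rasch_fit_mean_squares(responses, theta, b):
    """Infit/outfit mean squares for one dichotomous Rasch item.
    responses: 0/1 vector; theta: person abilities; b: item difficulty."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))   # model-expected success probability
    var = p * (1.0 - p)                       # Bernoulli variance
    resid2 = (responses - p) ** 2
    outfit = np.mean(resid2 / var)            # unweighted mean square
    infit = np.sum(resid2) / np.sum(var)      # information-weighted mean square
    return infit, outfit

# Data simulated from the model itself, so both statistics should sit near 1
rng = np.random.default_rng(5)
theta = rng.standard_normal(2000)
p_true = 1.0 / (1.0 + np.exp(-theta))         # item difficulty b = 0
responses = (rng.random(2000) < p_true).astype(float)

infit, outfit = rasch_fit_mean_squares(responses, theta, b=0.0)
```

Values near 1 indicate fit to the model; the study's point is that these mean squares stay stable as the sample size changes, whereas their t-transformed versions do not.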

  16. 2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT

    NASA Astrophysics Data System (ADS)

    Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.

    2018-01-01

    We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.

  17. Relationship of Interplanetary Shock Micro and Macro Characteristics: A Wind Study

    NASA Technical Reports Server (NTRS)

    Szabo, Adam; Koval, A

    2008-01-01

    The non-linear least-squares MHD fitting technique of Szabo [1994] has recently been further refined to provide realistic confidence regions for interplanetary shock normal directions and speeds. Analyzing Wind-observed interplanetary shocks from 1995 to 2001, macro characteristics such as shock strength, Θ_Bn, and Mach numbers can be compared to the details of shock micro or kinetic structures. The now commonly available very high time resolution (11 or 22 vectors/sec) Wind magnetic field data allow the precise characterization of shock kinetic structures, such as the size of the foot, ramp, and overshoot and the duration of damped oscillations on either side of the shock. A detailed comparison of the shock micro and macro characteristics will be given. This enables the elucidation of shock kinetic features, relevant for particle energization processes, for observations where high time resolution data are not available. Moreover, establishing a quantitative relationship between the shock micro and macro structures will improve the confidence level of shock fitting techniques during disturbed solar wind conditions.

  18. Detection of Tetracycline in Milk using NIR Spectroscopy and Partial Least Squares

    NASA Astrophysics Data System (ADS)

    Wu, Nan; Xu, Chenshan; Yang, Renjie; Ji, Xinning; Liu, Xinyuan; Yang, Fan; Zeng, Ming

    2018-02-01

    The feasibility of measuring tetracycline in milk was investigated by near infrared (NIR) spectroscopy combined with the partial least squares (PLS) method. The NIR transmittance spectra of 40 pure milk samples and 40 tetracycline-adulterated milk samples with different concentrations (from 0.005 to 40 mg/L) were obtained. The pure milk and tetracycline-adulterated milk samples were assigned to their categories with 100% accuracy in the calibration set, and a correct classification rate of 96.3% was obtained in the prediction set. For the quantitation of tetracycline in adulterated milk, the root mean square errors for the calibration and prediction models were 0.61 mg/L and 4.22 mg/L, respectively. The PLS model fit the calibration set well; however, its predictive ability was limited, especially for samples with low tetracycline concentrations. Overall, this approach can be considered a promising tool for discriminating tetracycline-adulterated milk, as a supplement to high performance liquid chromatography.

  19. Weighted least-square approach for simultaneous measurement of multiple reflective surfaces

    NASA Astrophysics Data System (ADS)

    Tang, Shouhong; Bills, Richard E.; Freischlad, Klaus

    2007-09-01

    Phase shifting interferometry (PSI) is a highly accurate method for measuring the nanometer-scale relative surface height of a semi-reflective test surface. PSI is effectively used in conjunction with Fizeau interferometers for optical testing, hard disk inspection, and semiconductor wafer flatness. However, commonly used PSI algorithms are unable to produce an accurate phase measurement if more than one reflective surface is present in the Fizeau interferometer test cavity. Examples of test parts that fall into this category include lithography mask blanks and their protective pellicles, and plane parallel optical beam splitters. The plane parallel surfaces of these parts generate multiple interferograms that are superimposed in the recording plane of the Fizeau interferometer. When wavelength shifting is used in PSI, the phase-shifting speed of each interferogram is proportional to the optical path difference (OPD) between the two reflective surfaces; the proposed method exploits this to differentiate each underlying interferogram from the others in an optimal manner. In this paper, we present a method for simultaneously measuring the multiple test surfaces of all underlying interferograms from these superimposed interferograms through the use of a weighted least-squares fitting technique. The theoretical analysis of the weighted least-squares technique and the measurement results are described.

  20. Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Beier, Hope T.

    2016-03-01

    Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging on laser scanning microscopes. However, TCSPC is inherently slow: because at most one photon is recorded per laser pulse, acquisition times are long, and the fluorescence emission rate must be kept low to avoid biasing measurements toward short lifetimes. This makes rapid events difficult to capture. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the number of photons required for accurate component estimation when lifetime values are known, as for commercial fluorescent dyes and FRET experiments, and can improve imaging speed 10-fold.
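The temporal-binning step itself is simple; a sketch (the bin counts, decay curve, and count rates here are arbitrary illustrations) that coarsens a TCSPC histogram by summing adjacent time bins, preserving the total photon count:

```python
import numpy as np

def rebin_temporal(counts, n_out):
    """Coarsen a TCSPC decay histogram by summing adjacent time bins.
    len(counts) must be divisible by n_out."""
    counts = np.asarray(counts)
    return counts.reshape(n_out, -1).sum(axis=1)

# E.g. a 252-bin histogram reduced to 42 bins (6 original bins per new bin)
rng = np.random.default_rng(4)
decay = rng.poisson(50 * np.exp(-np.linspace(0.0, 5.0, 252)))
coarse = rebin_temporal(decay, 42)
```

The design trade-off is the one the abstract evaluates: fewer, fuller bins raise the counts per bin (better Poisson statistics per point) at the cost of temporal resolution in the decay.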

  1. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide non-linear LSE parameter estimate reliability. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested here only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
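The RoN idea can be sketched as follows, assuming a mono-exponential decay fitted by a log-linear LSE (a deliberate simplification: the article targets non-linear strain-time-constant models, but the resimulation logic is the same - re-add synthetic noise of the estimated level to the fitted curve many times, refit, and report the spread of the refitted parameter). All numbers are hypothetical.

```python
import numpy as np

def fit_decay(t, y):
    """LSE fit of y = A*exp(-t/tau) via a log-linear fit (adequate at low noise)."""
    slope, log_a = np.polyfit(t, np.log(y), 1)
    return np.exp(log_a), -1.0 / slope

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.5, 40)
y_obs = np.exp(-t / 0.5) + 0.005 * rng.standard_normal(t.size)  # tau_true = 0.5

# Step 1: the ordinary LSE fit from the single experiment realization
A_hat, tau_hat = fit_decay(t, y_obs)
y_fit = A_hat * np.exp(-t / tau_hat)
noise_std = np.std(y_obs - y_fit)

# Step 2: Resimulation of Noise -- refit many synthetic noise realizations
# built around the fitted curve; the spread of the refitted parameter is
# the reliability measure for the single-realization estimate
taus = [fit_decay(t, y_fit + noise_std * rng.standard_normal(t.size))[1]
        for _ in range(200)]
tau_spread = float(np.std(taus))
```

The spread `tau_spread` plays the role of the RoN precision estimate: a single measured curve yields both a parameter value and a defensible uncertainty for it.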

  2. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  3. An analytical technique for approximating unsteady aerodynamics in the time domain

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1980-01-01

    An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of the elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients, which ensures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate that a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth-order numerator and second-order denominator polynomials.

  4. Retransformation bias in a stem profile model

    Treesearch

    Raymond L. Czaplewski; David Bruce

    1990-01-01

    An unbiased profile model, fit to diameter divided by diameter at breast height, overestimated volume of 5.3-m log sections by 0.5 to 3.5%. Another unbiased profile model, fit to squared diameter divided by squared diameter at breast height, underestimated bole diameters by 0.2 to 2.1%. These biases are caused by retransformation of the predicted dependent variable;...

  5. TaiWan Ionospheric Model (TWIM) prediction based on time series autoregressive analysis

    NASA Astrophysics Data System (ADS)

    Tsai, L. C.; Macalalad, Ernest P.; Liu, C. H.

    2014-10-01

    As described in a previous paper, a three-dimensional ionospheric electron density (Ne) model has been constructed from vertical Ne profiles retrieved from the FormoSat3/Constellation Observing System for Meteorology, Ionosphere, and Climate GPS radio occultation measurements and worldwide ionosonde foF2 and foE data and named the TaiWan Ionospheric Model (TWIM). The TWIM exhibits vertically fitted α-Chapman-type layers with distinct F2, F1, E, and D layers, and surface spherical harmonic approaches for the fitted layer parameters including peak density, peak density height, and scale height. To improve the TWIM into a real-time model, we have developed a time series autoregressive model to forecast short-term TWIM coefficients. The time series of TWIM coefficients are considered as realizations of stationary stochastic processes within a processing window of 30 days. The autocorrelation coefficients computed within this window are used to derive the autoregressive parameters and then forecast the TWIM coefficients, based on the least squares method and Lagrange multiplier technique. The forecast root-mean-square relative TWIM coefficient errors are generally <30% for 1 day predictions. The forecast TWIM foE and foF2 values are also compared and evaluated using worldwide ionosonde data.
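The least-squares step in such an autoregressive forecast can be sketched as below; the AR(2) series, order, and coefficients are hypothetical, and the paper's Lagrange-multiplier refinement is not shown.

```python
import numpy as np

def fit_ar_least_squares(x, p):
    """Least-squares estimate of AR(p) coefficients from a scalar series."""
    n = len(x)
    y = x[p:]
    # Column k holds the lag-(k+1) values aligned with y
    X = np.column_stack([x[p - 1 - k:n - 1 - k] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_one_step(x, coef):
    """One-step-ahead prediction from the last p observed values."""
    p = len(coef)
    lags = np.array([x[-1 - k] for k in range(p)])
    return float(coef @ lags)

# Hypothetical stationary AR(2) series: x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + e_t
rng = np.random.default_rng(3)
x = np.zeros(600)
for t in range(2, 600):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + 0.1 * rng.standard_normal()
x = x[100:]  # discard the start-up transient

coef = fit_ar_least_squares(x, 2)
x_next = forecast_one_step(x, coef)
```

In the paper's setting, each TWIM coefficient time series within the 30-day window would play the role of `x`, and the forecast would feed the next day's model.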

  6. Near Infrared Laser Spectroscopy of Scandium Monobromide

    NASA Astrophysics Data System (ADS)

    Xia, Ye; Cheung, A. S.-C.; Liao, Zhenwu; Yang, Mei; Chan, Man-Chor

    2012-06-01

    The high-resolution laser spectrum of scandium monobromide (ScBr) between 787 and 845 nm has been investigated using the technique of laser vaporization/reaction with free-jet expansion and laser-induced fluorescence spectroscopy. ScBr was produced by reacting laser-vaporized Sc atoms with ethyl bromide (C2H5Br). Spectra of six vibrational bands of both Sc⁷⁹Br and Sc⁸¹Br isotopomers of the C¹Σ⁺ - X¹Σ⁺ transition and seven vibrational bands of the e³Δ - a³Δ transition were obtained and analyzed. A least-squares fit of the measured line positions for the singlet transitions yielded accurate molecular constants for the v = 0 - 3 levels of the C¹Σ⁺ state and the v = 0 - 2 levels of the X¹Σ⁺ state. A similar least-squares fit for the triplet transitions yielded molecular constants for the v = 0 - 2 levels of both the e³Δ and a³Δ states. The equilibrium bond length, r_0, of the a³Δ state has been determined to be 2.4789 Å. Financial support from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU 701008P) is gratefully acknowledged.

  7. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
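The two-step fit can be sketched in a simplified pitch-only form (the load equations, sign conventions, angles, and numbers below are hypothetical stand-ins, not the paper's actual balance equations): first a least-squares estimate of the weight from the force gauges, then a fit of the center-of-gravity coordinates from the moment with that weight held fixed.

```python
import numpy as np

# Simplified 2-D sketch: wind-off gauge loads from model weight W at pitch theta
#   N = W*cos(theta), A = W*sin(theta)
#   M = W*(x_cg*cos(theta) + z_cg*sin(theta))
theta = np.deg2rad(np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0]))
W_true, x_true, z_true = 50.0, 0.20, 0.05
N = W_true * np.cos(theta)
A = W_true * np.sin(theta)
M = W_true * (x_true * np.cos(theta) + z_true * np.sin(theta))

# Step 1: least-squares estimate of the weight from the N and A loads
basis = np.concatenate([np.cos(theta), np.sin(theta)])
loads = np.concatenate([N, A])
W_hat = (basis @ loads) / (basis @ basis)

# Step 2: use W_hat as a fixed input in the fit of the c.g. coordinates
X = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
cg, *_ = np.linalg.lstsq(X, M, rcond=None)   # [x_cg, z_cg]
```

Separating the two fits is the design choice the abstract describes: the weight estimator from the first fit becomes a known quantity in the second, making both problems linear.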

  8. [The research on separating and extracting overlapping spectral feature lines in LIBS using damped least squares method].

    PubMed

    Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo

    2015-02-01

    In recent years, laser-induced breakdown spectroscopy has developed rapidly. As a new material-composition detection technology, it can detect multiple elements simultaneously, quickly and simply, without complex sample preparation, and enables field, in-situ composition analysis of the sample under test; it is therefore very promising in many fields. Separating, fitting, and extracting spectral feature lines is very important in laser-induced breakdown spectroscopy, and is the cornerstone of spectral feature recognition and of subsequent element-concentration inversion research. To achieve effective separation, fitting, and extraction of spectral feature lines, the initial parameters for line fitting were analyzed and determined before iteration. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlaps with an iron line (Fe I: 427.176 nm), was separated from the latter and extracted using the damped least-squares method. Building on Gauss-Newton iteration, the damped least-squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback after each iteration, preventing divergence and ensuring fast convergence. The method yields better separation, fitting, and extraction of spectral feature lines and more accurate line intensity values. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the corresponding line intensities were then computed using the damped least-squares method and the ordinary least-squares method separately.
The calibration curves relating line intensity to chromium concentration in the different samples were plotted for each method, and their linear correlations were compared. The experimental results showed that the linear correlation obtained with the damped least-squares method was better than that obtained with ordinary least squares. The damped least-squares method is therefore stable, reliable, and well suited to separating, fitting, and extracting spectral feature lines in laser-induced breakdown spectroscopy.
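
    The line-separation step described above can be sketched as follows. This is not the authors' code; it fits a sum of two Gaussian profiles to synthetic data around the Cr I / Fe I wavelengths named in the abstract, using SciPy's damped (Levenberg-Marquardt) least-squares solver; all amplitudes, widths, and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lines(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian line profiles (overlapped Fe I and Cr I lines)."""
    return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

# Synthetic overlapped pair near Cr I 427.480 nm and Fe I 427.176 nm
x = np.linspace(426.8, 427.9, 300)                        # wavelength, nm
rng = np.random.default_rng(0)
y = two_lines(x, 1.0, 427.176, 0.06, 1.5, 427.480, 0.06)
y += rng.normal(0.0, 0.01, x.size)                        # measurement noise

# method='lm' selects the damped least-squares (Levenberg-Marquardt) solver
p0 = (0.8, 427.1, 0.1, 1.2, 427.5, 0.1)
p, _ = curve_fit(two_lines, x, y, p0=p0, method='lm')
```

The fitted centers `p[1]` and `p[4]` recover the two line positions, and the fitted amplitudes give the separated line intensities.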

  9. Atmospheric particulate analysis using angular light scattering

    NASA Technical Reports Server (NTRS)

    Hansen, M. Z.

    1980-01-01

    Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.

  10. Advantage of the modified Lunn-McNeil technique over Kalbfleisch-Prentice technique in competing risks

    NASA Astrophysics Data System (ADS)

    Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.

    2002-03-01

    Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas; for example, it can be used to analyze the failure times of aircraft crashes. Another survival analysis tool is competing risks, where more than one cause of failure acts simultaneously. Lunn-McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating the other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study compared the inference of the models using the Root Mean Square Error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power tests.

  11. Calculation of Hammett Equation parameters for some N,N′-bis(substituted-phenyl)-1,4-quinonediimines by density functional theory

    NASA Astrophysics Data System (ADS)

    Sein, Lawrence T.

    2011-08-01

    Hammett parameters σ′ were determined from vertical ionization potentials, vertical electron affinities, adiabatic ionization potentials, adiabatic electron affinities, and HOMO and LUMO energies of a series of N,N′-bis(3′,4′-substituted-phenyl)-1,4-quinonediimines computed at the B3LYP/6-311+G(2d,p) level on B3LYP/6-31G* molecular geometries. These parameters were then least-squares fit as a function of literature Hammett parameters. For N,N′-bis(4′-substituted-phenyl)-1,4-quinonediimines, the least-squares fits demonstrated excellent linearity, with the square of Pearson's correlation coefficient (r²) greater than 0.98 for all isomers. For N,N′-bis(3′-substituted-3′-aminophenyl)-1,4-quinonediimines, the least-squares fits were less linear, with r² approximately 0.70 for all isomers when derived from calculated vertical ionization potentials, but usually greater than 0.90 when derived from calculated vertical electron affinities.

  12. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
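
    The key property the abstract exploits, that a discrete Chebyshev least-squares fit is a single linear solve (noniterative, single-pass), can be illustrated with NumPy's Chebyshev module. This is not the authors' algorithm; the decaying-harmonic test signal and the polynomial degree are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# A decaying harmonic of the kind found in frequency-domain lifetime data
t = np.linspace(-1.0, 1.0, 400)
signal = np.exp(-1.5 * (t + 1.0)) * np.cos(6.0 * np.pi * t)

# One linear least-squares solve in the Chebyshev basis, no iteration
coef = cheb.chebfit(t, signal, deg=40)
recon = cheb.chebval(t, coef)
max_err = float(np.max(np.abs(recon - signal)))
```

Because the fit is linear in the Chebyshev coefficients, the cost is a single pass over the data, which is what makes real-time image analysis feasible.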

  13. Curve fits of predicted inviscid stagnation-point radiative heating rates, cooling factors, and shock standoff distances for hyperbolic earth entry

    NASA Technical Reports Server (NTRS)

    Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.

    1974-01-01

    Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The range of free-stream parameters considered was altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed, and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economical means for making preliminary design studies for situations involving high-speed earth entry.
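
    A least-squares logarithmic fit of a power law, as described above, reduces to a straight-line fit in log-log space. The sketch below uses invented values (the nose radii from the abstract, a made-up exponent and coefficient), not the paper's data.

```python
import numpy as np

# Hypothetical power law q = C * R**n; log q = n*log R + log C is linear
R = np.array([30.0, 60.0, 120.0, 240.0, 450.0])   # nose radii, cm
q = 5.0e3 * R ** -0.52                            # synthetic heating-rate data

# Least-squares logarithmic fit: slope is the exponent, intercept gives C
n, logC = np.polyfit(np.log(R), np.log(q), 1)
C = np.exp(logC)
```

On noise-free data the fit recovers the exponent and coefficient exactly; with real data the residuals in log space correspond to the percentage deviations quoted in the abstract.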

  14. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measures of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in chemometrics and econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
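
    The contrast between MCCV and LOO can be sketched in a few lines. This is a generic illustration with ordinary least squares standing in for GLSR and a synthetic linear model; the split fraction, repeat count, and data are all assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "regional" data: linear model with 2 predictors plus noise
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.8, -0.5]) + rng.normal(0.0, 0.3, n)

def press(train, test):
    """Mean squared prediction error on held-out sites from an OLS fit."""
    b, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    r = y[test] - X[test] @ b
    return float(r @ r) / len(test)

# MCCV: many random leave-out-30% splits, errors averaged over repeats
mccv = []
for _ in range(200):
    idx = rng.permutation(n)
    mccv.append(press(idx[12:], idx[:12]))
mccv_err = float(np.mean(mccv))

# LOO: each observation held out exactly once
loo_err = float(np.mean(
    [press(np.delete(np.arange(n), i), [i]) for i in range(n)]))
```

Because MCCV holds out a larger fraction per split and averages over many random splits, it penalises overfitted models more strongly, which is why it tends to select more parsimonious regression equations.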

  15. A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data

    NASA Astrophysics Data System (ADS)

    Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.

    2016-09-01

    Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.

  16. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values.

  17. A feasibility study on the measurement of tree trunks in forests using multi-scale vertical images

    NASA Astrophysics Data System (ADS)

    Berveglieri, A.; Oliveira, R. O.; Tommaselli, A. M. G.

    2014-06-01

    The Diameter at Breast Height (DBH) is an important variable that contributes to several studies on forests, e.g., environmental monitoring, tree growth, volume of wood, and biomass estimation. This paper presents a preliminary technique for the measurement of tree trunks using terrestrial images collected with a panoramic camera in nadir view. A multi-scale model is generated with these images. Homologous points on the trunk surface are measured over the images and their ground coordinates are determined by intersection of rays. The resulting XY coordinates of each trunk, defining an arc shape, can be used as observations in a circle fitting by least squares. Then, the DBH of each trunk is calculated using the estimated radius. Experiments were performed in two urban forest areas to assess the approach. In comparison with direct measurements on the trunks taken with a measuring tape, the discrepancies presented a Root Mean Square Error (RMSE) of 1.8 cm with a standard deviation of 0.7 cm. These results demonstrate compatibility with manual measurements and confirm the feasibility of the proposed technique.
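
    The circle-fitting step above can be sketched with an algebraic (Kåsa-style) least-squares circle fit, which handles the arc-only geometry of a trunk seen from one side. The abstract does not specify which circle-fit formulation was used, and the trunk radius and coordinates below are illustrative.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: returns centre (cx, cy) and radius."""
    # (x-cx)^2 + (y-cy)^2 = r^2  rewritten as  2*cx*x + 2*cy*y + c = x^2 + y^2
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Points on the visible arc of a trunk of radius 0.15 m (DBH = 30 cm)
theta = np.linspace(-0.8, 0.8, 25)      # only part of the trunk is imaged
x = 2.0 + 0.15 * np.cos(theta)
y = 3.0 + 0.15 * np.sin(theta)

cx, cy, r = fit_circle(x, y)
dbh_cm = 200.0 * r                      # diameter in centimetres
```

The linear formulation avoids iteration, and the fitted radius directly yields the DBH estimate compared against tape measurements in the paper.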

  18. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values. PMID:25685472

  19. Nonlinear least-squares data fitting in Excel spreadsheets.

    PubMed

    Kemmer, Gerdi; Keller, Sandro

    2010-02-01

    We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
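
    The Solver protocol above (compute residuals, square, sum, minimise the SSR over the parameters) translates directly into a few lines of Python. This is a sketch of the same idea applied to the Michaelis-Menten example, not the authors' spreadsheet; the parameter values and substrate concentrations are invented.

```python
import numpy as np
from scipy.optimize import minimize

def mm(s, vmax, km):
    """Michaelis-Menten rate equation."""
    return vmax * s / (km + s)

s = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])  # substrate concentrations
v = mm(s, 10.0, 2.5)                            # synthetic noise-free rates

def ssr(p):
    """Sum of squared residuals, the quantity Solver minimises."""
    return float(np.sum((v - mm(s, p[0], p[1])) ** 2))

# Derivative-free minimisation from a rough initial guess, as with Solver
res = minimize(ssr, x0=[5.0, 1.0], method='Nelder-Mead')
vmax_fit, km_fit = res.x
```

As in the spreadsheet protocol, only the model function and the initial guesses change when adapting this to another regression equation.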

  20. Direct conversion of rheological compliance measurements into storage and loss moduli.

    PubMed

    Evans, R M L; Tassieri, Manlio; Auhl, Dietmar; Waigh, Thomas A

    2009-07-01

    We remove the need for Laplace/inverse-Laplace transformations of experimental data, by presenting a direct and straightforward mathematical procedure for obtaining frequency-dependent storage and loss moduli [G'(omega) and G''(omega), respectively], from time-dependent experimental measurements. The procedure is applicable to ordinary rheological creep (stress-step) measurements, as well as all microrheological techniques, whether they access a Brownian mean-square displacement, or a forced compliance. Data can be substituted directly into our simple formula, thus eliminating traditional fitting and smoothing procedures that disguise relevant experimental noise.

  1. Direct conversion of rheological compliance measurements into storage and loss moduli

    NASA Astrophysics Data System (ADS)

    Evans, R. M. L.; Tassieri, Manlio; Auhl, Dietmar; Waigh, Thomas A.

    2009-07-01

    We remove the need for Laplace/inverse-Laplace transformations of experimental data, by presenting a direct and straightforward mathematical procedure for obtaining frequency-dependent storage and loss moduli [ G'(ω) and G″(ω) , respectively], from time-dependent experimental measurements. The procedure is applicable to ordinary rheological creep (stress-step) measurements, as well as all microrheological techniques, whether they access a Brownian mean-square displacement, or a forced compliance. Data can be substituted directly into our simple formula, thus eliminating traditional fitting and smoothing procedures that disguise relevant experimental noise.

  2. Techniques for obtaining regional radiation budgets from satellite radiometer observations, phase 4 and phase 5. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Pina, J. F.; House, F. B.

    1976-01-01

    A scheme was developed which divides the earth-atmosphere system into 2060 elemental areas. The regions previously described are defined in terms of these elemental areas which are fixed in size and position as the satellite moves. One method, termed the instantaneous technique, yields values of the radiant emittance (We) and the radiant reflectance (Wr) which the regions have during the time interval of a single satellite pass. The number of observations matches the number of regions under study and a unique solution is obtained using matrix inversion. The other method (termed the best fit technique), yields time averages of We and Wr for large time intervals (e.g., months, seasons). The number of observations in this technique is much greater than the number of regions considered, and an approximate solution is obtained by the method of least squares.

  3. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    PubMed

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.

  4. Flash spectroscopy of purple membrane.

    PubMed Central

    Xie, A H; Nagle, J F; Lozier, R H

    1987-01-01

    Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level. PMID:3580488

  5. Flash spectroscopy of purple membrane.

    PubMed

    Xie, A H; Nagle, J F; Lozier, R H

    1987-04-01

    Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level.
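
    Fitting kinetic data to a sum of exponential decays, the form required by conventional first-order kinetics as described above, can be sketched as follows. The two-exponential model, rates, and amplitudes here are illustrative stand-ins, not the seven-exponential photocycle fit of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Sum of two exponential decays (first-order kinetics)."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Log-spaced times spanning several decades, as in photocycle data
t = np.logspace(-6, -1, 60)                 # 1 microsecond to 0.1 s
y = biexp(t, 0.6, 2.0e4, 0.4, 3.0e2)        # two well-separated rates

p, _ = curve_fit(biexp, t, y, p0=(0.5, 1.5e4, 0.5, 2.0e2))
k_fast, k_slow = max(p[1], p[3]), min(p[1], p[3])
```

With real data, the number of exponentials is increased until the residuals of the fit are consistent with the noise level, which is how the paper arrives at seven components.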

  6. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. Performing Gram-Schmidt orthonormalization on the pre-processed interferograms corrects the phase-shift error and yields a general ellipse form. The background intensity error and the residual error can then be compensated by the least-squares ellipse fitting method, and finally the phase can be extracted rapidly. The algorithm copes with two or three interferograms subject to environmental disturbance, low fringe number, or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
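
    The ellipse-fitting idea can be sketched as follows: two quadrature interferogram signals with background, contrast, and phase-shift errors trace a general ellipse, and a linear least-squares conic fit recovers the actual phase step. This is a hedged illustration of the principle, not the paper's algorithm; the signal amplitudes, offsets, and phase step are invented.

```python
import numpy as np

delta = 1.9                                   # true phase step (rad), not pi/2
phi = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
u = 0.90 * np.cos(phi) + 0.10                 # signal 1: contrast/offset errors
v = 1.10 * np.cos(phi + delta) - 0.05         # signal 2

# General conic a*u^2 + b*u*v + c*v^2 + d*u + e*v = 1, one linear solve
D = np.column_stack([u * u, u * v, v * v, u, v])
coef, *_ = np.linalg.lstsq(D, np.ones_like(u), rcond=None)
a, b, c = coef[:3]

# For this Lissajous ellipse, cos(delta) = -b / (2*sqrt(a*c))
delta_est = float(np.arccos(-b / (2.0 * np.sqrt(a * c))))
```

Once the actual phase step is known, the phase at each pixel follows from the two (or three) corrected signals.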

  7. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ² distributions for variances.
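
    The simplest merge case discussed above can be made concrete: with a priori weights (the known variance σ², not the subset-estimated one), the two-step fit of partitioned data reproduces the one-step global average exactly. The partition into four equal subsets and the data values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.5
data = rng.normal(10.0, sigma, 100)

# One-step global fit of an average: the plain mean of all points
global_mean = float(data.mean())

# Two-step procedure: fit each subset, then merge with a priori weights
subsets = np.split(data, 4)
means = np.array([s.mean() for s in subsets])
w = np.array([s.size / sigma**2 for s in subsets])   # weight = n / sigma^2
merged = float(np.sum(w * means) / np.sum(w))
```

Had the weights been assessed a posteriori (from each subset's own sample variance), the merged value would differ from the global fit and its statistics would no longer follow the usual distributions, which is the paper's point.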

  8. 46 CFR 108.447 - Piping.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...

  9. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
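
    The built-in error machinery described above can be shown with NumPy's polynomial fit, which returns the parameter covariance matrix directly; the slope/intercept values and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(10.0)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, 10)

# cov=True returns the covariance matrix of the fitted parameters;
# the standard errors are the square roots of its diagonal elements
p, cov = np.polyfit(x, y, 1, cov=True)
se = np.sqrt(np.diag(cov))          # SE of slope, SE of intercept
```

Defining a derived quantity as an explicit fit parameter (e.g., reparametrising the model so the target appears directly) then yields its propagated error from the same diagonal, which is the trick the abstract alludes to.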

  10. 46 CFR 108.447 - Piping.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...

  11. 46 CFR 108.447 - Piping.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...

  12. 46 CFR 108.447 - Piping.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...

  13. 46 CFR 108.447 - Piping.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., and fitting in a CO2 system must have a bursting pressure of at least 420 kilograms per square centimeter (6,000 pounds per square inch). (b) All piping for a CO2 system of nominal size of 19.05... between 168 and 196 kilograms per square centimeter (2,400 and 2,800 pounds per square inch) in the...

  14. Reliability and validity in measurement of true humeral retroversion by a three-dimensional cylinder fitting method.

    PubMed

    Saka, Masayuki; Yamauchi, Hiroki; Hoshi, Kenji; Yoshioka, Toru; Hamada, Hidetoshi; Gamada, Kazuyoshi

    2015-05-01

    Humeral retroversion is defined as the orientation of the humeral head relative to the distal humerus. Because none of the previous methods used to measure humeral retroversion strictly follow this definition, values obtained by these techniques vary and may be biased by morphologic variations of the humerus. The purpose of this study was 2-fold: to validate a method to define the axis of the distal humerus with a virtual cylinder and to establish the reliability of 3-dimensional (3D) measurement of humeral retroversion by this cylinder fitting method. Humeral retroversion in 14 baseball players (28 humeri) was measured by the 3D cylinder fitting method. The root mean square error was calculated to compare values obtained by a single tester and by 2 different testers using the embedded coordinate system. To establish the reliability, intraclass correlation coefficient (ICC) and precision (standard error of measurement [SEM]) were calculated. The root mean square errors for the humeral coordinate system were <1.0 mm/1.0° for comparison of all translations/rotations obtained by a single tester and <1.0 mm/2.0° for comparison obtained by 2 different testers. Assessment of reliability and precision of the 3D measurement of retroversion yielded an intratester ICC of 0.99 (SEM, 1.0°) and intertester ICC of 0.96 (SEM, 2.8°). The error in measurements obtained by a distal humerus cylinder fitting method was small enough not to affect retroversion measurement. The 3D measurement of retroversion by this method provides excellent intratester and intertester reliability. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  15. Interpretation of the Coefficients in the Fit y = at + bx + c

    ERIC Educational Resources Information Center

    Farnsworth, David L.

    2006-01-01

    The goals of this note are to derive formulas for the coefficients a and b in the least-squares regression plane y = at + bx + c for observations (t_i, x_i, y_i), i = 1, 2, ..., n, and to present meanings for the coefficients a and b. In this note, formulas for the coefficients a and b in the least-squares fit are…
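
    Numerically, the coefficients of the plane y = at + bx + c follow from one linear least-squares solve on the design matrix [t, x, 1]. The data below are synthetic, with invented coefficient values.

```python
import numpy as np

rng = np.random.default_rng(6)
t = rng.uniform(0.0, 10.0, 50)
x = rng.uniform(0.0, 10.0, 50)
y = 1.5 * t - 0.7 * x + 4.0 + rng.normal(0.0, 0.05, 50)

# Columns of the design matrix correspond to the coefficients a, b, c
M = np.column_stack([t, x, np.ones_like(t)])
(a, b, c), *_ = np.linalg.lstsq(M, y, rcond=None)
```

The closed-form expressions the note derives are exactly what this solve computes via the normal equations.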

  16. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
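
    For the linear case, the standard error of the fitted function follows from the parameter covariance: se_f(x)² = g(x)ᵀ C g(x) with g(x) the row of the design matrix at x. The sketch below applies this to a straight-line fit with invented data; it illustrates the quantity the paper derives, not the paper's own expressions.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, 20)

# Linear least-squares fit y = a + b*x
A = np.column_stack([np.ones_like(x), x])
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)

s2 = float(res[0]) / (len(x) - 2)     # residual variance estimate
C = s2 * np.linalg.inv(A.T @ A)       # parameter covariance matrix

# Standard error of the fit at each x: row-wise quadratic form g C g^T
se_fit = np.sqrt(np.einsum('ij,jk,ik->i', A, C, A))
```

As the closed-form result for a straight line predicts, the error band is narrowest near the mean of the x data and widens toward the ends.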

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, J. L.; Britton, R. E.; Abrecht, D. G.

    The acquisition of time-stamped list (TLIST) data provides additional information useful to gamma-spectrometry analysis. A novel technique is described that uses non-linear least-squares fitting and the Levenberg-Marquardt algorithm to simultaneously determine parent and daughter atoms from time-sequence measurements of only the daughter radionuclide. This has been demonstrated for the radioactive decay of short-lived radon progeny (214Pb/214Bi, 212Pb/212Bi) described by the Bateman first-order differential equation. The calculated atoms are in excellent agreement with measured atoms, with a difference of 1.3% to 4.8% for parent atoms and 2.4% to 10.4% for daughter atoms. Measurements are also reported with reduced uncertainty. The technique has potential to redefine gamma-spectrometry analysis.
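
    The parent-daughter determination above can be sketched by fitting the Bateman solution for the daughter population to a time sequence of daughter measurements with a Levenberg-Marquardt least-squares fit. The half-lives correspond to the 214Pb/214Bi pair named in the abstract; the initial atom numbers, time grid, and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

l1 = np.log(2) / 26.8     # 214Pb decay constant, 1/min
l2 = np.log(2) / 19.9     # 214Bi decay constant, 1/min

def daughter_atoms(t, n1_0, n2_0):
    """Bateman solution: daughter atoms from parent/daughter initial atoms."""
    grow = n1_0 * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
    return grow + n2_0 * np.exp(-l2 * t)

t = np.linspace(0.0, 120.0, 25)               # minutes
rng = np.random.default_rng(5)
obs = daughter_atoms(t, 5000.0, 800.0) * (1 + rng.normal(0.0, 0.01, t.size))

# Levenberg-Marquardt is curve_fit's default solver for unbounded problems
(n1, n2), _ = curve_fit(daughter_atoms, t, obs, p0=(1000.0, 100.0))
```

The fit recovers both initial populations from daughter-only data, which is the core of the technique: the ingrowth term carries the parent information.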

  18. Development of a winter wheat adjustable crop calendar model. [Colorado, Idaho, Oklahoma, Montana, Kansas, Missouri, North Dakota and Texas

    NASA Technical Reports Server (NTRS)

    Baker, J. R. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Least squares techniques were applied for parameter estimation of functions to predict winter wheat phenological stage with daily maximum temperature, minimum temperature, daylength, and precipitation as independent variables. After parameter estimation, tests were conducted using independent data. It may generally be concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson triquadratic form, in general use for spring wheat, yielded good results, but special techniques and care are required. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with averaged daily environmental values as independent variables.

  19. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. From the simulation and experimental results, it is shown that the parameter identification problem considered was characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution and could be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104

  20. Mean gravity anomalies and sea surface heights derived from GEOS-3 altimeter data

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1978-01-01

    Approximately 2000 GEOS-3 altimeter arcs were analyzed to improve knowledge of the geoid and gravity field. The sea surface heights (geoid undulations) were fitted in an adjustment process that incorporated cross-over constraints. The error model used for the fit was a one- or two-parameter model designed to remove altimeter bias and orbit error. The undulations on the adjusted arcs were used to produce geoid maps in 20 regions. The adjusted data were used to derive 301 5-degree equal-area anomalies and 9995 1 x 1 degree anomalies in areas where the altimeter data were most dense, using least-squares collocation techniques. Also emphasized was the ability of the altimeter data to imply rapid anomaly changes of up to 240 mgals in adjacent 1 x 1 degree blocks.

  1. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, then the coefficients of the regression model are obtained by using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function. Therefore, we can improve the estimation precision even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is surely effective in finite-sample situations.
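The two-stage idea can be sketched as follows. All data below are synthetic, and a simple moving-average smoother of the squared residuals stands in for the paper's local polynomial estimator of the variance function:

```python
import numpy as np

# Synthetic heteroscedastic data: noise standard deviation grows with x.
rng = np.random.default_rng(2)
n = 400
x = np.sort(rng.uniform(0, 1, n))
sigma = 0.1 + 0.9 * x
y = 2.0 + 3.0 * x + rng.normal(0, sigma)

X = np.column_stack([np.ones(n), x])

# Stage 1: ordinary least squares, ignoring heteroscedasticity.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_ols

# Stage 2: smooth the squared residuals to estimate the variance function
# (a moving average stands in for the local polynomial smoother here),
# then refit by weighted / generalized least squares with weights 1/var.
k = 41
var_hat = np.convolve(resid**2, np.ones(k) / k, mode='same')
var_hat = np.clip(var_hat, 1e-6, None)
W = np.sqrt(1.0 / var_hat)
beta_gls = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]
print(beta_ols, beta_gls)
```

Because the variance function is estimated nonparametrically, no parametric form for the heteroscedasticity (and no preliminary test for it) is required.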

  2. Treatment of late time instabilities in finite-difference EMP scattering codes

    NASA Astrophysics Data System (ADS)

    Simpson, L. T.; Holland, R.; Arman, S.

    1982-12-01

    Constraints applicable to a finite difference mesh for solution of Maxwell's equations are defined. The equations are applied in the time domain for computing electromagnetic coupling to complex structures, e.g., rectangular, cylindrical, or spherical. In a spatially varying grid, the amplitude of high frequency waves grows exponentially at late times through multiple reflections from the outer boundary, and this numerical noise eventually exceeds the real signal. The correction technique employs an absorbing surface and a radiating boundary, along with tailored selection of the grid mesh size. High frequency noise is removed through use of a low-pass digital filter, a linear least-squares fit is made to the low-frequency filtered response, and the original, filtered, and fitted data are merged to preserve the high frequency early-time response.
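The filter-fit-merge correction can be sketched as below. The signal, noise model, and moving-average low-pass filter are illustrative stand-ins for the actual EMP code and digital filter:

```python
import numpy as np

# Synthetic late-time response: decaying signal plus growing high-frequency
# noise (a stand-in for the exponential numerical noise described above).
t = np.linspace(0, 1, 1000)
signal = np.exp(-3 * t)
noise = 1e-4 * np.exp(4 * t) * np.sin(2 * np.pi * 200 * t)
raw = signal + noise

# Step 1: low-pass digital filter (simple moving average as a stand-in).
k = 25
filtered = np.convolve(raw, np.ones(k) / k, mode='same')

# Step 2: linear least-squares fit to the low-frequency late-time tail.
late = t > 0.7
slope, intercept = np.polyfit(t[late], filtered[late], 1)
fitted_tail = slope * t[late] + intercept

# Step 3: merge - keep the early-time raw response, replace the late-time
# portion with the fitted trend to suppress the spurious growth.
merged = raw.copy()
merged[late] = fitted_tail
```

The merged record retains the high-frequency early-time physics while removing the exponentially growing late-time noise.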

  3. LIF and emission studies of copper and nitrogen

    NASA Technical Reports Server (NTRS)

    Akundi, Murty A.

    1990-01-01

    A technique is developed to determine the rotational temperature of the nitrogen molecular ion, N2(+), from the emission spectra of the B-X transition, when P and R branches are not resolved. Its validity is tested on simulated spectra of the 0-1 band of N2(+) produced under low resolution. The method is applied to experimental spectra of N2(+) taken in the shock layer of a blunt body at distances of 1.91, 2.54, and 3.18 cm from the body. The laser induced fluorescence (LIF) spectra of copper atoms are analyzed to obtain the free stream velocities and temperatures. The only broadening mechanism considered is Doppler broadening. The temperatures are obtained by manual curve fitting, and the results are compared with least-squares fits. The agreement on the average is within 10 percent.

  4. The Sixth Spectrum of Iridium (Ir VI): Determination of the 5d⁴, 5d³6s and 5d³6p Configurations

    NASA Astrophysics Data System (ADS)

    Azarov, V. I.; Gayasov, R. R.; Gayasov, R. R.; Joshi, Y. N.; Churilov, S. S.

    The spectrum of five times ionized iridium, Ir VI, was investigated in the 420-1520 Å wavelength region. The analysis has led to the determination of the 5d⁴, 5d³6s and 5d³6p configurations. Thirty of thirty-four theoretically possible 5d⁴ levels, 27 of 38 possible 5d³6s levels and 96 of 110 possible 5d³6p levels have been established. The levels are based on 711 classified spectral lines. The level structure of the configurations has been theoretically interpreted using the orthogonal operators technique. The energy parameters have been determined by a least-squares fit to the observed levels. Calculated energy values and LS-compositions, obtained from the fitted parameter values, are given.

  5. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
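The role of the optimization engines can be illustrated with a small example. The model and data are hypothetical, and SciPy's BFGS and Levenberg-Marquardt routines stand in for MULTIVAR's engines:

```python
import numpy as np
from scipy.optimize import minimize, least_squares

# Hypothetical nonlinear model (not one of MULTIVAR's six): y = a * exp(b * x).
rng = np.random.default_rng(3)
x = np.linspace(0, 2, 30)
y = 2.5 * np.exp(0.8 * x) + rng.normal(0, 0.05, x.size)

def residuals(p):
    return p[0] * np.exp(p[1] * x) - y

def sse(p):
    r = residuals(p)
    return float(r @ r)

# Engine 1: BFGS variable-metric minimization of the sum of squares.
fit_bfgs = minimize(sse, x0=[1.0, 0.5], method='BFGS')

# Engine 2: Levenberg-Marquardt applied to the residual vector directly.
fit_lm = least_squares(residuals, x0=[1.0, 0.5], method='lm')
print(fit_bfgs.x, fit_lm.x)
```

Both engines minimize the same sum of squared residuals; Levenberg-Marquardt exploits the least-squares structure, while BFGS treats the objective as a generic unconstrained function.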

  6. Monte Carlo analysis of neutron diffuse scattering data

    NASA Astrophysics Data System (ADS)

    Goossens, D. J.; Heerdegen, A. P.; Welberry, T. R.; Gutmann, M. J.

    2006-11-01

    This paper presents a discussion of a technique developed for the analysis of neutron diffuse scattering data. The technique involves processing the data into reciprocal space sections and modelling the diffuse scattering in these sections. A Monte Carlo modelling approach is used in which the crystal energy is a function of interatomic distances between molecules and torsional rotations within molecules. The parameters of the model are the spring constants governing the interactions, as they determine the correlations which evolve when the model crystal structure is relaxed at finite temperature. When the model crystal has reached equilibrium its diffraction pattern is calculated and a χ² goodness-of-fit test between observed and calculated data slices is performed. This allows a least-squares refinement of the fit parameters and so automated refinement can proceed. The first application of this methodology to neutron, rather than X-ray, data is outlined. The sample studied was deuterated benzil, d-benzil, C₁₄D₁₀O₂, for which data was collected using time-of-flight Laue diffraction on SXD at ISIS.

  7. Mathcad in the Chemistry Curriculum Symbolic Software in the Chemistry Curriculum

    NASA Astrophysics Data System (ADS)

    Zielinski, Theresa Julia

    2000-05-01

    Physical chemistry is such a broad discipline that the topics we expect average students to complete in two semesters usually exceed their ability for meaningful learning. Consequently, the number and kind of topics and the efficiency with which students can learn them are important concerns. What topics are essential and what can we do to provide efficient and effective access to those topics? How do we accommodate the fact that students come to upper-division chemistry courses with a variety of nonuniformly distributed skills, a bit of calculus, and some physics studied one or more years before physical chemistry? The critical balance between depth and breadth of learning in courses and curricula may be achieved through appropriate use of technology and especially through the use of symbolic mathematics software. Software programs such as Mathcad, Mathematica, and Maple, however, have learning curves that diminish their effectiveness for novices. There are several ways to address the learning curve conundrum. First, basic instruction in the software provided during laboratory sessions should be followed by requiring laboratory reports that use the software. Second, one should assign weekly homework that requires the software and builds student skills within the discipline and with the software. Third, a complementary method, supported by this column, is to provide students with Mathcad worksheets or templates that focus on one set of related concepts and incorporate a variety of features of the software that they are to use to learn chemistry. In this column we focus on two significant topics for young chemists. The first is curve-fitting and the statistical analysis of the fitting parameters. The second is the analysis of the rotation/vibration spectrum of a diatomic molecule, HCl. A broad spectrum of Mathcad documents exists for teaching chemistry. One collection of 50 documents can be found at http://www.monmouth.edu/~tzielins/mathcad/Lists/index.htm. 
Another collection of peer-reviewed documents is developing through this column at the JCE Internet Web site, http://jchemed.chem.wisc.edu/JCEWWW/Features/McadInChem/index.html. With this column we add three peer-reviewed and tested Mathcad documents to the JCE site. In Linear Least-Squares Regression, Sidney H. Young and Andrzej Wierzbicki demonstrate various implicit and explicit methods for determining the slope and intercept of the regression line for experimental data. The document shows how to determine the standard deviation for the slope, the intercept, and the standard deviation of the overall fit. Students are next given the opportunity to examine the confidence level for the fit through the Student's t-test. Examination of the residuals of the fit leads students to explore the possibility of rejecting points in a set of data. The document concludes with a discussion of and practice with adding a quadratic term to create a polynomial fit to a set of data and how to determine if the quadratic term is statistically significant. There is full documentation of the various steps used throughout the exposition of the statistical concepts. Although the statistical methods presented in this worksheet are generally accessible to average physical chemistry students, an instructor would be needed to explain the finer points of the matrix methods used in some sections of the worksheet. The worksheet is accompanied by a set of data for students to use to practice the techniques presented. It would be worthwhile for students to spend one or two laboratory periods learning to use the concepts presented and then to apply them to experimental data they have collected for themselves. Any linear or linearizable data set would be appropriate for use with this Mathcad worksheet. Alternatively, instructors may select sections of the document suited to the skill level of their students and the laboratory tasks at hand. 
In a second Mathcad document, Non-Linear Least-Squares Regression, Young and Wierzbicki introduce the basic concepts of nonlinear curve-fitting and develop the techniques needed to fit a variety of mathematical functions to experimental data. This approach is especially important when mathematical models for chemical processes cannot be linearized. In Mathcad the Levenberg-Marquardt algorithm is used to determine the best fitting parameters for a particular mathematical model. As in linear least-squares, the goal of the fitting process is to find the values for the fitting parameters that minimize the sum of the squares of the deviations between the data and the mathematical model. Students are asked to determine the fitting parameters, use the Hessian matrix to compute the standard deviation of the fitting parameters, test for the significance of the parameters using Student's t-test, use residual analysis to test for data points to remove, and repeat the calculations for another set of data. The nonlinear least-squares procedure follows closely on the pattern set up for linear least-squares by the same authors (see above). If students master the linear least-squares worksheet content they will be able to master the nonlinear least-squares technique (see also refs 1, 2). In the third document, The Analysis of the Vibrational Spectrum of a Linear Molecule by Richard Schwenz, William Polik, and Sidney Young, the authors build on the concepts presented in the curve fitting worksheets described above. This vibrational analysis document, which supports a classic experiment performed in the physical chemistry laboratory, shows how a Mathcad worksheet can increase the efficiency by which a set of complicated manipulations for data reduction can be made more accessible for students. 
The increase in efficiency frees up time for students to develop a fuller understanding of the physical chemistry concepts important to the interpretation of spectra and understanding of bond vibrations in general. The analysis of the vibration/rotation spectrum for a linear molecule worksheet builds on the rich literature for this topic (3). Before analyzing their own spectral data, students practice and learn the concepts and methods of the HCl spectral analysis by using the fundamental and first harmonic vibrational frequencies provided by the authors. This approach has a fundamental pedagogical advantage. Most explanations in laboratory texts are very concise and lack mathematical details required by average students. This Mathcad worksheet acts as a tutor; it guides students through the essential concepts for data reduction and lets them focus on learning important spectroscopic concepts. The Mathcad worksheet is amply annotated. Students who have moderate skill with the software and have learned about regression analysis from the curve-fitting worksheets described in this column will be able to complete and understand their analysis of the IR spectrum of HCl. The three Mathcad worksheets described here stretch the physical chemistry curriculum by presenting important topics in forms that students can use with only moderate Mathcad skills. The documents facilitate learning by giving students opportunities to interact with the material in meaningful ways in addition to using the documents as sources of techniques for building their own data-reduction worksheets. However, working through these Mathcad worksheets is not a trivial task for the average student. Support needs to be provided by the instructor to ease students through more advanced mathematical and Mathcad processes. These worksheets raise the question of how much we can ask diligent students to do in one course and how much time they need to spend to master the essential concepts of that course. 
The Mathcad documents and associated PDF versions are available at the JCE Internet WWW site. The Mathcad documents require Mathcad version 6.0 or higher and the PDF files require Adobe Acrobat. Every effort has been made to make the documents fully compatible across the various Mathcad versions. Users may need to refer to Mathcad manuals for functions that vary with the Mathcad version number. Literature Cited 1. Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill: New York, 1969. 2. Zielinski, T. J.; Allendoerfer, R. D. J. Chem. Educ. 1997, 74, 1001. 3. Schwenz, R. W.; Polik, W. F. J. Chem. Educ. 1999, 76, 1302.

  8. Asteroid orbit fitting with radar and angular observations

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.

    2013-12-01

    The asteroid orbit fitting problem using radar and angular observations has been considered. The problem was solved in a standard way by means of minimization of the weighted sum of squares of residuals. In the orbit fitting both kinds of radar observations have been used: observations of time delays and of Doppler frequency shifts. The weight for angular observations has been set the same for all of them and has been determined as the inverse mean-square residual obtained in the orbit fitting using just angular observations. The weights of radar observations have been set as the inverse squared errors of these observations, published together with them in the Minor Planet Center electronic circulars (MPECs). For the orbit fitting five asteroids have been taken from these circulars, chosen to fulfill the requirement that more than six radar observations of each be available. The asteroids are 1950 DA, 1999 RQ36, 2002 NY40, 2004 DC and 2005 EU2. Several orbit fittings for these asteroids have been done: with just angular observations; with just radar observations; with both angular and radar observations. The obtained results are quite acceptable because in the last case the mean-square angular residuals are approximately equal to those obtained in the fitting with just angular observations. As to the radar mean-square residuals, the time delay residuals for three asteroids do not exceed 1 μs, for the two others ~10 μs, and the Doppler shift residuals for three asteroids do not exceed 1 Hz, for the two others ~10 Hz. The motion equations included perturbations from 9 planets and the Moon using the ephemerides DE422. The numerical integration has been performed with the 27th-order Everhart method with variable step. All calculations have been executed to a 34-digit decimal precision (i.e. using 128-bit floating-point numbers). Further, the sizes of confidence ellipsoids of improved orbit parameters have been compared. It has been accepted that an indicator of ellipsoid size is the geometric mean of its six semi-axes. A comparison of sizes has shown that confidence ellipsoids obtained in orbit fitting with both angular and radar observations are several times smaller than ellipsoids obtained with just angular observations.

  9. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

    Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numerical integration with the Runge-Kutta method or by convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. 
Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.

  10. Determination of ensemble-average pairwise root mean-square deviation from experimental B-factors.

    PubMed

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-03-03

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, a root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to the average B-factor (⟨B⟩) and ⟨RMSF²⟩^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from x-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is approximately 1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
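The relation between B-factors, RMSF, and ensemble-average pairwise RMSD can be made concrete with the standard crystallographic conversion B = (8π²/3)·RMSF² together with ⟨RMSD²⟩ = 2⟨RMSF²⟩ for independent fluctuations; note this is a simplified stand-in for the paper's derivation, and the numeric B value below is an illustrative typical backbone average, not the paper's data:

```python
import math

def rmsd_from_bfactor(b_avg):
    """Ensemble-average pairwise RMSD (Angstrom) from an average B-factor
    (Angstrom^2), assuming B = (8*pi^2/3) * RMSF^2 and
    <RMSD^2> = 2 * <RMSF^2> for independent fluctuations."""
    rmsf_sq = 3.0 * b_avg / (8.0 * math.pi ** 2)
    return math.sqrt(2.0 * rmsf_sq)

# An average backbone B-factor around 16 A^2 (an illustrative typical value)
# corresponds to a pairwise RMSD of about 1.1 A.
print(round(rmsd_from_bfactor(16.0), 2))
```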

  11. Determination of Ensemble-Average Pairwise Root Mean-Square Deviation from Experimental B-Factors

    PubMed Central

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-01-01

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, a root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to the average B-factor (⟨B⟩) and ⟨RMSF²⟩^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying global structural diversity of macromolecules in crystals directly from x-ray experiments, and we show this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is ∼1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. PMID:20197040

  12. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
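The iteration that results from the quadratic expansion (retaining first-derivative terms of the model and solving the resulting linear system by matrix algebra) can be sketched as a damped Gauss-Newton loop; the exponential model and data below are hypothetical, not the report's engineering problems:

```python
import numpy as np

# Quadratic expansion of chi-square: expanding chi^2(p + dp) to second order
# and keeping only first-derivative terms of the model gives the linear
# system (J^T J) dp = J^T r, solved by matrix algebra at each step.
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 50)
y = 3.0 * np.exp(-0.4 * x) + rng.normal(0, 0.02, x.size)

def model(p):
    return p[0] * np.exp(p[1] * x)

def jacobian(p):
    e = np.exp(p[1] * x)
    return np.column_stack([e, p[0] * x * e])

def chi2(p):
    r = y - model(p)
    return float(r @ r)

p = np.array([1.0, -0.1])
for _ in range(50):
    r = y - model(p)
    J = jacobian(p)
    dp = np.linalg.solve(J.T @ J, J.T @ r)   # the simultaneous linear equations
    step = 1.0
    # step-halving safeguard so chi-square never increases
    while chi2(p + step * dp) > chi2(p) and step > 1e-6:
        step /= 2.0
    p = p + step * dp
    if np.max(np.abs(step * dp)) < 1e-10:
        break
print(p)
```

The step-halving safeguard is one simple way to keep the raw Gauss-Newton step from overshooting when the quadratic expansion is a poor local approximation.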

  13. Acoustic mode measurements in the inlet of a model turbofan using a continuously rotating rake: Data collection/analysis techniques

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Heidelberg, Laurence; Konno, Kevin

    1993-01-01

    The rotating microphone measurement technique and data analysis procedures are documented which are used to determine circumferential and radial acoustic mode content in the inlet of the Advanced Ducted Propeller (ADP) model. Circumferential acoustic mode levels were measured at a series of radial locations using the Doppler frequency shift produced by a rotating inlet microphone probe. Radial mode content was then computed using a least squares curve fit with the measured radial distribution for each circumferential mode. The rotating microphone technique is superior to fixed-probe techniques because it results in minimal interference with the acoustic modes generated by rotor-stator interaction. This effort represents the first experimental implementation of a measuring technique developed by T. G. Sofrin. Testing was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. The design of the data analysis software and the performance of the rotating rake apparatus are described. The effect of experiment errors is also discussed.

  14. Acoustic mode measurements in the inlet of a model turbofan using a continuously rotating rake - Data collection/analysis techniques

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Heidelberg, Laurence; Konno, Kevin

    1993-01-01

    The rotating microphone measurement technique and data analysis procedures are documented which are used to determine circumferential and radial acoustic mode content in the inlet of the Advanced Ducted Propeller (ADP) model. Circumferential acoustic mode levels were measured at a series of radial locations using the Doppler frequency shift produced by a rotating inlet microphone probe. Radial mode content was then computed using a least squares curve fit with the measured radial distribution for each circumferential mode. The rotating microphone technique is superior to fixed-probe techniques because it results in minimal interference with the acoustic modes generated by rotor-stator interaction. This effort represents the first experimental implementation of a measuring technique developed by T. G. Sofrin. Testing was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. The design of the data analysis software and the performance of the rotating rake apparatus are described. The effect of experiment errors is also discussed.

  15. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    USDA-ARS?s Scientific Manuscript database

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
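A sketch of fitting the Langmuir model by weighted least squares; the sorption data, noise model, and parameter values below are hypothetical illustrations, not the manuscript's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, smax, k):
    """Langmuir isotherm: sorbed P vs. equilibrium concentration c."""
    return smax * k * c / (1.0 + k * c)

# Hypothetical sorption data whose scatter grows with the sorbed amount,
# violating the constant-variance assumption of ordinary least squares.
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
rng = np.random.default_rng(5)
s_true = langmuir(c, 250.0, 0.05)
sigma = 0.5 + 0.02 * s_true          # assumed per-point measurement error
s_obs = s_true + rng.normal(0, sigma)

# Weighted least squares: each residual is divided by its sigma, so noisy
# high-concentration points no longer dominate the fit.
popt, pcov = curve_fit(langmuir, c, s_obs, p0=[200.0, 0.1],
                       sigma=sigma, absolute_sigma=True)
print(popt)                          # [Smax, K]
```

With weights supplied, points with larger assumed error contribute less, which is the usual remedy when the normality / constant-variance assumptions of plain least squares do not hold.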

  16. Computing Robust, Bootstrap-Adjusted Fit Indices for Use with Nonnormal Data

    ERIC Educational Resources Information Center

    Walker, David A.; Smith, Thomas J.

    2017-01-01

    Nonnormality of data presents unique challenges for researchers who wish to carry out structural equation modeling. The subsequent SPSS syntax program computes bootstrap-adjusted fit indices (comparative fit index, Tucker-Lewis index, incremental fit index, and root mean square error of approximation) that adjust for nonnormality, along with the…

  17. ISOFIT - A PROGRAM FOR FITTING SORPTION ISOTHERMS TO EXPERIMENTAL DATA

    EPA Science Inventory

    Isotherm expressions are important for describing the partitioning of contaminants in environmental systems. ISOFIT (ISOtherm FItting Tool) is a software program that fits isotherm parameters to experimental data via the minimization of a weighted sum of squared error (WSSE) obje...

  18. Ultrasonic tracking of shear waves using a particle filter.

    PubMed

    Ingle, Atul N; Ma, Chi; Varghese, Tomy

    2015-11-01

    This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity (SWV) estimates are of significant clinical value as they help differentiate stiffer areas from softer ones, which can indicate potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques.
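
    As an illustration of the smoothing step described above, here is a minimal bootstrap particle filter applied to synthetic time-to-peak data. The random-walk state model, noise levels, and wave speed are assumptions for this sketch, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic time-to-peak curve: linear trend (constant shear wave speed)
# plus heavy observation noise, as in the tracking problem described above
x = np.linspace(0.0, 20.0, 100)            # lateral position, mm
ttp_true = x / 3.0                         # ms, assuming a 3 mm/ms wave speed
ttp_obs = ttp_true + 0.5 * rng.standard_normal(x.size)

def particle_filter(obs, n_particles=500, proc_std=0.15, obs_std=0.5):
    """Bootstrap particle filter; returns the MMSE (weighted-mean) estimate."""
    particles = np.zeros(n_particles)
    est = np.empty(obs.size)
    for t, y in enumerate(obs):
        # propagate: random-walk state model for the noiseless time-to-peak
        particles = particles + proc_std * rng.standard_normal(n_particles)
        # weight by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        est[t] = np.dot(w, particles)      # minimum mean squared error estimate
        # multinomial resampling
        particles = rng.choice(particles, size=n_particles, p=w)
    return est

ttp_hat = particle_filter(ttp_obs)
# the filtered curve should be much closer to the truth than the raw data
print(np.mean((ttp_hat - ttp_true) ** 2), np.mean((ttp_obs - ttp_true) ** 2))
```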

  19. A New Stellar Atmosphere Grid and Comparisons with HST /STIS CALSPEC Flux Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohlin, Ralph C.; Fleming, Scott W.; Gordon, Karl D.

    The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli and Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz and Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.

  20. A New Stellar Atmosphere Grid and Comparisons with HST/STIS CALSPEC Flux Distributions

    NASA Astrophysics Data System (ADS)

    Bohlin, Ralph C.; Mészáros, Szabolcs; Fleming, Scott W.; Gordon, Karl D.; Koekemoer, Anton M.; Kovács, József

    2017-05-01

    The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli & Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz & Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.

  1. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope.

    PubMed

    Sader, John E; Yousefi, Morteza; Friend, James R

    2014-02-01

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
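
    The fitting problem the abstract describes, a damped harmonic oscillator (Lorentzian-like) response plus a white noise floor, can be sketched with SciPy's `curve_fit`; the parameter values and noise model below are synthetic assumptions, not the authors' measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def thermal_spectrum(f, A, f0, Q, white):
    # damped simple-harmonic-oscillator thermal response plus white noise floor
    return A * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / Q)**2) + white

rng = np.random.default_rng(2)
f = np.linspace(10.0, 30.0, 2000)          # frequency, kHz
S = thermal_spectrum(f, 1.0, 20.0, 100.0, 2.0)
S_noisy = S * (1.0 + 0.05 * rng.standard_normal(f.size))  # sampling noise

# initial guesses, e.g. read off the apparent peak position and width
popt, pcov = curve_fit(thermal_spectrum, f, S_noisy, p0=[1.2, 19.8, 90.0, 1.5])
A_hat, f0_hat, Q_hat, white_hat = popt
perr = np.sqrt(np.diag(pcov))              # 1-sigma fit-parameter uncertainties
print(f0_hat, Q_hat)
```

    The covariance matrix `pcov` returned by `curve_fit` plays the role of the parameter-uncertainty estimate the paper derives analytically.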

  2. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sader, John E., E-mail: jsader@unimelb.edu.au; Yousefi, Morteza; Friend, James R.

    2014-02-15

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.

  3. 75 FR 57456 - Light-Walled Rectangular Pipe and Tube from the People's Republic of China: Final Results of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-21

    ...'') U.S. affiliated importer FitMAX Inc. (``FitMAX'') on June 2, 2010 and June 16, 2010. FitMAX... carbon- quality light-walled steel pipe and tube, of rectangular (including square) cross section, having...

  4. Wavefront measurements of phase plates combining a point-diffraction interferometer and a Hartmann-Shack sensor.

    PubMed

    Bueno, Juan M; Acosta, Eva; Schwarz, Christina; Artal, Pablo

    2010-01-20

    A dual setup composed of a point diffraction interferometer (PDI) and a Hartmann-Shack (HS) wavefront sensor was built to compare the estimates of wavefront aberrations provided by the two different and complementary techniques when applied to different phase plates. Results show that under the same experimental and fitting conditions both techniques provide similar information concerning the wavefront aberration map. When taking into account all Zernike terms up to 6th order, the maximum difference in root-mean-square wavefront error was 0.08 microm, and this decreased to 0.03 microm when lower-order terms were excluded. The effects of the pupil size and the order of the Zernike expansion used to reconstruct the wavefront were evaluated. The combination of the two techniques can accurately measure complicated phase profiles, combining the robustness of the HS and the higher resolution and dynamic range of the PDI.

  5. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI, which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from large field-of-view digital holographic microscopy data. The complementary phase information, in addition to the absorption information already available from bright-field microscopy, can make the methodology attractive to the biomedical user community.

  6. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    NASA Astrophysics Data System (ADS)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited to problems with linear dynamic equations, and therefore fits the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the techniques are explained and some applications related to spacecraft flying in close relative motion are shown.

  7. A Graphic Chi-Square Test For Two-Class Genetic Segregation Ratios

    Treesearch

    A.E. Squillace; D.J. Squillace

    1970-01-01

    A chart is presented for testing the goodness of fit of observed two-class genetic segregation ratios against hypothetical ratios, eliminating the need of computing chi-square. Although designed mainly for genetic studies, the chart can also be used for other types of studies involving two-class chi-square tests.
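
    For a two-class segregation test like the one the chart replaces, the chi-square value is also easy to compute directly; a sketch with SciPy, using made-up counts tested against a 3:1 hypothesis:

```python
from scipy.stats import chisquare

# Observed seedling counts (hypothetical) tested against a 3:1 ratio
observed = [290, 110]
n = sum(observed)
expected = [3 * n / 4, n / 4]    # 300 : 100 under the 3:1 hypothesis

stat, p = chisquare(observed, f_exp=expected)   # one degree of freedom
print(stat, p)  # chi-square ≈ 1.33, p ≈ 0.25: the 3:1 ratio is not rejected
```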

  8. 46 CFR 193.15-15 - Piping.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... fittings shall have a bursting pressure of not less than 6,000 pounds per square inch. (b) All piping, in...,800 pounds per square inch shall be installed in the distribution manifold or such other location as... stop valves in the manifold shall be subjected to a pressure of 1,000 pounds per square inch. With no...

  9. 46 CFR 193.15-15 - Piping.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... fittings shall have a bursting pressure of not less than 6,000 pounds per square inch. (b) All piping, in...,800 pounds per square inch shall be installed in the distribution manifold or such other location as... stop valves in the manifold shall be subjected to a pressure of 1,000 pounds per square inch. With no...

  10. 46 CFR 193.15-15 - Piping.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... fittings shall have a bursting pressure of not less than 6,000 pounds per square inch. (b) All piping, in...,800 pounds per square inch shall be installed in the distribution manifold or such other location as... stop valves in the manifold shall be subjected to a pressure of 1,000 pounds per square inch. With no...

  11. An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.

    2013-01-01

    Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included the PARSCALE's G[squared],…

  12. The role of critical ethnic awareness and social support in the discrimination-depression relationship among Asian Americans: path analysis.

    PubMed

    Kim, Isok

    2014-01-01

    This study used a path analytic technique to examine associations among critical ethnic awareness, racial discrimination, social support, and depressive symptoms. Using a convenience sample from an online survey of Asian American adults (N = 405), the study tested 2 main hypotheses: First, based on empowerment theory, critical ethnic awareness would be positively associated with racial discrimination experience; and second, based on the social support deterioration model, social support would partially mediate the relationship between racial discrimination and depressive symptoms. The result of the path analysis showed that the proposed path model was a good fit based on global fit indices, χ²(2) = 4.70, p = .10; root mean square error of approximation = 0.06; comparative fit index = 0.97; Tucker-Lewis index = 0.92; and standardized root mean square residual = 0.03. The examinations of study hypotheses demonstrated that critical ethnic awareness was directly associated (b = .11, p < .05) with the racial discrimination experience, whereas social support had a significant indirect effect (b = .48; bias-corrected 95% confidence interval [0.02, 1.26]) between the racial discrimination experience and depressive symptoms. The proposed path model illustrated that both critical ethnic awareness and social support are important mechanisms for explaining the relationship between racial discrimination and depressive symptoms among this sample of Asian Americans. This study highlights the usefulness of the critical ethnic awareness concept as a way to better understand how Asian Americans might perceive and recognize racial discrimination experiences in relation to its mental health consequences.
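
    The reported RMSEA can be reproduced from the χ², degrees of freedom, and sample size given in the abstract, under the common convention that divides by N − 1:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (N - 1 convention)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported above: chi-square(2) = 4.70, N = 405
print(round(rmsea(4.70, 2, 405), 2))  # → 0.06, matching the reported value
```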

  13. Interlaboratory comparison measurements of aspheres

    NASA Astrophysics Data System (ADS)

    Schachtschneider, R.; Fortmeier, I.; Stavridis, M.; Asfour, J.; Berger, G.; Bergmann, R. B.; Beutler, A.; Blümel, T.; Klawitter, H.; Kubo, K.; Liebl, J.; Löffler, F.; Meeß, R.; Pruss, C.; Ramm, D.; Sandner, M.; Schneider, G.; Wendel, M.; Widdershoven, I.; Schulz, M.; Elster, C.

    2018-05-01

    The need for high-quality aspheres is rapidly growing, necessitating increased accuracy in their measurement. A reliable uncertainty assessment of asphere form measurement techniques is difficult due to their complexity. In order to explore the accuracy of current asphere form measurement techniques, an interlaboratory comparison was carried out in which four aspheres were measured by eight laboratories using tactile measurements, optical point measurements, and optical areal measurements. Altogether, 12 different devices were employed. The measurement results were analysed after subtracting the design topography and subsequently a best-fit sphere from the measurements. The surface reduced in this way was compared to a reference topography that was obtained by taking the pointwise median across the ensemble of reduced topographies on a 1000 × 1000 Cartesian grid. The deviations of the reduced topographies from the reference topography were analysed in terms of several characteristics including peak-to-valley and root-mean-square deviations. Root-mean-square deviations of the reduced topographies from the reference topographies were found to be on the order of some tens of nanometres up to 89 nm, with most of the deviations being smaller than 20 nm. Our results give an indication of the accuracy that can currently be expected in form measurements of aspheres.

  14. Open loop model for WDM links

    NASA Astrophysics Data System (ADS)

    D, Meena; Francis, Fredy; T, Sarath K.; E, Dipin; Srinivas, T.; K, Jayasree V.

    2014-10-01

    Wavelength Division Multiplexing (WDM) techniques over fibre links help to exploit the high bandwidth capacity of single-mode fibres. A typical WDM link consisting of a laser source, multiplexer/demultiplexer, amplifier, and detector is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical and curve fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system, thus providing a black-box solution for a link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This will help the designer to select a suitable WDM link model during a complex link design.
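
    The model-comparison step described above, fitting candidate curves to a component's response and ranking them by RMSE, can be sketched as follows; the component response and noise level are hypothetical:

```python
import numpy as np

# Hypothetical component response: output power vs. input drive current,
# standing in for one WDM link component to be modelled by curve fitting
I = np.linspace(20.0, 80.0, 40)                    # input current, mA
P_true = 0.8 * (I - 15.0) + 0.002 * (I - 15.0) ** 2
P_meas = P_true + np.random.default_rng(6).normal(0.0, 0.5, I.size)

def rmse(y, y_hat):
    # root mean square error between the data and an approximated model
    return np.sqrt(np.mean((y - y_hat) ** 2))

# Compare candidate polynomial models of increasing order by their RMSE
errors = {}
for deg in (1, 2, 3):
    coeffs = np.polyfit(I, P_meas, deg)
    errors[deg] = rmse(P_meas, np.polyval(coeffs, I))
print(errors)  # the fitting error shrinks as the model order increases
```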

  15. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    NASA Technical Reports Server (NTRS)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

    This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as droopy eyelids and light reflections while maintaining the measurement resolution available from the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search over pupil candidates using pixel-coordinate reference lookup tables optimizes the processing requirements for a least-squares fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.
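
    A least-squares fit of a circular boundary can be made linear by the algebraic (Kasa) formulation; the edge points below are synthetic, and the paper's candidate search and lookup tables are not reproduced:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, r).

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense,
    a linear stand-in for fitting the circular pupil-disk boundary.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - F)

# Synthetic pupil edge points: circle at (160, 120), radius 40, plus noise
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
x = 160.0 + 40.0 * np.cos(t) + 0.5 * rng.standard_normal(t.size)
y = 120.0 + 40.0 * np.sin(t) + 0.5 * rng.standard_normal(t.size)
print(fit_circle(x, y))  # ≈ (160, 120, 40)
```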

  16. Non-overlap subaperture interferometric testing for large optics

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Yu, Yingjie; Zeng, Wenhan; Qi, Te; Chen, Mingyi; Jiang, Xiangqian

    2017-08-01

    It has been shown that the number of subapertures and the amount of overlap have a significant influence on the stitching accuracy. In this paper, a non-overlap subaperture interferometric testing method (NOSAI) is proposed to inspect large optical components. This method greatly reduces the number of subapertures and the influence of environmental interference while maintaining the accuracy of reconstruction. A general subaperture distribution pattern of NOSAI is also proposed for large rectangular surfaces. The square Zernike polynomial is employed to fit such a wavefront. The effect of the minimum number of fitting terms on the accuracy of NOSAI, and the sensitivities of NOSAI to subaperture alignment error, power systematic error, and random noise, are discussed. Experimental results validate the feasibility and accuracy of the proposed NOSAI in comparison with the wavefront obtained by a large-aperture interferometer and the surface stitched by the multi-aperture overlap-scanning technique (MAOST).

  17. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
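
    A least-squares best fit for the three-parameter Weibull model is commonly done by linearizing the CDF and searching the location parameter for the best straight-line fit. A sketch of that idea (not necessarily the report's exact formulation), demonstrated on exact quantiles so the recovery is essentially perfect:

```python
import numpy as np

def weibull3_lsq(x, gammas):
    """Three-parameter Weibull fit by least squares on the linearized CDF.

    For each trial location gamma, regress ln(-ln(1-F)) on ln(x - gamma)
    using median-rank plotting positions; keep the gamma with the best R^2.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    F = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # median ranks
    Y = np.log(-np.log(1.0 - F))
    best = None
    for g in gammas:
        if g >= x[0]:
            continue                                 # need x - gamma > 0
        slope, intercept = np.polyfit(np.log(x - g), Y, 1)
        r2 = np.corrcoef(np.log(x - g), Y)[0, 1] ** 2
        if best is None or r2 > best[0]:
            # shape m = slope; scale eta = exp(-intercept / slope)
            best = (r2, g, slope, np.exp(-intercept / slope))
    _, gamma, m, eta = best
    return gamma, m, eta

# Exact quantiles of a Weibull with location 100, scale 300, shape 2,
# evaluated at the median-rank positions, so recovery is essentially exact
n = 50
F = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
x = 100.0 + 300.0 * (-np.log(1.0 - F)) ** 0.5
gamma, m, eta = weibull3_lsq(x, np.linspace(0.0, 100.0, 101))
print(gamma, m, eta)  # → 100.0 2.0 300.0 (to numerical precision)
```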

  18. [A preliminary study on the forming quality of titanium alloy removable partial denture frameworks fabricated by selective laser melting].

    PubMed

    Liu, Y F; Yu, H; Wang, W N; Gao, B

    2017-06-09

    Objective: To evaluate the processing accuracy, internal quality, and fit of titanium alloy removable partial denture (RPD) frameworks fabricated by the selective laser melting (SLM) technique, and to provide a reference for clinical application. Methods: The plaster model of one clinical patient was used as the working model; it was scanned and reconstructed into a digital working model, on which a RPD framework was designed. Eight corresponding RPD frameworks were then fabricated using the SLM technique. A three-dimensional (3D) optical scanner was used to obtain 3D data of the frameworks, which were compared with the original computer aided design (CAD) model to evaluate processing precision. Traditional cast pure titanium frameworks were used as the control group, and internal quality was analyzed by X-ray examination. Finally, the fit of the frameworks was examined on the plaster model. Results: The overall average deviation of the titanium alloy RPD frameworks fabricated by SLM was (0.089±0.076) mm, and the root mean square error was 0.103 mm. No visible pores, cracks, or other internal defects were detected in the frameworks. The frameworks seated completely on the plaster model, their tissue surfaces fitted the model well, and there was no obvious movement. Conclusions: Titanium alloy RPD frameworks fabricated by SLM are of good quality.

  19. Combined ellipsometry and refractometry technique for characterisation of liquid crystal based nanocomposites.

    PubMed

    Warenghem, Marc; Henninot, Jean François; Blach, Jean François; Buchnev, Oleksandr; Kaczmarek, Malgosia; Stchakovsky, Michel

    2012-03-01

    Spectroscopic ellipsometry is a technique especially well suited to measuring the effective optical properties of a composite material. However, when the sample is optically thick and anisotropic, the technique loses accuracy for two reasons: anisotropy means that two parameters have to be determined (the ordinary and extraordinary indices), and optical thickness means a large order of interference. In that case, several dielectric functions can emerge from the fitting procedure with a similar mean square error and no criterion to discriminate the right solution. In this paper, we develop a methodology to overcome that drawback by combining ellipsometry with refractometry. The same sample is used in a total internal reflection (TIR) setup and in a spectroscopic ellipsometer. The number of parameters to be determined by the fitting procedure is reduced by analysing the two spectra together; the correct final solution is found by using the TIR results both as initial values for the parameters and as a check on the final dielectric function. A prefitting routine is developed to enter the right initial values into the fitting procedure and so approach the right solution. As an example, this methodology is used to analyse the optical properties of BaTiO(3) nanoparticles embedded in a nematic liquid crystal. Such a methodology can also be used to test the validity of mixing laws experimentally, since ellipsometry gives the effective dielectric function, which can then be compared with the dielectric functions of the components of the mixture, as shown for the BaTiO(3)/nematic composite.

  20. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.

  1. Tensor hypercontraction. II. Least-squares renormalization.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2012-12-14

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.

  2. Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models

    NASA Astrophysics Data System (ADS)

    Van Houtte, Chris; Denolle, Marine

    2018-04-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 MW7.1 Kumamoto, Japan earthquake and a MW5.3 aftershock of the 2016 MW7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc was calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
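
    The corner-frequency trade-off the authors describe can be seen in a simple spectral fit. Below, a generalized Brune source spectrum with a free falloff rate n is fit in log amplitude (fixing n = 2 recovers the ω-square model); all values are synthetic assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_brune(f, log_omega0, fc, n):
    # generalized Brune source spectrum in log10 amplitude;
    # fixing the falloff rate n = 2 recovers the omega-square model
    return log_omega0 - np.log10(1.0 + (f / fc) ** n)

rng = np.random.default_rng(5)
f = np.logspace(-1, 1.3, 60)            # 0.1 to 20 Hz
log_obs = (3.0 - np.log10(1.0 + (f / 2.0) ** 2.0)
           + 0.05 * rng.standard_normal(f.size))

# fit the corner frequency fc and falloff rate n jointly, in log amplitude
popt, pcov = curve_fit(log_brune, f, log_obs, p0=[2.5, 1.0, 1.5],
                       bounds=([0.0, 0.01, 0.5], [6.0, 20.0, 4.0]))
log_omega0_hat, fc_hat, n_hat = popt
print(fc_hat, n_hat)  # near 2.0 and 2.0
```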

  3. Description of gas/particle sorption kinetics with an intraparticle diffusion model: Desorption experiments

    USGS Publications Warehouse

    Rounds, S.A.; Tiffany, B.A.; Pankow, J.F.

    1993-01-01

    Aerosol particles from a highway tunnel were collected on a Teflon membrane filter (TMF) using standard techniques. Sorbed organic compounds were then desorbed for 28 days by passing clean nitrogen through the filter. Volatile n-alkanes and polycyclic aromatic hydrocarbons (PAHs) were liberated from the filter quickly; only a small fraction of the less volatile n-alkanes and PAHs were desorbed. A nonlinear least-squares method was used to fit an intraparticle diffusion model to the experimental data. Two fitting parameters were used: the gas/particle partition coefficient (Kp) and an effective intraparticle diffusion coefficient (Deff). Optimized values of Kp are in agreement with previously reported values. The slope of a correlation between the fitted values of Deff and Kp agrees well with theory, but the absolute values of Deff are a factor of ~10^6 smaller than predicted for sorption-retarded, gaseous diffusion. Slow transport through an organic or solid phase within the particles or preferential flow through the bed of particulate matter on the filter might be the cause of these very small effective diffusion coefficients. © 1993 American Chemical Society.

  4. Improved cutback method measuring beat-length for high-birefringence optical fiber by fitting data of photoelectric signal

    NASA Astrophysics Data System (ADS)

    Shi, Zhi-Dong; Lin, Jian-Qiang; Bao, Huan-Huan; Liu, Shu; Xiang, Xue-Nong

    2008-03-01

    A photoelectric measurement system for measuring the beat length of birefringent fiber is set up, including a rotating-wave-plate polarimeter that uses a single photodiode. Two improved cutback methods, suitable for measuring beat lengths in the millimeter range in high-birefringence fiber, are proposed based on data processing techniques. The cut length need not be restricted to less than one centimeter, so an automatic cleaving machine can be used freely, avoiding the low efficiency and poor success rate of a manually operated cleaving blade. The first method fits the measured data to a saw-tooth function of the trial beat length by the criterion of minimum squared deviations, with no special limitation on the cut length. The second method applies linear fitting within divided length ranges; the only restriction is that the increment between successive cut lengths be less than one beat length. Experiments on a section of holey high-birefringence fiber were carried out with both methods. The measurement error of the beat length is discussed and the advantages of the methods are compared.

  5. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  6. [Analysis of the impact of job characteristics and organizational support for workplace violence].

    PubMed

    Li, M L; Chen, P; Zeng, F H; Cui, Q L; Zeng, J; Zhao, X S; Li, Z N

    2017-12-20

    Objective: To analyze the effects of job characteristics and organizational support on workplace violence, explore the influence paths and a theoretical model, and provide a theoretical basis for reducing workplace violence. Methods: Stratified random sampling was used to select 813 medical staff, conductors, and bus drivers in Chongqing, who were surveyed from February to October 2014 with a self-designed questionnaire covering job characteristics, organizational attitude toward workplace violence, experience of workplace violence, and fear of violence. Amos 21.0 was used for path analysis and to establish a theoretical model of workplace violence. Results: The odds ratios of job characteristics and organizational attitude for workplace violence were 6.033 and 0.669, respectively, and the path coefficients were 0.41 and -0.14, respectively ( P <0.05). The fit indices of the model were: chi-square (χ(2)) =67.835, ratio of the chi-square to the degrees of freedom (χ(2)/df) =5.112, goodness-of-fit index (GFI) =0.970, adjusted goodness-of-fit index (AGFI) =0.945, normed fit index (NFI) =0.923, root mean square error of approximation (RMSEA) =0.071, and fit criterion (Fmin) =0.092, so the model fit the data well. Conclusion: Job characteristics are a risk factor for workplace violence, while organizational attitude is a protective factor; changing job characteristics and increasing the organization's willingness to deal with workplace violence should help reduce workplace violence and increase loyalty to the organization.

  7. Improvement of Raman lidar algorithm for quantifying aerosol extinction

    NASA Technical Reports Server (NTRS)

    Russo, Felicita; Whiteman, David; Demoz, Belay; Hoff, Raymond

    2005-01-01

    Aerosols are particles of varying composition and origin that influence the formation of clouds, which are important in the atmospheric radiative balance. At present there is high uncertainty about the effect of aerosols on climate, mainly because aerosol presence in the atmosphere can be highly variable in space and time. Monitoring aerosols in the atmosphere is necessary to better understand many of these uncertainties. A lidar (an instrument that uses light to detect the extent of atmospheric aerosol loading) is particularly useful for monitoring aerosols, since it records the scattered intensity from molecules and aerosols as a function of altitude. One lidar method (the Raman lidar) makes use of the wavelength changes that occur when light interacts with the varying chemistry and structure of atmospheric aerosols. One quantity indicative of aerosol presence is the aerosol extinction, which quantifies the attenuation (removal of photons), due to scattering, that light undergoes when propagating in the atmosphere. It can be measured directly with a Raman lidar using the wavelength dependence of the received signal. To calculate aerosol extinction from Raman scattering data it is necessary to evaluate the rate of change (derivative) of a Raman signal with respect to altitude. Since derivatives are defined for continuous functions, they cannot be applied directly to the experimental data, which are discrete. The most popular technique for finding the functional behavior of experimental data is the least-squares fit, which yields the polynomial function that best approximates the experimental data. The typical approach in the lidar community is to make an a priori assumption about the functional behavior of the data in order to calculate the derivative. It has been shown in previous work that using the chi-square technique to determine the most likely functional behavior of the data before calculating the derivative eliminates the need for such a priori assumptions. We note that the a priori choice of a model can itself lead to larger uncertainties than the method validated here. In this manuscript, the chi-square technique that determines the most likely functional behavior is validated through numerical simulation and by application to a large body of Raman lidar measurements. In general, we show that the chi-square approach to evaluating aerosol extinction yields lower extinction uncertainty than the traditional technique. We also use the technique to study the feasibility of developing a general characterization of the extinction uncertainty that could permit the uncertainty in Raman lidar aerosol extinction measurements to be estimated accurately without the use of the chi-square technique.
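    The order-selection idea described above can be sketched in a few lines: for each analysis window, fit polynomials of increasing order, keep the one whose reduced chi-square is closest to unity, and differentiate it analytically. This is a minimal illustration of the general approach, not the authors' exact algorithm; the window, noise level, and maximum order below are assumptions.

```python
import numpy as np

def best_poly_derivative(z, s, sigma, max_order=3):
    """Pick the polynomial order whose reduced chi-square is closest to 1,
    then return the fitted derivative ds/dz at the window centre.
    z, s: altitude grid and (noisy) signal in one analysis window.
    sigma: per-point measurement uncertainty."""
    best = None
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(z, s, order, w=1.0 / sigma)
        resid = s - np.polyval(coeffs, z)
        dof = len(z) - (order + 1)
        red_chi2 = np.sum((resid / sigma) ** 2) / dof
        # keep the order whose reduced chi-square is nearest to unity
        if best is None or abs(red_chi2 - 1.0) < abs(best[0] - 1.0):
            best = (red_chi2, coeffs)
    deriv = np.polyval(np.polyder(best[1]), z[len(z) // 2])
    return deriv, best[0]

# synthetic Raman-like signal: exponential decay plus Gaussian noise
rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 41)
sigma = np.full_like(z, 0.01)
s = np.exp(-2.0 * z) + rng.normal(0.0, 0.01, z.size)
d, chi2 = best_poly_derivative(z, s, sigma)   # true derivative at 0.5 is -2/e
```

The same machinery applies per altitude window in a real profile; the chosen order then adapts to how structured the signal is locally.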

  8. Correlation Between Microstructure and Optical Properties of Cu (In0.7, Ga0.3) Se2 Grown by Electrodeposition Technique

    NASA Astrophysics Data System (ADS)

    Chihi, Adel; Bessais, Brahim

    2017-01-01

    Polycrystalline thin films Cu (In0.7, Ga0.3) Se2 (CIGSe) were grown on copper foils at various cathodic potentials by using an electrodeposition technique. Scanning electron microscopy showed that the average diameter of CIGSe grains increase from 0.1 μm to 1 μm when the cathodic potential decreases. The structure and surface morphology were investigated by x-ray diffraction and atomic force microscopy (AFM) techniques. This structure study shows that the thin films were well crystallized in a chalcopyrite structure without unwanted secondary phases with a preferred orientation along (112) plane. Energy-dispersive x-ray analyses confirms the existence of CIGSe single phase on a copper substrate. AFM analysis indicated that the root mean square roughness decreases from 64.28 to 27.42 when the potential deposition increases from -0.95 V to -0.77 V. Using Raman scattering spectroscopy, the A1 optical phonon mode was observed in 173 cm-1 and two other weak peaks were detected at 214 cm-1 and 225 cm-1 associated with the B2 and E modes of the CIGSe phase. Through spectroscopy ellipsometry analysis, a three-layer optical model was exploited to derive the optical properties and layer thickness of the CIGSe film by least-squares fitting the measured variation in polarization light versus the obtained microstructure.

  9. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
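    The two-stage logic (maximize ordinal fit, then break ties by least squares) can be sketched for two predictors. This toy version, with a grid search over directions and scipy's Kendall's tau, is an illustration of the idea only, not the published OCLO algorithm; all data and sizes are invented.

```python
import numpy as np
from scipy.stats import kendalltau

def oclo_fit(X, y, n_angles=720):
    """Toy order-constrained least squares for two predictors.
    Step 1: grid-search the direction b = (cos t, sin t) maximizing
    Kendall's tau between the linear score X @ b and y.
    Step 2: among tau-maximizing directions, choose the affine rescaling
    a + c*(X @ b) with the smallest sum of squared errors."""
    best_tau, candidates = -np.inf, []
    for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        b = np.array([np.cos(t), np.sin(t)])
        tau, _ = kendalltau(X @ b, y)
        if tau > best_tau + 1e-12:
            best_tau, candidates = tau, [b]
        elif tau > best_tau - 1e-12:
            candidates.append(b)
    best = None
    for b in candidates:
        A = np.column_stack([np.ones(len(y)), X @ b])
        (a, c), *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((a + c * (X @ b) - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, a, c * b)          # intercept and effective slopes
    return best[1], best[2]

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(80, 2))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 80)
intercept, slopes = oclo_fit(X, y)
```

Because tau is invariant to monotone rescaling, many directions can tie on ordinal fit; the least-squares step is what resolves those ties, which is the essence of the conditional optimization described above.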

  10. Single-level resonance parameters fit nuclear cross-sections

    NASA Technical Reports Server (NTRS)

    Drawbaugh, D. W.; Gibson, G.; Miller, M.; Page, S. L.

    1970-01-01

    Least-squares analyses of experimental differential cross-section data for the U-235 nucleus have yielded single-level Breit-Wigner resonance parameters that simultaneously fit three nuclear cross sections: capture, fission, and total.
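    As an illustration of this kind of analysis, a single-level Breit-Wigner (Lorentzian) resonance shape can be fitted to noisy cross-section data by nonlinear least squares. The parameter values, grid, and noise level below are hypothetical, not the U-235 data of the record.

```python
import numpy as np
from scipy.optimize import curve_fit

def breit_wigner(E, sigma0, E0, gamma):
    # single-level Breit-Wigner (Lorentzian) resonance shape
    return sigma0 * (gamma / 2.0) ** 2 / ((E - E0) ** 2 + (gamma / 2.0) ** 2)

rng = np.random.default_rng(1)
E = np.linspace(0.0, 2.0, 200)          # energy grid (eV), hypothetical
truth = (100.0, 1.0, 0.1)               # sigma0 (b), E0 (eV), Gamma (eV)
sigma = breit_wigner(E, *truth) + rng.normal(0.0, 1.0, E.size)

# least-squares fit of all three resonance parameters at once
popt, pcov = curve_fit(breit_wigner, E, sigma, p0=(80.0, 0.9, 0.2))
```

Fitting several reaction channels simultaneously, as in the record, amounts to stacking the residuals of all channels into one least-squares objective with shared resonance parameters.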

  11. Fitting Orbits to Jupiter's Moons with a Spreadsheet.

    ERIC Educational Resources Information Center

    Bridges, Richard

    1995-01-01

    Describes how a spreadsheet is used to fit a circular orbit model to observations of Jupiter's moons made with a small telescope. Kepler's Third Law and the inverse square law of gravity are observed. (AIM)
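    The same exercise is easy to reproduce outside a spreadsheet. Using rounded textbook values for the Galilean moons (assumed here, not taken from the article's telescope observations), a least-squares line in log-log space recovers the Kepler slope of 3/2.

```python
import numpy as np

# Approximate orbital data for the Galilean moons (period in days,
# semi-major axis in 10^3 km) -- values rounded from standard tables.
moons = {
    "Io":       (1.769,   421.7),
    "Europa":   (3.551,   670.9),
    "Ganymede": (7.155,  1070.4),
    "Callisto": (16.689, 1882.7),
}
T = np.array([v[0] for v in moons.values()])
a = np.array([v[1] for v in moons.values()])

# Kepler's third law predicts T^2 proportional to a^3, i.e. a slope of 3/2
# in log-log space; recover it with a least-squares straight-line fit.
slope, intercept = np.polyfit(np.log(a), np.log(T), 1)
```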

  12. Ultrasonic tracking of shear waves using a particle filter

    PubMed Central

    Ingle, Atul N.; Ma, Chi; Varghese, Tomy

    2015-01-01

    Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques. PMID:26520761
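    The core smoothing step can be sketched as a bootstrap particle filter on a random-walk hidden state, with the noisy time-to-peak samples as observations. All sizes and noise levels here are invented for illustration, and the likelihood is assumed Gaussian; this is a generic sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# hidden state: slowly varying time-to-peak profile (random walk);
# observations: the same profile corrupted by heavy measurement noise
n, q, r = 200, 0.05, 1.0
truth = np.cumsum(rng.normal(0.0, q, n)) + 10.0
obs = truth + rng.normal(0.0, r, n)

# bootstrap particle filter with a random-walk motion model
n_p = 500
particles = np.full(n_p, obs[0]) + rng.normal(0.0, r, n_p)
est = np.empty(n)
for t in range(n):
    particles += rng.normal(0.0, q, n_p)                  # propagate
    w = np.exp(-0.5 * ((obs[t] - particles) / r) ** 2)    # Gaussian likelihood
    w /= w.sum()
    est[t] = np.dot(w, particles)                         # posterior mean
    idx = rng.choice(n_p, n_p, p=w)                       # multinomial resample
    particles = particles[idx]
```

The posterior-mean trace `est` is the minimum-mean-squared-error estimate under the assumed model, which is the sense of optimality described in the abstract.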

  13. An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models

    ERIC Educational Resources Information Center

    Liu, Yanlou; Tian, Wei; Xin, Tao

    2016-01-01

    The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…

  14. Application of least-squares fitting of ellipse and hyperbola for two dimensional data

    NASA Astrophysics Data System (ADS)

    Lawiyuniarti, M. P.; Rahmadiantri, E.; Alamsyah, I. M.; Rachmaputri, G.

    2018-01-01

    The least-squares fitting of ellipses and hyperbolas to two-dimensional data has been applied to analyze the spatial continuity of coal deposits in a mining field, using the fitting method introduced by Fitzgibbon, Pilu, and Fisher in 1996. This method uses 4a0a2 - a1^2 = 1 as a constraint function. Meanwhile, in 1994, Gander, Golub, and Strebel introduced ellipse and hyperbola fitting methods using a singular value decomposition (SVD) approach, which can be generalized to three-dimensional fitting. In this research we discuss these two fitting methods and apply them to four coal-quality variables (ash, calorific value, sulfur, and seam thickness), producing ellipse or hyperbola fits. In addition, we compute the error of each method; although the errors are not very different, the error of the method introduced by Fitzgibbon et al. is smaller than that of the method introduced by Gander et al.
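    A compact sketch of the Fitzgibbon-Pilu-Fisher direct fit: writing the conic as a0*x^2 + a1*x*y + a2*y^2 + a3*x + a4*y + a5 = 0, the constraint 4*a0*a2 - a1^2 = 1 turns the problem into a generalized eigenvalue problem whose ellipse solution satisfies the constraint with a positive sign. This plain implementation ignores the numerical refinements of later variants, and the test data are synthetic.

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse(x, y):
    # design matrix for the conic a0*x^2 + a1*x*y + ... + a5 = 0
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # constraint matrix for 4*a0*a2 - a1^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    evals, evecs = eig(S, C)          # generalized eigenvalue problem
    best = None
    for i in range(6):
        a = evecs[:, i].real
        con = 4.0 * a[0] * a[2] - a[1] ** 2
        if con > 0:                   # ellipse-type eigenvector
            a = a / np.sqrt(con)      # enforce 4*a0*a2 - a1^2 = 1
            res = a @ S @ a           # algebraic residual
            if best is None or res < best[0]:
                best = (res, a)
    return best[1]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 100)
x = 4.0 * np.cos(t) + rng.normal(0.0, 0.05, t.size)   # ellipse, a=4, b=2
y = 2.0 * np.sin(t) + rng.normal(0.0, 0.05, t.size)
a = fit_ellipse(x, y)
```

The sign of 4*a0*a2 - a1^2 is what separates ellipses (positive) from hyperbolas (negative), which is why the same conic machinery serves both curve families in the paper.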

  15. Corrigendum to "Measurement and computations for temperature dependences of self-broadened carbon dioxide transitions in the 30012←00001 and 30013←00001 bands" [J. Quant. Spectrosc. Radiat. Transf., 111 (9) (2010) 1065-1079]

    NASA Astrophysics Data System (ADS)

    Predoi-Cross, Adriana; Liu, W.; Murphy, Reba; Povey, Chad; Gamache, R.; Laraia, A.; McKellar, A. R. W.; Hurtmans, Daniel; Devi, V. M.

    2015-10-01

    The group of authors would like to make the following clarification: the retrievals of self-broadened temperature dependence coefficients were performed both using the multispectrum fit program from Ref. [14] and using the multispectrum fit program of D. Chris Benner [Benner DC, Rinsland CP, Devi VM, Smith MAH, Atkins D. A multispectrum nonlinear least-squares fitting technique. J. Quant. Spectrosc. Radiat. Transf. 1995;53:705-21]. To retrieve the room-temperature self-broadening parameters, the authors used the values in Ref. [4]. For consistency with the results published for air-broadening and air-shift temperature dependence coefficients in A. Predoi-Cross, A.R.W. McKellar, D. Chris Benner, V. Malathy Devi, R.R. Gamache, C.E. Miller, R.A. Toth, L.R. Brown, Temperature dependences for air-broadened Lorentz half width and pressure-shift coefficients in the 30013←00001 and 30012←00001 bands of CO2 near 1600 nm, Canadian Journal of Physics, 87 (5) (2009) 517-535, Tables 2 and 3 and Figures 2 and 4 contain only the values retrieved using the multispectrum fit program of D. Chris Benner. We would like to thank D. Chris Benner for allowing us to use his fitting software.

  16. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved-resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. SAMMY fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduces the experimental data very well, along with a resonance-parameter covariance matrix for data-uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in benchmark calculations.
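    The generalized least-squares (Bayes) update underlying this kind of evaluation can be written compactly for a linearized model. This standalone sketch shows only the update equations, with a trivial made-up example, not SAMMY's R-matrix machinery.

```python
import numpy as np

def bayes_gls_update(p0, M0, G, r, V):
    """One linearized Bayes / generalized least-squares step.
    p0, M0 : prior parameter vector and covariance
    G      : sensitivity matrix, d(model)/d(parameters)
    r, V   : residuals (data - model(p0)) and data covariance."""
    K = M0 @ G.T @ np.linalg.inv(G @ M0 @ G.T + V)   # gain matrix
    p1 = p0 + K @ r                                  # updated parameters
    M1 = M0 - K @ G @ M0                             # updated covariance
    return p1, M1

# scalar sanity case: equally informative prior and datum average out
p1, M1 = bayes_gls_update(np.array([0.0]), np.eye(1),
                          np.eye(1), np.array([1.0]), np.eye(1))
```

The updated covariance M1 is the counterpart of the resonance-parameter covariance matrix mentioned in the record: it is what propagates into cross-section uncertainty calculations.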

  17. Direct shear mapping - a new weak lensing tool

    NASA Astrophysics Data System (ADS)

    de Burgh-Day, C. O.; Taylor, E. N.; Webster, R. L.; Hopkins, A. M.

    2015-08-01

    We have developed a new technique called direct shear mapping (DSM) to measure gravitational lensing shear directly from observations of a single background source. The technique assumes the velocity map of an unlensed, stably rotating galaxy will be rotationally symmetric. Lensing distorts the velocity map, making it asymmetric. The degree of lensing can be inferred by determining the transformation required to restore axisymmetry. This technique is in contrast to traditional weak lensing methods, which require averaging an ensemble of background galaxy ellipticity measurements to obtain a single shear measurement. We have tested the efficacy of our fitting algorithm with a suite of systematic tests on simulated data. We demonstrate that we are in principle able to measure shears as small as 0.01. In practice, we have fitted for the shear in very low redshift (and hence unlensed) velocity maps, and have obtained a null result with an error of ±0.01. This high sensitivity results from analysing spatially resolved spectroscopic images (i.e. 3D data cubes), including not just shape information (as in traditional weak lensing measurements) but velocity information as well. Spirals and rotating ellipticals are ideal targets for this new technique. Data from any large Integral Field Unit (IFU) or radio telescope are suitable, or indeed any instrument with spatially resolved spectroscopy such as the Sydney-Australian-Astronomical Observatory Multi-Object Integral Field Spectrograph (SAMI), the Atacama Large Millimeter/submillimeter Array (ALMA), the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) and the Square Kilometer Array (SKA).

  18. Advanced statistical methods for improved data analysis of NASA astrophysics missions

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.

    1992-01-01

    The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.

  19. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least-squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  20. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
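    The representation step can be reproduced with a least-squares cubic. The interaction curve below is synthetic, standing in for the MAGIC 4 binary-interaction output described in the report; only the polyfit/polyval mechanics are the point.

```python
import numpy as np

# Hypothetical binary-interaction correction data: measured k-ratio versus
# true concentration for one element pair at a fixed accelerating potential.
conc = np.linspace(0.0, 1.0, 11)
kratio = conc * (0.85 + 0.25 * conc - 0.10 * conc**2)   # synthetic curve

# Represent the interaction by a third-order polynomial, as in the report;
# only these four coefficients need to be stored on the minicomputer.
coeffs = np.polyfit(conc, kratio, 3)

# The stored coefficients can then be evaluated on-line:
k_at_half = np.polyval(coeffs, 0.5)
```

Storing a handful of polynomial coefficients per binary interaction, rather than full correction tables, is what kept the on-line core requirements small.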

  1. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
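    The idea of optimizing all constants at once, exponential time constants included, can be sketched with a generic nonlinear least-squares solver. This uses scipy rather than the VMA tool, and synthetic relaxation data rather than the propellant measurements, so it is an illustration of the approach only.

```python
import numpy as np
from scipy.optimize import least_squares

def prony(t, p):
    """Two-term Prony series: E(t) = E_inf + E1*exp(-t/tau1) + E2*exp(-t/tau2)."""
    e_inf, e1, tau1, e2, tau2 = p
    return e_inf + e1 * np.exp(-t / tau1) + e2 * np.exp(-t / tau2)

# synthetic relaxation-modulus data (hypothetical material, not TP-H1148)
t = np.logspace(-2, 2, 60)
truth = np.array([1.0, 2.0, 0.5, 1.0, 5.0])
data = prony(t, truth)

# fit ALL constants, including the exponential time constants, at once,
# instead of fixing the taus and solving a linear problem for the rest
fit = least_squares(lambda p: prony(t, p) - data,
                    x0=[1.2, 1.5, 0.3, 1.5, 3.0],
                    bounds=(0.0, np.inf))
```

Compared with the traditional two-stage approach (guess the taus, then solve linearly), letting the optimizer move the time constants is what buys the improved correlation reported above.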

  2. Determination of the Interaction Position of Gamma Photons in Monolithic Scintillators Using Neural Network Fitting

    NASA Astrophysics Data System (ADS)

    Conde, P.; Iborra, A.; González, A. J.; Hernández, L.; Bellido, P.; Moliner, L.; Rigla, J. P.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Seimetz, M.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2016-02-01

    In Positron Emission Tomography (PET) detectors based on monolithic scintillators, the photon interaction position needs to be estimated from the light distribution (LD) on the photodetector pixels. Due to the finite size of the scintillator volume, the symmetry of the LD is truncated everywhere except for the crystal center. This effect produces a poor estimation of the interaction positions towards the edges, an especially critical situation when linear algorithms, such as Center of Gravity (CoG), are used. When all the crystal faces are painted black, except the one in contact with the photodetector, the LD can be assumed to behave as the inverse square law, providing a simple theoretical model. Using this LD model, the interaction coordinates can be determined by means of fitting each event to a theoretical distribution. In that sense, the use of neural networks (NNs) has been shown to be an effective alternative to more traditional fitting techniques as nonlinear least squares (LS). The multilayer perceptron is one type of NN which can model non-linear functions well and can be trained to accurately generalize when presented with new data. In this work we have shown the capability of NNs to approximate the LD and provide the interaction coordinates of γ-photons with two different photodetector setups. One experimental setup was based on analog Silicon Photomultipliers (SiPMs) and a charge division diode network, whereas the second setup was based on digital SiPMs (dSiPMs). In both experiments NNs minimized border effects. Average spatial resolutions of 1.9 ±0.2 mm and 1.7 ±0.2 mm for the entire crystal surface were obtained for the analog and dSiPMs approaches, respectively.

  3. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases involving curves with discontinuous points, cusps, or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both location and continuity level, by employing a non-linear least-squares technique. The B-spline function is then obtained by solving the ordinary least-squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
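    scipy's smoothing-spline routine illustrates the knot-placement problem on a curve with a cusp. This is the classical Dierckx approach (knots added until a residual tolerance is met), not the two-step bisect-then-optimize method proposed in the paper; the test curve and noise level are assumptions.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# noisy samples of a curve with a sharp feature
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
y = np.abs(x - 0.5) + rng.normal(0.0, 0.005, x.size)   # cusp at x = 0.5

# smoothing spline: scipy inserts interior knots until the weighted squared
# residual drops below s; knots cluster where the curve bends fastest
tck = splrep(x, y, k=3, s=len(x) * 0.005**2)
knots = tck[0]            # knot vector chosen by the algorithm
yhat = splev(x, tck)      # fitted curve on the sample grid
```

A single cubic segment cannot track the cusp within the tolerance, so interior knots appear automatically near x = 0.5, which is exactly the behaviour a knot-optimization method must reproduce more economically.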

  4. Sci—Fri PM: Topics — 06: The influence of regional dose sensitivity on salivary loss and recovery in the parotid gland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, H; BC Cancer Agency, Surrey, B.C.; BC Cancer Agency, Vancouver, B.C.

    Purpose: The Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC 2010) survey of radiation dose-volume effects on salivary gland function has called for improved understanding of intragland dose sensitivity and the effectiveness of partial sparing in salivary glands. Regional dose susceptibility of sagittally- and coronally-sub-segmented parotid gland has been studied. Specifically, we examine whether individual consideration of sub-segments leads to improved prediction of xerostomia compared with whole parotid mean dose. Methods: Data from 102 patients treated for head-and-neck cancers at the BC Cancer Agency were used in this study. Whole mouth stimulated saliva was collected before (baseline), three months, and one year after cessation of radiotherapy. Organ volumes were contoured using treatment planning CT images and sub-segmented into regional portions. Both non-parametric (local regression) and parametric (mean dose exponential fitting) methods were employed. A bootstrap technique was used for reliability estimation and cross-comparison. Results: Salivary loss is described well using non-parametric and mean dose models. Parametric fits suggest a significant distinction in dose response between medial-lateral and anterior-posterior aspects of the parotid (p<0.01). Least-squares and least-median squares estimates differ significantly (p<0.00001), indicating fits may be skewed by noise or outliers. Salivary recovery exhibits a weakly arched dose response: the highest recovery is seen at intermediate doses. Conclusions: Salivary function loss is strongly dose dependent. In contrast no useful dose dependence was observed for function recovery. Regional dose dependence was observed, but may have resulted from a bias in dose distributions.

  5. Comparability of item quality indices from sparse data matrices with random and non-random missing data patterns.

    PubMed

    Wolfe, Edward W; McGill, Michael T

    2011-01-01

    This article summarizes a simulation study of the performance of five item quality indicators (the weighted and unweighted versions of the mean square and standardized mean square fit indices and the point-measure correlation) under conditions of relatively high and low amounts of missing data under both random and conditional patterns of missing data for testing contexts such as those encountered in operational administrations of a computerized adaptive certification or licensure examination. The results suggest that weighted fit indices, particularly the standardized mean square index, and the point-measure correlation provide the most consistent information between random and conditional missing data patterns and that these indices perform more comparably for items near the passing score than for items with extreme difficulty values.

  6. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    ERIC Educational Resources Information Center

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  7. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  8. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
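    A minimal version of the procedure for an exponential null model (the distribution, bin count, and sample size are assumptions for illustration): the only change from the classical statistic is that the MLE plugged into the expected counts comes from a bootstrap resample, while the observed counts still come from the original data.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.exponential(scale=2.0, size=300)   # H0: exponential model

k = 10                                        # number of bins

def pearson_stat(sample_for_mle):
    lam = sample_for_mle.mean()               # MLE of the exponential scale
    # equiprobable bin edges under the fitted model: F^-1(j/k) = -lam*ln(1-j/k)
    edges = np.r_[-lam * np.log(1.0 - np.arange(k) / k), np.inf]
    observed, _ = np.histogram(data, bins=edges)   # counts from ORIGINAL data
    expected = len(data) / k
    return np.sum((observed - expected) ** 2 / expected)

# classical version: MLE from the original data
t_plugin = pearson_stat(data)
# bootstrap version: MLE from a bootstrap resample of the data
boot = rng.choice(data, size=len(data), replace=True)
t_boot = pearson_stat(boot)
```

Under the bootstrap version, the extra randomness injected through the resampled MLE is what restores the chi-squared reference distribution with k - 1 degrees of freedom, per the abstract.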

  9. Faraday rotation data analysis with least-squares elliptical fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Adam D.; McHale, G. Brent; Goerz, David A.

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting of the measured data. The least-squares-fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
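A minimal sketch of a direct least-squares conic fit of the kind described. The ellipse parameters and noise level below are hypothetical; the paper's algorithm goes further, using the fitted conic parameters to rotate, translate, and rescale the Lissajous figure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Lissajous-like ellipse standing in for the two measured
# Faraday-rotation signals, with a little additive noise.
t = np.linspace(0.0, 2.0 * np.pi, 400)
x = 3.0 * np.cos(t) + 0.5 * np.sin(t) + 0.01 * rng.standard_normal(t.size)
y = 1.5 * np.sin(t) + 0.01 * rng.standard_normal(t.size)

# Algebraic conic fit: a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1,
# solved as an ordinary linear least-squares problem.
D = np.column_stack([x * x, x * y, y * y, x, y])
coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)

# Small residual and a negative discriminant (b^2 - 4ac < 0) confirm
# the fitted conic is an ellipse through the data.
residual = D @ coef - 1.0
rms = np.sqrt(np.mean(residual ** 2))
```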

  10. Vastly accelerated linear least-squares fitting with numerical optimization for dual-input delay-compensated quantitative liver perfusion mapping.

    PubMed

    Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal

    2018-04-01

    To propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole-field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
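The speedup of LLS over NLLS rests on linearizing the compartment model by integration: integrating dC/dt = k1*Ca(t) - k2*C(t) once gives C(t) = k1*∫Ca - k2*∫C, which is linear in (k1, k2). A single-input sketch under assumed rate constants and input function (the paper's model is dual-input and delay-compensated):

```python
import numpy as np

# Simulate a one-compartment system dC/dt = k1*Ca - k2*C.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
Ca = t * np.exp(-t)                     # hypothetical input function

k1_true, k2_true = 0.8, 0.3
C = np.zeros_like(t)
for i in range(1, t.size):              # simple Euler integration
    C[i] = C[i - 1] + dt * (k1_true * Ca[i - 1] - k2_true * C[i - 1])

def cumint(f):                          # cumulative trapezoidal integral
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * dt * (f[1:] + f[:-1]))
    return out

# Linear least squares on the integrated equation C = k1*int(Ca) - k2*int(C).
X = np.column_stack([cumint(Ca), -cumint(C)])
k1, k2 = np.linalg.lstsq(X, C, rcond=None)[0]
```

One `lstsq` call handles every voxel's design matrix structure at once, which is the essence of the whole-field formulation.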

  11. Modeling T1 and T2 relaxation in bovine white matter

    NASA Astrophysics Data System (ADS)

    Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.

    2015-10-01

    The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D non-negative least squares (NNLS) algorithm, a 2D-NNLS algorithm, and a four-pool model that was based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons, and intra/extracellular water) and incorporated magnetization exchange between adjacent compartments. 1D-NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D-NNLS revealed significantly more complicated T1/T2 distributions for narrow bandwidth than for broad bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fit the results from both inversion conditions with a single parameter set. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components with negative amplitude coefficients that cannot be correctly modeled with non-negative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short-T1 component might be used to quantify myelin water.

  12. Power spectrum analysis with least-squares fitting: amplitude bias and its elimination, with application to optical tweezers and atomic force microscope cantilevers.

    PubMed

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2010-07-01

    Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
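The bias and its multiplicative correction can be demonstrated on a toy constant spectrum. The Gamma model for n-averaged periodogram points is standard; the particular weighting shown here (w_i = 1/y_i^2, i.e. weights computed from the data) is one of the schemes the abstract discusses, and for it the relative bias is -2/n.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each power-spectral value is an average of n exponentially distributed
# periodogram points, so y_i ~ Gamma(n, A/n) with true amplitude A = 1.
n, npoints = 10, 200_000
y = rng.gamma(shape=n, scale=1.0 / n, size=npoints)

# Weighted least squares for a constant amplitude with data-based weights
# w_i = 1/y_i^2 has the closed-form solution below.
A_wls = np.sum(1.0 / y) / np.sum(1.0 / y ** 2)

# For this scheme E[A_wls] = (n - 2)/n, a relative bias of -2/n,
# removed by a simple multiplicative factor on the fitted amplitude.
A_corrected = A_wls * n / (n - 2)
```

With n = 10 the raw fit lands near 0.8 rather than 1, matching the 10%-20% bias quoted for typical calibrations.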

  13. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
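The separability that such algorithms exploit can be illustrated on a full 2D grid: with a tensor-product basis B2 ⊗ B1, the least-squares coefficient matrix minimizing ||V - B1 C B2^T||_F is C = pinv(B1) V pinv(B2)^T, so the large Kronecker matrix is never formed. The polynomial basis and model surface below are illustrative assumptions, not the paper's PESs.

```python
import numpy as np

# Grid data for a smooth 2D "potential energy surface".
x = np.linspace(-1.0, 1.0, 40)
y = np.linspace(-1.0, 1.0, 30)
V = np.exp(-np.add.outer(x ** 2, y ** 2))

# 1D polynomial bases; the 2D basis is their tensor (Kronecker) product.
deg = 8
B1 = np.vander(x, deg + 1, increasing=True)
B2 = np.vander(y, deg + 1, increasing=True)

# Separable least squares: two small pseudoinverses instead of one
# (40*30) x (9*9) Kronecker system.
C = np.linalg.pinv(B1) @ V @ np.linalg.pinv(B2).T
V_fit = B1 @ C @ B2.T
max_err = np.max(np.abs(V - V_fit))
```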

  15. Estimating GATE rainfall with geosynchronous satellite images

    NASA Technical Reports Server (NTRS)

    Stout, J. E.; Martin, D. W.; Sikdar, D. N.

    1979-01-01

    A method of estimating GATE rainfall from either visible or infrared images of geosynchronous satellites is described. Rain is estimated from cumulonimbus cloud area by the equation R = a sub 0 A + a sub 1 dA/dt, where R is volumetric rainfall, A cloud area, t time, and a sub 0 and a sub 1 are constants. Rainfall, calculated from 5.3 cm ship radar, and cloud area are measured from clouds in the tropical North Atlantic. The constants a sub 0 and a sub 1 are fit to these measurements by the least-squares method. Hourly estimates by the infrared version of this technique correlate well (correlation coefficient of 0.84) with rain totals derived from composited radar for an area of 100,000 sq km. The accuracy of this method is described and compared to that of another technique using geosynchronous satellite images. It is concluded that this technique provides useful estimates of tropical oceanic rainfall on a convective scale.
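Fitting the two constants in R = a0*A + a1*dA/dt is an ordinary linear least-squares problem. A sketch with a synthetic cloud-area curve and made-up constants (the paper fits radar-derived rainfall):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cloud-area history and its time derivative.
t = np.linspace(0.0, 10.0, 200)                # hours
A = 1e5 * (1.0 + 0.5 * np.sin(0.6 * t))        # cloud area, km^2
dAdt = np.gradient(A, t)

# "Radar" rainfall generated from assumed constants plus noise.
a0_true, a1_true = 0.02, 0.5
R = a0_true * A + a1_true * dAdt + 10.0 * rng.standard_normal(t.size)

# Least-squares fit of a0 and a1.
X = np.column_stack([A, dAdt])
(a0, a1), *_ = np.linalg.lstsq(X, R, rcond=None)
```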

  16. Navy Fuel Composition and Screening Tool (FCAST) v2.8

    DTIC Science & Technology

    2016-05-10

    allowed us to develop partial least squares (PLS) models based on gas chromatography-mass spectrometry (GC-MS) data that predict fuel properties. The rest of this record is a fragmentary list of report keywords and abbreviations: chemometric property modeling; partial least squares (PLS); compositional profiler; Naval Air Systems Command Air-4.4.5; Patuxent River Naval Air Station; cumulative predicted residual error sum of squares; DiEGME (diethylene glycol monomethyl ether); FCAST (Fuel Composition and Screening Tool); FFP (Fit for…)

  17. Health Promotion Behavior of Chinese International Students in Korea Including Acculturation Factors: A Structural Equation Model.

    PubMed

    Kim, Sun Jung; Yoo, Il Young

    2016-03-01

    The purpose of this study was to explain the health promotion behavior of Chinese international students in Korea using a structural equation model that includes acculturation factors. A survey using self-administered questionnaires was employed. Data were collected from 272 Chinese students who had resided in Korea for longer than 6 months, and were analyzed using structural equation modeling. The p value of the final model was .31. Fit indices of the final model, including the goodness-of-fit index, adjusted goodness-of-fit index, normed fit index, non-normed fit index, and comparative fit index, were all above .95; the root mean square residual and root mean square error of approximation also met their criteria. Self-esteem, perceived health status, acculturative stress, and acculturation level had direct effects on the health promotion behavior of the participants, and the model explained 30.0% of the variance. Chinese students in Korea with higher self-esteem, perceived health status, and acculturation level, and lower acculturative stress, reported more health promotion behavior. The findings can be applied to develop health promotion strategies for this population. Copyright © 2016. Published by Elsevier B.V.

  18. Determining a Prony Series for a Viscoelastic Material From Time Varying Strain Data

    NASA Technical Reports Server (NTRS)

    Tzikang, Chen

    2000-01-01

    In this study, a method of determining the coefficients in a Prony series representation of a viscoelastic modulus from rate-dependent data is presented. Load-versus-time test data for a sequence of different rate loading segments are least-squares fitted to a Prony series hereditary integral model of the material tested, using a nonlinear least-squares regression algorithm. The measured data include ramp loading, relaxation, and unloading stress-strain data. The resulting Prony series, which captures strain-rate loading and unloading effects, produces an excellent fit to the complex loading sequence.
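As a simplified sketch of Prony-series fitting (not the paper's nonlinear regression, which also adjusts the times): if the relaxation times tau_i are fixed on a logarithmic grid, the series E(t) = E_inf + sum_i e_i exp(-t/tau_i) becomes linear in its coefficients and can be fitted by ordinary least squares. The relaxation data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic relaxation-modulus data: two exponential terms plus a plateau.
t = np.linspace(0.0, 100.0, 500)
E_true = 1.0 + 2.0 * np.exp(-t / 0.9) + 1.5 * np.exp(-t / 12.0)
E_meas = E_true + 0.01 * rng.standard_normal(t.size)

# Fixed relaxation times on a log grid make the Prony fit linear.
taus = np.logspace(-1, 2, 7)
X = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
coef, *_ = np.linalg.lstsq(X, E_meas, rcond=None)

E_fit = X @ coef
rms = np.sqrt(np.mean((E_fit - E_meas) ** 2))
```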

  19. Comparative assessment of pressure field reconstructions from particle image velocimetry measurements and Lagrangian particle tracking

    NASA Astrophysics Data System (ADS)

    van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.

    2017-04-01

    A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.

  20. A Multilevel Shape Fit Analysis of Neutron Transmission Data

    NASA Astrophysics Data System (ADS)

    Naguib, K.; Sallam, O. H.; Adib, M.; Ashry, A.

    A multilevel shape-fit analysis of neutron transmission data is presented. The multilevel computer code SHAPE is used to analyse clean transmission data obtained from time-of-flight (TOF) measurements. The shape analysis deduces the parameters of the observed resonances in the energy region covered by the measurements. The code is based upon a least-squares fit of a multilevel Breit-Wigner formula and includes both instrumental resolution and Doppler broadening. Applying the SHAPE code to a test example of measured transmission data for 151Eu, 153Eu, and natural Eu in the energy range 0.025-1 eV gave good results for this technique of analysis.

  1. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application in instrumental analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: ordinary least squares (OLS), LMS, and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy, and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of the data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS, were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
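A generic IRLS iteration with Huber weights, as a sketch of the second approach named above (the calibration line, noise, and outliers below are synthetic, and the paper's exact weight function is not specified here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Calibration-style line with two gross outliers.
x = np.linspace(1.0, 10.0, 20)
y = 3.0 + 2.0 * x + 0.05 * rng.standard_normal(x.size)
y[3] += 8.0
y[15] -= 6.0

X = np.column_stack([np.ones_like(x), x])
w = np.ones_like(x)
for _ in range(30):                       # iteratively re-weighted LS
    W = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)
    r = y - X @ beta
    s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
    u = np.abs(r) / (1.345 * s)                 # Huber tuning constant
    w = np.where(u <= 1.0, 1.0, 1.0 / u)        # down-weight outliers

intercept, slope = beta
```

The outliers receive near-zero weight after a few iterations, so the fitted line tracks the clean trend.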

  2. Multistep modeling of protein structure: application towards refinement of tyr-tRNA synthetase

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Shibata, M.; Roychoudhury, M.; Rein, R.

    1987-01-01

    The scope of multistep modeling (MSM) is expanded by adding a least-squares minimization step to the procedure, to fit a backbone reconstruction consistent with a set of C-alpha coordinates. The analytical solution for the Phi and Psi angles that fit the C-alpha x-ray coordinates is used for tyr-tRNA synthetase. Phi and Psi angles for the regions where the above-mentioned method fails are obtained by minimizing the difference in C-alpha distances between the computed model and the crystal structure in a least-squares sense. We present a stepwise application of this part of MSM to the determination of the complete backbone geometry of the 321 N-terminal residues of tyrosine tRNA synthetase, to a root-mean-square deviation of 0.47 angstroms from the crystallographic C-alpha coordinates.

  3. Analysis of the Magnitude and Frequency of Peak Discharges for the Navajo Nation in Arizona, Utah, Colorado, and New Mexico

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2006-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for the reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, an additional 13 years of peak-discharge data since a 1997 investigation, which used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6 and are delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated the peak discharge having a recurrence interval of less than 1.4 years in the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. 
Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood; for region 8 it averaged 53 percent. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation, and no distinction of floods produced from a high-elevation region was made in that investigation. Overall, the equations based on generalized least-squares regression techniques are considered more reliable than those in the 1997 report because of the increased length of record and improved GIS methods. Flood-frequency relations can also be transferred to ungaged sites: peak discharge can be estimated at an ungaged site by direct application of the regional regression equation, or, at an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.

  4. Reactive decontamination of absorbing thin film polymer coatings: model development and parameter determination

    NASA Astrophysics Data System (ADS)

    Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew

    2014-03-01

    A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.
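A compact Levenberg-Marquardt loop of the kind used for such inverse problems. The exponential decay model is a hypothetical stand-in for the extracted-mass evolution, not the paper's diffusion-reaction model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "extracted mass" data following m(t) = m0 * exp(-k*t).
t = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-0.7 * t) + 0.01 * rng.standard_normal(t.size)

def model(p, t):
    m0, k = p
    return m0 * np.exp(-k * t)

def jacobian(p, t):                  # d(residual)/d(parameters)
    m0, k = p
    e = np.exp(-k * t)
    return np.column_stack([-e, m0 * t * e])

p = np.array([1.0, 1.0])             # starting guess
lam = 1e-3                           # LM damping parameter
for _ in range(100):
    r = y - model(p, t)
    J = jacobian(p, t)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    p_new = p + step
    if np.sum((y - model(p_new, t)) ** 2) < np.sum(r ** 2):
        p, lam = p_new, lam / 3.0    # accept step: shrink damping
    else:
        lam *= 3.0                   # reject step: increase damping

m0_fit, k_fit = p
```

The damping update interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes LM robust for nonlinear fits like this.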

  5. Investigation on phase transitions of 1-decylammonium hydrochloride as the potential thermal energy storage material

    NASA Astrophysics Data System (ADS)

    Dan, Wen-Yan; Di, You-Ying; He, Dong-Hua; Liu, Yu-Pu

    2011-02-01

    1-Decylammonium hydrochloride was synthesized by the method of liquid-phase synthesis. Chemical analysis, elemental analysis, and X-ray single-crystal diffraction techniques were applied to characterize its composition and structure. Low-temperature heat capacities of the compound were measured with a precision automated adiabatic calorimeter over the temperature range from 78 to 380 K. Three solid-solid phase transitions were observed at peak temperatures of 307.52 ± 0.13, 325.02 ± 0.19, and 327.26 ± 0.07 K. The molar enthalpies and entropies of the three phase transitions were determined from analysis of the heat-capacity curves. Experimental molar heat capacities were fitted to two polynomial equations of heat capacity as a function of temperature by the least-squares method. Smoothed heat capacities and thermodynamic functions of the compound relative to the standard reference temperature 298.15 K were calculated and tabulated at intervals of 5 K based on the fitted polynomials.
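Heat-capacity data are conventionally fitted to a polynomial in a reduced temperature X = (T - (Tmax + Tmin)/2) / ((Tmax - Tmin)/2), which maps the range onto [-1, 1] and keeps the least-squares problem well conditioned. A sketch with synthetic data (the coefficients and noise level are made up, not the measured series):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic heat-capacity series over the measured temperature range.
T = np.arange(78.0, 381.0, 2.0)                 # K
Cp = 150.0 + 0.9 * T + 2e-4 * T ** 2 + 0.5 * rng.standard_normal(T.size)

# Reduced temperature on [-1, 1], then a polynomial least-squares fit.
X = (T - 0.5 * (T.max() + T.min())) / (0.5 * (T.max() - T.min()))
coef = np.polynomial.polynomial.polyfit(X, Cp, deg=4)
Cp_smooth = np.polynomial.polynomial.polyval(X, coef)
rmse = np.sqrt(np.mean((Cp - Cp_smooth) ** 2))
```

The smoothed values (here `Cp_smooth`) are what would be tabulated at fixed temperature intervals.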

  6. A comparison of linear respiratory system models based on parameter estimates from PRN forced oscillation data.

    PubMed

    Diong, B; Grainger, J; Goldman, M; Nazeran, H

    2009-01-01

    The forced oscillation technique offers some advantages over spirometry for assessing pulmonary function. It requires only passive patient cooperation; it also provides data in a form, frequency-dependent impedance, which is very amenable to engineering analysis. In particular, the data can be used to obtain parameter estimates for electric circuit-based models of the respiratory system, which can in turn aid the detection and diagnosis of various diseases/pathologies. In this study, we compare the least-squares error performance of the RIC, extended RIC, augmented RIC, augmented RIC+I(p), DuBois, Nagels and Mead models in fitting 3 sets of impedance data. These data were obtained by pseudorandom noise forced oscillation of healthy subjects, mild asthmatics and more severe asthmatics. We found that the aRIC+I(p) and DuBois models yielded the lowest fitting errors (for the healthy subjects group and the 2 asthmatic patient groups, respectively) without also producing unphysiologically large component estimates.

  7. Design of a dual band metamaterial absorber for Wi-Fi bands

    NASA Astrophysics Data System (ADS)

    Alkurt, Fatih Özkan; Baǧmancı, Mehmet; Karaaslan, Muharrem; Bakır, Mehmet; Altıntaş, Olcay; Karadaǧ, Faruk; Akgöl, Oǧuzhan; Ünal, Emin

    2018-02-01

    The goal of this work is the design and fabrication of a dual-band metamaterial-based absorber for Wireless Fidelity (Wi-Fi) bands. Wi-Fi has two operating frequencies, 2.45 GHz and 5 GHz. A dual-band absorber is proposed; the structure consists of two layered unit cells, with different-sized square split ring (SSR) resonators located on each layer. Copper is used for the metal layer and resonator structure, and FR-4 is used as the substrate layer. The designed dual-band metamaterial absorber covers the wireless frequency bands centered at 2.45 GHz and 5 GHz. Finite Integration Technique (FIT) based simulation software was used; according to the simulation results, the absorption peak at 2.45 GHz is about 90% and that at 5 GHz is near 99%. In addition, the proposed structure has potential for energy-harvesting applications in future work.

  8. Modified superposition: A simple time series approach to closed-loop manual controller identification

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.

    1986-01-01

    Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least-squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time-delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time-series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.

  9. Real estate value prediction using multivariate regression models

    NASA Astrophysics Data System (ADS)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is one of the prime fields in which to apply machine learning to optimize and predict prices with high accuracy. In this paper, we therefore present various features that are important for predicting housing prices with good accuracy. We describe regression models using various features chosen to lower the residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers of the features) is used to improve the model fit. Because these models are susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.

  10. Curve fitting methods for solar radiation data modeling

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting method is used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
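The goodness-of-fit statistics used to rank candidate fits, RMSE and R², can be computed as below. The daily irradiance data and the quadratic candidate model are illustrative assumptions, not the paper's data or its two-term Gaussian/sine fits:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical daily irradiance curve (W/m^2) with measurement noise.
hour = np.linspace(6.0, 18.0, 49)
irradiance = 900.0 * np.sin(np.pi * (hour - 6.0) / 12.0) \
    + 30.0 * rng.standard_normal(hour.size)

# Quadratic least-squares fit as the candidate curve.
coef = np.polyfit(hour, irradiance, deg=2)
fit = np.polyval(coef, hour)

# Goodness-of-fit statistics: RMSE and coefficient of determination R^2.
resid = irradiance - fit
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((irradiance - irradiance.mean()) ** 2)
```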

  12. Polyomino Problems to Confuse Computers

    ERIC Educational Resources Information Center

    Coffin, Stewart

    2009-01-01

    Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author--using a computer,…

  13. Quantitative analysis of Ni2+/Ni3+ in Li[NixMnyCoz]O2 cathode materials: Non-linear least-squares fitting of XPS spectra

    NASA Astrophysics Data System (ADS)

    Fu, Zewei; Hu, Juntao; Hu, Wenlong; Yang, Shiyu; Luo, Yunfeng

    2018-05-01

    Quantitative analysis of Ni2+/Ni3+ using X-ray photoelectron spectroscopy (XPS) is important for evaluating the crystal structure and electrochemical performance of lithium-nickel-cobalt-manganese oxide (Li[NixMnyCoz]O2, NMC). However, quantitative analysis based on Gaussian/Lorentzian (G/L) peak fitting suffers from challenges of reproducibility and effectiveness. In this study, Ni2+ and Ni3+ standard samples and a series of NMC samples with different Ni doping levels were synthesized. The Ni2+/Ni3+ ratios in NMC were quantitatively analyzed by non-linear least-squares fitting (NLLSF). Two Ni 2p overall spectra of synthesized Li[Ni0.33Mn0.33Co0.33]O2 (NMC111) and bulk LiNiO2 were used as the Ni2+ and Ni3+ reference standards. Compared to G/L peak fitting, the fitting parameters required no adjustment, meaning that the spectral fitting process was free from operator dependence and reproducibility was improved. Comparison of the residual standard deviation (STD) showed that the fitting quality of NLLSF was superior to that of G/L peak fitting. Overall, these findings confirm the reproducibility and effectiveness of the NLLSF method in XPS quantitative analysis of the Ni2+/Ni3+ ratio in Li[NixMnyCoz]O2 cathode materials.
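The core of the NLLSF idea, fitting a measured spectrum as a combination of fixed reference spectra, reduces to linear least squares once the standards are given. A minimal sketch, in which the "Ni2+"/"Ni3+" standards are synthetic Gaussians rather than real XPS data:

```python
import numpy as np

# Minimal sketch of the NLLSF idea: model a measured spectrum as a linear
# combination of two fixed reference spectra (synthetic Gaussians here).
e = np.linspace(850.0, 860.0, 500)                  # binding-energy grid (eV)
ref_ni2 = np.exp(-((e - 854.0) / 0.8) ** 2)         # "Ni2+" reference standard
ref_ni3 = np.exp(-((e - 856.0) / 0.8) ** 2)         # "Ni3+" reference standard
measured = 0.7 * ref_ni2 + 0.3 * ref_ni3            # test spectrum

A = np.column_stack([ref_ni2, ref_ni3])
coef, *_ = np.linalg.lstsq(A, measured, rcond=None) # least-squares weights
ratio = coef[0] / coef[1]                           # "Ni2+"/"Ni3+" ratio
```

Because only the two mixing weights are free, there are no line-shape parameters for an operator to adjust, which is the reproducibility argument made in the abstract.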

  14. Robust pupil center detection using a curvature algorithm

    NASA Technical Reports Server (NTRS)

    Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)

    1999-01-01

    Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.

  15. Two-Component Fitting of Coronal-Hole and Quiet-Sun He I 1083 Spectra

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.; Malanushenko, Elena V.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    We present reduction techniques and first results for detailed fitting of solar spectra obtained with the NASA/National Solar Observatory Spectromagnetograph (NASA/NSO SPM) over a 2 nm bandpass centered on the He I 1083 nm line. The observation for this analysis was a spectra-spectroheliogram obtained at the NSO/Kitt Peak Vacuum Telescope (KPVT) on 2000 April 17 at 21:46 UT spanning an area of 512 x 900 arc-seconds; the field of view included a coronal hole near disk center as well as surrounding quiet sun. Since the He I line is very weak and blended with nearby solar and telluric lines, accurate determination of the continuum intensity as a function of wavelength is crucial. We have modified the technique of Malanushenko et al. (1992, A&A 259, 567) to tie regions of continua and the wings of spectral lines which show little variation over the image to standard reference spectra such as the NSO Fourier Transform Spectrometer atlas (Wallace et al. 1993, NSO Tech. Report #93-001). We performed detailed least-squares fits of spectra from selected areas, accounting for all the known telluric and solar absorbers in the spectral bandpass. The best physically consistent fits to the helium lines were obtained with Gaussian profiles from two components (one "cool", characteristic of the upper chromosphere; one "hot", representing the cool transition region at 2-3 x 10^4 K). In the coronal hole, the transition-region component, shifted by 6-7 km/s to the blue, is mildly dominant, consistent with mass outflow as suggested by Dupree et al. (1996, Ap. J. 467, 121). In quiet-sun spectra there is less evidence of outward flow, and the chromospheric component is more important. All our fitted spectra show a very weak unidentified absorption feature at 1082.880 nm in the red wing of the nearby Si I line.

  16. Tests of Sunspot Number Sequences: 3. Effects of Regression Procedures on the Calibration of Historic Sunspot Data

    NASA Astrophysics Data System (ADS)

    Lockwood, M.; Owens, M. J.; Barnard, L.; Usoskin, I. G.

    2016-11-01

    We use sunspot-group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups [RB] above a variable cut-off threshold of observed total whole spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of RB are then re-scaled to the full observed RGO group number [RA] using a variety of regression techniques. It is found that a very high correlation between RA and RB (r_{AB} > 0.98) does not prevent large errors in the intercalibration (for example sunspot-maximum values can be over 30 % too large even for such levels of r_{AB}). In generating the backbone sunspot number [R_{BB}], Svalgaard and Schatten ( Solar Phys., 2016) force regression fits to pass through the scatter-plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot-cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of Quantile-Quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least-squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method used is shown to be different when matching peak and average sunspot-group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar-cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is entirely caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
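The abstract's central warning, that forcing a regression through the origin can bias the calibration even when the correlation is extremely high, is easy to reproduce. A sketch with invented numbers:

```python
import numpy as np

# Sketch of the abstract's warning: when the true relation has a nonzero
# offset, forcing the regression through the origin biases the slope even
# though the correlation is very high. Numbers are invented.
rng = np.random.default_rng(2)
x = np.linspace(10.0, 100.0, 40)
y = 20.0 + 2.0 * x + rng.normal(scale=1.0, size=40)   # true slope 2, offset 20

slope_free = np.polyfit(x, y, 1)[0]           # ordinary least squares
slope_origin = np.sum(x * y) / np.sum(x * x)  # fit forced through the origin
r = np.corrcoef(x, y)[0, 1]                   # still essentially 1
```

Here the forced-origin slope overshoots the true value by roughly 15% despite a correlation above 0.99, mirroring the paper's point that high r_{AB} alone does not validate an intercalibration.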

  17. Understanding Scaling Relations in Fracture and Mechanical Deformation of Single Crystal and Polycrystalline Silicon by Performing Atomistic Simulations at Mesoscale

    DTIC Science & Technology

    2009-07-16

    Statistical analysis (coefficient of correlation): R² = SSR/SSTO = 1 − SSE/SSTO, where SSR = Σ(Ŷ_i − Ȳ)² is the regression sum of squares (Ȳ: mean value; Ŷ_i: value from the fitted line), SSE = Σ(Y_i − Ŷ_i)² is the error sum of squares, and SSTO = SSR + SSE is the total sum of squares.
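The sums-of-squares identity quoted in this record can be checked numerically on synthetic data:

```python
import numpy as np

# Numerical check of the identity SSTO = SSR + SSE and
# R^2 = SSR/SSTO = 1 - SSE/SSTO for an ordinary least-squares line fit.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 30)
y = 1.5 * x + 4.0 + rng.normal(size=30)

y_hat = np.polyval(np.polyfit(x, y, 1), x)   # value from the fitted line
ssr = np.sum((y_hat - y.mean()) ** 2)        # regression sum of squares
sse = np.sum((y - y_hat) ** 2)               # error sum of squares
ssto = np.sum((y - y.mean()) ** 2)           # total sum of squares
r2 = ssr / ssto
```

The decomposition holds exactly only when the fit includes an intercept, which is one more reason the through-origin fits criticised in record 16 behave badly.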

  18. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
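Estimating pan/tilt/zoom from motion vectors is a linear least-squares problem under a simple motion model. A hedged sketch using the simplified model (u, v) = (pan + zoom·x, tilt + zoom·y); the paper's actual parametrisation may differ:

```python
import numpy as np

# Hedged sketch: recover pan/tilt/zoom parameters from (synthetic)
# block motion vectors with one linear least-squares fit.
xs, ys = np.meshgrid(np.arange(-5.0, 6.0), np.arange(-4.0, 5.0))
x, y = xs.ravel(), ys.ravel()                 # macroblock centre coordinates
pan, tilt, zoom = 1.5, -0.5, 0.1
u, v = pan + zoom * x, tilt + zoom * y        # synthetic motion vectors

# Stack both components into one linear system A @ [pan, tilt, zoom] = b.
A = np.concatenate([
    np.column_stack([np.ones_like(x), np.zeros_like(x), x]),
    np.column_stack([np.zeros_like(y), np.ones_like(y), y]),
])
b = np.concatenate([u, v])
params = np.linalg.lstsq(A, b, rcond=None)[0]
```

Because the motion vectors come straight from the MPEG-1 stream, this fit runs without decoding any pixels, which is the speed argument the abstract makes.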

  19. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

  20. DE and NLP Based QPLS Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo

    As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, in which DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. Simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational cost.

  1. Anthropometric data error detecting and correction with a computer

    NASA Technical Reports Server (NTRS)

    Chesak, D. D.

    1981-01-01

    Data obtained with automated anthropometric data acquisition equipment were examined for short-term errors. The least-squares curve fitting technique was used to ascertain which data values were erroneous and to replace them, if possible, with corrected values. Errors were due to random reflections of light, masking of the light rays, and other types of optical and electrical interference. It was found that these spurious signals were impossible to eliminate from the initial data produced by the television cameras, and that correcting them was primarily a software problem requiring a digital computer to refine the data off line. The specific data of interest were related to the arm-reach envelope of a human being.
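The fit-flag-replace scheme described here can be sketched in a few lines; the curve degree, threshold, and "glitch" positions below are arbitrary illustrative choices, not the report's values:

```python
import numpy as np

# Illustrative sketch of the correction scheme: fit a least-squares curve,
# flag points whose residual is large, and replace them with fitted values.
t = np.linspace(0.0, 1.0, 50)
signal = 2.0 * t ** 2 + 1.0
data = signal.copy()
data[[10, 30]] += 5.0                        # simulated optical glitches

fitted = np.polyval(np.polyfit(t, data, 2), t)
resid = data - fitted
bad = np.abs(resid) > 3.0 * np.std(resid)    # flag short-term errors
cleaned = np.where(bad, fitted, data)        # replace with corrected values
```

A robust variant would refit after removing the flagged points, since large glitches pull the first fit toward themselves.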

  2. On the Least-Squares Fitting of Slater-Type Orbitals with Gaussians: Reproduction of the STO-NG Fits Using Microsoft Excel and Maple

    ERIC Educational Resources Information Center

    Pye, Cory C.; Mercer, Colin J.

    2012-01-01

    The symbolic algebra program Maple and the spreadsheet Microsoft Excel were used in an attempt to reproduce the Gaussian fits to a Slater-type orbital, required to construct the popular STO-NG basis sets. The successes and pitfalls encountered in such an approach are chronicled. (Contains 1 table and 3 figures.)
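A rough numerical analogue of the exercise (not the published STO-NG procedure, which works with normalised primitives and overlap integrals) is a least-squares fit of a 1s Slater function exp(-r) by a single Gaussian c·exp(-a·r²) on a radial grid, scanning the exponent:

```python
import numpy as np

# Grid-based least-squares fit of a Slater function by one Gaussian:
# for each trial exponent a, the coefficient c is a linear LS problem.
r = np.linspace(0.0, 5.0, 500)
sto = np.exp(-r)

def sse_for_exponent(a):
    g = np.exp(-a * r ** 2)
    c = np.dot(g, sto) / np.dot(g, g)        # linear LS for the coefficient
    return np.sum((sto - c * g) ** 2)

alphas = np.linspace(0.05, 2.0, 200)
errors = np.array([sse_for_exponent(a) for a in alphas])
best_alpha = alphas[int(np.argmin(errors))]
```

This two-level structure (linear in the coefficients, nonlinear in the exponents) is exactly what makes the STO-NG fits tractable in a spreadsheet, as the article describes.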

  3. Confirmatory factor analysis of the female sexual function index.

    PubMed

    Opperman, Emily A; Benson, Lindsay E; Milhausen, Robin R

    2013-01-01

    The Female Sexual Functioning Index (Rosen et al., 2000) was designed to assess the key dimensions of female sexual functioning using six domains: desire, arousal, lubrication, orgasm, satisfaction, and pain. A full-scale score was proposed to represent women's overall sexual function. The fifth revision of the Diagnostic and Statistical Manual (DSM) is currently underway and includes a proposal to combine desire and arousal problems. The objective of this article was to evaluate and compare four models of the Female Sexual Functioning Index: (a) a single-factor model, (b) a six-factor model, (c) a second-order factor model, and (d) a five-factor model combining the desire and arousal subscales. Cross-sectional and observational data from 85 women were used to conduct a confirmatory factor analysis on the Female Sexual Functioning Index. Local and global goodness-of-fit measures, the chi-square test of differences, squared multiple correlations, and regression weights were used. The single-factor model fit was not acceptable. The original six-factor model was confirmed, and good model fit was found for the second-order and five-factor models. Delta chi-square tests of differences supported best fit for the six-factor model, validating usage of the six domains. However, when revisions are made to the DSM-5, the Female Sexual Functioning Index can adapt to reflect these changes and remain a valid assessment tool for women's sexual functioning, as the five-factor structure was also supported.

  4. A Photometric redshift galaxy catalog from the Red-Sequence Cluster Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, Bau-Ching; /Taiwan, Natl. Central U. /Taipei, Inst. Astron. Astrophys.; Yee, H.K.C.

    2005-02-01

    The Red-Sequence Cluster Survey (RCS) provides a large and deep photometric catalog of galaxies in the z' and R_c bands for 90 square degrees of sky, and supplemental V and B data have been obtained for 33.6 square degrees. They compile a photometric redshift catalog from these 4-band data by utilizing the empirical quadratic polynomial photometric redshift fitting technique in combination with CNOC2 and GOODS/HDF-N redshift data. The training set includes 4924 spectral redshifts. The resulting catalog contains more than one million galaxies with photometric redshifts < 1.5 and R_c < 24, giving an rms scatter δ(Δz)

  5. Frequency-resolved Raman for transient thermal probing and thermal diffusivity measurement

    DOE PAGES

    Wang, Tianyu; Xu, Shen; Hurley, David H.; ...

    2015-12-18

    Steady-state Raman has been widely used for temperature probing and thermal conductivity/conductance measurement in combination with temperature coefficient calibration. In this work, a new transient Raman thermal probing technique, frequency-resolved Raman (FR-Raman), is developed for probing the transient thermal response of materials and measuring their thermal diffusivity. The FR-Raman uses an amplitude-modulated square-wave laser for simultaneous material heating and Raman excitation. The evolution profiles of the Raman properties (intensity, Raman wavenumber, and emission) against frequency are measured experimentally and reconstructed theoretically, and are used for fitting to determine the thermal diffusivity of the material under test. A Si cantilever is used to investigate the capacity of this new technique. The cantilever's thermal diffusivity is determined as 9.57 × 10⁻⁵ m²/s, 11.00 × 10⁻⁵ m²/s and 9.02 × 10⁻⁵ m²/s by fitting the Raman intensity, wavenumber and emission, respectively. The deviation from the reference value is largely attributed to thermal stress-induced material deflection and Raman drift, which could be significantly suppressed by using a higher-sensitivity Raman spectrometer with lower laser energy. As a result, the FR-Raman provides a novel way for transient thermal characterization of materials with a μm spatial resolution.

  6. An Empirical Assessment of REBT Models of Psychopathology and Psychological Health in the Prediction of Anxiety and Depression Symptoms.

    PubMed

    Oltean, Horea-Radu; Hyland, Philip; Vallières, Frédérique; David, Daniel Ovidiu

    2017-11-01

    This study aimed to assess the validity of two models which integrate the cognitive (satisfaction with life) and affective (symptoms of anxiety and depression) aspects of subjective well-being within the framework of rational emotive behaviour therapy (REBT) theory; specifically REBT's theory of psychopathology and theory of psychological health. 397 Irish and Northern Irish undergraduate students completed measures of rational/irrational beliefs, satisfaction with life, and anxiety/depression symptoms. Structural equation modelling techniques were used in order to test our hypothesis within a cross-sectional design. REBT's theory of psychopathology (χ2 = 373.78, d.f. = 163, p < .001; comparative fit index (CFI) = .92; Tucker Lewis index (TLI) = .91; root mean square error of approximation (RMSEA) = .06 (95% CI = .05 to .07); standardized root mean square residual (SRMR) = .07) and psychological health (χ2 = 371.89, d.f. = 181, p < .001; CFI = .93; TLI = .92; RMSEA = .05 (95% CI = .04 to .06); SRMR = .06) provided acceptable fit of the data. Moreover, the psychopathology model explained 34% of variance in levels of anxiety/depression, while the psychological health model explained 33% of variance. This study provides important findings linking the fields of clinical and positive psychology within a comprehensible framework for both researchers and clinicians. Findings are discussed in relation to the possibility of more effective interventions, incorporating and targeting not only negative outcomes, but also positive concepts within the same model.

  7. Sinusoidal voltage protocols for rapid characterisation of ion channel kinetics.

    PubMed

    Beattie, Kylie A; Hill, Adam P; Bardenet, Rémi; Cui, Yi; Vandenberg, Jamie I; Gavaghan, David J; de Boer, Teun P; Mirams, Gary R

    2018-03-24

    Ion current kinetics are commonly represented by current-voltage relationships, time constant-voltage relationships and subsequently mathematical models fitted to these. These experiments take substantial time, which means they are rarely performed in the same cell. Rather than traditional square-wave voltage clamps, we fitted a model to the current evoked by a novel sum-of-sinusoids voltage clamp that was only 8 s long. Short protocols that can be performed multiple times within a single cell will offer many new opportunities to measure how ion current kinetics are affected by changing conditions. The new model predicts the current under traditional square-wave protocols well, with better predictions of underlying currents than literature models. The current under a novel physiologically relevant series of action potential clamps is predicted extremely well. The short sinusoidal protocols allow a model to be fully fitted to individual cells, allowing us to examine cell-cell variability in current kinetics for the first time. Understanding the roles of ion currents is crucial to predict the action of pharmaceuticals and mutations in different scenarios, and thereby to guide clinical interventions in the heart, brain and other electrophysiological systems. Our ability to predict how ion currents contribute to cellular electrophysiology is in turn critically dependent on our characterisation of ion channel kinetics - the voltage-dependent rates of transition between open, closed and inactivated channel states. We present a new method for rapidly exploring and characterising ion channel kinetics, applying it to the hERG potassium channel as an example, with the aim of generating a quantitatively predictive representation of the ion current. We fitted a mathematical model to currents evoked by a novel 8 second sinusoidal voltage clamp in CHO cells overexpressing hERG1a. 
The model was then used to predict over 5 minutes of recordings in the same cell in response to further protocols: a series of traditional square step voltage clamps, and also a novel voltage clamp comprising a collection of physiologically relevant action potentials. We demonstrate that we can make predictive cell-specific models that outperform the use of averaged data from a number of different cells, and thereby examine which changes in gating are responsible for cell-cell variability in current kinetics. Our technique allows rapid collection of consistent and high quality data, from single cells, and produces more predictive mathematical ion channel models than traditional approaches. © 2018 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  8. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    PubMed Central

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  9. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl

    NASA Astrophysics Data System (ADS)

    De Beuckeleer, Liene I.; Herrebout, Wouter A.

    2016-02-01

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
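The model-selection step described here, increasing the polynomial degree and comparing information criteria, can be sketched with the Gaussian-error forms of AIC and BIC on synthetic data (not cryospectra):

```python
import numpy as np

# Sketch of AIC/BIC-guided choice of polynomial degree: the criteria
# penalise the extra terms that least-squares fitting would happily add.
rng = np.random.default_rng(4)
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(scale=0.05, size=60)

n = len(x)
def aic_bic(degree):
    rss = np.sum((y - np.polyval(np.polyfit(x, y, degree), x)) ** 2)
    k = degree + 1                            # number of fitted parameters
    return n * np.log(rss / n) + 2 * k, n * np.log(rss / n) + k * np.log(n)

degrees = list(range(1, 8))
aics = [aic_bic(d)[0] for d in degrees]
best_degree = degrees[int(np.argmin(aics))]   # should be near the true 2
```

RSS alone always falls as the degree rises; the penalty terms are what stop the compensation effects (one species' contribution cancelling another's) that the abstract warns about.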

  10. Multivariate curve resolution-alternating least squares and kinetic modeling applied to near-infrared data from curing reactions of epoxy resins: mechanistic approach and estimation of kinetic rate constants.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2006-02-01

    This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
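Rate-constant estimation from concentration profiles can be sketched with a textbook second-order rate law with equal initial concentrations, 1/[A] = 1/[A]0 + k·t, rather than the paper's actual epoxy-amine mechanism; all values are invented:

```python
import numpy as np

# Hedged sketch: recover a second-order rate constant by least squares
# after linearising the integrated rate law (1/[A] is linear in t).
k_true, a0 = 0.05, 2.0                  # L mol^-1 s^-1, mol L^-1 (invented)
t = np.linspace(0.0, 100.0, 25)
conc = 1.0 / (1.0 / a0 + k_true * t)    # exact integrated rate law

# Slope of the linearised model is the rate constant k.
k_fit, inv_a0_fit = np.polyfit(t, 1.0 / conc, 1)
```

For the autocatalytic mechanism actually studied, no such linearisation exists, which is why the paper resorts to nonlinear fitting of the MCR-ALS concentration profiles.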

  11. Nonlinear filtering properties of detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-11-01

    Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
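The piecewise least-squares detrending discussed here is the heart of standard DFA, which can be sketched compactly; for white noise the scaling exponent should come out near 0.5:

```python
import numpy as np

# Compact DFA sketch: integrate the series, split into windows, remove a
# least-squares linear trend per window, and track the RMS fluctuation F(n).
def dfa(x, window_sizes):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fluct = []
    for n in window_sizes:
        f2 = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # piecewise detrend
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.asarray(fluct)

x = np.random.default_rng(5).normal(size=4096)
sizes = np.array([8, 16, 32, 64, 128])
f = dfa(x, sizes)
alpha = np.polyfit(np.log(sizes), np.log(f), 1)[0]  # scaling exponent
```

The deviations within each window are not weighted uniformly by this procedure, which is exactly the nonlinear-filtering property the paper analyses.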

  12. The influence of a time-varying least squares parametric model when estimating SFOAEs evoked with swept-frequency tones

    NASA Astrophysics Data System (ADS)

    Hajicek, Joshua J.; Selesnick, Ivan W.; Henin, Simon; Talmadge, Carrick L.; Long, Glenis R.

    2018-05-01

    Stimulus frequency otoacoustic emissions (SFOAEs) were evoked and estimated using swept-frequency tones with and without the use of swept suppressor tones. SFOAEs were estimated using a least-squares fitting procedure. The estimated SFOAEs for the two paradigms (with- and without-suppression) were similar in amplitude and phase. The fitting procedure minimizes the square error between a parametric model of total ear-canal pressure (with unknown amplitudes and phases) and ear-canal pressure acquired during each paradigm. Modifying the parametric model to allow SFOAE amplitude and phase to vary over time revealed additional amplitude and phase fine structure in the without-suppressor, but not the with-suppressor paradigm. The use of a time-varying parametric model to estimate SFOAEs without-suppression may provide additional information about cochlear mechanics not available when using a with-suppressor paradigm.
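A toy version of the least-squares parametric idea: with the probe frequency known, a tone's unknown amplitude and phase follow from a linear fit on a cos/sin basis (a single fixed tone here, not the paper's swept tones):

```python
import numpy as np

# Estimate amplitude and phase of a known-frequency tone by linear
# least squares, using a*cos(wt+p) = a*cos(p)cos(wt) - a*sin(p)sin(wt).
fs, f0 = 8000.0, 440.0
t = np.arange(0.0, 0.1, 1.0 / fs)
sig = 0.8 * np.cos(2 * np.pi * f0 * t + 0.6)     # amplitude 0.8, phase 0.6
sig = sig + 0.01 * np.random.default_rng(6).normal(size=t.size)

A = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
c, s = np.linalg.lstsq(A, sig, rcond=None)[0]
amp = np.hypot(c, s)
phase = np.arctan2(-s, c)
```

The paper's time-varying variant amounts to letting `amp` and `phase` change slowly across the sweep, which adds columns to the design matrix but keeps the problem linear.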

  13. Optimal measurement of ice-sheet deformation from surface-marker arrays

    NASA Astrophysics Data System (ADS)

    Macayeal, D. R.

    Surface strain rate is best observed by fitting a strain-rate ellipsoid to the measured movement of a stake network or other collection of surface features, using a least-squares procedure. The error of the resulting fit varies as 1/(L Δt √N), where L is the stake separation, Δt is the time period between the initial and final stake surveys, and N is the number of stakes in the network. This relation suggests that if N is sufficiently high, the traditional practice of revisiting stake-network sites on successive field seasons may be replaced by a less costly single-year operation. A demonstration using Ross Ice Shelf data shows that reasonably accurate measurements are obtained from 12 stakes after only 4 days of deformation. The least-squares procedure can also aid airborne photogrammetric surveys, because reducing the time interval between survey and re-survey permits better surface-feature recognition.
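The 1/√N part of the abstract's error relation can be checked by Monte Carlo with an idealised one-dimensional uniform strain; the noise level and geometry are invented:

```python
import numpy as np

# Monte Carlo sketch: the scatter of a least-squares strain estimate
# from N stakes shrinks roughly as 1/sqrt(N) at fixed network length.
rng = np.random.default_rng(7)
strain_true, noise = 1e-3, 0.05            # per-unit strain, survey error (m)

def strain_estimate_std(n_stakes, trials=400):
    x = np.linspace(0.0, 1000.0, n_stakes)  # stake positions (m)
    est = [np.polyfit(x, strain_true * x +
                      rng.normal(scale=noise, size=n_stakes), 1)[0]
           for _ in range(trials)]
    return np.std(est)

std_small, std_large = strain_estimate_std(6), strain_estimate_std(24)
# Quadrupling the stakes roughly halves the scatter.
```

The same trade-off lets a short Δt be compensated by a larger N, which is the basis for the single-season survey strategy proposed above.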

  14. Respiratory mechanics by least squares fitting in mechanically ventilated patients: application on flow-limited COPD patients.

    PubMed

    Volta, Carlo A; Marangoni, Elisabetta; Alvisi, Valentina; Capuzzo, Maurizia; Ragazzi, Riccardo; Pavanelli, Lina; Alvisi, Raffaele

    2002-01-01

    Although computerized methods of analyzing respiratory system mechanics such as the least squares fitting method have been used in various patient populations, no conclusive data are available in patients with chronic obstructive pulmonary disease (COPD), probably because they may develop expiratory flow limitation (EFL). This suggests that respiratory mechanics be determined only during inspiration. Eight-bed multidisciplinary ICU of a teaching hospital. Eight non-flow-limited postvascular surgery patients and eight flow-limited COPD patients. Patients were sedated, paralyzed for diagnostic purposes, and ventilated in volume control ventilation with constant inspiratory flow rate. Data on resistance, compliance, and dynamic intrinsic positive end-expiratory pressure (PEEPi,dyn) obtained by applying the least squares fitting method during inspiration, expiration, and the overall breathing cycle were compared with those obtained by the traditional method (constant flow, end-inspiratory occlusion method). Our results indicate that (a) the presence of EFL markedly decreases the precision of resistance and compliance values measured by the LSF method, (b) the determination of respiratory variables during inspiration allows the calculation of respiratory mechanics in flow limited COPD patients, and (c) the LSF method is able to detect the presence of PEEPi,dyn if only inspiratory data are used.
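The least squares fitting (LSF) method referenced here regresses airway pressure on flow and volume via the single-compartment equation of motion, Paw = R·flow + E·volume + P0. A toy sketch over inspiration only, with all parameter values invented:

```python
import numpy as np

# Toy single-compartment LSF sketch: recover resistance, elastance and
# total PEEP from simulated inspiratory pressure, flow and volume.
r_true, e_true, p0_true = 10.0, 25.0, 5.0    # cmH2O/L/s, cmH2O/L, cmH2O
t = np.linspace(0.0, 1.0, 100)               # inspiratory time (s)
flow = 0.6 - 0.5 * t                         # decelerating flow (L/s)
volume = 0.6 * t - 0.25 * t ** 2             # integral of the flow (L)
paw = r_true * flow + e_true * volume + p0_true

A = np.column_stack([flow, volume, np.ones_like(t)])
r_fit, e_fit, p0_fit = np.linalg.lstsq(A, paw, rcond=None)[0]
```

Restricting the fit to inspiratory samples, as the study recommends for flow-limited patients, simply means building `A` from the inspiratory portion of the breath.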

  15. Computer-assisted map projection research

    USGS Publications Warehouse

    Snyder, John Parr

    1985-01-01

    Computers have opened up areas of map projection research which were previously too complicated to utilize, for example, using a least-squares fit to a very large number of points. One application has been in the efficient transfer of data between maps on different projections. While the transfer of moderate amounts of data is satisfactorily accomplished using the analytical map projection formulas, polynomials are more efficient for massive transfers. Suitable coefficients for the polynomials may be determined more easily for general cases using least squares instead of Taylor series. A second area of research is in the determination of a map projection fitting an unlabeled map, so that accurate data transfer can take place. The computer can test one projection after another, and include iteration where required. A third area is in the use of least squares to fit a map projection with optimum parameters to the region being mapped, so that distortion is minimized. This can be accomplished for standard conformal, equal-area, or other types of projections. Even less distortion can result if complex transformations of conformal projections are utilized. This bulletin describes several recent applications of these principles, as well as historical usage and background.

  16. Predicting First Traversal Times for Virions and Nanoparticles in Mucus with Slowed Diffusion

    PubMed Central

    Erickson, Austen M.; Henry, Bruce I.; Murray, John M.; Klasse, Per Johan; Angstmann, Christopher N.

    2015-01-01

    Particle-tracking experiments focusing on virions or nanoparticles in mucus have measured mean-square displacements and reported diffusion coefficients that are orders of magnitude smaller than the diffusion coefficients of such particles in water. Accurate description of this subdiffusion is important to properly estimate the likelihood of virions traversing the mucus boundary layer and infecting cells in the epithelium. However, there are several candidate models for diffusion that can fit experimental measurements of mean-square displacements. We show that these models yield very different estimates for the time taken for subdiffusive virions to traverse through a mucus layer. We explain why fits of subdiffusive mean-square displacements to standard diffusion models may be misleading. Relevant to human immunodeficiency virus infection, using computational methods for fractional subdiffusion, we show that subdiffusion in normal acidic mucus provides a more effective barrier against infection than previously thought. By contrast, the neutralization of the mucus by alkaline semen, after sexual intercourse, allows virions to cross the mucus layer and reach the epithelium in a short timeframe. The computed barrier protection from fractional subdiffusion is some orders of magnitude greater than that derived by fitting standard models of diffusion to subdiffusive data. PMID:26153713
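    One way to see why model choice matters (a sketch with synthetic data, not the paper's fractional-subdiffusion computation): for subdiffusion the mean-square displacement grows as K·t^α with α < 1, and a log-log least-squares fit recovers the anomalous exponent, whereas a standard linear-in-t diffusion fit cannot.

    ```python
    import numpy as np

    # Synthetic subdiffusive MSD: MSD(t) = K * t**alpha with alpha < 1.
    t = np.logspace(-1, 2, 50)                     # lag times, s
    K_true, alpha_true = 0.05, 0.6                 # made-up values
    rng = np.random.default_rng(4)
    msd = K_true * t**alpha_true * rng.lognormal(0.0, 0.05, t.size)

    # Least-squares line in log-log space: log MSD = alpha*log t + log K
    alpha_fit, logK_fit = np.polyfit(np.log(t), np.log(msd), 1)
    print(alpha_fit, np.exp(logK_fit))             # close to 0.6 and 0.05
    ```

    A fit of the standard model MSD = 4Dt to the same data would yield a D that depends on the time window chosen, which is the misleading behavior the authors warn about.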

  17. Using Least Squares to Solve Systems of Equations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2016-01-01

    The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values n equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…
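    The observation above can be checked directly with a small hypothetical system: applied to a square, nonsingular system (m = p), a least-squares routine returns the exact solution with zero residual, so LS machinery doubles as a linear solver.

    ```python
    import numpy as np

    # 2x + y = 5 and x + 3y = 10: two equations, two unknowns (m = p = 2).
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    x_ls, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    x_exact = np.linalg.solve(A, b)

    print(x_ls, x_exact)   # identical: [1. 3.]
    ```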

  18. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
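    A minimal sketch of orthogonal regression on synthetic data (not the article's example): the major axis minimizing squared perpendicular distances is the first principal component of the centered point cloud, obtained here from the SVD.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, 100)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # nearly collinear data

    pts = np.column_stack([x, y])
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center)   # rows of vt: principal axes
    direction = vt[0]                        # major axis direction

    slope = direction[1] / direction[0]      # sign ambiguity cancels in the ratio
    intercept = center[1] - slope * center[0]
    print(slope, intercept)                  # close to 2 and 1
    ```

    Unlike ordinary least squares, which minimizes vertical residuals only, this fit treats errors in x and y symmetrically.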

  19. Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.

    ERIC Educational Resources Information Center

    Knol, Dirk L.; ten Berge, Jos M. F.

    1989-01-01

    An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…
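    The Mosier-based algorithm of the article is more specialized; the simpler eigenvalue-clipping projection below (a standard textbook alternative, not Knol and ten Berge's method) illustrates what "approximating an improper correlation matrix by a proper one" means: forcing positive semidefiniteness while restoring the unit diagonal.

    ```python
    import numpy as np

    def nearest_proper_correlation(R, eps=1e-8):
        """Clip negative eigenvalues, then rescale back to unit diagonal."""
        vals, vecs = np.linalg.eigh((R + R.T) / 2)
        vals = np.clip(vals, eps, None)              # drop negative eigenvalues
        R_psd = vecs @ np.diag(vals) @ vecs.T
        d = np.sqrt(np.diag(R_psd))
        return R_psd / np.outer(d, d)                # congruence keeps PSD

    # An improper matrix: these pairwise correlations cannot coexist.
    R_bad = np.array([[ 1.0,  0.9, -0.9],
                      [ 0.9,  1.0,  0.9],
                      [-0.9,  0.9,  1.0]])
    print(np.linalg.eigvalsh(R_bad).min())           # negative: not PSD

    R_ok = nearest_proper_correlation(R_bad)
    print(np.linalg.eigvalsh(R_ok).min() >= -1e-8)   # True
    print(np.allclose(np.diag(R_ok), 1.0))           # True
    ```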

  20. The Evaluation and Selection of Adequate Causal Models: A Compensatory Education Example.

    ERIC Educational Resources Information Center

    Tanaka, Jeffrey S.

    1982-01-01

    Implications of model evaluation (using traditional chi square goodness of fit statistics, incremental fit indices for covariance structure models, and latent variable coefficients of determination) on substantive conclusions are illustrated with an example examining the effects of participation in a compensatory education program on posttreatment…

  1. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  2. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  3. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  4. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  5. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  6. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  7. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  8. Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Varney, R. H.

    2017-12-01

    The north-face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds-to-minutes time resolution. RISR-N does not measure ionospheric parameters directly; it measures backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACFs) are estimated from the voltage samples. A model ACF is then fitted to the estimated ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https://doi.org/10.5194/angeo-28-651-2010, 2010. Zou, S., D. Ozturk, R. Varney, and A. Reimer (2017), Effects of sudden commencement on the ionosphere: PFISR observations and global MHD simulation, Geophys. Res. Lett., 44, 3047-3058, doi:10.1002/2017GL072678.

  9. Improving the Depth-Time Fit of Holocene Climate Proxy Measures by Increasing Coherence with a Reference Time-Series

    NASA Astrophysics Data System (ADS)

    Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.

    2007-12-01

    An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster presents a new method of tuning a time series when only a modest number of 14C dates are available. The method uses multitaper spectral estimation, specifically a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. The technique tunes the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity: a decrease in solar activity coincides with an increase in cosmogenic isotope production, which in turn coincides with a decrease in temperature. The method uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving, and serving to confirm, the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. 
    This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.

  10. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchhoff, William H.

    2012-09-15

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516:2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
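    The kind of fit LFPF performs can be sketched with synthetic data, using scipy's generic nonlinear least-squares fitter and a symmetric logistic without LFPF's asymmetry parameter (so this is a simplified stand-in, not the LFPF algorithm itself):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, a0, a1, x0, w):
        """Symmetric logistic step from level a0 to a1 at position x0, width w."""
        return a0 + (a1 - a0) / (1.0 + np.exp(-(x - x0) / w))

    rng = np.random.default_rng(3)
    depth = np.linspace(0, 100, 200)                     # depth, arbitrary units
    signal = (logistic(depth, 1.0, 0.1, 50.0, 4.0)
              + rng.normal(0, 0.01, depth.size))         # noisy depth profile

    popt, pcov = curve_fit(logistic, depth, signal, p0=[1.0, 0.0, 40.0, 5.0])
    perr = np.sqrt(np.diag(pcov))                        # 1-sigma parameter errors
    print(popt)   # close to [1.0, 0.1, 50.0, 4.0]
    ```

    The diagonal of `pcov` plays the role of LFPF's confidence limits; as the abstract notes, those limits are only trustworthy when systematic errors are minimal.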

  11. Prediction of Baseflow Index of Catchments using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, B.; Hatfield, K.

    2017-12-01

    We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using a surrogate of catchment-scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elastic net, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify the dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources, including digital elevation models and soil, land-use, climate, and other publicly available ancillary and geospatial data. 80% of the catchments were used to train the ML algorithms, and the remaining 20% were used as an independent test set to measure the generalization performance of the fitted models. A k-fold cross-validation using exhaustive grid search was used to tune the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking, we generated revised sparse models of BFI prediction that are based on only six catchment attributes. These key predictive variables, selected after careful evaluation of the bias-variance tradeoff, include average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation. 
The most promising algorithms exceeding an accuracy score (r-square) of 0.7 on test data include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify the extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
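    One of the simpler estimators in the list above, ridge regression, has a closed form that can be sketched in numpy alone; the data here are random stand-ins for the six catchment attributes, and the 80/20 split mirrors the evaluation protocol described.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n, p = 500, 6
    X = rng.normal(size=(n, p))                  # stand-ins for 6 attributes
    w_true = np.array([1.5, -2.0, 0.5, 0.0, 3.0, -1.0])
    y = X @ w_true + rng.normal(0.0, 0.5, n)     # synthetic response

    X_tr, X_te = X[:400], X[400:]                # 80/20 train/test split
    y_tr, y_te = y[:400], y[400:]

    lam = 1.0                                    # ridge penalty (hyperparameter)
    # Closed form: w = (X'X + lam*I)^{-1} X'y
    w_fit = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)

    pred = X_te @ w_fit
    r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
    print(round(r2, 3))
    ```

    In the study, `lam` would be chosen by the k-fold grid search, and the tree ensembles that won the comparison replace the linear map entirely.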

  12. Innovative Use of Thighplasty to Improve Prosthesis Fit and Function in a Transfemoral Amputee.

    PubMed

    Kuiken, Todd A; Fey, Nicholas P; Reissman, Timothy; Finucane, Suzanne B; Dumanian, Gregory A

    2018-01-01

    Excess residual limb fat is a common problem that can impair prosthesis control and negatively impact gait. In the general population, thighplasty and liposuction are commonly performed for cosmetic reasons but not specifically to improve function in amputees. The objective of this study was to determine if these procedures could enhance prosthesis fit and function in an overweight above-knee amputee. We evaluated the use of these techniques on a 50-year-old transfemoral amputee who was overweight. The patient underwent presurgical imaging and tests to measure her residual limb tissue distribution, socket-limb interface stiffness, residual femur orientation, lower-extremity function, and prosthesis satisfaction. A medial thighplasty procedure with circumferential liposuction was performed, during which 2,812 g (6.2 lbs.) of subcutaneous fat and skin was removed from her residual limb. Imaging was repeated 5 months postsurgery; functional assessments were repeated 9 months postsurgery. The patient demonstrated notable improvements in socket fit and in performing most functional and walking tests. Her comfortable walking speed increased 13.3%, and her scores for the Sit-to-Stand and Four Square Step tests improved over 20%. Femur alignment in her socket changed from 8.13 to 4.14 degrees, and analysis showed a marked increase in the socket-limb interface stiffness. This study demonstrates the potential of using a routine plastic surgery procedure to modify the intrinsic properties of the limb and to improve functional outcomes in overweight or obese transfemoral amputees. This technique is a potentially attractive option compared with multiple reiterations of sockets, which can be time-consuming and costly.

  13. Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique

    NASA Astrophysics Data System (ADS)

    Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang

    2017-04-01

    The purpose of this study is to assess sediment discharge for rivers in South Korea using data mining. The Model Tree was selected as the most suitable data mining technique for explicitly analyzing the relationship between input and output variables across large and diverse databases. To derive a sediment discharge equation using the Model Tree, the dimensionless variables used in the Engelund and Hansen, Ackers and White, Brownlie, and van Rijn equations were adopted as analysis conditions. In total, 14 analysis conditions were set, covering dimensional variables as well as combinations of dimensionless and dimensional variables according to the relationship between flow and sediment transport. For each case, the results were evaluated by mean discrepancy ratio, root mean square error, mean absolute percent error, and correlation coefficient. The best fit was obtained using five dimensional variables: velocity, depth, slope, width, and median diameter. The closest approximation to the best goodness of fit was estimated from depth, slope, width, median bed-material grain size, and dimensionless tractive force (excluding slope as a single variable). In addition, when the three most appropriate Model Trees are compared with the Ackers and White equation, the best-fitting of the existing equations, the mean discrepancy ratio and correlation coefficient of the Model Tree are improved relative to the Ackers and White equation.

  14. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2013-05-21

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.

  15. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC method, the Lee-Miller (LM) variant, and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysian mortality data, measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparisons were made on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD), and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best for in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. In the case of out-of-sample forecast accuracy, however, the BMS method was best only when the data were fitted to the overall population; when the data were fitted separately, the original LC method performed better for the male population and the LM method for the female population.
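    The core fit shared by the LC method and both variants can be sketched in numpy with synthetic rates standing in for the Malaysian ASDR data: log death rates are modeled as ln m(x,t) = a_x + b_x·k_t, with a_x the row means and (b_x, k_t) taken from the leading SVD component of the centered log-rate matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    ages, years = 10, 30
    a_true = np.linspace(-6.0, -2.0, ages)          # log-mortality level by age
    b_true = np.full(ages, 1.0 / ages)              # age sensitivities, sum to 1
    k_true = np.linspace(5.0, -5.0, years)          # declining mortality index
    log_m = (a_true[:, None] + b_true[:, None] * k_true[None, :]
             + rng.normal(0.0, 0.01, (ages, years)))

    a_x = log_m.mean(axis=1)                        # a_x: average log-rate by age
    U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()                   # usual scaling: sum(b_x) = 1
    k_t = s[0] * Vt[0] * U[:, 0].sum()              # keeps b_x * k_t unchanged

    fit = a_x[:, None] + b_x[:, None] * k_t[None, :]
    rmse = np.sqrt(np.mean((fit - log_m) ** 2))
    print(round(rmse, 4))                           # near the 0.01 noise level
    ```

    The LM and BMS variants differ mainly in the fitting period, the adjustment of k_t, and the jump-off rates used for forecasting, not in this rank-1 decomposition.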

  16. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
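    The standard-uncertainty calculation behind this kind of metafitting can be sketched for a weighted least-squares straight-line model (time offset plus frequency offset) with made-up calibration data; this is an illustration of the covariance algebra, not the authors' procedure. The parameter covariance is (AᵀWA)⁻¹, and the uncertainty of an interpolated or extrapolated comparison follows by propagation.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    t = np.arange(0.0, 10.0)                     # days of calibration points
    sigma = np.full(t.size, 2.0)                 # measurement uncertainties, ns
    offset = 5.0 + 1.2 * t + rng.normal(0.0, sigma)   # time offset, ns

    A = np.column_stack([np.ones_like(t), t])    # model: x0 + rate * t
    W = np.diag(1.0 / sigma**2)                  # inverse-variance weights
    cov = np.linalg.inv(A.T @ W @ A)             # parameter covariance
    params = cov @ A.T @ W @ offset              # weighted LS estimate

    t_new = 15.0                                 # extrapolate past the data
    a_new = np.array([1.0, t_new])
    pred = a_new @ params
    pred_sigma = np.sqrt(a_new @ cov @ a_new)    # standard uncertainty, ns
    print(pred, pred_sigma)
    ```

    Comparing `pred_sigma` across candidate weightings and schedules, before adopting one operationally, is exactly the kind of choice the authors call metafitting; note the uncertainty grows as `t_new` moves outside the calibration span.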

  17. Fitting membrane resistance along with action potential shape in cardiac myocytes improves convergence: application of a multi-objective parallel genetic algorithm.

    PubMed

    Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J

    2014-01-01

    Fitting parameter sets of non-linear equations in cardiac single-cell ionic models to reproduce experimental behavior is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the single cell, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data, and the optimal parameter sets and goodness of fit computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fitted parameters was also greatly reduced, with many parameters showing an order-of-magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserving tissue-level behavior, and should be incorporated.

  18. A method for cone fitting based on certain sampling strategy in CMM metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Guo, Chaopeng

    2018-04-01

    A method of cone fitting in engineering is explored and implemented to overcome shortcomings of the current approach, in which the initial geometric parameters are calculated imprecisely, causing poor accuracy in surface fitting. A geometric distance function of the cone is constructed first; a defined sampling strategy is then used to calculate the initial geometric parameters, and the nonlinear least-squares method is used to fit the surface. An experiment was designed to verify the accuracy of the method. The experimental data show that the proposed method obtains initial geometric parameters simply and efficiently, fits the surface precisely, and provides a new, accurate approach to cone fitting in coordinate measurement.
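    The nonlinear least-squares step can be sketched in a simplified setting, with the cone axis assumed known and aligned with z (the paper's method also estimates the axis direction) and scipy's generic solver standing in for the authors' implementation; the residual is the approximate orthogonal distance to the cone surface.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Synthetic CMM-style points on a cone: apex height z0, half-angle alpha.
    rng = np.random.default_rng(7)
    z0_true, alpha_true = 1.0, np.deg2rad(30)
    z = rng.uniform(2, 5, 300)                       # heights above the apex
    theta = rng.uniform(0, 2 * np.pi, 300)
    r = (z - z0_true) * np.tan(alpha_true)           # surface radius at height z
    x = r * np.cos(theta) + rng.normal(0, 0.005, z.size)
    y = r * np.sin(theta) + rng.normal(0, 0.005, z.size)

    def residuals(params):
        z0, alpha = params
        radial = np.hypot(x, y) - (z - z0) * np.tan(alpha)
        return radial * np.cos(alpha)                # approx. orthogonal distance

    fit = least_squares(residuals, x0=[0.0, np.deg2rad(20)])
    z0_fit, alpha_fit = fit.x
    print(z0_fit, np.rad2deg(alpha_fit))   # close to 1.0 and 30 degrees
    ```

    The quality of the starting guess `x0` is exactly what the paper's sampling strategy is designed to improve.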

  19. Sensitivity test of derivative matrix isopotential synchronous fluorimetry and least squares fitting methods.

    PubMed

    Makkai, Géza; Buzády, Andrea; Erostyák, János

    2010-01-01

    Determination of the concentrations of spectrally overlapping compounds presents special difficulties. Several methods are available to calculate the constituents' concentrations in moderately complex mixtures, and a method that can provide information about spectrally hidden components in mixtures is very useful. Two methods powerful in resolving spectral components are compared in this paper. The first method tested is Derivative Matrix Isopotential Synchronous Fluorimetry (DMISF), based on derivative analysis of MISF spectra, which are constructed using isopotential trajectories in the Excitation-Emission Matrix (EEM) of the background solution. For the DMISF method, a mathematical routine fitting the 3D data of EEMs was developed. The other method tested uses a classical Least Squares Fitting (LSF) algorithm, wherein Rayleigh- and Raman-scattering bands may lead to complications. Both methods give excellent sensitivity, and each has advantages over the other. Detection limits of DMISF and LSF have been determined at very different concentration and noise levels.
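    The classical LSF step can be sketched with synthetic Gaussian "spectra" standing in for real fluorescence components: the measured spectrum is modeled as a linear combination of known component spectra, and the concentrations are the least-squares coefficients.

    ```python
    import numpy as np

    wl = np.linspace(300.0, 500.0, 400)              # wavelengths, nm

    def band(center, width):
        # Made-up Gaussian band standing in for a component spectrum.
        return np.exp(-(((wl - center) / width) ** 2))

    S = np.column_stack([band(350.0, 20.0), band(380.0, 25.0)])  # components
    c_true = np.array([0.7, 0.3])                    # "concentrations"
    rng = np.random.default_rng(8)
    measured = S @ c_true + rng.normal(0.0, 0.002, wl.size)

    c_fit, *_ = np.linalg.lstsq(S, measured, rcond=None)
    print(c_fit)                                     # close to [0.7, 0.3]
    ```

    In real data, scattering bands that are not in the component matrix `S` bias these coefficients, which is the complication the abstract notes for LSF.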

  20. Analysis of the Magnitude and Frequency of Peak Discharge and Maximum Observed Peak Discharge in New Mexico and Surrounding Areas

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2008-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 140 of the 293 gaging stations, providing an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, were then applied to the same data used in the ordinary least-squares regression analyses. 
    The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean 62, median 59) for the 100-year flood. In the 1996 investigation, the standard error of prediction for the flood regions ranged from 41 to 96 percent (mean 67, median 68) for the 100-year flood analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and an improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site through a drainage-area ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures; bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals. Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating the peak discharge of an extreme event such as a maximum probable flood.

  1. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…

  2. Measuring implementation behaviour of menu guidelines in the childcare setting: confirmatory factor analysis of a theoretical domains framework questionnaire (TDFQ).

    PubMed

    Seward, Kirsty; Wolfenden, Luke; Wiggers, John; Finch, Meghan; Wyse, Rebecca; Oldmeadow, Christopher; Presseau, Justin; Clinton-McHarg, Tara; Yoong, Sze Lin

    2017-04-04

    While there are a number of frameworks that focus on supporting the implementation of evidence-based approaches, few psychometrically valid measures exist to assess constructs within these frameworks. This study aimed to develop and psychometrically assess a scale measuring each domain of the Theoretical Domains Framework for use in assessing the implementation of dietary guidelines within a non-health-care setting (childcare services). A 75-item, 14-domain Theoretical Domains Framework Questionnaire (TDFQ) was developed and administered via telephone interview to 202 centre-based childcare service cooks who had a role in planning the service menu. Confirmatory factor analysis (CFA) was undertaken to assess the reliability, discriminant validity and goodness of fit of the 14-domain measure. In the CFA, five iterative processes of adjustment were undertaken in which 14 items were removed, resulting in a final measure consisting of 14 domains and 61 items. For the final measure, the chi-square goodness-of-fit statistic was 3447.19; the Standardized Root Mean Square Residual (SRMR) was 0.070; the Root Mean Square Error of Approximation (RMSEA) was 0.072; and the Comparative Fit Index (CFI) was 0.78. While only one of the three indices supports goodness of fit of the measurement model tested, the 14-domain model with 61 items showed good discriminant validity and internally consistent items. Future research should aim to assess the psychometric properties of the developed TDFQ in other community-based settings.
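For context, an RMSEA value like the one reported above can be computed from the model chi-square, its degrees of freedom, and the sample size; the sketch below uses one common point-estimate formula (conventions differ slightly across software packages):

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the root mean square error of approximation
    from the model chi-square, its degrees of freedom, and sample
    size n; a negative chi2 - df is clamped to zero."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```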

  3. Quantification of breast density with spectral mammography based on a scanned multi-slit photon-counting detector: a feasibility study.

    PubMed

    Ding, Huanjun; Molloi, Sabee

    2012-08-07

    A simple and accurate measurement of breast density is crucial for understanding its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. A computer simulation model involving polyenergetic spectra from a tungsten-anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure of merit (FOM), defined as the signal-to-noise ratio of the dual-energy image with respect to the square root of the mean glandular dose, was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual-energy decomposition with glandular- and adipose-equivalent phantoms of uniform thickness. Four phantom studies were designed to evaluate the accuracy of the technique, each addressing one specific variable in the phantom configurations: thickness, density, area and shape. In addition to the standard calibration fitting function used for dual-energy decomposition, a modified fitting function has been proposed, which introduces the tube voltage used in the imaging tasks as a third variable in the dual-energy decomposition. For an average-sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (∼32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. 
In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual-energy imaging technique.

  4. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate of less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
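A simplified sketch of the two-stage idea, assuming SciPy is available: `least_squares` with its Cauchy (Lorentzian-like) loss provides the robust first fit, and a fixed robust-residual cutoff stands in for the paper's adaptive false-discovery-rate step, so this is an illustration of the approach, not the authors' ROUT implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def rout_like_fit(model, p0, xdata, ydata, cutoff=2.5):
    """Two-stage fit: (1) robust regression with a Cauchy loss,
    (2) flag points whose robustly scaled residuals exceed `cutoff`
    (a crude stand-in for the FDR step), (3) ordinary least-squares
    refit on the retained points."""
    resid = lambda p: model(xdata, p) - ydata
    robust = least_squares(resid, p0, loss='cauchy')
    r = resid(robust.x)
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD -> sigma
    keep = np.abs(r) <= cutoff * scale
    clean = lambda p: model(xdata[keep], p) - ydata[keep]
    final = least_squares(clean, robust.x)  # default loss: plain least squares
    return final.x, ~keep  # best-fit parameters and an outlier mask

# Exponential decay with one gross outlier injected at index 10.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = 3.0 * np.exp(-1.2 * x) + rng.normal(0.0, 0.02, x.size)
y[10] += 2.0
params, outliers = rout_like_fit(lambda x, p: p[0] * np.exp(-p[1] * x),
                                 [1.0, 1.0], x, y)
```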

  5. A Method For Modeling Discontinuities In A Microwave Coaxial Transmission Line

    NASA Technical Reports Server (NTRS)

    Otoshi, Tom Y.

    1994-01-01

    A methodology for modeling discontinuities in a coaxial transmission line is presented. The method uses a nonlinear least-squares fit program to optimize the fit between a theoretical model and experimental data. When the method was applied to modeling discontinuities in a damaged S-band antenna cable, excellent agreement was obtained.

  6. Using Fit Indexes to Select a Covariance Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.

    2012-01-01

    This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…

  7. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...
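The acceptance logic these CFR calibration sections share can be sketched as follows; the 2 percent tolerance matches the regulatory text, while the cubic fallback degree is an illustrative choice (the regulations only require a best-fit non-linear equation):

```python
import numpy as np

def calibration_fit(conc, response, tol=0.02, deg=3):
    """Accept a least-squares straight line if it reproduces every
    calibration point to within `tol` (2 percent) of its value;
    otherwise fall back to a best-fit non-linear curve (a cubic
    here, as an illustrative choice)."""
    line = np.polynomial.Polynomial.fit(response, conc, 1)
    if np.all(np.abs(line(response) - conc) <= tol * np.abs(conc)):
        return line
    return np.polynomial.Polynomial.fit(response, conc, deg)
```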

  8. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  9. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  10. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  11. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  12. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  13. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  14. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  15. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  16. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  17. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  18. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  19. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  20. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  1. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  2. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  3. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  4. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  5. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  6. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  7. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  8. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  9. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
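The iterative parameter adjustment this kind of program automates is essentially Gauss-Newton iteration: linearize the model around the current estimates and solve a linear least-squares problem for the update. A minimal sketch, assuming a known Jacobian and noiseless single-exponential data (the model and starting values are illustrative):

```python
import numpy as np

def gauss_newton(model, jac, p0, x, y, iters=20):
    """Minimal Gauss-Newton loop: from initial estimates p0,
    repeatedly solve the linearized least-squares problem
    J * step = residual for the parameter update."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - model(x, p)                      # residuals at current estimate
        J = jac(x, p)                            # sensitivity (Jacobian) matrix
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step
    return p

# Single-exponential tracer model y = a * exp(-k t) (illustrative).
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-0.8 * t)
model = lambda t, p: p[0] * np.exp(-p[1] * t)
jac = lambda t, p: np.column_stack([np.exp(-p[1] * t),
                                    -p[0] * t * np.exp(-p[1] * t)])
p = gauss_newton(model, jac, [1.0, 1.0], t, y)
```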

  10. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
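The maximum-likelihood step has a closed form for a log-normal: the MLE of (mu, sigma) is simply the mean and standard deviation of the log-intensities. A minimal sketch (not the authors' code, and without the bootstrap machinery):

```python
import math

def lognormal_mle(samples):
    """Closed-form maximum-likelihood fit of a log-normal: mu and
    sigma are the mean and (population) standard deviation of the
    log-transformed samples."""
    logs = [math.log(s) for s in samples]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

def exceedance_prob(x, mu, sigma):
    """P(X >= x) for the fitted log-normal, via the normal tail."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```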

  11. GOSSIP: SED fitting code

    NASA Astrophysics Data System (ADS)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electromagnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, where available, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes and stellar mass, together with their probability distribution functions.
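A chi-square template fit of this kind can be sketched compactly, because for each synthetic model the optimal overall scale factor minimizing chi-square has a closed form; the function below is an illustration of the technique, not GOSSIP's implementation:

```python
import numpy as np

def best_template(obs, err, templates):
    """For each template m, the scale a minimizing
    chi^2 = sum((obs - a*m)^2 / err^2) is
    a = sum(w*obs*m) / sum(w*m*m) with w = 1/err^2; the template
    with the smallest minimized chi^2 wins."""
    best = None
    w = 1.0 / err ** 2
    for i, m in enumerate(templates):
        a = np.sum(w * obs * m) / np.sum(w * m * m)  # optimal scale factor
        chi2 = np.sum(w * (obs - a * m) ** 2)
        if best is None or chi2 < best[2]:
            best = (i, a, chi2)
    return best  # (template index, scale, chi-square)
```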

  12. Relation between the Surface Friction of Plates and their Statistical Microgeometry

    DTIC Science & Technology

    1980-01-01

    … Calibration data are taken for each of the unit exponent values, and best-fit lines are fitted by least squares through each set of … parameter (2-43) (Clauser 1954, 1956). Data from near-equilibrium flows (Coles & Hurst 1968) were plotted along with some typical non-equilibrium … not too bad a fit even for the non-equilibrium flows. Coles and Hurst (1968) recommended that the fit of the law of the wake to velocity profiles should be

  13. Effect of noise on defect chaos in a reaction-diffusion model.

    PubMed

    Wang, Hongli; Ouyang, Qi

    2005-06-01

    The influence of noise on defect chaos due to breakup of spiral waves through Doppler and Eckhaus instabilities is investigated numerically with a modified Fitzhugh-Nagumo model. By numerical simulations we show that the noise can drastically enhance the creation and annihilation rates of topological defects. The noise-free probability distribution function for defects in this model is found not to fit with the previously reported squared-Poisson distribution. Under the influence of noise, the distributions are flattened, and can fit with the squared-Poisson or the modified-Poisson distribution. The defect lifetime and diffusive property of defects under the influence of noise are also checked in this model.

  14. Nuclear Matter Properties with the Re-evaluated Coefficients of Liquid Drop Model

    NASA Astrophysics Data System (ADS)

    Chowdhury, P. Roy; Basu, D. N.

    2006-06-01

    The coefficients of the volume, surface, Coulomb, asymmetry and pairing energy terms of the semiempirical liquid drop model mass formula have been determined by obtaining the best fit to the observed mass excesses. Slightly different sets of the weighting parameters for the liquid drop model mass formula have been obtained from minimizations of chi-square and of the mean square deviation. The most recent experimental and estimated mass excesses from the Audi-Wapstra-Thibault atomic mass table have been used for the least-squares fitting procedure. The equation of state, nuclear incompressibility, nuclear mean free path and the most stable nuclei for corresponding atomic numbers are all in good agreement with the experimental results.

  15. Unsteady convection in tin in a Bridgman configuration

    NASA Technical Reports Server (NTRS)

    Knuteson, David J.; Fripp, Archibald L.; Woodell, Glenn A.; Debnam, William J., Jr.; Narayanan, Ranga

    1991-01-01

    When a quiescent fluid is heated sufficiently from below, steady convection will begin. Further heating will cause oscillatory and then turbulent flow. Theoretical results predict that the frequency of oscillation will depend on the square root of the Rayleigh number in the fluid. In the current work, liquid tin was heated from below for three aspect ratios, h/R = 3.4, 5.3, and 7.0. The experimental results are curve-fit for the square-root relation and also for a linear relation. The fit of the expression is evaluated using a correlation coefficient. An estimate for the first critical Rayleigh number (onset of steady convection) is obtained for both expressions. These values are compared to previous experimental results.

  16. The Apollo 16 regolith - A petrographically-constrained chemical mixing model

    NASA Technical Reports Server (NTRS)

    Kempa, M. J.; Papike, J. J.; White, C.

    1980-01-01

    A mixing model for Apollo 16 regolith samples has been developed, which differs from other A-16 mixing models in that it is both petrographically constrained and statistically sound. The model was developed using three components representative of rock types present at the A-16 site, plus a representative mare basalt. A linear least-squares fitting program employing the chi-squared test and sum of components was used to determine goodness of fit. Results for surface soils indicate that either there are no significant differences between Cayley and Descartes material at the A-16 site or, if differences do exist, they have been obscured by meteoritic reworking and mixing of the lithologies.

  17. Accurate formula for gaseous transmittance in the infrared.

    PubMed

    Gibson, G A; Pierluissi, J H

    1971-07-01

    By considering the infrared transmittance model of Zachor as the equation for an elliptic cone, a quadratic generalization is proposed that yields significantly greater computational accuracy. The strong-band parameters are obtained by iterative nonlinear, curve-fitting methods using a digital computer. The remaining parameters are determined with a linear least-squares technique and a weighting function that yields better results than the one adopted by Zachor. The model is applied to CO(2) over intervals of 50 cm(-1) between 550 cm(-1) and 9150 cm(-1) and to water vapor over similar intervals between 1050 cm(-1) and 9950 cm(-1), with mean rms deviations from the original data being 2.30 x 10(-3) and 1.83 x 10(-3), respectively.

  18. Inference on periodicity of circadian time series.

    PubMed

    Costa, Maria J; Finkenstädt, Bärbel; Roche, Véronique; Lévi, Francis; Gould, Peter D; Foreman, Julia; Halliday, Karen; Hall, Anthony; Rand, David A

    2013-09-01

    Estimation of the period length of time-course data from cyclical biological processes, such as those driven by the circadian pacemaker, is crucial for inferring the properties of the biological clock found in many living organisms. We propose a methodology for period estimation based on spectrum resampling (SR) techniques. Simulation studies show that SR is superior and more robust to non-sinusoidal and noisy cycles than a currently used routine based on Fourier approximations. In addition, a simple fit to the oscillations using linear least squares is available, together with a non-parametric test for detecting changes in period length which allows for period estimates with different variances, as frequently encountered in practice. The proposed methods are motivated by and applied to various data examples from chronobiology.

  19. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distribution (Log-Pearson III and Weibull) had lower mean square errors than did the G. E. P. Box-D. R. Cox transformation method or the Log-W. C. Boughton method which is based on a fit of plotting positions.
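The bootstrap comparison can be sketched in a few lines: resample the observed flow series with replacement and accumulate an estimator's squared error against a reference value (names and the reference value below are illustrative):

```python
import numpy as np

def bootstrap_mse(sample, estimator, true_value, n_boot=1000, seed=0):
    """Draw n_boot resamples (with replacement) from the observed
    series and return the mean squared error of `estimator` against
    `true_value` across the resamples."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_boot):
        resample = rng.choice(sample, size=len(sample), replace=True)
        errs.append((estimator(resample) - true_value) ** 2)
    return float(np.mean(errs))
```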

  20. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.

  1. Design of PCB search coils for AC magnetic flux density measurement

    NASA Astrophysics Data System (ADS)

    Ulvr, Michal

    2018-04-01

    This paper presents single-layer, double-layer and ten-layer planar square search coils designed for AC magnetic flux density amplitude measurement up to 1 T in the low frequency range in a 10 mm air gap. The printed-circuit-board (PCB) method was used for producing the search coils. Special attention is given to a full characterization of the PCB search coils including a comparison between the detailed analytical design method and the finite integration technique method (FIT) on the one hand, and experimental results on the other. The results show very good agreement in the resistance, inductance and search coil constant values (the area turns) and also in the frequency dependence of the search coil constant.

  2. Methods for Quantitative Interpretation of Retarding Field Analyzer Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calvey, J.R.; Crittenden, J.A.; Dugan, G.F.

    2011-03-28

    Over the course of the CesrTA program at Cornell, over 30 Retarding Field Analyzers (RFAs) have been installed in the CESR storage ring, and a great deal of data has been taken with them. These devices measure the local electron cloud density and energy distribution, and can be used to evaluate the efficacy of different cloud mitigation techniques. Obtaining a quantitative understanding of RFA data requires use of cloud simulation programs, as well as a detailed model of the detector itself. In a drift region, the RFA can be modeled by postprocessing the output of a simulation code, and one can obtain best-fit values for important simulation parameters with a chi-square minimization method.

  3. Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska

    NASA Astrophysics Data System (ADS)

    Bonin, J. A.; Chambers, D. P.

    2012-12-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.

  4. Mid-IR enhanced laser ablation molecular isotopic spectrometry

    NASA Astrophysics Data System (ADS)

    Brown, Staci; Ford, Alan; Akpovo, Codjo A.; Johnson, Lewis

    2016-08-01

    A double-pulsed laser-induced breakdown spectroscopy (DP-LIBS) technique utilizing wavelengths in the mid-infrared (MIR) for the second pulse, referred to as double-pulse LAMIS (DP-LAMIS), was examined for its effect on detection limits compared to single-pulse laser ablation molecular isotopic spectrometry (LAMIS). A MIR carbon dioxide (CO2) laser pulse at 10.6 μm was employed to enhance spectral emissions from nanosecond-laser-induced plasma via mid-IR reheating and, in turn, to improve the determination of the relative abundance of isotopes in a sample. This technique was demonstrated on a collection of 10BO and 11BO molecular spectra created from enriched boric acid (H3BO3) isotopologues in varying concentrations. Effects on the overall ability of both LAMIS and DP-LAMIS to detect the relative abundance of boron isotopes in a starting sample were considered. Least-squares fitting to theoretical models was used to deduce plasma parameters and to assess the reproducibility of the results. Furthermore, some optimization of the conditions for the enhanced emission was achieved, along with a comparison of the overall emission intensity, plasma density, and plasma temperature generated by the two techniques.

  5. Strain gage selection in loads equations using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique that uses a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.

  6. Does money matter in inflation forecasting?

    NASA Astrophysics Data System (ADS)

    Binner, J. M.; Tino, P.; Tepper, J.; Anderson, R.; Jones, B.; Kendall, G.

    2010-11-01

    This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists’ long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
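
    Kernel recursive least squares itself is an online algorithm; as a simpler batch relative, the sketch below fits a toy nonlinear series with kernel ridge regression under an RBF kernel. The data and hyperparameters are illustrative, not those of the paper.

```python
import numpy as np

def rbf_kernel(a, b, gamma=10.0):
    # Gram matrix of the Gaussian (RBF) kernel for 1-D inputs
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

x = np.linspace(0.0, 1.0, 11)
y = np.sin(2 * np.pi * x)                 # toy nonlinear "series"

lam = 1e-3                                # ridge regularization strength
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)   # dual weights

def predict(x_new):
    # prediction is a kernel-weighted sum over the training inputs
    return rbf_kernel(np.atleast_1d(x_new), x) @ alpha

print(predict(0.25)[0])                   # close to sin(pi/2) = 1
```

    The recursive variant updates `alpha` one observation at a time instead of solving the full system, which is what makes it usable as a finite-memory online predictor.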

  7. Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.

    PubMed

    Schneider, Martin; Iskander, D Robert; Collins, Michael J

    2009-02-01

    High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
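
    The Levenberg-Marquardt iteration named above can be written compactly for any small nonlinear least-squares problem. The two-parameter exponential model below is a stand-in chosen for brevity, not the Zernike-based rational-function model of the paper.

```python
import numpy as np

def model(p, x):
    # stand-in two-parameter model: a * exp(-b * x)
    a, b = p
    return a * np.exp(-b * x)

def jacobian(p, x):
    a, b = p
    e = np.exp(-b * x)
    return np.column_stack([e, -a * x * e])   # derivatives w.r.t. a and b

def levenberg_marquardt(x, y, p0, n_iter=50, mu=1e-3):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - model(p, x)                   # residuals
        J = jacobian(p, x)
        # damped normal equations: (J^T J + mu I) dp = J^T r
        dp = np.linalg.solve(J.T @ J + mu * np.eye(len(p)), J.T @ r)
        if np.sum((y - model(p + dp, x)) ** 2) < np.sum(r ** 2):
            p, mu = p + dp, mu * 0.5          # accept step, relax damping
        else:
            mu *= 10.0                        # reject step, raise damping
    return p

x = np.linspace(0, 2, 40)
y = model([2.0, 1.3], x)                      # noise-free synthetic data
p_fit = levenberg_marquardt(x, y, p0=[1.0, 0.5])
print(np.round(p_fit, 3))                     # ~[2.0, 1.3]
```

    The damping parameter interpolates between gradient descent (large mu) and Gauss-Newton (small mu), which is what makes the method robust far from the minimum yet fast near it.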

  8. Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador

    NASA Astrophysics Data System (ADS)

    Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenańo, X. E.

    2017-06-01

    Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of the town of Guayaquil, Ecuador. The geostatistical approach was chosen because it yields an estimation error alongside the prediction map. The open source statistical software R, principally the geoR, gstat and RGeostats libraries, was used. Exploratory data analysis (EDA), trend analysis and structural analysis were carried out. An automatic model fit by iterative least squares, among other fitting procedures, was employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as external drift and universal kriging were used to produce a detailed map of geoid undulation. The estimation uncertainty fell within the interval [-0.5; +0.5] m for errors, with a maximum estimation standard deviation of 2 mm for the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than publicly available Earth gravitational models for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of combining geoid undulations from Global Navigation Satellite Systems, leveling field measurements, and geostatistical techniques for use in high-accuracy engineering projects.

  9. Dielectric relaxation studies of binary mixture of β-picoline and methanol using time domain reflectometry at different temperatures

    NASA Astrophysics Data System (ADS)

    Trivedi, C. M.; Rana, V. A.; Hudge, P. G.; Kumbharkhane, A. C.

    2016-08-01

    Complex permittivity spectra of binary mixtures of varying concentrations of β-picoline and methanol (MeOH) have been obtained using the time domain reflectometry (TDR) technique over the frequency range 10 MHz to 25 GHz at temperatures of 283.15, 288.15, 293.15 and 298.15 K. The dielectric relaxation parameters, namely the static permittivity (ɛ0), the high frequency limit permittivity (ɛ∞1) and the relaxation time (τ), were determined by fitting the complex permittivity data to the single Debye/Cole-Davidson model. The complex nonlinear least squares (CNLS) fitting procedure was carried out using the LEVMW software. The excess permittivity (ɛ0E) and the excess inverse relaxation time (1/τ)E, which contain information regarding molecular structure and interactions between polar liquids, were also determined. From the experimental data, parameters such as the effective Kirkwood correlation factor (geff), the Bruggeman factor (fB) and some thermodynamic parameters have been calculated. Excess parameters were fitted to the Redlich-Kister polynomial equation. The values of static permittivity and relaxation time increase nonlinearly with increasing mole fraction of MeOH at all temperatures. The values of the excess static permittivity (ɛ0E) and the excess inverse relaxation time (1/τ)E are negative for the studied β-picoline-MeOH system at all temperatures.

  10. The effect of wall thickness distribution on mechanical reliability and strength in unidirectional porous ceramics.

    PubMed

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J

    2016-01-01

    Macroporous ceramics exhibit an intrinsic strength variability caused by the random distribution of defects in their structure. However, the precise role of microstructural features, other than pore volume, on reliability is still unknown. Here, we analyze the applicability of the Weibull analysis to unidirectional macroporous yttria-stabilized-zirconia (YSZ) prepared by ice-templating. First, we performed crush tests on samples with controlled microstructural features with the loading direction parallel to the porosity. The compressive strength data were fitted using two different fitting techniques, ordinary least squares and Bayesian Markov Chain Monte Carlo, to evaluate whether Weibull statistics are an adequate descriptor of the strength distribution. The statistical descriptors indicated that the strength data are well described by the Weibull statistical approach, for both fitting methods used. Furthermore, we assess the effect of different microstructural features (volume, size, densification of the walls, and morphology) on Weibull modulus and strength. We found that the key microstructural parameter controlling reliability is wall thickness. In contrast, pore volume is the main parameter controlling the strength. The highest Weibull modulus ([Formula: see text]) and mean strength (198.2 MPa) were obtained for the samples with the smallest and narrowest wall thickness distribution (3.1 μm) and a lower pore volume (54.5%).
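
    The ordinary-least-squares route to a Weibull modulus can be sketched as follows. The strengths are synthetic draws from a known Weibull distribution, used only to check that the regression recovers the modulus; they are not the YSZ measurements.

```python
import math
import random

# Synthetic strengths drawn from a known Weibull distribution so that the
# fit can be checked against the true modulus.
random.seed(1)
m_true, s0 = 10.0, 200.0                   # modulus and scale (MPa)
strengths = sorted(s0 * (-math.log(1.0 - random.random())) ** (1.0 / m_true)
                   for _ in range(200))

# assign failure probabilities to the ranked strengths and linearize:
# ln(ln(1/(1-F))) = m * ln(sigma) - m * ln(sigma_0)
n = len(strengths)
xs = [math.log(s) for s in strengths]
ys = [math.log(math.log(1.0 / (1.0 - (i + 0.5) / n))) for i in range(n)]

# the ordinary-least-squares slope is the Weibull modulus estimate
xbar, ybar = sum(xs) / n, sum(ys) / n
m_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(round(m_hat, 1))                     # near the true modulus of 10
```

    The Bayesian MCMC route fits the same distribution but returns a posterior over the modulus rather than a point estimate.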

  11. The effect of wall thickness distribution on mechanical reliability and strength in unidirectional porous ceramics

    NASA Astrophysics Data System (ADS)

    Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J.

    2016-01-01

    Macroporous ceramics exhibit an intrinsic strength variability caused by the random distribution of defects in their structure. However, the precise role of microstructural features, other than pore volume, on reliability is still unknown. Here, we analyze the applicability of the Weibull analysis to unidirectional macroporous yttria-stabilized-zirconia (YSZ) prepared by ice-templating. First, we performed crush tests on samples with controlled microstructural features with the loading direction parallel to the porosity. The compressive strength data were fitted using two different fitting techniques, ordinary least squares and Bayesian Markov Chain Monte Carlo, to evaluate whether Weibull statistics are an adequate descriptor of the strength distribution. The statistical descriptors indicated that the strength data are well described by the Weibull statistical approach, for both fitting methods used. Furthermore, we assess the effect of different microstructural features (volume, size, densification of the walls, and morphology) on Weibull modulus and strength. We found that the key microstructural parameter controlling reliability is wall thickness. In contrast, pore volume is the main parameter controlling the strength. The highest Weibull modulus (?) and mean strength (198.2 MPa) were obtained for the samples with the smallest and narrowest wall thickness distribution (3.1 μm) and a lower pore volume (54.5%).

  12. Discrete square root filtering - A survey of current techniques.

    NASA Technical Reports Server (NTRS)

    Kaminskii, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.

    1971-01-01

    Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.

  13. A survey of various enhancement techniques for square rings antennas

    NASA Astrophysics Data System (ADS)

    Mumin, Abdul Rashid O.; Alias, Rozlan; Abdullah, Jiwa; Abdulhasan, Raed Abdulkareem; Ali, Jawad; Dahlan, Samsul Haimi; Awaleh, Abdisamad A.

    2017-09-01

    The square ring has become a popular shape in antenna design, and researchers have developed it in a variety of configurations. It offers high efficiency and a simple calculation method, and performance enhancement is the main reason for adopting this geometry; multi-objective antenna designs are also considered. In this paper, different studies of square ring antennas are discussed. The shape has been developed along five lines: gain enhancement, dual band antennas, reconfigurable antennas, complementary split-ring resonators (CSRR), and circular polarization. Comparisons between these configurations are also demonstrated for square ring shapes. In particular, square ring slots have improved gain by 4.3 dB, provided dual band resonance at 1.4 and 2.6 GHz with circular polarization at 1.54 GHz, and enabled multi-mode antennas, while a square ring strip achieved excellent band rejection at 5.5 GHz in a UWB antenna. The square ring slot length, which is tied to the free-space wavelength, is the most influential factor in antenna performance. Finally, comparisons between these techniques are presented.

  14. Development and assessment of the Quality of Life in Childhood Epilepsy Questionnaire (QOLCE-16).

    PubMed

    Goodwin, Shane W; Ferro, Mark A; Speechley, Kathy N

    2018-03-01

    The aim of this study was to develop and validate a brief version of the Quality of Life in Childhood Epilepsy Questionnaire (QOLCE). A secondary aim was to compare the results described in previously published studies using the QOLCE-55 with those obtained using the new brief version. Data come from 373 children involved in the Health-related Quality of Life in Children with Epilepsy Study, a multicenter prospective cohort study. Item response theory (IRT) methods were used to assess dimensionality and item properties and to guide the selection of items. Replication of results using the brief measure was conducted with multiple regression, multinomial regression, and latent mixture modeling techniques. IRT methods identified a bi-factor graded response model that best fits the data. Thirty-nine items were removed, resulting in a 16-item QOLCE (QOLCE-16) with an equal number of items in all 4 domains of functioning (Cognitive, Emotional, Social, and Physical). Model fit was excellent: Comparative Fit Index = 0.99; Tucker-Lewis Index = 0.99; root mean square error of approximation = 0.052 (90% confidence interval [CI] 0.041-0.064); weighted root mean square = 0.76. Results that were reported previously using the QOLCE-55 and QOLCE-76 were comparable to those generated using the QOLCE-16. The QOLCE-16 is a multidimensional measure of health-related quality of life (HRQoL) with good psychometric properties and a short estimated completion time. It is notable that the items were calibrated using multidimensional IRT methods to create a measure that conforms to conventional definitions of HRQoL. The QOLCE-16 is an appropriate measure for both clinicians and researchers wanting to record HRQoL information in children with epilepsy. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.

  15. Measuring Differential Delays With Sine-Squared Pulses

    NASA Technical Reports Server (NTRS)

    Hurst, Robert N.

    1994-01-01

    Technique for measuring differential delays among red, green, and blue components of video signal transmitted on different parallel channels exploits sine-squared pulses that are parts of standard test signals transmitted during vertical blanking interval of frame period. Technique does not entail expense of test-signal generator. Also applicable to nonvideo signals including sine-squared pulses.

  16. Modelling by partial least squares the relationship between the HPLC mobile phases and analytes on phenyl column.

    PubMed

    Markopoulou, Catherine K; Kouskoura, Maria G; Koundourellis, John E

    2011-06-01

    Twenty-five descriptors and 61 structurally different analytes have been used in a partial least squares (PLS) projection to latent structures technique in order to study their chromatographic interaction mechanism on a phenyl column. According to the model, 240 different retention times of the analytes, expressed as the Y variable (log k) at different % MeOH mobile-phase concentrations, have been correlated with their theoretically most important structural or molecular descriptors. The goodness of fit was estimated by the coefficient of multiple determination r(2) (0.919) and the root mean square error of estimation (RMSEE = 0.1283), with a predictive ability (Q(2)) of 0.901. The model was further validated using cross-validation (CV), by 20 response permutations r(2) (0.0, 0.0146), Q(2) (0.0, -0.136), and by external prediction. The contribution of certain interaction mechanisms between the analytes, the mobile phase and the column, whether proportional or counterbalancing, is also studied. To evaluate the influence of each variable on Y in a PLS model, a VIP (variable importance in the projection) plot provides evidence that lipophilicity (expressed as Log D, Log P), polarizability, refractivity and the eluting power of the mobile phase are dominant in the retention mechanism on a phenyl column. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. The self-transcendence scale: an investigation of the factor structure among nursing home patients.

    PubMed

    Haugan, Gørill; Rannestad, Toril; Garåsen, Helge; Hammervold, Randi; Espnes, Geir Arild

    2012-09-01

    Self-transcendence, the ability to expand personal boundaries in multiple ways, has been found to provide well-being. The purpose of this study was to examine the dimensionality of the Norwegian version of the Self-Transcendence Scale, which comprises 15 items. Reed's empirical nursing theory of self-transcendence provided the theoretical framework; self-transcendence includes an interpersonal, intrapersonal, transpersonal, and temporal dimension. Cross-sectional data were obtained from a sample of 202 cognitively intact elderly patients in 44 Norwegian nursing homes. Exploratory factor analysis revealed two and four internally consistent dimensions of self-transcendence, explaining 35.3% (two factors) and 50.7% (four factors) of the variance, respectively. Confirmatory factor analysis indicated that the hypothesized two- and four-factor models fitted better than the one-factor model (χ2, root mean square error of approximation, standardized root mean square residual, normed fit index, nonnormed fit index, comparative fit index, goodness-of-fit index, and adjusted goodness-of-fit index). The findings indicate self-transcendence as a multifactorial construct; at present, we conclude that the two-factor model might be the most accurate and reasonable measure of self-transcendence. This research generates insights into the application of the widely used Self-Transcendence Scale by investigating its psychometric properties through confirmatory factor analysis. It also generates new research questions on the associations between self-transcendence and well-being.

  18. POOLMS: A computer program for fitting and model selection for two level factorial replication-free experiments

    NASA Technical Reports Server (NTRS)

    Amling, G. E.; Holms, A. G.

    1973-01-01

    A computer program is described that performs a statistical multiple-decision procedure called chain pooling, in which the number of mean squares assigned to the error variance is conditioned on the relative magnitudes of the mean squares. Model selection is done according to user-specified levels of type 1 or type 2 error probabilities.

  19. Fitting ordinary differential equations to short time course data.

    PubMed

    Brewer, Daniel; Barenco, Martino; Callard, Robin; Hubank, Michael; Stark, Jaroslav

    2008-02-28

    Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation-maximization methods widely used for problems with nuisance parameters or missing data.
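
    The "linear in parameters" property the authors exploit can be shown in miniature: once an estimate of the derivative is available (here crude centered differences on dense, noise-free data, standing in for their spline collocation scheme), the parameter step reduces to ordinary linear least squares.

```python
import numpy as np

# dx/dt = -a*x + c is linear in (a, c), so given a derivative estimate the
# parameters follow from a single linear least-squares solve.
a_true, c_true = 1.5, 0.5
t = np.linspace(0.0, 3.0, 301)
x = (1.0 - c_true / a_true) * np.exp(-a_true * t) + c_true / a_true  # exact solution, x(0) = 1

dxdt = np.gradient(x, t)                    # finite-difference derivative estimate
A = np.column_stack([-x, np.ones_like(x)])  # model: dx/dt = [-x, 1] @ [a, c]
params, *_ = np.linalg.lstsq(A, dxdt, rcond=None)
print(np.round(params, 2))                  # ~[1.5, 0.5]
```

    With few, noisy time points this naive derivative estimate degrades badly, which is exactly why the paper alternates the least-squares step with repeated re-estimates of the noise-free variable values.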

  20. The third spectrum of rhenium (Re III): Analysis of the (5d5 + 5d46s)-(5d46p + 5d36s6p) transition array

    NASA Astrophysics Data System (ADS)

    Azarov, Vladimir I.; Gayasov, Robert R.

    2018-05-01

    The spectrum of rhenium was observed in the (1017-2074) Å wavelength region. The (5d5 + 5d46s)-(5d46p + 5d36s6p) transition array of doubly ionized rhenium, Re III, has been investigated and 1305 spectral lines have been classified in this region. The analysis has led to the determination of the 5d5, 5d46s, 5d46p and 5d36s6p configurations. Seventy levels of the 5d5 and 5d46s configurations in the even system and 161 levels of the 5d46p and 5d36s6p configurations in the odd system have been established. The orthogonal operators technique was used to calculate the level structure and transition probabilities. The energy parameters have been determined by a least-squares fit to the observed levels. Calculated transition probability and energy values, as well as LS-compositions obtained from the fitted parameters, are presented.

  1. [Microplate luminometry for toxicity bioassay of chemicals on luciferase].

    PubMed

    Ge, Hui-Lin; Liu, Shu-Shen; Chen, Fu; Luo, Jin-Hui; Lü, Dai-Zhu; Su, Bing-Xia

    2013-10-01

    A new microplate luminometry method for the toxicity bioassay of chemicals on firefly luciferase was developed, using a multifunctional microplate reader (SpectraMax M5) to measure the luminous intensity of luciferase. Effects of luciferase concentration, luciferin concentration, ATP concentration, pH, temperature, and reaction time on the luminescence were systematically investigated. It was found that ATP exerted a biphasic response on the luciferase luminescence and that the maximum relative light units (RLU) occurred at an ATP concentration of 1.1 x 10(-4) mol x L(-1). The method was successfully employed in toxic effect tests of NaF, NaCl, KBr and NaBF4 on luciferase. Using a nonlinear least squares technique, the dose-response curves (DRC) of the 4 chemicals were accurately fitted, with the coefficient of determination (R2) between the fitted and observed responses being greater than 0.99. The median effective concentrations (EC50) of the 4 chemicals were accurately measured from the DRC models. Compared with methods in the literature, this bioassay is a fast, easy-to-operate and cost-effective method with high accuracy.
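
    A dose-response fit of this general kind can be sketched with a hypothetical two-parameter logistic model; the concentrations, effects, and the crude grid search below are illustrative stand-ins for the study's nonlinear least-squares procedure.

```python
import math

def hill(conc, ec50, h):
    # fraction of maximal effect at a given concentration
    return 1.0 / (1.0 + (ec50 / conc) ** h)

ec50_true, h_true = 0.01, 1.8              # hypothetical "true" values
concs = [10 ** e for e in (-4, -3.5, -3, -2.5, -2, -1.5, -1)]
effects = [hill(c, ec50_true, h_true) for c in concs]

def sse(ec50, h):
    # sum of squared residuals between model and (synthetic) data
    return sum((y - hill(c, ec50, h)) ** 2 for c, y in zip(concs, effects))

# crude grid search over (log10 EC50, slope); a gradient-based nonlinear
# least-squares solver would refine this in practice
best = min(((sse(10 ** le, h), 10 ** le, h)
            for le in (x / 100 for x in range(-300, -100))
            for h in (x / 10 for x in range(5, 40))),
           key=lambda t: t[0])
print(best[1], best[2])                    # EC50 ~0.01, slope ~1.8
```

    The EC50 is read directly from the fitted curve, which is why an accurate DRC fit translates into an accurate EC50.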

  2. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl.

    PubMed

    De Beuckeleer, Liene I; Herrebout, Wouter A

    2016-02-05

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before. Copyright © 2015 Elsevier B.V. All rights reserved.
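
    The degree-selection logic described here, in which the polynomial degree is increased only while an information criterion improves, can be sketched on synthetic data; the quadratic "absorbance" curve below is invented, not a cryosolution spectrum.

```python
import numpy as np

# Synthetic "absorbance vs. monomer concentration" data: a quadratic plus
# noise, standing in for the concentration-absorbance relation in the text.
rng = np.random.default_rng(3)
conc = np.linspace(0.1, 1.0, 40)
absorb = 0.8 * conc + 0.5 * conc**2 + rng.normal(0.0, 0.01, conc.size)

def aic(deg):
    # Akaike information criterion for a degree-`deg` polynomial fit
    coef = np.polyfit(conc, absorb, deg)
    rss = np.sum((absorb - np.polyval(coef, conc)) ** 2)
    return conc.size * np.log(rss / conc.size) + 2 * (deg + 1)

best_deg = min(range(1, 6), key=aic)
print(best_deg)                           # typically selects degree 2
```

    Penalizing each extra coefficient is what suppresses the compensation effects the paper describes: a higher-degree term is kept only if it reduces the residual by more than the penalty.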

  3. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  4. HOLEGAGE 1.0 - Strain-Gauge Drilling Analysis Program

    NASA Technical Reports Server (NTRS)

    Hampton, Roy V.

    1992-01-01

    Interior stresses inferred from changes in surface strains as hole is drilled. Computes stresses using strain data from each drilled-hole depth layer. Planar stresses computed in three ways: least-squares fit for linear variation with depth, integral method to give incremental stress data for each layer, and/or linear fit to integral data. Written in FORTRAN 77.

  5. An Investigation of the Sample Performance of Two Nonnormality Corrections for RMSEA

    ERIC Educational Resources Information Center

    Brosseau-Liard, Patricia E.; Savalei, Victoria; Li, Libo

    2012-01-01

    The root mean square error of approximation (RMSEA) is a popular fit index in structural equation modeling (SEM). Typically, RMSEA is computed using the normal theory maximum likelihood (ML) fit function. Under nonnormality, the uncorrected sample estimate of the ML RMSEA tends to be inflated. Two robust corrections to the sample ML RMSEA have…
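
    For reference, one common form of the sample RMSEA is computed from the ML chi-square statistic, its degrees of freedom, and the sample size; the numbers below are purely illustrative.

```python
import math

def rmsea(chi2, df, n):
    # excess chi-square over its expectation, scaled by df and sample size
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(chi2=120.0, df=60, n=400), 4))   # prints 0.0501
```

    Under nonnormality the chi-square statistic itself is inflated, which is how the inflation propagates into the uncorrected sample RMSEA.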

  6. Graph-Theoretic Representations for Proximity Matrices through Strongly-Anti-Robinson or Circular Strongly-Anti-Robinson Matrices.

    ERIC Educational Resources Information Center

    Hubert, Lawrence; Arabie, Phipps; Meulman, Jacqueline

    1998-01-01

    Introduces a method for fitting order-constrained matrices that satisfy the strongly anti-Robinson restrictions (SAR). The method permits a representation of the fitted values in a (least-squares) SAR approximating matrix as lengths of paths in a graph. The approach is illustrated with a published proximity matrix. (SLD)

  7. Deriving the Regression Equation without Using Calculus

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    2004-01-01

    Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…

  8. 46 CFR 116.435 - Doors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... boundary in which the doors are fitted; (5) Door frames must be of rigid construction and provide at least... inches) square. A self-closing hinged or pivoted steel or equivalent material cover must be fitted in the...) A door in a bulkhead required to be A-60, A-30, or A-15 Class must be of hollow steel or equivalent...

  9. 46 CFR 116.435 - Doors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... boundary in which the doors are fitted; (5) Door frames must be of rigid construction and provide at least... inches) square. A self-closing hinged or pivoted steel or equivalent material cover must be fitted in the...) A door in a bulkhead required to be A-60, A-30, or A-15 Class must be of hollow steel or equivalent...

  10. Land area change in coastal Louisiana (1932 to 2016)

    USGS Publications Warehouse

    Couvillion, Brady R.; Beck, Holly; Schoolmaster, Donald; Fischer, Michelle

    2017-07-12

    Coastal Louisiana wetlands are one of the most critically threatened environments in the United States. These wetlands are in peril because Louisiana currently experiences greater coastal wetland loss than all other States in the contiguous United States combined. The analyses of landscape change presented here have utilized historical surveys, aerial, and satellite data to quantify landscape changes from 1932 to 2016. Analyses show that coastal Louisiana has experienced a net change in land area of approximately -4,833 square kilometers (modeled estimate: -5,197 +/- 443 square kilometers) from 1932 to 2016. This net change in land area amounts to a decrease of approximately 25 percent of the 1932 land area. Previous studies have presented linear rates of change over multidecadal time periods which unintentionally suggest that wetland change occurs at a constant rate, although in many cases, wetland change rates vary with time. A penalized regression spline technique was used to determine the model that best fit the data, rather than fitting the data with linear trends. Trend analyses from model fits indicate that coastwide rates of wetland change have varied from -83.5 +/- 11.8 square kilometers per year to -28.01 +/- 16.37 square kilometers per year. To put these numbers into perspective, this equates to long-term average loss rates of approximately an American football field’s worth of coastal wetlands within 34 minutes when losses are rapid to within 100 minutes at more recent, slower rates. Of note is the slowing of the rate of wetland change since its peak in the mid-1970s. Not only have rates of wetland loss been decreasing since that time, a further rate reduction has been observed since 2010.
Possible reasons for this reduction include recovery from lows affected by the hurricanes of 2005 and 2008, the lack of major storms in the past 8 years, a possible slowing of subsidence rates, the reduction in and relocation of oil and gas extraction and infrastructure since the peak of such activities in the late 1960s, and restoration activities. In addition, many wetlands in more exposed positions in the landscape have already been lost. Most notable of the factors listed above is the lack of major storms over the past 8 years. The observed coastwide net “stability” in land area observed over the past 6–8 years does not imply that loss has ceased. Future disturbance events such as a major hurricane impact could change the trajectory of the rates. Sea-level rise is projected to increase at an exponential rate, and that would also expedite the rate of wetland loss.

  11. Further Development of Rotating Rake Mode Measurement Data Analysis

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Sutliff, Daniel L.

    2013-01-01

    The Rotating Rake mode measurement system was designed to measure acoustic duct modes generated by a fan stage. After analysis of the measured data, the mode amplitudes and phases were quantified. For low-speed fans within axisymmetric ducts, mode power levels computed from rotating rake measured data would agree with the far-field power levels on a tone-by-tone basis. However, this agreement required that the sound from the noise sources within the duct propagated outward without reflection at the duct exit, and previous studies suggested that conditions could exist where significant reflections would occur. To directly measure the modes propagating in both directions within a duct, a second rake was mounted to the rotating system with an offset in both the axial and the azimuthal directions. The rotating rake data analysis technique was extended to include the data measured by the second rake. The analysis resulted in a set of circumferential mode levels at each of the two rake microphone locations. Radial basis functions were then least-squares fit to these data to obtain the radial mode amplitudes for the modes propagating in both directions within the duct. The fit equations were also modified to allow evanescent mode amplitudes to be computed. This extension of the rotating rake data analysis technique was tested using simulated data, numerical-code-produced data, and preliminary in-duct measured data.
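The radial-mode recovery step reduces to a linear least-squares solve. A minimal sketch, with invented mode order, eigenvalues, rake radii, and amplitudes (Bessel functions stand in for the hard-wall radial basis):

```python
import numpy as np
from scipy.special import jv

# Hypothetical stand-in for the rotating-rake analysis step: given complex
# circumferential-mode pressures at the rake microphone radii, least-squares
# fit radial basis functions (Bessel functions here) to recover the radial
# mode amplitudes. Mode order, radii, and amplitudes are invented.
m = 2                                     # circumferential mode order
radii = np.linspace(0.1, 1.0, 12)         # microphone radial positions (normalized)
roots = np.array([3.054, 6.706, 9.969])   # approx. first three zeros of J2'

# Design matrix: column j is the j-th radial basis function at the mic radii.
A = np.column_stack([jv(m, k * radii) for k in roots]).astype(complex)

true_amps = np.array([1.0 + 0.5j, 0.3 - 0.2j, 0.05 + 0.0j])
pressure = A @ true_amps                  # simulated rake measurement

# Complex least-squares solve for the radial mode amplitudes.
amps, *_ = np.linalg.lstsq(A, pressure, rcond=None)
print(np.round(amps, 3))
```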

  12. Design/Analysis of the JWST ISIM Bonded Joints for Survivability at Cryogenic Temperatures

    NASA Technical Reports Server (NTRS)

    Bartoszyk, Andrew; Johnston, John; Kaprielian, Charles; Kuhn, Jonathan; Kunt, Cengiz; Rodini, Benjamin; Young, Daniel

    1990-01-01

    A major design and analysis challenge for the JWST ISIM structure is thermal survivability of metal/composite bonded joints below the cryogenic temperature of 30 K (-405 °F). Current bonded joint concepts include internal invar plug fittings, external saddle titanium/invar fittings, and composite gusset/clip joints, all bonded to M55J/954-6 and T300/954-6 hybrid composite tubes (75 mm square). Analytical experience and design work done on metal/composite bonded joints at temperatures below that of liquid nitrogen are limited, and important analysis tools, material properties, and failure criteria for composites at cryogenic temperatures are sparse in the literature. Adding to this challenge is the difficulty of testing for these required tools and properties at cryogenic temperatures. To gain confidence in analyzing and designing the ISIM joints, a comprehensive joint development test program has been planned and is currently running. The test program is designed to produce the required analytical tools and develop a composite failure criterion for bonded joint strengths at cryogenic temperatures. Finite element analysis is used to design simple test coupons that simulate anticipated stress states in the flight joints; subsequently, the test results are used to correlate the analysis technique for the final design of the bonded joints. In this work, we present an overview of the analysis and test methodology, current results, and working joint designs based on the developed techniques and properties.

  13. Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.

    PubMed

    Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette

    2011-10-01

    To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital in each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. Confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in the Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.
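An unscaled χ² difference test for nested models can be sketched in a few lines; the Satorra-Bentler variant used in the study additionally rescales the statistics for non-normality, and all fit statistics and degrees of freedom below are invented:

```python
from scipy.stats import chi2

# Generic (unscaled) chi-square difference test for nested CFA models.
# The more constrained 10-factor model has more degrees of freedom;
# both chi-square values and dfs here are invented for illustration.
chi2_10factor, df_10 = 2310.4, 1259   # alternative 10-factor model
chi2_12factor, df_12 = 2150.7, 1238   # original 12-factor model (less constrained)

diff_stat = chi2_10factor - chi2_12factor
diff_df = df_10 - df_12
p = chi2.sf(diff_stat, diff_df)
print(f"chi2 diff = {diff_stat:.1f} on {diff_df} df, p = {p:.2g}")

# A small p favors the less constrained (12-factor) model.
```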

  14. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    PubMed Central

    Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini

    2012-01-01

    In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.
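The spirit of FOS — greedily adding the spectral term with the largest error reduction and stopping when the best candidate no longer beats a white-noise reduction — can be sketched as follows. This is not Korenberg's full algorithm (no orthogonal recursion), and the signal, noise, and threshold constant are invented:

```python
import numpy as np

# Greedy sinusoid selection with a white-noise stopping rule, sketching the
# fast-orthogonal-search idea on an invented two-tone signal in noise.
rng = np.random.default_rng(1)
n = 512
t = np.arange(n) / n
y = (2.0 * np.sin(2 * np.pi * 5 * t)
     + 0.8 * np.sin(2 * np.pi * 13 * t)
     + rng.normal(0, 0.3, n))

freqs = np.arange(1, 60)
residual = y.copy()
picked = []
for _ in range(10):                         # cap model size for the sketch
    best_red, best_f, best_fit = -np.inf, None, None
    for f in freqs:
        X = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
        coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
        fit = X @ coef
        red = np.mean(residual**2) - np.mean((residual - fit)**2)
        if red > best_red:
            best_red, best_f, best_fit = red, f, fit
    # Heuristic stopping rule: fitting a 2-parameter pair to pure noise
    # reduces the MSE by ~2*var/n on average; the factor 12 is invented.
    if best_red < 12.0 * np.var(residual) / n:
        break
    residual -= best_fit
    picked.append(best_f)

print("selected frequencies:", picked)
```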

  15. Testing a path-analytic mediation model of how motivational enhancement physiotherapy improves physical functioning in pain patients.

    PubMed

    Cheing, Gladys; Vong, Sinfia; Chan, Fong; Ditchman, Nicole; Brooks, Jessica; Chan, Chetwyn

    2014-12-01

    Pain is a complex phenomenon not easily discerned from psychological, social, and environmental characteristics and is an oft cited barrier to return to work for people experiencing low back pain (LBP). The purpose of this study was to evaluate a path-analytic mediation model to examine how motivational enhancement physiotherapy, which incorporates tenets of motivational interviewing, improves physical functioning of patients with chronic LBP. Seventy-six patients with chronic LBP were recruited from the outpatient physiotherapy department of a government hospital in Hong Kong. The re-specified path-analytic model fit the data very well, χ (2)(3, N = 76) = 3.86, p = .57; comparative fit index = 1.00; and the root mean square error of approximation = 0.00. Specifically, results indicated that (a) using motivational interviewing techniques in physiotherapy was associated with increased working alliance with patients, (b) working alliance increased patients' outcome expectancy and (c) greater outcome expectancy resulted in a reduction of subjective pain intensity and improvement in physical functioning. Change in pain intensity also directly influenced improvement in physical functioning. The effect of motivational enhancement therapy on physical functioning can be explained by social-cognitive factors such as motivation, outcome expectancy, and working alliance. The use of motivational interviewing techniques to increase outcome expectancy of patients and improve working alliance could further strengthen the impact of physiotherapy on rehabilitation outcomes of patients with chronic LBP.

  16. Temperature dependence of Lorentz air-broadening and pressure-shift coefficients of (12)CH4 lines in the 2.3-micron spectral region

    NASA Technical Reports Server (NTRS)

    Devi, V. Malathy; Benner, D. Chris; Smith, M. A. H.; Rinsland, C. P.

    1994-01-01

    High-resolution (0.01/cm) absorption spectra of lean mixtures of CH4 in dry air were recorded with the McMath-Pierce Fourier transform spectrometer (FTS) of the National Solar Observatory on Kitt Peak at various temperatures between 24 and -61 C. The spectra have been analyzed to determine the values at room temperature of pressure-broadened widths and pressure-induced shifts of more than 740 transitions. The temperature dependence of air-broadened widths and pressure-induced shifts was deduced for approx. 370 transitions in the nu(sub 1) + nu(sub 4), nu(sub 3) + nu(sub 4), and nu(sub 2) + nu(sub 3) bands of (12)CH4 located between 4118 and 4615/cm. These results were obtained by analyzing a total of 29 spectra simultaneously using a multispectrum non-linear least-squares fitting technique. This technique determined correlated spectral line parameters (e.g., intensity and broadening coefficient) better than the procedure of averaging values obtained by fitting the spectra individually. The method also provided a direct determination of the uncertainties in the retrieved parameters due to random errors. For each band analyzed in this study, the dependence of the various spectral line parameters on the tetrahedral symmetry species and the rotational quantum numbers of the transitions is also presented.
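A multispectrum fit can be sketched by stacking several spectra and sharing line parameters across them. The Lorentzian line shape, pressures, and parameter values below are invented simplifications of the Voigt-based analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a multispectrum fit: two absorption spectra recorded at
# different pressures share one line intensity S and one broadening
# coefficient b, and are fit simultaneously by stacking the residuals.
# Line shape and values are invented; the real analysis fit 29 spectra
# with Voigt profiles and temperature-dependent parameters.
nu = np.linspace(-1, 1, 400)            # wavenumber offset (1/cm)
pressures = np.array([0.5, 1.0])        # atm

def stacked(nu, S, b):
    out = []
    for p in pressures:
        g = b * p                        # Lorentz half-width scales with pressure
        out.append(S * g / (np.pi * (nu**2 + g**2)))
    return np.concatenate(out)

rng = np.random.default_rng(8)
S_true, b_true = 2.0, 0.08
data = stacked(nu, S_true, b_true) + rng.normal(0, 0.05, 2 * nu.size)

(S_fit, b_fit), cov = curve_fit(stacked, nu, data, p0=[1.0, 0.05])
print(f"S = {S_fit:.2f}, b = {b_fit:.3f}")
```

Fitting both spectra at once constrains S and b jointly, which is how the multispectrum approach reduces the intensity/width correlation that plagues single-spectrum fits.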

  17. Deconvolution of Stark broadened spectra for multi-point density measurements in a flow Z-pinch

    DOE PAGES

    Vogman, G. V.; Shumlak, U.

    2011-10-13

    Stark broadened emission spectra, once separated from other broadening effects, provide a convenient non-perturbing means of making plasma density measurements. A deconvolution technique has been developed to measure plasma densities in the ZaP flow Z-pinch experiment. The ZaP experiment uses sheared flow to mitigate MHD instabilities. The pinches exhibit Stark broadened emission spectra, which are captured at 20 locations using a multi-chord spectroscopic system. Spectra that are time- and chord-integrated are well approximated by a Voigt function. The proposed method simultaneously resolves plasma electron density and ion temperature by deconvolving the spectral Voigt profile into constituent functions: a Gaussian function associated with instrument effects and Doppler broadening by temperature; and a Lorentzian function associated with Stark broadening by electron density. The method uses analytic Fourier transforms of the constituent functions to fit the Voigt profile in the Fourier domain. The method is discussed and compared to a basic least-squares fit. The Fourier transform fitting routine requires fewer fitting parameters and shows promise in being less susceptible to instrumental noise and to contamination from neighboring spectral lines. The method is evaluated and tested using simulated lines and is applied to experimental data for the 229.69 nm C III line from multiple chords to determine plasma density and temperature across the diameter of the pinch. As a result, these measurements are used to gain a better understanding of Z-pinch equilibria.
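The Fourier-domain factorization the method exploits can be sketched directly: the transform of a Voigt profile is the product of a Gaussian factor exp(-2π²σ²k²) and a Lorentzian factor exp(-2πγ|k|), so log|F(k)| is linear in σ² and γ. A sketch with invented parameters:

```python
import numpy as np

# Build a numerical Voigt profile by convolving a Gaussian (instrument +
# Doppler) with a Lorentzian (Stark), then recover sigma and gamma by a
# *linear* least-squares fit in the Fourier domain. Parameters invented.
n, dx = 4096, 0.01
x = (np.arange(n) - n // 2) * dx
sigma_true, gamma_true = 0.08, 0.05

gauss = np.exp(-x**2 / (2 * sigma_true**2))
lorentz = gamma_true / (np.pi * (x**2 + gamma_true**2))
voigt = np.convolve(gauss, lorentz, mode="same") * dx

F = np.abs(np.fft.rfft(voigt))
k = np.fft.rfftfreq(n, d=dx)
keep = (k > 0) & (k < 5.0)              # stay above the numerical noise floor

# log F = const - 2*pi^2*sigma^2*k^2 - 2*pi*gamma*k  (linear in sigma^2, gamma)
A = np.column_stack([np.ones(keep.sum()),
                     -2 * np.pi**2 * k[keep]**2,
                     -2 * np.pi * k[keep]])
const, sigma2_fit, gamma_fit = np.linalg.lstsq(A, np.log(F[keep]), rcond=None)[0]
print(f"sigma = {np.sqrt(sigma2_fit):.3f} (true {sigma_true}), "
      f"gamma = {gamma_fit:.3f} (true {gamma_true})")
```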

  18. Deconvolution of Stark broadened spectra for multi-point density measurements in a flow Z-pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogman, G. V.; Shumlak, U.

    2011-10-15

    Stark broadened emission spectra, once separated from other broadening effects, provide a convenient non-perturbing means of making plasma density measurements. A deconvolution technique has been developed to measure plasma densities in the ZaP flow Z-pinch experiment. The ZaP experiment uses sheared flow to mitigate MHD instabilities. The pinches exhibit Stark broadened emission spectra, which are captured at 20 locations using a multi-chord spectroscopic system. Spectra that are time- and chord-integrated are well approximated by a Voigt function. The proposed method simultaneously resolves plasma electron density and ion temperature by deconvolving the spectral Voigt profile into constituent functions: a Gaussian function associated with instrument effects and Doppler broadening by temperature; and a Lorentzian function associated with Stark broadening by electron density. The method uses analytic Fourier transforms of the constituent functions to fit the Voigt profile in the Fourier domain. The method is discussed and compared to a basic least-squares fit. The Fourier transform fitting routine requires fewer fitting parameters and shows promise in being less susceptible to instrumental noise and to contamination from neighboring spectral lines. The method is evaluated and tested using simulated lines and is applied to experimental data for the 229.69 nm C III line from multiple chords to determine plasma density and temperature across the diameter of the pinch. These measurements are used to gain a better understanding of Z-pinch equilibria.

  19. Development and evaluation of social cognitive measures related to adolescent physical activity.

    PubMed

    Dewar, Deborah L; Lubans, David Revalds; Morgan, Philip James; Plotnikoff, Ronald C

    2013-05-01

    This study aimed to develop and evaluate the construct validity and reliability of modernized social cognitive measures relating to physical activity behaviors in adolescents. An instrument was developed based on constructs from Bandura's Social Cognitive Theory and included the following scales: self-efficacy, situation (perceived physical environment), social support, behavioral strategies, and outcome expectations and expectancies. The questionnaire was administered in a sample of 171 adolescents (age = 13.6 ± 1.2 years, females = 61%). Confirmatory factor analysis was employed to examine model-fit for each scale using multiple indices, including chi-square index, comparative-fit index (CFI), goodness-of-fit index (GFI), and the root mean square error of approximation (RMSEA). Reliability properties were also examined (ICC and Cronbach's alpha). Each scale represented a statistically sound measure: fit indices indicated each model to be an adequate-to-exact fit to the data; internal consistency was acceptable to good (α = 0.63-0.79); rank order repeatability was strong (ICC = 0.82-0.91). Results support the validity and reliability of social cognitive scales relating to physical activity among adolescents. As such, the developed scales have utility for the identification of potential social cognitive correlates of youth physical activity, mediators of physical activity behavior changes and the testing of theoretical models based on Social Cognitive Theory.

  20. A general approach to the testing of binary solubility systems for thermodynamic consistency. Consolidated Fuel Reprocessing Program

    NASA Astrophysics Data System (ADS)

    Hamm, L. L.; Vanbrunt, V.

    1982-08-01

    The numerical solution to the ordinary differential equation which describes the high-pressure vapor-liquid equilibria of a binary system, where one of the components is supercritical and exists as a noncondensable gas in the pure state, is considered with emphasis on the implicit Runge-Kutta and orthogonal collocation methods. Some preliminary results indicate that the implicit Runge-Kutta method is superior. Due to the extreme nonlinearity of thermodynamic properties in the region near the critical locus, an extended cubic spline fitting technique is devised for correlating the P-x data. The least-squares criterion is employed in smoothing the experimental data. The technique could easily be applied to any thermodynamic data by changing the endpoint requirements. The volumetric behavior of the systems must be given or predicted in order to perform thermodynamic consistency tests. A general procedure is developed for predicting the required volumetric behavior, and some indication of the expected limit of accuracy is given.

  1. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2017-01-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, inverse-square-distance weighting and ordinary co-Kriging (OCK) have been used and compared with each other. Elevation, slope, and aspect data for each station have also been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
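The inverse-square-distance baseline can be sketched in a few lines; the station coordinates and precipitation values below are invented:

```python
import numpy as np

# Minimal inverse-square-distance interpolation, one of the conventional
# techniques the study compares against co-Kriging. Station data invented.
rng = np.random.default_rng(2)
stations = rng.uniform(0, 100, size=(30, 2))                # station x, y (km)
precip = 50 + 0.3 * stations[:, 0] + rng.normal(0, 2, 30)   # mean annual precip (cm)

def idw(points, values, query, power=2.0, eps=1e-12):
    """Predict at `query` as a distance-weighted mean of station values."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                   # query coincides with a station
        return values[d.argmin()]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

est = idw(stations, precip, np.array([50.0, 50.0]))
print(f"IDW estimate at (50, 50): {est:.1f} cm")
```

Unlike co-Kriging, this baseline cannot exploit secondary variables such as elevation or aspect, which is the gap the multivariate geostatistical approach addresses.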

  2. Three-dimensional simulation of human teeth and its application in dental education and research.

    PubMed

    Koopaie, Maryam; Kolahdouz, Sajad

    2016-01-01

    Background: A comprehensive database, comprising geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, a cross-section picture of the three-dimensional model of the teeth was used. CT-Scan images were used in the first method. The space between the cross-sectional images was about 200 to 500 micrometers. Hard tissue margin was detected in each image by Matlab (R2009b), as image processing software. The images were transferred to Solidworks 2015 software. The tooth border curve was fitted with B-spline curves, using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and the optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and the finite element and training software through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitated the learning of the dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing the implants and dental prosthesis are possible.
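The least-squares B-spline boundary-fitting step can be sketched with SciPy's `splprep`; the lobed contour and noise level below are invented stand-ins for a hard-tissue margin extracted from a CT slice:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Noisy points sampled along a lobed, tooth-like cross-section contour are
# fit with a smoothing (least-squares) B-spline. Contour and noise invented.
rng = np.random.default_rng(3)
theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
r = 5 + 0.6 * np.cos(3 * theta)
x = r * np.cos(theta) + rng.normal(0, 0.05, 80)
y = r * np.sin(theta) + rng.normal(0, 0.05, 80)

# s bounds the residual sum of squares, so the spline smooths the noise
# instead of interpolating it (s=0 would reproduce every noisy point).
tck, u = splprep([x, y], s=80 * 0.05**2, k=3)
xs, ys = splev(u, tck)
rms = np.sqrt(np.mean((xs - x)**2 + (ys - y)**2))
print(f"rms fit residual: {rms:.3f}")
```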

  3. Three-dimensional simulation of human teeth and its application in dental education and research

    PubMed Central

    Koopaie, Maryam; Kolahdouz, Sajad

    2016-01-01

    Background: A comprehensive database, comprising geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, a cross-section picture of the three-dimensional model of the teeth was used. CT-Scan images were used in the first method. The space between the cross-sectional images was about 200 to 500 micrometers. Hard tissue margin was detected in each image by Matlab (R2009b), as image processing software. The images were transferred to Solidworks 2015 software. The tooth border curve was fitted with B-spline curves, using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and the optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and the finite element and training software through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitated the learning of the dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing the implants and dental prosthesis are possible. PMID:28491836

  4. Comparative Evaluation of Conventional and Accelerated Castings on Marginal Fit and Surface Roughness.

    PubMed

    Jadhav, Vivek Dattatray; Motwani, Bhagwan K; Shinde, Jitendra; Adhapure, Prasad

    2017-01-01

    The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. The study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns made by both casting techniques was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. In each part, the conventional technique was compared with the accelerated technique. The results of the t-test and independent-samples test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness.

  5. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by its ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, is included.
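The core computation can be sketched as an ordinary least-squares fit over monomial terms in the two independent variables; the data and the chosen terms are invented (the actual FIT program lets the user select terms interactively):

```python
import numpy as np

# Least-squares polynomial surface z = f(x, y) with terms up to total
# degree 2. Synthetic data; true coefficients are [1, 2, -3, 0.5, 0, 0].
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z = 1.0 + 2.0*x - 3.0*y + 0.5*x*y + rng.normal(0, 0.01, 200)

# Design matrix of monomials: 1, x, y, xy, x^2, y^2.
A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
print(np.round(coef, 2))   # ~[1, 2, -3, 0.5, 0, 0]
```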

  6. Accuracy of neutron self-activation method with iodine-containing scintillators for quantifying 128I generation using decay-fitting technique

    NASA Astrophysics Data System (ADS)

    Nohtomi, Akihiro; Wakabayashi, Genichiro

    2015-11-01

    We evaluated the accuracy of a self-activation method with iodine-containing scintillators in quantifying 128I generation in an activation detector; the self-activation method was recently proposed for photo-neutron on-line measurements around X-ray radiotherapy machines. Here, we consider the accuracy of determining the initial count rate R0, observed just after termination of neutron irradiation of the activation detector. The value R0 is directly related to the amount of activity generated by incident neutrons; the detection efficiency of radiation emitted from the activity should be taken into account for such an evaluation. Decay curves of 128I activity were numerically simulated by a computer program for various conditions including different initial count rates (R0) and background rates (RB), as well as counting statistical fluctuations. The data points sampled at minute intervals and integrated over the same period were fit by a non-linear least-squares fitting routine to obtain the value R0 as a fitting parameter with an associated uncertainty. The corresponding background rate RB was simultaneously calculated in the same fitting routine. Identical data sets were also evaluated by a well-known integration algorithm used for conventional activation methods and the results were compared with those of the proposed fitting method. When we fixed RB = 500 cpm, the relative uncertainty σR0/R0 ≤ 0.02 was achieved for R0/RB ≥ 20 with 20 data points from 1 min to 20 min following the termination of neutron irradiation used in the fitting; σR0/R0 ≤ 0.01 was achieved for R0/RB ≥ 50 with the same data points. Reasonable relative uncertainties for evaluating initial count rates were reached by the decay-fitting method using practically realistic sampling numbers. These results clarified the theoretical limits of the fitting method.
The integration method was found to be potentially vulnerable to short-term variations in background levels, especially instantaneous contaminations by spike-like noise. The fitting method easily detects and removes such spike-like noise.
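The decay-fitting step can be sketched as a nonlinear least-squares fit of R0·exp(-λt) + RB to simulated one-minute count rates; the rates below are invented (mirroring the paper's R0/RB = 20 case), and λ is fixed by the roughly 25-minute half-life of 128I:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated decay curve with Poisson counting statistics, fit for R0 and RB
# simultaneously with the decay constant held fixed. Rates are invented.
lam = np.log(2) / 25.0                  # 128I decay constant (1/min), ~25 min half-life
rng = np.random.default_rng(5)
t = np.arange(1, 21)                    # 20 one-minute samples after irradiation
R0_true, RB_true = 10000.0, 500.0       # cpm; the R0/RB = 20 case
counts = rng.poisson(R0_true * np.exp(-lam * t) + RB_true)

def model(t, R0, RB):
    return R0 * np.exp(-lam * t) + RB

(R0_fit, RB_fit), cov = curve_fit(model, t, counts, p0=[5000.0, 100.0])
rel_unc = np.sqrt(cov[0, 0]) / R0_fit
print(f"R0 = {R0_fit:.0f} +/- {np.sqrt(cov[0, 0]):.0f} cpm (rel. {rel_unc:.3f})")
```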

  7. A high resolution spectroscopic study of the oxygen molecule. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Ritter, K. J.

    1984-01-01

    A high resolution spectrometer which incorporates a narrow-linewidth tunable dye laser was used to measure absorption profiles of 57 spectral lines in the Oxygen A-Band at pressures up to one atmosphere in pure O2. The observed line profiles are compared to the Voigt profile and to a collisionally narrowed profile using a least-squares fitting procedure. The collisionally narrowed profile compares more favorably with the observed profiles. Values of the line strengths and self-broadening coefficients, determined from the least-squares fitting process, are presented in tabular form. It is found that the expressions by Watson are in closest agreement with the experimentally determined strengths. The self-broadening coefficients are compared with the measurements of several other investigators.

  8. JASMINE -- Japan Astrometry Satellite Mission for INfrared Exploration: Data Analysis and Accuracy Assessment with a Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.

    2009-09-01

    For the purpose of determining the celestial coordinates of stellar positions, consecutive observational images are laid overlapping each other, with stars that belong to multiple plates serving as tie points. In the analysis, one has to estimate not only the coordinates of individual plates, but also the possible expansion and distortion of the frame. This problem reduces to a least-squares fit that can in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate errors associated with this iterative method and suggest a design of overlapping plates that minimizes the error.
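The equivalence the paper relies on — sequential Kalman updates reproducing the batch least-squares solution without one huge matrix inversion — can be sketched on a static toy problem with invented observation rows:

```python
import numpy as np

# Estimate a static parameter vector by processing scalar observations one
# at a time with Kalman (recursive least-squares) updates; the result
# matches the batch least-squares solution. All data are invented.
rng = np.random.default_rng(6)
n_obs, n_par = 200, 3
H = rng.normal(size=(n_obs, n_par))          # observation rows (plate model)
x_true = np.array([0.5, -1.2, 2.0])
z = H @ x_true + rng.normal(0, 0.1, n_obs)

# Batch solution (the "huge matrix inversion", small in this toy case).
x_batch, *_ = np.linalg.lstsq(H, z, rcond=None)

# Sequential Kalman updates, one scalar observation per step.
x = np.zeros(n_par)
P = np.eye(n_par) * 1e6                      # diffuse prior covariance
R = 0.1**2                                   # measurement variance
for h, zi in zip(H, z):
    S = h @ P @ h + R                        # innovation variance
    K = P @ h / S                            # Kalman gain
    x = x + K * (zi - h @ x)
    P = P - np.outer(K, h @ P)               # (I - K h) P

print("max |sequential - batch|:", np.max(np.abs(x - x_batch)))
```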

  9. A Theorem on the Rank of a Product of Matrices with Illustration of Its Use in Goodness of Fit Testing.

    PubMed

    Satorra, Albert; Neudecker, Heinz

    2015-12-01

    This paper develops a theorem that facilitates computing the degrees of freedom of Wald-type chi-square tests for moment restrictions when there is rank deficiency of key matrices involved in the definition of the test. An if-and-only-if (iff) condition is developed for a simple difference-of-ranks rule to be used when computing the desired degrees of freedom of the test. The theorem is developed by exploiting basic tools of matrix algebra. The theorem is shown to play a key role in proving the asymptotic chi-squaredness of a goodness-of-fit test in moment structure analysis, and in finding the degrees of freedom of this chi-square statistic.
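The kind of rank bookkeeping the theorem formalizes can be illustrated numerically; the matrices below are invented, and Sylvester's inequality brackets the rank of the product:

```python
import numpy as np

# With a rank-deficient factor, rank(AB) need not equal min of the factor
# ranks in general, which is why difference-of-ranks rules for degrees of
# freedom need a formal justification. Matrices are invented.
rng = np.random.default_rng(7)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5))   # rank 2 by construction
B = rng.normal(size=(5, 4))                             # rank 4 (generic)

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)
# Sylvester's inequality: rA + rB - 5 <= rAB <= min(rA, rB)
print(rA, rB, rAB)
```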

  10. A new method for quantifying and modeling large scale surface water inundation dynamics and key drivers using multiple time series of Earth observation and river flow data. A case study for Australia's Murray-Darling Basin

    NASA Astrophysics Data System (ADS)

    Heimhuber, Valentin; Tulbure, Mirela G.; Broich, Mark

    2017-04-01

    Periodically inundated surface water (SW) areas such as floodplains are hotspots of biodiversity and provide a broad range of ecosystem services but have suffered alarming declines in recent history. Large scale flooding events govern the dynamics of these areas and are a critical component of the terrestrial water cycle, but their propagation through river systems and the corresponding long term SW dynamics remain poorly quantified on continental or global scales. In this research, we used an unprecedented Landsat-based time series of SW maps (1986-2011), to develop statistical inundation models and quantify the role of driver variables across the Murray-Darling Basin (MDB) (1 million square-km), which is Australia's bread basket and subject to competing demands over limited water resources. We fitted generalized additive models (GAM) between SW extent as the dependent variable and river flow data from 68 gauges, spatial time series of rainfall (P; interpolated gauge data), evapotranspiration (ET; AWRA-L land surface model) and soil moisture (SM; active passive microwave satellite remote sensing) as predictor variables. We used a fully directed and connected river network (Australian Geofabric) in combination with ancillary data, to develop a spatial modeling framework consisting of 18,521 individual modeling units. We then fitted individual models for all modeling units, which were made up of 10x10 km grid cells split into floodplain, floodplain-lake and non-floodplain areas, depending on the type of water body and its hydrologic connectivity to a gauged river. We applied the framework to quantify flood propagation times for all major river and floodplain systems across the MDB, which were in good accordance with observed travel times. 
After incorporating these flow lag times into the models, average goodness of fit was high across floodplains and floodplain-lake modeling units (r-squared > 0.65), which were primarily driven by river flow, and lower for non-floodplain areas (r-squared > 0.24), which were primarily driven by local rainfall. Our results indicate that local climate conditions (i.e. P, ET, SM) had more influence on SW dynamics in the northern compared to the southern MDB and were the most influential in the least regulated and most extended floodplains in the north-west. We also applied the statistical models of two floodplain areas with contrasting flooding regimes to predict SW extents of cloud-affected time steps in the Landsat time series during the large 2010 floods with high validated accuracy (r-squared > 0.97). Our findings illustrate that integrating multi-decadal time series of Earth observation data and in situ measurements with statistical modeling techniques can provide cost-effective tools for improving the management of limited SW resources and floods. The data-driven method is applicable to other large river basins and provides statistical models that can predict SW extent for cloud-affected Landsat observations or during the peak of floods and hence, allows a more detailed quantification of the dynamics of large floods compared to existing approaches. Future research will investigate the potential of image fusion techniques (i.e. ESTARFM) for improving the quantification of rapid changes in SW distribution by combining MODIS and Landsat imagery.
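The lag-then-fit step can be sketched with invented series: estimate the gauge-to-floodplain travel time by maximizing lagged correlation, then regress surface-water (SW) extent on the lagged flow (a lagged linear fit stands in for the per-unit GAMs):

```python
import numpy as np

# Invented flow and SW-extent series where SW responds to flow with a
# 7-step delay; recover the lag by lagged correlation, then fit the slope.
rng = np.random.default_rng(9)
n, true_lag = 300, 7
flow = np.convolve(rng.gamma(2.0, 1.0, n), np.ones(7) / 7, mode="same")
sw = 10 + 3.0 * np.roll(flow, true_lag) + rng.normal(0, 0.3, n)
sw[:true_lag] = np.nan                  # no valid lagged flow for spin-up steps

def lagged_corr(lag):
    a, b = flow[: n - lag], sw[lag:]
    ok = ~np.isnan(b)
    return np.corrcoef(a[ok], b[ok])[0, 1]

best_lag = max(range(15), key=lagged_corr)

valid = ~np.isnan(sw[best_lag:])
slope, intercept = np.polyfit(flow[: n - best_lag][valid], sw[best_lag:][valid], 1)
print(f"estimated lag: {best_lag} steps, slope: {slope:.2f}")
```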

  11. On Insensitivity of the Chi-Square Model Test to Nonlinear Misspecification in Structural Equation Models

    ERIC Educational Resources Information Center

    Mooijaart, Ab; Satorra, Albert

    2009-01-01

    In this paper, we show that for some structural equation models (SEM), the classical chi-square goodness-of-fit test is unable to detect the presence of nonlinear terms in the model. As an example, we consider a regression model with latent variables and interaction terms. Not only does the model test have zero power against that type of…

  12. The chipping headrig: A major invention of the 20th century

    Treesearch

    P. Koch

    1973-01-01

    A square peg won't fit in a round hole but a square timber can be chipped out of a round log. It's simple, fast and efficient, with a chipping headrig. Virtually every significant southern pine sawmill uses one of these amazing machines, busily making useful most of the wood in the tree, chipping some for paper and uncovering sawtimber where before there was...

  13. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  14. Gifted Homeschooling: Our Journey with a Square Peg. A Mother's Perspective

    ERIC Educational Resources Information Center

    Olmstead, Gwen

    2015-01-01

    The author shares that her journey with gifted homeschooling was filled with folly and a slow learning curve. By sharing some of the struggles and insights she faced, the author hopes others will benefit or find solace in knowing they are not alone when their square peg children do not fit into round holes. In this article the author discusses:…

  15. 49 CFR 231.27 - Box and other house cars without roof hatches or placed in service after October 1, 1966.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...

  16. 49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...

  17. 49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...

  18. 49 CFR 231.27 - Box and other house cars without roof hatches or placed in service after October 1, 1966.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...

  19. 49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...

  20. 49 CFR 231.27 - Box and other house cars without roof hatches or placed in service after October 1, 1966.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...

  1. 49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...

  2. 49 CFR 231.27 - Box and other house cars without roof hatches or placed in service after October 1, 1966.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...

  3. 49 CFR 231.27 - Box and other house cars without roof hatches or placed in service after October 1, 1966.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... square. Square-fit taper: Nominally two (2) in twelve (12) inches (see Plate A). (vi) All chains shall be not less than nine-sixteenths (9/16) inch BBB coil chain. (vii) All handbrake rods shall be not less... coupler horn against the buffer block or end sill. (iii) Handbrake housing shall be securely fastened to...

  4. 49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...

  5. Does positivity mediate the relation of extraversion and neuroticism with subjective happiness?

    PubMed

    Lauriola, Marco; Iani, Luca

    2015-01-01

    Recent theories suggest an important role of neuroticism, extraversion, attitudes, and global positive orientations as predictors of subjective happiness. We examined whether positivity mediates the hypothesized relations in a community sample of 504 adults aged 20 to 60 years (females = 50%). A model with significant paths from neuroticism to subjective happiness, from extraversion and neuroticism to positivity, and from positivity to subjective happiness fitted the data (Satorra-Bentler scaled chi-square (38) = 105.91; Comparative Fit Index = .96; Non-Normed Fit Index = .95; Root Mean Square Error of Approximation = .060; 90% confidence interval = .046, .073). The percentage of subjective happiness variance accounted for by personality traits alone was only about 48%, whereas adding positivity as a mediating factor increased the explained variance to 78%. The mediation model was invariant by age and gender. The results show that the effect of extraversion on happiness was fully mediated by positivity, whereas the effect of neuroticism was only partially mediated. Implications for happiness studies are also discussed.

  6. Snijders's correction of Infit and Outfit indexes with estimated ability level: an analysis with the Rasch model.

    PubMed

    Magis, David; Beland, Sebastien; Raiche, Gilles

    2014-01-01

    The Infit mean square W and the Outfit mean square U are commonly used person fit indexes under Rasch measurement. However, they suffer from two major weaknesses. First, their asymptotic distribution is usually derived by assuming that the true ability levels are known. Second, such distributions are not even clearly stated for the indexes U and W. Both issues can seriously affect the selection of an appropriate cut-score for person fit identification. Snijders (2001) proposed a general approach to correct some person fit indexes when specific ability estimators are used. The purpose of this paper is to adapt this approach to the U and W indexes. First, a brief sketch of the methodology and its application to U and W is proposed. Then, the corrected indexes are compared to their classical versions through a simulation study. The suggested correction yields Type I error rates that are controlled against both conservatism and inflation, while the power to detect specific misfitting response patterns is significantly increased.
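    The classical (uncorrected) U and W statistics the paper starts from are simple functions of the standardized item residuals. A minimal sketch, assuming the true ability is known (exactly the assumption the Snijders-type correction relaxes) and using invented difficulties and response patterns:

```python
import numpy as np

def rasch_infit_outfit(responses, theta, difficulties):
    """Classical outfit U and infit W for one person under the Rasch model."""
    p = 1.0 / (1.0 + np.exp(difficulties - theta))   # Rasch P(correct)
    z2 = (responses - p) ** 2 / (p * (1.0 - p))      # squared std. residuals
    outfit_u = z2.mean()                             # unweighted mean square
    infit_w = np.sum(p * (1 - p) * z2) / np.sum(p * (1 - p))  # info-weighted
    return outfit_u, infit_w

difficulties = np.linspace(-2.0, 2.0, 9)
consistent = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0])  # passes easy, fails hard
aberrant = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])    # fails easy, passes hard

u_ok, w_ok = rasch_infit_outfit(consistent, 0.0, difficulties)
u_bad, w_bad = rasch_infit_outfit(aberrant, 0.0, difficulties)
print(round(u_ok, 2), round(w_ok, 2), round(u_bad, 2), round(w_bad, 2))
```

Values near 1 indicate fit; the aberrant pattern drives both mean squares well above 1, which is what cut-scores on U and W are meant to flag.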

  7. Separation of detector non-linearity issues and multiple ionization satellites in alpha-particle PIXE

    NASA Astrophysics Data System (ADS)

    Campbell, John L.; Ganly, Brianna; Heirwegh, Christopher M.; Maxwell, John A.

    2018-01-01

    Multiple ionization satellites are prominent features in X-ray spectra induced by MeV energy alpha particles. It follows that the accuracy of PIXE analysis using alpha particles can be improved if these features are explicitly incorporated in the peak model description when fitting the spectra with GUPIX or other codes for least-squares fitting PIXE spectra and extracting element concentrations. A method for this incorporation is described and is tested using spectra recorded on Mars by the Curiosity rover's alpha particle X-ray spectrometer. These spectra are induced by both PIXE and X-ray fluorescence, resulting in a spectral energy range from ∼1 to ∼25 keV. This range is valuable in determining the energy-channel calibration, which departs from linearity at low X-ray energies. It makes it possible to separate the effects of the satellites from an instrumental non-linearity component. The quality of least-squares spectrum fits is significantly improved, raising the level of confidence in analytical results from alpha-induced PIXE.

  8. Square-lashing technique in segmental spinal instrumentation: a biomechanical study.

    PubMed

    Arlet, Vincent; Draxinger, Kevin; Beckman, Lorne; Steffen, Thomas

    2006-07-01

    Sublaminar wires have been used for many years for segmental spinal instrumentation in scoliosis surgery. More recently, stainless steel wires have been replaced by titanium cables. However, in rigid scoliotic curves, sublaminar wires or simple cables can either break or pull out. The square-lashing technique was devised to avoid complications such as cable breakage or lamina cutout. The purpose of the study was therefore to test biomechanically the pullout and failure mode of simple sublaminar constructs versus the square-lashing technique. Individual vertebrae were subjected to pullout testing with one of two different constructs (single loop and square lashing) using either monofilament wire or multifilament cables. Four different methods of fixation were therefore tested: single wire construct, square-lashing wiring construct, single cable construct, and square-lashing cable construct. Ultimate failure load and failure mechanism were recorded. For the single wire, the construct failed 12/16 times by wire breakage, with an average ultimate failure load of 793 N. For the square-lashing wire, the construct failed with pedicle fracture in 14/16 cases, one bilateral lamina fracture, and one wire breakage; the average ultimate failure load was 1,239 N. For the single cable, the construct failed 12/16 times due to cable breakage (average force 1,162 N); 10/12 of these breakages occurred where the cable looped over the rod. For the square-lashing cable, all of these constructs (16/16) failed by fracture of the pedicle, with an average ultimate failure load of 1,388 N. The square-lashing construct had a higher pullout strength than the single loop and almost no cutting out from the lamina. The square-lashing technique with cables may therefore represent a new advance in segmental spinal instrumentation.

  9. The Factorability of Quadratics: Motivation for More Techniques

    ERIC Educational Resources Information Center

    Bosse, Michael J.; Nandakumar, N. R.

    2005-01-01

    Typically, secondary and college algebra students attempt to utilize either completing the square or the quadratic formula as techniques to solve a quadratic equation only after frustration with factoring has arisen. While both completing the square and the quadratic formula are techniques which can determine solutions for all quadratic equations,…

  10. nmrfit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-09-01

    Nmrfit reads the output from a nuclear magnetic resonance (NMR) experiment and, through a number of intuitive API calls, produces a least-squares fit of Voigt-function approximations via particle swarm optimization.
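    The core fitting step can be illustrated without the package itself. nmrfit couples the fit to particle swarm optimization; the sketch below swaps in a deterministic trust-region least-squares solver and synthetic single-peak data, so it shows only the "fit a Voigt approximation to a peak" idea, not nmrfit's API:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import voigt_profile

# Synthetic single-peak "spectrum" with invented parameters.
x = np.linspace(-5.0, 5.0, 400)
amp, sigma, gamma, center = 1.8, 0.4, 0.3, 0.7     # "true" peak parameters
rng = np.random.default_rng(1)
y = amp * voigt_profile(x - center, sigma, gamma) + rng.normal(0, 0.005, x.size)

def residuals(p):
    a, s, g, c = p
    return a * voigt_profile(x - c, s, g) - y

# Bounded trust-region least squares from a deliberately rough initial guess.
fit = least_squares(residuals, x0=[1.0, 0.5, 0.5, 0.0],
                    bounds=([0, 1e-3, 1e-3, -5], [10, 5, 5, 5]))
print(np.round(fit.x, 2))
```

Global stochastic optimizers such as particle swarm become attractive when spectra contain many overlapping peaks and a local solver like this one would depend too strongly on the initial guess.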

  11. An evaluation of the structural validity of the shoulder pain and disability index (SPADI) using the Rasch model.

    PubMed

    Jerosch-Herold, Christina; Chester, Rachel; Shepstone, Lee; Vincent, Joshua I; MacDermid, Joy C

    2018-02-01

    The shoulder pain and disability index (SPADI) has been extensively evaluated for its psychometric properties using classical test theory (CTT). The purpose of this study was to evaluate its structural validity using Rasch model analysis. Responses to the SPADI from 1030 patients referred for physiotherapy with shoulder pain and enrolled in a prospective cohort study were available for Rasch model analysis. Overall fit, individual person and item fit, response format, dependence, unidimensionality, targeting, reliability and differential item functioning (DIF) were examined. The SPADI pain subscale initially demonstrated misfit due to DIF by age and gender. After iterative analysis it showed good fit to the Rasch model with acceptable targeting and unidimensionality (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.19 (1.5) and mean person fit residual 0.44 (1.1); person separation index (PSI) of 0.83). The disability subscale, however, showed significant misfit due to uniform DIF even after iterative analyses were used to explore different solutions to the sources of misfit (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.54 (1.26) and mean person fit residual 0.38 (1.0); PSI 0.84). Rasch model analysis of the SPADI has identified some strengths and limitations not previously observed using CTT methods. The SPADI should be treated as two separate subscales. The SPADI is a widely used outcome measure in clinical practice and research; however, the scores derived from it must be interpreted with caution. The pain subscale fits the Rasch model expectations well. The disability subscale does not fit the Rasch model, and its current format does not meet the criteria for true interval-level measurement required for use as a primary endpoint in clinical trials.
Clinicians should therefore exercise caution when interpreting score changes on the disability subscale and attempt to compare their scores to age- and sex-stratified data.

  12. A Weighted Least Squares Approach To Robustify Least Squares Estimates.

    ERIC Educational Resources Information Center

    Lin, Chowhong; Davenport, Ernest C., Jr.

    This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…

  13. Role of Square Flap in Post Burn Axillary Contractures.

    PubMed

    Karki, Durga; Narayan, Ravi Prakash

    2017-09-01

    Post-burn contractures are a commonly encountered problem, and many techniques have been described for their treatment. Z-plasties are the commonest local flap procedure for linear bands with adjacent healthy tissue. Our aim was to assess the use of the square flap technique in axillary contractures. Ten patients with type I and II axillary contractures underwent release by the square flap technique. All cases were followed up for at least one year and analysed for range of motion and aesthetic outcome. All cases achieved a full range of movement postoperatively, with no recurrence during the follow-up period and a good cosmetic outcome. The square flap was shown to be a reliable technique for mild to moderate contractures of the anterior or posterior axillary folds (types I and II), even when there is significant adjacent scarring of the chest wall or back.

  14. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  15. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit and can be used to check whether the given data set qualifies for linear regression in the first place.
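    For reference, the classical quantities the abstract talks about — the least-squares parameters, the condition number κ of the design matrix (which governs the quantum running time), and the fit quality — are computed below on synthetic data (the coefficients and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 500, 3
# Design matrix with an intercept column; response from a known linear model.
X = np.column_stack([np.ones(N), rng.normal(size=(N, d - 1))])
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(0.0, 0.1, N)

beta, rss, rank, svals = np.linalg.lstsq(X, y, rcond=None)
kappa = svals[0] / svals[-1]                        # condition number of X
r2 = 1.0 - rss[0] / np.sum((y - y.mean()) ** 2)     # goodness of fit
print(np.round(beta, 2), round(kappa, 2), round(r2, 3))
```

A quality estimate like r² is exactly the kind of check the second quantum algorithm provides cheaply before one commits to computing the parameters.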

  16. A nonlinear model of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study was conducted to determine a model that fits the gold production in Malaysia from 1995 to 2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by the sum of squared errors, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperformed the other models. To confirm that Weibull is the best model, the latest data were fitted to the model; once again, the Weibull model gave the lowest readings for all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, which could be an important finding for Malaysia in planning its economic activities.
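    Fitting a Weibull growth curve of the kind used for cumulative production data is a standard nonlinear least-squares problem. A minimal sketch on synthetic annual data (the series below is invented, not the Malaysian gold figures, and the parameterization a·(1 − exp(−(t/b)^c)) is one common form of the Weibull growth model):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, a, b, c):
    """Weibull growth curve: asymptote a, scale b, shape c."""
    return a * (1.0 - np.exp(-((t / b) ** c)))

t = np.arange(1.0, 17.0)                 # 16 annual observations
rng = np.random.default_rng(3)
y = weibull(t, 120.0, 8.0, 2.5) + rng.normal(0.0, 1.0, t.size)

params, _ = curve_fit(weibull, t, y, p0=(100.0, 5.0, 2.0))
rmse = float(np.sqrt(np.mean((weibull(t, *params) - y) ** 2)))
print(np.round(params, 1), round(rmse, 2))
```

Repeating the same fit with the other candidate curves (Logistic, Gompertz, Richards, Chapman-Richards) and comparing the error measures reproduces the paper's model-selection procedure.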

  17. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on sensors and calibration equations. Although uncertainty propagation for platinum resistance thermometers or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of the uncertainty that develops when calibration equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions for the propagated uncertainties under both fitting methods are derived in terms of the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be kept well below the uncertainty of the calibration points.
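    One of the common NTC calibration equations the abstract alludes to is the Steinhart-Hart form 1/T = a + b·ln(R) + c·ln(R)³, whose coefficients follow from a linear least-squares fit. The sketch below uses illustrative coefficients and an invented resistance grid (not values from the paper); perturbing T by its calibration uncertainty and refitting would give a Monte Carlo version of the propagation analysis:

```python
import numpy as np

# Illustrative "true" Steinhart-Hart coefficients and calibration resistances.
a, b, c = 1.40e-3, 2.37e-4, 9.90e-8
R = np.array([32650.0, 25390.0, 19900.0, 15710.0, 12490.0, 10000.0, 8057.0])
T = 1.0 / (a + b * np.log(R) + c * np.log(R) ** 3)   # exact temperatures (K)

# Recover the coefficients from the (R, T) calibration points by least squares.
lnR = np.log(R)
A = np.column_stack([np.ones_like(lnR), lnR, lnR ** 3])
coef, *_ = np.linalg.lstsq(A, 1.0 / T, rcond=None)
print(coef)  # recovers a, b, c
```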

  18. Effect of a checklist on advanced trauma life support workflow deviations during trauma resuscitations without pre-arrival notification.

    PubMed

    Kelleher, Deirdre C; Jagadeesh Chandra Bose, R P; Waterhouse, Lauren J; Carter, Elizabeth A; Burd, Randall S

    2014-03-01

    Trauma resuscitations without pre-arrival notification are often initially chaotic, which can potentially compromise patient care. We hypothesized that trauma resuscitations without pre-arrival notification are performed with more variable adherence to ATLS protocol and that implementation of a checklist would improve performance. We analyzed event logs of trauma resuscitations from two 4-month periods before (n = 222) and after (n = 215) checklist implementation. Using process mining techniques, individual resuscitations were compared with an ideal workflow model of 6 ATLS primary survey tasks performed by the bedside evaluator and given model fitness scores (range 0 to 1). Mean fitness scores and frequency of conformance (fitness = 1) were compared (using Student's t-test or chi-square test, as appropriate) for activations with and without notification both before and after checklist implementation. Multivariable linear regression, controlling for patient and resuscitation characteristics, was also performed to assess the association between pre-arrival notification and model fitness before and after checklist implementation. Fifty-five (12.6%) resuscitations lacked pre-arrival notification (23 pre-implementation and 32 post-implementation; p = 0.15). Before checklist implementation, resuscitations without notification had lower fitness (0.80 vs 0.90; p < 0.001) and conformance (26.1% vs 50.8%; p = 0.03) than those with notification. After checklist implementation, the fitness (0.80 vs 0.91; p = 0.007) and conformance (26.1% vs 59.4%; p = 0.01) improved for resuscitations without notification, but still remained lower than activations with notification. In multivariable analysis, activations without notification had lower fitness both before (b = -0.11, p < 0.001) and after checklist implementation (b = -0.04, p = 0.02). 
Trauma resuscitations without pre-arrival notification are associated with decreased adherence to key components of the ATLS primary survey protocol. The addition of a checklist improves protocol adherence and reduces the effect of notification on task performance.

  19. An Improved Cryosat-2 Sea Ice Freeboard Retrieval Algorithm Through the Use of Waveform Fitting

    NASA Technical Reports Server (NTRS)

    Kurtz, Nathan T.; Galin, N.; Studinger, M.

    2014-01-01

    We develop an empirical model capable of simulating the mean echo power cross product of CryoSat-2 SAR and SARIn mode waveforms over sea-ice-covered regions. The model simulations are used to show the importance of variations in the radar backscatter coefficient with incidence angle and surface roughness for the retrieval of surface elevation of both sea ice floes and leads. The numerical model is used to fit CryoSat-2 waveforms to enable retrieval of surface elevation through the use of look-up tables and a bounded trust-region Newton least-squares fitting approach. The use of a model to fit returns from sea ice regions offers advantages over currently used threshold retracking methods, which are here shown to be sensitive to the combined effect of bandwidth-limited range resolution and surface roughness variations. Laxon et al. (2013) compared ice thickness results from CryoSat-2 and IceBridge and found good agreement; however, consistent assumptions about the snow depth and density of sea ice were not used in the comparisons. To address this issue, we directly compare ice freeboard and thickness retrievals from the waveform fitting and threshold tracker methods of CryoSat-2 to Operation IceBridge data using a consistent set of parameterizations. For three IceBridge campaign periods from March 2011-2013, mean differences (CryoSat-2 minus IceBridge) of 0.144 m and 1.351 m are respectively found between the freeboard and thickness retrievals using a 50% sea ice floe threshold retracker, while mean differences of 0.019 m and 0.182 m are found when using the waveform fitting method. This suggests the waveform fitting technique is capable of better reconciling the sea ice thickness data record from laser and radar altimetry data sets through the use of consistent physical assumptions.

  20. Factor structure and psychometric properties of the English version of the Trier Inventory for Chronic Stress (TICS-E).

    PubMed

    Petrowski, Katja; Kliem, Sören; Sadler, Michael; Meuret, Alicia E; Ritz, Thomas; Brähler, Elmar

    2018-02-06

    Demands placed on individuals in occupational and social settings, as well as imbalances in personal traits and resources, can lead to chronic stress. The Trier Inventory for Chronic Stress (TICS) measures chronic stress while incorporating domain-specific aspects, and has been found to be a highly reliable and valid research tool. The aims of the present study were to confirm the factorial structure of the German TICS in an English translation of the instrument (TICS-E) and to report its psychometric properties. A random route sample of healthy participants (N = 483) aged 18-30 years completed the TICS-E. Robust maximum likelihood estimation with a mean-adjusted chi-square test statistic was applied due to the sample's significant deviation from the multivariate normal distribution. Goodness of fit, absolute model fit, and relative model fit were assessed by means of the root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMR), the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI). Reliability estimates (Cronbach's α and adjusted split-half reliability) ranged from .84 to .92. Item-scale correlations ranged from .50 to .85. Measures of absolute model fit were .052 for the RMSEA (CI = .050-.054) and .067 for the SRMR; relative model fit yielded values of .846 (TLI) and .855 (CFI). Factor loadings ranged from .55 to .91. The psychometric properties and factor structure of the TICS-E are comparable to the German version of the TICS. The instrument therefore meets quality standards for an adequate measurement of chronic stress.

  1. A technique for routinely updating the ITU-R database using radio occultation electron density profiles

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno

    2013-09-01

    Well credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving NmF2 and hmF2 values—together with their error estimates—from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high- and low-solar-activity conditions. The global mean error of the resulting maps—estimated by the Least Squares technique—is equivalent to about 7% of the estimated NmF2, and ranges from 2.0 to 5.6 km for hmF2 (about 2%).
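    The re-weighted least-squares idea — iterate between fitting and down-weighting points with large residuals — can be sketched generically. This is not the authors' profile model; it is a minimal iteratively re-weighted least squares (IRLS) loop with Huber-type weights on an invented straight-line data set with injected outliers:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.3, x.size)
y[::10] += 8.0                                   # inject gross outliers

A = np.column_stack([np.ones_like(x), x])
beta_ols = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary LS, outlier-biased
beta = beta_ols.copy()
for _ in range(20):
    r = y - A @ beta
    s = np.median(np.abs(r)) / 0.6745            # robust scale (MAD)
    w = np.minimum(1.0, 1.345 * s / np.maximum(np.abs(r), 1e-12))  # Huber
    sw = np.sqrt(w)                              # weighted LS via row scaling
    beta = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
print(np.round(beta_ols, 2), np.round(beta, 2))
```

In the paper's setting the "points" are profile measurements (sometimes whole profiles), and the final weights double as a screen for unreliable occultations.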

  2. Comparative Evaluation of Conventional and Accelerated Castings on Marginal Fit and Surface Roughness

    PubMed Central

    Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad

    2017-01-01

    Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques in the vertical direction was checked, in Part II, the fit of sectional metal crowns in the horizontal direction made by both casting techniques was checked, and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns as well as in Part II, the horizontal fit of sectional metal crowns made by both casting techniques was determined, and in Part III, the surface roughness of castings made with the same techniques was compared. Statistical Analysis Used: The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: Accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726

  3. Solar wind electron densities from Viking dual-frequency radio measurements

    NASA Technical Reports Server (NTRS)

    Muhleman, D. O.; Anderson, J. D.

    1981-01-01

    Simultaneous phase coherent, two-frequency measurements of the time delay between the earth station and the Viking spacecraft have been analyzed in terms of the electron density profiles from 4 to 200 solar radii. The measurements were made during a period of solar activity minimum (1976-1977) and show a strong solar latitude effect. The data were analyzed both with a model-independent direct numerical inversion technique and with model fitting, yielding essentially the same results. It is shown that the solar wind density near the solar equator can be represented by the sum of two power laws, proportional to r^(-2.7) and r^(-2.04). However, the more rapidly falling term quickly disappears at moderate latitudes (approximately 20 deg), leaving only the inverse-square behavior.
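    With the exponents fixed at the reported values, a two-power-law density model N(r) = A·r^(-2.7) + B·r^(-2.04) is linear in its amplitudes, so fitting it reduces to ordinary least squares. A short sketch with invented amplitudes (not the Viking values):

```python
import numpy as np

# The model is linear in A and B once the exponents are fixed, so the
# amplitudes follow from a linear least-squares solve. Values are invented.
r = np.linspace(4.0, 200.0, 100)           # heliocentric distance, solar radii
true_A, true_B = 3.0e6, 1.0e5              # hypothetical amplitudes
N = true_A * r**-2.7 + true_B * r**-2.04   # noiseless synthetic densities

M = np.column_stack([r**-2.7, r**-2.04])   # design matrix
(A_fit, B_fit), *_ = np.linalg.lstsq(M, N, rcond=None)
```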

  4. Measurements of energy dependence of average number of prompt neutrons from neutron-induced fission of 242Pu from 0.5 to 10 Mev

    NASA Astrophysics Data System (ADS)

    Khokhlov, Yurii A.; Ivanin, Igor A.; In'kov, Valerii I.; Danilin, Lev D.

    1998-10-01

    The results of energy-dependence measurements of the average number of prompt neutrons from neutron-induced fission of 242Pu from 0.5 to 10 MeV are presented. The measurements were carried out with a neutron beam from the uranium target of the electron linac of the Russian Federal Nuclear Center, using the time-of-flight technique on a 28.5 m flight path. Neutrons from fission were detected by a gadolinium-loaded liquid scintillator detector, and fission events by a parallel-plate avalanche detector for fission fragments. A least-squares fit gives ν̄p(En) = (2.881 ± 0.033) + (0.141 ± 0.003)·En. The work was executed under ISTC project # 471-97.
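    The quoted result is a straight-line least-squares fit with 1-sigma parameter errors. A generic sketch on simulated data (the noise level and sampling below are assumptions, not the experiment's; the coefficients follow the abstract only to make the example concrete):

```python
import numpy as np

# Fit nu_p(En) = p0 + p1*En by least squares and derive 1-sigma parameter
# errors from the parameter covariance matrix. Data are simulated.
En = np.linspace(0.5, 10.0, 20)                  # neutron energy, MeV
rng = np.random.default_rng(1)
nu = 2.881 + 0.141 * En + rng.normal(0.0, 0.02, En.size)

X = np.column_stack([np.ones_like(En), En])
coef, res, *_ = np.linalg.lstsq(X, nu, rcond=None)
sigma2 = res[0] / (En.size - 2)                  # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)            # parameter covariance
err = np.sqrt(np.diag(cov))                      # 1-sigma errors on p0, p1
```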

  5. Evaluating Descent and Ascent Trajectories Near Non-Spherical Bodies

    NASA Technical Reports Server (NTRS)

    Werner, Robert A.

    2010-01-01

    Spacecraft landing on small bodies pass through regions where conventional gravitation formulations using exterior spherical harmonics are inaccurate. An investigation shows that a formulation using interior solid spherical harmonics might be satisfactory. Interior spherical harmonic expansions are usable inside an imaginary, empty sphere. For this application, such a sphere could be positioned in empty space above the intended landing site, rotating with the body. When the spacecraft is inside this sphere, the interior harmonic expansion would be used instead of the conventional, exterior harmonic expansion. Coefficients can be determined by a least-squares fit to gravitation measurements synthesized from conventional formulations. Because interior expansions are unfamiliar, recurrences for interior as well as exterior expansions are derived. Hotine's technique for partial derivatives of exterior spherical harmonics is extended to interior harmonics.

  6. Excess junction current of silicon solar cells

    NASA Technical Reports Server (NTRS)

    Wang, E. Y.; Legge, R. N.; Christidis, N.

    1973-01-01

    The current-voltage characteristics of n(plus)-p silicon solar cells with 0.1, 1.0, 2.0, and 10 ohm-cm p-type base materials have been examined in detail. In addition to the usual I-V measurements, we have studied the temperature dependence of the slope of the I-V curve at the origin by the lock-in technique. The excess junction current coefficient (Iq) deduced from the slope at the origin depends on the square root of the intrinsic carrier concentration. The Iq obtained from the I-V curve fitting over the entire forward bias region at various temperatures shows the same temperature dependence. This result, in addition to the presence of an aging effect, suggests that the surface channel effect is the dominant cause of the excess junction current.

  7. Determination of Diffusion Parameters of CO2 Through Microporous PTFE Using a Potentiometric Method

    NASA Astrophysics Data System (ADS)

    Tarsiche, I.; Ciurchea, D.

    Diffusion coefficients Dk for CO2 through microporous PTFE, ranging from 1 to 7 × 10^-7 cm^2 s^-1 in the concentration range from 4 × 10^-4 to 0.22 g/l CO2, are determined using a simple, fast and reliable potentiometric method. The method is based on least-squares fitting of the potential-versus-time response of a self-made, CO2-sensitive Severinghaus-type sensor with PTFE as the gas-permeable membrane. The obtained results are in good agreement with other reported literature data, both experimental and calculated (the latter using molecular dynamics simulations). The proposed technique is very sensitive, especially at low gas concentrations, and may also be used for the study of other polymeric membranes.
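    The least-squares step, fitting the sensor's potential-versus-time response, might be sketched as below with a simple exponential relaxation model; the functional form and every parameter value here are assumptions for illustration, not the paper's actual model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical response model E(t) = E_inf + dE*exp(-t/tau); tau would encode
# the membrane diffusion parameters. All values are invented for illustration.
def response(t, E_inf, dE, tau):
    return E_inf + dE * np.exp(-t / tau)

t = np.linspace(0.0, 300.0, 100)                      # time, s
rng = np.random.default_rng(5)
E = response(t, 50.0, -30.0, 60.0) + rng.normal(0.0, 0.2, t.size)

popt, pcov = curve_fit(response, t, E, p0=[40.0, -20.0, 50.0])
```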

  8. How do physicians become medical experts? A test of three competing theories: distinct domains, independent influence and encapsulation models.

    PubMed

    Violato, Claudio; Gao, Hong; O'Brien, Mary Claire; Grier, David; Shen, E

    2018-05-01

    The distinction between basic sciences and clinical knowledge, which has led to a theoretical debate on how medical expertise develops, has implications for medical school and lifelong medical education. This longitudinal, population-based observational study was conducted to test the fit of three theories of the development of medical expertise (knowledge encapsulation, independent influence, and distinct domains) employing structural equation modelling. Data were collected from 548 physicians (292 men, 53.3%; 256 women, 46.7%; mean age = 24.2 years on admission) who graduated from medical school in 2009-2014. They included (1) admissions data of undergraduate grade point average and Medical College Admission Test sub-test scores, (2) course performance data from years 1, 2, and 3 of medical school, and (3) performance on the NBME exams (i.e., Step 1, Step 2 CK, and Step 3). Statistical fit indices (Goodness of Fit Index, GFI; standardized root mean squared residual, SRMR; root mean squared error of approximation, RMSEA) and comparative fit [Formula: see text] of the three theories of cognitive development of medical expertise were used to assess model fit. There is support for the knowledge encapsulation three-factor model of clinical competency (GFI = 0.973, SRMR = 0.043, RMSEA = 0.063), which had superior fit indices to both the independent influence and distinct domains theories.

  9. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares) + AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor in the extrapolation interval; and the LS fitting residual sequence is non-linear, so it is unsuitable to establish an AR model for the residual sequence to be forecasted based on the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, constraints are added at the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Second, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the modeling object of the AR residual forecast. Calculation examples show that this solution effectively improves the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparison with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirms the feasibility and effectiveness of the solution. The results, especially for polar motion forecasts at 1-10 days, show that the forecast accuracy of the proposed model reaches the world level.
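    A minimal LS+AR sketch on a toy series (the trend basis, AR order, and all numbers are assumptions; the paper's implementation with endpoint constraints is more elaborate):

```python
import numpy as np

# LS+AR sketch: fit a deterministic trend by least squares, model the fit
# residuals with an AR(1) process estimated from their lag-1 autocorrelation,
# and forecast as trend extrapolation plus AR-predicted residual.
rng = np.random.default_rng(2)
t = np.arange(200.0)
series = 0.1 + 0.002 * t + np.sin(2 * np.pi * t / 50.0)
noise = np.zeros(t.size)
for k in range(1, t.size):                 # correlated residual process
    noise[k] = 0.8 * noise[k - 1] + rng.normal(0.0, 0.01)
series = series + noise

# LS trend: intercept, slope, and one periodic term of known period
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 50.0), np.cos(2 * np.pi * t / 50.0)])
coef, *_ = np.linalg.lstsq(X, series, rcond=None)
resid = series - X @ coef

phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])  # AR(1)

# One-step-ahead forecast at t = 200
t_next = 200.0
x_next = np.array([1.0, t_next,
                   np.sin(2 * np.pi * t_next / 50.0),
                   np.cos(2 * np.pi * t_next / 50.0)])
forecast = x_next @ coef + phi * resid[-1]
```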

  10. Airborne geoid mapping of land and sea areas of East Malaysia

    NASA Astrophysics Data System (ADS)

    Jamil, H.; Kadir, M.; Forsberg, R.; Olesen, A.; Isa, M. N.; Rasidi, S.; Mohamed, A.; Chihat, Z.; Nielsen, E.; Majid, F.; Talib, K.; Aman, S.

    2017-02-01

    This paper describes the development of a new geoid-based vertical datum from airborne gravity data by the Department of Survey and Mapping Malaysia, on land and in the South China Sea off the coast of the East Malaysia region, covering an area of about 610,000 square kilometres. More than 107,000 line-km of airborne gravity data over land and marine areas of East Malaysia, acquired during a 2014-2016 airborne survey, have been combined to provide seamless land-to-sea gravity field coverage with an estimated accuracy better than 2.0 mGal. The iMAR-IMU-processed gravity anomaly data were used to extend a composite gravity solution across a number of minor gaps on selected lines, using a draping technique. The geoid computations were all done with the GRAVSOFT suite of programs from DTU-Space; EGM2008, augmented with a GOCE spherical harmonic model, was used to spherical harmonic degree N = 720. The gravimetric geoid was first tied at one tide gauge (Kota Kinabalu, KK2019) to produce a fitted geoid, my_geoid2017_fit_kk, offset from the gravimetric geoid by +0.852 m based on the comparison at tide-gauge benchmark KK2019. Orthometric heights at the six other tide-gauge stations were then computed as HGPS-Lev = hGPS - N_my_geoid2017_fit_kk. Comparison of the conventional levelled heights (HLev) and GPS-levelling heights (HGPS-Lev) at the six tide-gauge locations indicates an RMS height difference of 2.6 cm. The final gravimetric geoid was fitted to the seven tide-gauge stations and is known as my_geoid2017_fit_east. Its accuracy is estimated to be better than 5 cm across most of the East Malaysia land and marine areas.

  11. Phase analysis for three-dimensional surface reconstruction of apples using structured-illumination reflectance imaging

    NASA Astrophysics Data System (ADS)

    Lu, Yuzhen; Lu, Renfu

    2017-05-01

    Three-dimensional (3-D) shape information is valuable for fruit quality evaluation. This study was aimed at developing phase analysis techniques for reconstruction of the 3-D surface of fruit from the pattern images acquired by a structured-illumination reflectance imaging (SIRI) system. Phase-shifted sinusoidal patterns, distorted by the fruit geometry, were acquired and processed through phase demodulation, phase unwrapping and other post-processing procedures to obtain phase difference maps relative to the phase of a reference plane. The phase maps were then transformed into height profiles and 3-D shapes in a world coordinate system based on phase-to-height and in-plane calibrations. A reference plane-based approach, coupled with curve fitting using polynomials of order 3 or higher, was utilized for phase-to-height calibrations, which achieved superior accuracies with root-mean-squared errors (RMSEs) of 0.027-0.033 mm for a height measurement range of 0-91 mm. The 3rd-order polynomial curve fitting technique was further tested on two reference blocks with known heights, resulting in relative errors of 3.75% and 4.16%. In-plane calibrations were performed by solving a linear system formed by a number of control points in a calibration object, which yielded an RMSE of 0.311 mm. Tests of the calibrated system for reconstructing the surface of apple samples showed that surface concavities (i.e., stem/calyx regions) could be easily discriminated from bruises in the phase difference maps, reconstructed height profiles and the 3-D shape of apples. This study has laid a foundation for using SIRI for 3-D shape measurement, and thus expanded the capability of the technique for quality evaluation of horticultural products. Further research is needed to utilize the phase analysis techniques for stem/calyx detection of apples, and to optimize the phase demodulation and unwrapping algorithms for faster and more reliable detection.
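    The phase-to-height calibration described above, a 3rd-order polynomial fit judged by its RMSE, can be sketched generically (synthetic phase and height data, not the SIRI calibration itself):

```python
import numpy as np

# Fit a 3rd-order polynomial mapping phase difference to height and report
# the root-mean-squared error of the fit. All data are synthetic.
rng = np.random.default_rng(3)
phase = np.linspace(0.0, 20.0, 60)                        # phase difference, rad
height_true = 0.5 * phase + 0.02 * phase**2 - 0.0005 * phase**3
height_meas = height_true + rng.normal(0.0, 0.03, phase.size)

p = np.polyfit(phase, height_meas, 3)                     # calibration polynomial
rmse = np.sqrt(np.mean((np.polyval(p, phase) - height_meas) ** 2))
```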

  12. In vivo dosimetry with optically stimulated luminescent dosimeters for conformal and intensity-modulated radiation therapy: A 2-year multicenter cohort study.

    PubMed

    Riegel, Adam C; Chen, Yu; Kapur, Ajay; Apicello, Laura; Kuruvilla, Abraham; Rea, Anthony J; Jamshidi, Abolghassem; Potters, Louis

    Optically stimulated luminescent dosimeters (OSLDs) are utilized for in vivo dosimetry (IVD) of modern radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), for which the dosimetric precision achieved with conventional techniques may not be attainable. In this work, we measured accuracy and precision for a large sample of clinical OSLD-based IVD measurements. Weekly IVD measurements were collected from 4 linear accelerators for 2 years and were expressed as percent differences from planned doses. After outlier analysis, 10,224 measurements were grouped in the following way: overall, modality (photons, electrons), treatment technique (3-dimensional [3D] conformal, field-in-field intensity modulation, inverse-planned IMRT, and VMAT), placement location (gantry angle, cardinality, and central axis positioning), and anatomical site (prostate, breast, head and neck, pelvis, lung, rectum and anus, brain, abdomen, esophagus, and bladder). Distributions were modeled via a Gaussian function. Fitting was performed with least squares, and goodness-of-fit was assessed with the coefficient of determination. Model means (μ) and standard deviations (σ) were calculated. Sample means and variances were compared for statistical significance by analysis of variance and the Levene tests (α = 0.05). Overall, μ ± σ was 0.3 ± 10.3%. Precision for electron measurements (6.9%) was significantly better than for photons (10.5%). Precision varied significantly among treatment techniques (P < .0001), with field-in-field lowest (σ = 7.2%) and IMRT and VMAT highest (σ = 11.9% and 13.4%, respectively). Treatment site models with goodness-of-fit greater than 0.90 (6 of 10) yielded accuracy within ±3%, except for head and neck (μ = -3.7%). Precision varied with treatment site (range, 7.3%-13.0%), with breast and head and neck yielding the best and worst precision, respectively. 
Placement on the central axis of cardinal gantry angles yielded more precise results (σ = 8.5%) compared with other locations (range, 10.5%-11.4%). Accuracy of ±3% was achievable. Precision ranged from 6.9% to 13.4% depending on modality, technique, and treatment site. Simple, standardized locations may improve IVD precision. These findings may aid development of patient-specific tolerances for OSLD-based IVD. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  13. Landslide susceptibility mapping using decision-tree based chi-squared automatic interaction detection (CHAID) and logistic regression (LR) integration

    NASA Astrophysics Data System (ADS)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. The algorithm was applied to overcome the subjectivity of manually categorizing the scale data of landslide conditioning factors, and to produce a rainfall-induced landslide susceptibility map of Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to find the best classification fit for each conditioning factor and then to combine it with logistic regression (LR). The LR model was used to find the coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations had been extracted in a previous study using the nearest neighbour index (NNI), and was used to identify the range of clustered landslide locations. The clustered locations were used as model training data together with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between the conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping, and it provides a valuable scientific basis for spatial decision making in planning and urban management studies.
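    The Pearson chi-squared step, testing whether a categorized conditioning factor is associated with landslide occurrence, might look like this on a toy contingency table (all counts invented):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: landslide / non-landslide cells; columns: three hypothetical classes
# of one conditioning factor (e.g. slope-angle bins). Counts are invented.
table = np.array([[30, 45, 80],
                  [170, 155, 120]])
chi2, p, dof, expected = chi2_contingency(table)
# A small p-value suggests this categorization discriminates landslide cells.
```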

  14. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least-squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  15. Least-Squares Approximation of an Improper by a Proper Correlation Matrix Using a Semi-Infinite Convex Program. Research Report 87-7.

    ERIC Educational Resources Information Center

    Knol, Dirk L.; ten Berge, Jos M. F.

    An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977). It is shown that the minimization problem…

  16. New method for propagating the square root covariance matrix in triangular form. [using Kalman-Bucy filter

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.; Tapley, B. D.

    1975-01-01

    A method proposed by Potter for applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique that propagates the covariance square-root matrix in lower triangular form is given for the discrete-observation case. The technique is faster than previously proposed algorithms and is well adapted for use with the Carlson square-root measurement algorithm.
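    A standard QR-based way to propagate a triangular covariance square root, in the spirit of (though not necessarily identical to) the algorithm described:

```python
import numpy as np

# Square-root time update: given lower-triangular S with P = S S^T, propagate
# to P' = F P F^T + Q without ever forming P explicitly, by QR-factorizing a
# stacked pre-array. This is a generic textbook formulation.
def sqrt_propagate(S, F, Q_sqrt):
    """Return lower-triangular S' with S' S'^T = F S S^T F^T + Q_sqrt Q_sqrt^T."""
    A = np.vstack([(F @ S).T, Q_sqrt.T])     # (2n x n) pre-array
    R = np.linalg.qr(A, mode='r')            # upper-triangular factor, R^T R = A^T A
    S_new = R.T                              # lower triangular
    signs = np.sign(np.diag(S_new))          # fix QR sign ambiguity
    signs[signs == 0] = 1.0
    return S_new * signs

F = np.array([[1.0, 1.0], [0.0, 1.0]])       # simple dynamics matrix
P = np.array([[2.0, 0.5], [0.5, 1.0]])
S = np.linalg.cholesky(P)
Q = np.diag([0.1, 0.2])
S_new = sqrt_propagate(S, F, np.linalg.cholesky(Q))
P_new = S_new @ S_new.T                      # equals F P F^T + Q
```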

  17. Nutritional Status of Rural Older Adults Is Linked to Physical and Emotional Health.

    PubMed

    Jung, Seung Eun; Bishop, Alex J; Kim, Minjung; Hermann, Janice; Kim, Giyeon; Lawrence, Jeannine

    2017-06-01

    Although nutritional status is influenced by multidimensional aspects encompassing physical and emotional well-being, there is limited research on this complex relationship. The purpose of this study was to examine the interplay between indicators of physical health (perceived health status and self-care capacity) and emotional well-being (depressive affect and loneliness) on rural older adults' nutritional status. The cross-sectional study was conducted from June 1, 2007, to June 1, 2008. A total of 171 community-dwelling older adults, aged 65 years and older, residing within nonmetro rural communities in the United States participated in this study. Participants completed validated instruments measuring self-care capacity, perceived health status, loneliness, depressive affect, and nutritional status. Structural equation modeling was employed to investigate the complex interplay of physical and emotional health status with nutritional status among rural older adults. The χ² test, comparative fit index, root mean square error of approximation, and standardized root mean square residual were used to assess model fit. The χ² test and the other model fit indexes showed the hypothesized structural equation model provided a good fit to the data (χ²(2)=2.15; P=0.34; comparative fit index=1.00; root mean square error of approximation=0.02; and standardized root mean square residual=0.03). Self-care capacity was significantly related with depressive affect (γ=-0.11; P=0.03), whereas self-care capacity was not significantly related with loneliness. Perceived health status had a significant negative relationship with both loneliness (γ=-0.16; P=0.03) and depressive affect (γ=-0.22; P=0.03). Although loneliness showed no significant direct relationship with nutritional status, it showed a significant direct relationship with depressive affect (β=0.4; P<0.01). 
Finally, the results demonstrated that depressive affect had a significant negative relationship with nutritional status (β=-.30; P<0.01). The results indicated physical health and emotional indicators have significant multidimensional associations with nutritional status among rural older adults. The present study provides insights into the importance of addressing both physical and emotional well-being together to reduce potential effects of poor emotional well-being on nutritional status, particularly among rural older adults with impaired physical health and self-care capacity. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  18. Applied Algebra: The Modeling Technique of Least Squares

    ERIC Educational Resources Information Center

    Zelkowski, Jeremy; Mayes, Robert

    2008-01-01

    The article focuses on engaging students in algebra through modeling real-world problems. The technique of least squares is explored, encouraging students to develop a deeper understanding of the method. (Contains 2 figures and a bibliography.)
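    A classroom-sized illustration of the technique, with toy data and the normal equations written out:

```python
import numpy as np

# Find the line y = m*x + c minimizing the sum of squared residuals by
# solving the normal equations X^T X beta = X^T y. Toy data of my own.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])     # roughly y = 2x
X = np.column_stack([x, np.ones_like(x)])
beta = np.linalg.solve(X.T @ X, X.T @ y)    # beta = [m, c]
```

    For these points the normal equations give m = 1.96 and c = 0.14, which students can verify by hand from the means and the cross-products.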

  19. Development of the Chinese version of the Hospital Autonomy Questionnaire: a cross-sectional study in Guangdong Province

    PubMed Central

    Liu, Zifeng; Yuan, Lianxiong; Huang, Yixiang; Zhang, Lingling; Luo, Futian

    2016-01-01

    Objective We aimed to develop a questionnaire for quantitative evaluation of the autonomy of public hospitals in China. Method An extensive literature review was conducted to select possible items for inclusion in the questionnaire, which was then reviewed by 5 experts. After a two-round Delphi method, we distributed the questionnaire to 404 secondary and tertiary hospitals in Guangdong Province, China, and 379 completed questionnaires were collected. The final questionnaire was then developed on the basis of the results of exploratory and confirmatory factor analysis. Results Analysis suggested that all internal consistency reliabilities exceeded the minimum reliability standard of 0.70 for the α coefficient. The overall scale coefficient was 0.87, and the 6 subscale coefficients were 0.92 (strategic management), 0.81 (budget and expenditure), 0.85 (financing), 0.75 (medical management), 0.86 (human resources) and 0.86 (accountability). Correlation coefficients between and among items and their hypothesised subscales were higher than those with other subscales. The value of average variance extracted (AVE) was higher than 0.5, the value of construct reliability (CR) was higher than 0.7, and the square roots of the AVE of each subscale were larger than the correlation of the specific subscale with the other subscales, supporting the convergent and discriminant validity of the Chinese version of the Hospital Autonomy Questionnaire (CVHAQ). The model fit indices were all acceptable: χ2/df=1.73, Goodness of Fit Index (GFI) = 0.93, Adjusted Goodness of Fit Index (AGFI) = 0.91, Non-Normed Fit Index (NNFI) = 0.96, Comparative Fit Index (CFI) = 0.97, Root Mean Square Error of Approximation (RMSEA) = 0.04, Standardised Root Mean Square Residual (SRMR) = 0.07. Conclusions This study demonstrated the reliability and validity of the CVHAQ and provides a quantitative method for the assessment of hospital autonomy. PMID:26911587

  20. The Effect of a History-Fitness Updating Rule on Evolutionary Games

    NASA Astrophysics Data System (ADS)

    Du, Wen-Bo; Cao, Xian-Bin; Liu, Run-Ran; Jia, Chun-Xiao

    In this paper, we introduce a history-fitness-based updating rule into the evolutionary prisoner's dilemma game (PDG) on square lattices, and study how it affects the evolution of the cooperation level. Under this updating rule, player i first selects a player j from its direct neighbors at random and then compares their fitness, which is determined by the current payoff and the history fitness. If player i's fitness is larger than that of j, player i is more likely to keep its own strategy. Numerical results show that the cooperation level is remarkably promoted by the history-fitness-based updating rule. Moreover, there exists a moderate mixing proportion of current payoff and history fitness that induces the optimal fitness, where the highest cooperation level is obtained. Our work may shed new light on the ubiquitous cooperative behaviors in nature and society induced by the history factor.

  1. Eliminating the blood-flow confounding effect in intravoxel incoherent motion (IVIM) using the non-negative least square analysis in liver.

    PubMed

    Gambarota, Giulio; Hitti, Eric; Leporq, Benjamin; Saint-Jalmes, Hervé; Beuf, Olivier

    2017-01-01

    Tissue perfusion measurements using intravoxel incoherent motion (IVIM) diffusion-MRI are of interest for investigations of liver pathologies. A confounding factor in the perfusion quantification is the partial volume between liver tissue and large blood vessels. The aim of this study was to assess and correct for this partial volume effect in the estimation of the perfusion fraction. MRI experiments were performed at 3 Tesla with a diffusion-MRI sequence at 12 b-values. Diffusion signal decays in liver were analyzed using the non-negative least squares (NNLS) method and the biexponential fitting approach. In some voxels, the NNLS analysis yielded a very fast-decaying component that was assigned to partial volume with the blood flowing in large vessels. Partial volume correction was performed by biexponential curve fitting, where the first data point (b = 0 s/mm^2) was eliminated in voxels with a very fast-decaying component. Biexponential fitting with partial volume correction yielded parametric maps with perfusion fraction values smaller than biexponential fitting without partial volume correction. The results of the current study indicate that the NNLS analysis in combination with biexponential curve fitting allows correction for partial volume effects originating from blood flow in IVIM perfusion fraction measurements. Magn Reson Med 77:310-317, 2017. © 2016 Wiley Periodicals, Inc.
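    An NNLS decay analysis of this kind can be sketched as follows; the two-compartment signal and the diffusion values are synthetic illustrations, not the study's data:

```python
import numpy as np
from scipy.optimize import nnls

# Expand a diffusion decay on a dictionary of exponentials; non-negativity
# lets NNLS pick out fast (pseudo-perfusion) and slow (tissue) components.
b = np.array([0., 10., 20., 40., 80., 150., 300., 500.,
              700., 900., 1100., 1200.])         # 12 b-values, s/mm^2
signal = 0.10 * np.exp(-b * 1.5e-2) + 0.90 * np.exp(-b * 1.0e-3)  # synthetic

D_grid = np.logspace(-4, -1, 80)                 # candidate diffusion coeffs, mm^2/s
A = np.exp(-b[:, None] * D_grid[None, :])        # dictionary of decays
amps, rnorm = nnls(A, signal)

# Components decaying faster than an (arbitrary) 5e-3 mm^2/s cutoff are
# attributed to the fast compartment.
fast_fraction = amps[D_grid > 5e-3].sum() / amps.sum()
```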

  2. Systemic inflammatory markers and sources of social support among older adults in the Memory Research Unit cohort.

    PubMed

    McHugh Power, Joanna; Carney, Sile; Hannigan, Caoimhe; Brennan, Sabina; Wolfe, Hannah; Lynch, Marina; Kee, Frank; Lawlor, Brian

    2016-11-01

    Potential associations between systemic inflammation and social support received by a sample of 120 older adults were examined. Inflammatory markers, cognitive function, social support and psychosocial wellbeing were evaluated. A structural equation modelling approach was used to analyse the data. The model was a good fit ([Formula: see text], p < 0.001; comparative fit index = 0.973; Tucker-Lewis Index = 0.962; root mean square error of approximation = 0.021; standardised root mean-square residual = 0.074). Chemokine levels were associated with increased age (β = 0.276), receipt of less social support from friends (β = -0.256) and body mass index (β = -0.256). Results are discussed in relation to social signal transduction theory.

  3. Development and use of Fourier self deconvolution and curve-fitting in the study of coal oxidation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, J.A.

    1986-01-01

    Techniques have been developed for modeling highly overlapped band multiplets. The method is based on a least-squares fit of spectra by a series of bands of known shape. Using synthetic spectra, it was shown that when bands are separated by less than their full width at half height (FWHH), valid analytical data can only be obtained after the width of each component band is narrowed by Fourier self deconvolution (FSD). The optimum method of spectral fitting determined from the study of synthetic spectra was then applied to the characterization of oxidized coals. A medium-volatile bituminous coal, which was air oxidized at 200°C for different lengths of time, was extracted with chloroform. A comparison of the infrared spectra of the whole coal and the extract indicated that the extracted material contains a smaller amount of carbonyl, ether, and ester groups, while its aromatic content is much higher. Oxidation does not significantly affect the aromatic content of the whole coal. Most of the aromatic groups in the CHCl3 extract show evidence of reaction, however. The production of relatively large amounts of intramolecular aromatic anhydrides is seen in the spectrum of the extract of coals which have undergone extensive oxidation, while there is only a slight indication of this anhydride in the whole coal.

  4. Prior-knowledge Fitting of Accelerated Five-dimensional Echo Planar J-resolved Spectroscopic Imaging: Effect of Nonlinear Reconstruction on Quantitation.

    PubMed

    Iqbal, Zohaib; Wilson, Neil E; Thomas, M Albert

    2017-07-24

    1H Magnetic Resonance Spectroscopic imaging (SI) is a powerful tool capable of investigating metabolism in vivo from multiple regions. However, SI techniques are time consuming, and are therefore difficult to implement clinically. By applying non-uniform sampling (NUS) and compressed sensing (CS) reconstruction, it is possible to accelerate these scans while retaining key spectral information. One recently developed method that utilizes this type of acceleration is the five-dimensional echo planar J-resolved spectroscopic imaging (5D EP-JRESI) sequence, which is capable of obtaining two-dimensional (2D) spectra from three spatial dimensions. The prior-knowledge fitting (ProFit) algorithm is typically used to quantify 2D spectra in vivo, however the effects of NUS and CS reconstruction on the quantitation results are unknown. This study utilized a simulated brain phantom to investigate the errors introduced through the acceleration methods. Errors (normalized root mean square error >15%) were found between metabolite concentrations after twelve-fold acceleration for several low-concentration (<2 mM) metabolites. The Cramér Rao lower bound% (CRLB%) values, which are typically used for quality control, were not reflective of the increased quantitation error arising from acceleration. Finally, occipital white (OWM) and gray (OGM) human brain matter were quantified in vivo using the 5D EP-JRESI sequence with eight-fold acceleration.

  5. Application of least median of squared orthogonal distance (LMD) and LMD-based reweighted least squares (RLS) methods on the stock-recruitment relationship

    NASA Astrophysics Data System (ADS)

    Wang, Yan-Jun; Liu, Qun

    1999-03-01

    Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distance (LMD), which is insensitive to abnormal values in the dependent and independent variables in a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with defined outliers being down weighted. The application of LMD and LMD-based Reweighted Least Squares (RLS) method to simulated and real fisheries SR data is explored.
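For a straight line, the LMD idea can be sketched in a few lines of numpy: candidate lines drawn from random point pairs are scored by the median of their squared orthogonal distances, which makes the fit insensitive to gross outliers. The elemental-subset search below is an illustrative strategy, not the authors' algorithm, and the paper's SR curves are nonlinear rather than straight lines:

```python
import numpy as np

def lmd_line_fit(x, y, n_trials=2000, seed=0):
    """Least median of squared orthogonal distances for a line y = a + b*x.
    Candidate lines come from random point pairs (elemental subsets)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    best_a, best_b, best_med = 0.0, 0.0, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:
            continue
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        # squared orthogonal distance of every point to the candidate line
        d2 = (y - a - b * x) ** 2 / (1.0 + b * b)
        med = np.median(d2)
        if med < best_med:
            best_a, best_b, best_med = a, b, med
    return best_a, best_b

# noisy linear data with 10% gross outliers (illustrative, not SR data)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, 50)
y[:5] += 10.0  # abnormal values
a, b = lmd_line_fit(x, y)
```

Points whose residuals from the LMD line are far outside the residual scale of the bulk can then be flagged and down-weighted before the final reweighted least-squares pass.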

  6. An Item Fit Statistic Based on Pseudocounts from the Generalized Graded Unfolding Model: A Preliminary Report.

    ERIC Educational Resources Information Center

    Roberts, James S.

    Stone and colleagues (C. Stone, R. Ankenman, S. Lane, and M. Liu, 1993; C. Stone, R. Mislevy and J. Mazzeo, 1994; C. Stone, 2000) have proposed a fit index that explicitly accounts for the measurement error inherent in an estimated theta value, here called χ²_i*. The elements of this statistic are natural…

  7. Scanning electron microscopic and histologic evaluation of the AcrySof SA30AL acrylic intraocular lens. Manufacturing quality and morphology in the capsular bag.

    PubMed

    Escobar-Gomez, Marcela; Apple, David J; Vargas, Luis G; Werner, Liliana; Arthur, Stella N; Pandey, Suresh K; Izak, Andrea M; Schmidbauer, Josef M

    2003-01-01

    To evaluate the properties of the AcrySof(R) SA30AL (Alcon Laboratories, Inc.) single-piece foldable posterior chamber intraocular lens (IOL). Center for Research on Ocular Therapeutics and Biodevices, Storm Eye Institute, Medical University of South Carolina, Charleston, South Carolina, USA. Two nonimplanted clinical-quality AcrySof IOLs were examined by gross, light, and scanning electron microscopy (SEM). In addition, 2 eyes implanted with this IOL obtained post-mortem, the first such eyes accessioned in our laboratory and the first described to date, were examined using the Miyake-Apple posterior photographic technique and by histologic sections. Scanning electron microscopy of the SA30AL IOL showed excellent surface finish. The edge of the optic was square (truncated) and had a matte (velvet or ground-glass) appearance, a feature that may minimize edge glare and other visual phenomena. A well-fabricated square or truncated optic edge was demonstrated. Miyake-Apple analysis revealed that the SA30AL IOL showed appropriate fit and configuration within the capsular bag. Histologic correlation of the IOL's square edge and its relation to the capsular bag and adjacent Soemmering's ring were noted. The AcrySof SA30AL IOL is a well-fabricated lens that situates well in the capsular bag. The truncated optic and its relationship to adjacent structures show a morphological profile that has been shown to be highly efficacious in reducing the rate of posterior capsule opacification.

  8. A regression-kriging model for estimation of rainfall in the Laohahe basin

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Ren, Li L.; Liu, Gao H.

    2009-10-01

    This paper presents a multivariate geostatistical algorithm called regression-kriging (RK) for predicting the spatial distribution of rainfall by incorporating five topographic/geographic factors: latitude, longitude, altitude, slope and aspect. The technique is illustrated using rainfall data collected at 52 rain gauges from the Laohahe basin in northeast China during 1986-2005. Rainfall data from 44 stations were selected for modeling and the remaining 8 stations were used for model validation. To eliminate multicollinearity, the five explanatory factors were first transformed using factor analysis, with three principal components (PCs) extracted. The rainfall data were then fitted using step-wise regression and the residuals interpolated using simple kriging (SK). The regression coefficients were estimated by generalized least squares (GLS), which takes the spatial heteroskedasticity between rainfall and the PCs into account. Finally, the rainfall prediction based on RK was compared with that predicted from ordinary kriging (OK) and ordinary least squares (OLS) multiple regression (MR). Because correlated topographic factors are taken into account, RK improves the efficiency of predictions. RK achieved a lower relative root mean square error (RMSE) (44.67%) than MR (49.23%) and OK (73.60%) and a lower bias than MR and OK (23.82 versus 30.89 and 32.15 mm) for annual rainfall. It is much more effective for the wet season than for the dry season. RK is suitable for estimation of rainfall in areas where there are no stations nearby and where topography has a major influence on rainfall.

  9. Random Initialisation of the Spectral Variables: an Alternate Approach for Initiating Multivariate Curve Resolution Alternating Least Square (MCR-ALS) Analysis.

    PubMed

    Kumar, Keshav

    2017-11-01

    Multivariate curve resolution alternating least square (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least square (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; in principle, however, this approach requires that the data belong to a sequential process. The initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm; this approach demands that each factor have pure variables in the data sets. Despite these limitations, the existing approaches have been quite successful for initiating MCR-ALS analysis. The present work proposes an alternate approach: initialising the spectral variables by generating random variables within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach does not require that there be pure variables for each component of the multicomponent system, or that the concentration direction follow a sequential process. The proposed approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for certain fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores are found to correlate well with the experimental results. In summary, the present work proposes an alternate way to initiate MCR-ALS analysis.
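A minimal numpy sketch of the proposed initialisation (random spectral profiles drawn within the per-variable min/max limits of the data), followed by a plain non-negativity-constrained ALS loop; the function and variable names are illustrative, and the clip-based projection is a simplification of practical MCR-ALS constraint handling:

```python
import numpy as np

def mcr_als_random_init(D, n_comp, n_iter=300, seed=0):
    """D: (samples, spectral variables). Returns contribution profiles C
    and spectral profiles S such that D ≈ C @ S, with C, S >= 0."""
    rng = np.random.default_rng(seed)
    # random initialisation within the limits spanned by the minima and
    # maxima of each spectral variable, as proposed above
    S = rng.uniform(D.min(axis=0), D.max(axis=0), size=(n_comp, D.shape[1]))
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 1e-12, None)  # contribution step
        S = np.clip(np.linalg.pinv(C) @ D, 1e-12, None)  # spectral step
    return C, S

# synthetic two-component data with overlapping Gaussian "spectra"
ax = np.linspace(0.0, 1.0, 80)
S_true = np.vstack([np.exp(-(ax - 0.4) ** 2 / 0.01),
                    np.exp(-(ax - 0.6) ** 2 / 0.01)])
C_true = np.abs(np.random.default_rng(1).normal(1.0, 0.3, (30, 2)))
D = C_true @ S_true
C_fit, S_fit = mcr_als_random_init(D, 2)
```

Note that no pure variables and no sequential concentration direction are assumed anywhere in the initialisation.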

  10. The mechanism of ΔT variation in coupled heat transfer and phase transformation for elastocaloric materials and its application in materials characterization

    NASA Astrophysics Data System (ADS)

    Qian, Suxin; Yuan, Lifen; Yu, Jianlin; Yan, Gang

    2017-11-01

    Elastocaloric cooling is a promising, environmentally friendly candidate with substantial energy-saving potential as the next-generation cooling technology for air-conditioning, refrigeration, and electronic cooling applications. The temperature change (ΔT) of elastocaloric materials is a direct measure of their elastocaloric effect, which scales proportionally with the cooling performance of devices based on this phenomenon. Here, the underlying physics relating the measured ΔT and the adiabatic temperature span ΔTad is revealed by theoretical investigation of the simplified energy equation describing the coupled simultaneous heat transfer and phase transformation processes. The revealed relation for ΔT is a simple and symmetric non-linear function, which requires the introduction of an important dimensionless number Φ, defined as the ratio between the convective heat transfer energy and the variation of internal energy of the material. The theory was supported by more than 100 data points from the open literature for four different material compositions. Based on the theory, a data sampling and reduction technique was proposed to assist future material characterization studies. Instead of approaching ΔTad by applying an ultrafast strain rate, as done previously, the proposed prediction of ΔTad is based on non-linear least squares fitting of ΔT measured at different strain rates within a moderate range. Numerical case studies indicated that the uncertainty associated with the proposed method is within ±1 K if the sampled data satisfy two conditions. In addition, the heat transfer coefficient can be estimated as a by-product of the least squares fitting method proposed in this study.
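Since the paper's symmetric non-linear function is not reproduced in the abstract, the extrapolation idea can be illustrated with a hypothetical saturating model ΔT(r) = ΔTad · r/(r + c): fit ΔT measured at moderate strain rates r, then read off the adiabatic limit ΔTad as a fitted parameter (all numbers and the model form below are assumptions for illustration):

```python
import numpy as np

# synthetic measurements of dT at moderate strain rates (made-up values)
rng = np.random.default_rng(0)
rates = np.array([0.05, 0.1, 0.2, 0.4, 0.8])       # strain rates (1/s)
dT_ad_true, c_true = 8.0, 0.1
dT = dT_ad_true * rates / (rates + c_true) + rng.normal(0.0, 0.05, rates.size)

# nonlinear least squares: 1-D grid search over the shape parameter c,
# with the amplitude dT_ad solved in closed form at each c
best = (np.inf, None, None)
for c in np.linspace(0.01, 1.0, 500):
    basis = rates / (rates + c)
    amp = (basis @ dT) / (basis @ basis)            # optimal amplitude for this c
    sse = ((dT - amp * basis) ** 2).sum()
    if sse < best[0]:
        best = (sse, amp, c)
_, dT_ad_est, c_est = best
```

The fitted amplitude plays the role of the extrapolated adiabatic temperature span, avoiding the ultrafast-strain-rate measurement entirely.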

  11. Geostatistical interpolation of hourly precipitation from rain gauges and radar for a large-scale extreme rainfall event

    NASA Astrophysics Data System (ADS)

    Haberlandt, Uwe

    2007-01-01

    The methods kriging with external drift (KED) and indicator kriging with external drift (IKED) are used for the spatial interpolation of hourly rainfall from rain gauges using additional information from radar, daily precipitation of a denser network, and elevation. The techniques are illustrated using data from the storm period of the 10th to the 13th of August 2002 that led to the extreme flood event in the Elbe river basin in Germany. Cross-validation is applied to compare the interpolation performance of the KED and IKED methods using different additional information with the univariate reference methods nearest neighbour (NN) or Thiessen polygons, inverse square distance weighting (IDW), ordinary kriging (OK) and ordinary indicator kriging (IK). Special attention is given to the analysis of the impact of the semivariogram estimation on the interpolation performance. Hourly and average semivariograms are inferred from daily, hourly and radar data considering either isotropic or anisotropic behaviour using automatic and manual fitting procedures. The multivariate methods KED and IKED clearly outperform the univariate ones, with the most important additional information being radar, followed by precipitation from the daily network and elevation, which plays only a secondary role here. The best performance is achieved when all additional information is used simultaneously with KED. The indicator-based kriging methods provide, in some cases, smaller root mean square errors than the methods that use the original data, but at the expense of a significant loss of variance. The impact of the semivariogram on interpolation performance is not very high. The best results are obtained using an automatic fitting procedure with isotropic variograms from either hourly or radar data.
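Of the univariate reference methods, inverse square distance weighting is the simplest to sketch (a generic implementation; `power=2` gives the inverse-square variant named above):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighting: each prediction is a weighted mean of the
    gauge values, with weights 1/d**power (power=2 -> inverse square)."""
    # pairwise distances: (n_new, n_obs)
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at a gauge
    w = 1.0 / d ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)
```

Unlike the kriging variants, IDW uses no semivariogram and cannot incorporate the radar, daily-network, or elevation drift information.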

  12. Steps Toward Unveiling the True Population of AGN: Photometric Selection of Broad-Line AGN

    NASA Astrophysics Data System (ADS)

    Schneider, Evan; Impey, C.

    2012-01-01

    We present an AGN selection technique that enables identification of broad-line AGN using only photometric data. An extension of infrared selection techniques, our method involves fitting a given spectral energy distribution with a model consisting of three physically motivated components: infrared power law emission, optical accretion disk emission, and host galaxy emission. Each component can be varied in intensity, and a reduced chi-square minimization routine is used to determine the optimum parameters for each object. Using this model, both broad- and narrow-line AGN are seen to fall within discrete ranges of parameter space that have plausible bounds, allowing physical trends with luminosity and redshift to be determined. Based on a fiducial sample of AGN from the catalog of Trump et al. (2009), we find the region occupied by broad-line AGN to be distinct from that of quiescent or star-bursting galaxies. Because this technique relies only on photometry, it will allow us to find AGN at fainter magnitudes than are accessible in spectroscopic surveys, and thus probe a population of less luminous and/or higher redshift objects. With the vast availability of photometric data in large surveys, this technique should have broad applicability and result in large samples that will complement X-ray AGN catalogs.
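When the three component intensities enter the model linearly, the reduced chi-square minimization over the amplitudes collapses to weighted linear least squares; here is a sketch with made-up templates and photometry (all wavelengths, template shapes, and amplitudes are illustrative, not the paper's):

```python
import numpy as np

# hypothetical three-component SED model on a 12-band photometric grid
wav = np.linspace(0.3, 10.0, 12)                 # microns
ir = wav ** 1.5                                  # infrared power-law component
disk = wav ** -0.7                               # accretion-disk component
host = np.exp(-(np.log(wav / 1.6)) ** 2)         # host-galaxy component

A = np.column_stack([ir, disk, host])
truth = np.array([0.4, 1.2, 0.8])                # true component amplitudes
rng = np.random.default_rng(0)
sigma = 0.02 * np.ones_like(wav)                 # photometric uncertainties
flux = A @ truth + rng.normal(0.0, sigma)

# weighted linear least squares == chi-square minimization for linear amplitudes
W = 1.0 / sigma
coef, *_ = np.linalg.lstsq(A * W[:, None], flux * W, rcond=None)
resid = (flux - A @ coef) / sigma
red_chi2 = (resid ** 2).sum() / (len(wav) - A.shape[1])
```

In the paper the templates themselves also carry shape parameters, so the full problem is nonlinear; the linear amplitude solve above would sit inside that outer minimization.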

  13. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. Each of the five most actively traded stocks contains two time intervals during the day where the variance of increments can be fit by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and use it to test the effects of the time-averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble-averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble-average approaches will yield new insight into the study of financial market dynamics. Our proposed model also provides new insight into the modeling of financial market dynamics at microscopic time scales.

  14. Source process and tectonic implication of the January 20, 2007 Odaesan earthquake, South Korea

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Ali K.; Kim, K. Y.; Fnais, M. S.; Al-Amri, A. M.

    2014-04-01

    The source process of the 20 January 2007, Mw 4.5 Odaesan earthquake in South Korea is investigated in the low- and high-frequency bands, using velocity and acceleration waveform data recorded by the Korea Meteorological Administration Seismographic Network at distances less than 70 km from the epicenter. Synthetic Green's functions are computed for the low-frequency band of 0.1-0.3 Hz by using the wave-number integration technique and a one-dimensional velocity model beneath the epicentral area. A grid search across the strike, dip, rake, and focal depth of rupture nucleation was performed to find the best-fit double-couple mechanism. To resolve the nodal-plane ambiguity, the spatiotemporal slip distribution on the fault surface was recovered using a non-negative least-squares algorithm for each set of the grid-searched parameters. The focal depth of 10 km was determined through the grid search for depths in the range of 6-14 km. The best-fit double-couple mechanism obtained from the finite-source model indicates a vertical strike-slip faulting mechanism. The NW faulting plane gives a comparatively smaller root-mean-square (RMS) error than its auxiliary plane. The slip pattern indicates a simple source process in the low-frequency band, in which the event effectively acts as a point source. Three empirical Green's functions are adopted to investigate the source process in the high-frequency band. A set of slip models was recovered on both nodal planes of the focal mechanism with various rupture velocities in the range of 2.0-4.0 km/s. Although there is a small difference between the RMS errors produced by the two orthogonal nodal planes, the SW-dipping plane gives a smaller RMS error than its auxiliary plane. The slip distribution in the high-frequency analysis is characterized by an oblique pattern recovered around the hypocenter, indicating a complex rupture scenario for such a moderate-sized earthquake, similar to those reported for large earthquakes.
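The non-negative least-squares step can be sketched with a simple projected-gradient solver (a generic stand-in, not the authors' solver; `G`, `d`, and `m` are illustrative names for the Green's-function matrix, the data vector, and the slip vector):

```python
import numpy as np

def nnls_pg(G, d, n_iter=5000):
    """Solve min ||G m - d||^2 subject to m >= 0 by projected gradient
    descent: take a gradient step, then clip negative slip to zero."""
    L = np.linalg.norm(G, 2) ** 2            # Lipschitz constant of the gradient
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d)
        m = np.clip(m - grad / L, 0.0, None)  # projection onto m >= 0
    return m
```

In the inversion described above, one such non-negative solve is run for every grid-searched (strike, dip, rake, depth) combination, and the RMS misfit of each solution ranks the candidate mechanisms.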

  15. Precise orbit determination using the batch filter based on particle filtering with genetic resampling approach

    NASA Astrophysics Data System (ADS)

    Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul

    2014-09-01

    In this study, genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using the batch filter based on particle filtering (PF). Two genetic operations, which are arithmetic crossover and residual mutation, are used for GRS of the batch filter based on PF (PF batch filter). For POD, Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed by one hundred times. The characteristics of the POD results by PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, 3D error by external orbit comparison, and POD repeatability are analyzed for orbit quality assessments. The POD results are externally checked by NASA JPL’s orbits using totally different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results whose mean root mean square (RMS) values are at a level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at a level of 5 cm. For POD repeatability represented as the standard deviations of post-fit residuals and 3D errors by repetitive PODs, however, GRS yields 25% and 13% more robust estimation results than MRRS for post-fit residual and 3D error, respectively. This study shows that PF batch filter with GRS approach using genetic operations is superior to PF batch filter with MRRS in terms of robustness in POD with SLR observations.
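The two genetic operations can be sketched as follows (a generic illustration with hypothetical parameters; the authors' exact crossover and mutation settings are not given in the abstract):

```python
import numpy as np

def genetic_resample(particles, weights, mutation_scale=1e-3, seed=0):
    """Genetic resampling sketch for a particle filter: parents are drawn
    in proportion to their weights, children are formed by arithmetic
    crossover, and a small Gaussian 'residual mutation' perturbs them.
    particles: (n, dim) array; weights: length-n array summing to 1."""
    rng = np.random.default_rng(seed)
    n = len(particles)
    idx_a = rng.choice(n, size=n, p=weights)
    idx_b = rng.choice(n, size=n, p=weights)
    alpha = rng.uniform(0.0, 1.0, size=(n, 1))
    # arithmetic crossover: child = alpha * parent_a + (1 - alpha) * parent_b
    children = alpha * particles[idx_a] + (1.0 - alpha) * particles[idx_b]
    # residual mutation: small random perturbation to maintain diversity
    children += rng.normal(0.0, mutation_scale, size=children.shape)
    return children
```

Compared with plain multinomial resampling, the crossover step spreads children over the convex hull of highly weighted parents rather than duplicating them, which is one way to read the robustness gain reported above.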

  16. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
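Schematically (with symbols assumed for illustration, since the abstract does not fix a notation), the lack-of-fit cost and the optimization principle read:

```latex
% residual from submitting a feasible path of densities \rho(t) to the
% Liouville equation \partial_t \rho = \{H, \rho\}
R[\rho](t) = \partial_t \rho + \{\rho, H\}

% ensemble-averaged, weighted, squared-norm lack-of-fit cost
\mathcal{L}[\rho](t) = \big\langle \, \| W\, R[\rho](t) \|^2 \, \big\rangle

% the best-fit evolution of the mean resolved vector minimizes the time
% integral of the cost over paths from the quasi-equilibrium family
\rho^{*} = \arg\min_{\rho(\cdot)} \int_0^T \mathcal{L}[\rho](t)\, dt
```

The weight operator W here carries the adjustable closure parameters mentioned above; the value function of this minimization is what the paper identifies with the dissipation potential.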

  17. Quantification of breast density with spectral mammography based on a scanned multi-slit photon-counting detector: A feasibility study

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: A simple and accurate measurement of breast density is crucial for understanding its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. Methods: A computer simulation model involving polyenergetic spectra from a tungsten-anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), defined as the signal-to-noise ratio (SNR) of the dual-energy image with respect to the square root of the mean glandular dose (MGD), was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual-energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations: thickness, density, area and shape. In addition to the standard calibration fitting function used for dual-energy decomposition, a modified fitting function has been proposed, which includes the tube voltage used in the imaging task as a third variable in dual-energy decomposition. Results: For an average-sized breast of 4.5 cm thickness, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (~32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value.
In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. Conclusions: The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual-energy imaging technique. PMID:22771941
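The calibration step can be sketched as fitting a polynomial decomposition surface to phantoms of known composition, then reading off the RMS quantitation error; the synthetic log-signals, attenuation coefficients, and quadratic basis below are assumptions for illustration, not the paper's calibration function:

```python
import numpy as np

def design(sl, sh):
    # quadratic-style calibration basis in the low- and high-energy signals
    return np.column_stack([np.ones_like(sl), sl, sh, sl * sh, sl**2, sh**2])

rng = np.random.default_rng(0)
t_g = rng.uniform(0.0, 4.0, 200)     # glandular thickness (cm), known phantoms
t_a = rng.uniform(0.0, 4.0, 200)     # adipose thickness (cm), known phantoms
# synthetic log-signals with different effective attenuation per energy bin
s_low = 0.9 * t_g + 0.3 * t_a + rng.normal(0.0, 0.01, 200)
s_high = 0.3 * t_g + 0.8 * t_a + rng.normal(0.0, 0.01, 200)

# fit t_g = f(s_low, s_high) by least squares, then evaluate the RMS error
coef, *_ = np.linalg.lstsq(design(s_low, s_high), t_g, rcond=None)
pred = design(s_low, s_high) @ coef
rms = np.sqrt(np.mean((pred - t_g) ** 2))
```

The modified fitting function described above would add the tube voltage as a third input to `design`, so one surface covers all acquisition voltages.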

  18. [Development of a Questionnaire Measuring Sexual Mental Health of Tibetan University Students].

    PubMed

    Chen, Jun-cheng; Yan, Yu-ruo; Ai, Li; Guo, Xue-hua; He, Jian-xiu; Yuan, Ping

    2016-05-01

    To develop a questionnaire measuring sexual mental health of Tibetan university students. A draft questionnaire was developed with reference to the Sexual Civilization Survey for University Students of New Century and other published literature, and in consultation with experts. The questionnaire was tested in 230 students. Exploratory factor analyses with principal component extraction and varimax orthogonal rotation were performed. Common factors with eigenvalues > 1 and ≥ 3 loaded items (factor loading ≥ 0.4) were retained. Items with a factor loading < 0.4, a commonality < 0.2, or falling into a common factor with < 3 items were excluded. The revised questionnaire was administered to another sample of 481 university students. Cronbach's α and split-half reliabilities were estimated. Confirmatory factor analyses were performed to test the construct validity of the questionnaire. Four rounds of exploratory factor analyses reduced the draft questionnaire from 39 items to 34 with a 7-factor structure. The questionnaire had Cronbach's α values of 0.920, 0.898, 0.812, 0.844, 0.787, 0.684, 0.703, and 0.608, and Spearman-Brown coefficients of 0.763, 0.867, 0.742, 0.838, 0.746, 0.822, 0.677, and 0.564 for the overall questionnaire and its 7 domains, respectively, suggesting good internal reliability. The structural equation model of the confirmatory factor analysis fitted the raw data well: χ²/df = 3.736; root mean square residual (RMR) = 0.081; root mean square error of approximation (RMSEA) = 0.076; goodness of fit index (GFI) = 0.805; adjusted goodness of fit index (AGFI) = 0.770; normed fit index (NFI) = 0.774; relative fit index (RFI) = 0.749; incremental fit index (IFI) = 0.824; non-normed fit index (NNFI) = 0.803; comparative fit index (CFI) = 0.823; parsimony goodness of fit index (PGFI) = 0.684; parsimony normed fit index (PNFI) = 0.698; parsimony comparative fit index (PCFI) = 0.742, suggesting good construct validity of the questionnaire. The Sexual Mental Health Questionnaire for Tibetan University Students has demonstrated good reliability and validity.
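Cronbach's α used above has a compact closed form, computed from the item variances and the variance of the total score; a minimal sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```

Values near the 0.9 reported for the overall questionnaire arise when the items are strongly intercorrelated, as the test below illustrates with a single latent trait.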

  19. Castine Report S-15 Project: Shipbuilding Standards

    DTIC Science & Technology

    1976-01-01

    Fixed Square Windows (Ships); Extruded Aluminium Alloy Square Windows (Ships); Foot Steps (Ships); Wooden Hand Rails; Pilot Ladders; Panama Canal Pilot Platforms; Aluminium Alloy Accommodation Ladders; Mouth Pieces for Voice Tubes; Chain Drive Type Telegraphs; Fittings for Steam Whistles; Lifeboats, Radial Type; Cast Steel Angle Valves for Compressed Air. F 8001-1957, F 8002-1967, F 8003-1975, F 8004-1975, F 8011-1966, F 8013-1969, F 8101-1969, F 8401-1970, F

  20. Evaluating the relationship between job stress and job satisfaction among female hospital nurses in Babol: An application of structural equation modeling.

    PubMed

    Bagheri Hosseinabadi, Majid; Etemadinezhad, Siavash; Khanjani, Narges; Ahmadi, Omran; Gholinia, Hemat; Galeshi, Mina; Samaei, Seyed Ehsan

    2018-01-01

    Background: This study was designed to investigate job satisfaction and its relation to perceived job stress among hospital nurses in Babol County, Iran. Methods: This cross-sectional study was conducted on 406 female nurses in 6 Babol hospitals. Respondents completed the Minnesota Satisfaction Questionnaire (MSQ), the health and safety executive (HSE) indicator tool and a demographic questionnaire. Descriptive, analytical and structural equation modeling (SEM) analyses were carried out applying SPSS v. 22 and AMOS v. 22. Results: The normed fit index (NFI), non-normed fit index (NNFI), incremental fit index (IFI) and comparative fit index (CFI) were greater than 0.9. The goodness of fit index (GFI = 0.99) and adjusted goodness of fit index (AGFI) were greater than 0.8, and the root mean square error of approximation (RMSEA) was 0.04; the model was found to have an appropriate fit. The R-squared was 0.42 for job satisfaction, and all its dimensions were related to job stress. The dimensions of job stress explained 42% of the variance in job satisfaction. There was a significant relationship between job satisfaction and the dimensions of job stress: demand (β = 0.173, CI = 0.095-0.365, P ≤ 0.001), control (β = 0.135, CI = 0.062-0.404, P = 0.008), relationships (β = -0.208, CI = -0.637 to -0.209, P ≤ 0.001) and changes (β = 0.247, CI = 0.360-1.026, P ≤ 0.001). Conclusion: An important intervention to increase job satisfaction among nurses may be improvement of the workplace. Reducing the workload in order to improve job demand, and minimizing role conflict by reducing conflicting demands, are recommended.

  2. Maximum-likelihood curve-fitting scheme for experiments with pulsed lasers subject to intensity fluctuations.

    PubMed

    Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F

    2003-03-20

    Evaluation schemes, e.g., least-squares fitting, are not generally applicable to all types of experiments. If the evaluation scheme is not derived from a measurement model that properly describes the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data can result. We outline ways in which statistical data evaluation schemes should be derived for all types of experiment, and we demonstrate them for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation comprises the following steps. First, one has to provide a measurement model that considers the statistical variation of all enclosed variables. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that it have accuracy and precision as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared to well-established schemes in terms of fitting power and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.

  3. Design/analysis of the JWST ISIM bonded joints for survivability at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Bartoszyk, Andrew; Johnston, John; Kaprielian, Charles; Kuhn, Jonathan; Kunt, Cengiz; Rodini, Benjamin; Young, Daniel

    2005-08-01

    A major design and analysis challenge for the JWST ISIM structure is thermal survivability of metal/composite adhesively bonded joints at the cryogenic temperature of 30K (-405°F). Current bonded joint concepts include internal invar plug fittings, external saddle titanium/invar fittings, and composite gusset/clip joints, all bonded to hybrid composite tubes (75mm square) made with M55J/954-6 and T300/954-6 prepregs. Analytical experience and design work on metal/composite bonded joints at temperatures below that of liquid nitrogen are limited, and important analysis tools, material properties, and failure criteria for composites at cryogenic temperatures are sparse in the literature. Adding to this challenge is the difficulty of testing for the required tools and properties at cryogenic temperatures. To gain confidence in analyzing and designing the ISIM joints, a comprehensive joint development test program has been planned and is currently running. The test program is designed to produce the required analytical tools and to develop a composite failure criterion for bonded joint strengths at cryogenic temperatures. Finite element analysis is used to design simple test coupons that simulate the anticipated stress states in the flight joints; the test results are then used to correlate the analysis technique for the final design of the bonded joints. In this work, we present an overview of the analysis and test methodology, current results, and working joint designs based on the developed techniques and properties.

  4. Modeling of forest canopy BRDF using DIRSIG

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan; Schott, John R.

    2016-05-01

    The characterization and temporal analysis of multispectral and hyperspectral data to extract the biophysical information of the Earth's surface can be significantly improved by understanding its anisotropic reflectance properties, which are best described by a Bi-directional Reflectance Distribution Function (BRDF). Advances in remote sensing techniques and instrumentation have made hyperspectral BRDF measurements in the field possible using sophisticated goniometers. However, natural surfaces such as forest canopies impose limitations on both the data collection techniques and the range of illumination angles that can be collected in the field. These limitations can be mitigated by measuring BRDF in a virtual environment. This paper presents an approach to model the spectral BRDF of a forest canopy using the Digital Image and Remote Sensing Image Generation (DIRSIG) model. A synthetic forest canopy scene is constructed by modeling the 3D geometries of different tree species using OnyxTree software. Field-collected spectra from the Harvard Forest are used to represent the optical properties of the tree elements. The canopy radiative transfer is estimated using the DIRSIG model for specific view and illumination angles to generate BRDF measurements. A full hemispherical BRDF is generated by fitting the measured BRDF to a semi-empirical BRDF model. The results from fitting the model to the measurements indicate a root mean square error of less than 5% (2 reflectance units) relative to the forest's reflectance in the VIS-NIR-SWIR region. The process can be easily extended to generate a spectral BRDF library for various biomes.
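
    The final fitting step can be sketched with a linear kernel-driven BRDF model fitted by ordinary least squares. The kernel values below are synthetic stand-ins for kernels evaluated at the view/illumination geometries, so this illustrates the fitting machinery only; the paper's actual semi-empirical model may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch: a linear kernel-driven BRDF model (RossThick-LiSparse style),
# R = f_iso + f_vol * K_vol + f_geo * K_geo, fitted by ordinary least squares.
# The kernel values are synthetic stand-ins for kernels evaluated at the
# sampled view/illumination geometries.
n = 50
K_vol = rng.uniform(-0.3, 0.3, n)
K_geo = rng.uniform(-1.5, 0.0, n)
f_true = np.array([0.25, 0.10, 0.05])          # f_iso, f_vol, f_geo
A = np.column_stack([np.ones(n), K_vol, K_geo])
R = A @ f_true + rng.normal(0.0, 0.005, n)     # "measured" BRDF samples

f_hat, *_ = np.linalg.lstsq(A, R, rcond=None)
rmse = np.sqrt(np.mean((A @ f_hat - R) ** 2))  # fit error in reflectance units
```

    Because the model is linear in its coefficients, the fit reduces to a single least-squares solve, and the fitted kernel weights can then be evaluated at any geometry to produce the full hemispherical BRDF.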

  5. Simulated sawing of squares: a tool to improve wood utilization

    Treesearch

    R. Bruce Anderson; Hugh W. Reynolds

    1981-01-01

    Manufacturers of turning squares have had difficulty finding the best combination of bolt and square sizes for producing squares most efficiently. A computer simulation technique has been developed for inexpensively determining the best combination of bolt and square size. Ranges of bolt diameters to achieve a stated level of yield are given. The manufacturer can choose...

  6. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
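
    As a hedged illustration of the baseline fit whose spread parameter the paper's bias-reduction technique corrects, the following performs a standard maximum-likelihood fit of a cumulative-Gaussian psychometric function p(x) = Phi((x - mu) / sigma) to binary responses. The data and parameter values are synthetic, and no adaptive sampling (the source of the bias) is simulated here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Standard (unreduced) maximum-likelihood fit of a cumulative-Gaussian
# psychometric function to simulated binary responses.
mu_true, sigma_true = 0.0, 1.0
x = rng.uniform(-3.0, 3.0, 600)                    # stimulus levels
y = rng.random(600) < norm.cdf((x - mu_true) / sigma_true)

def nll(theta):
    mu, log_sigma = theta                          # log-sigma keeps sigma > 0
    p = norm.cdf((x - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)                 # guard log(0)
    return -np.sum(np.where(y, np.log(p), np.log(1.0 - p)))

fit = minimize(nll, x0=[0.5, 0.5], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

    With independently sampled stimulus levels, as here, this fit is essentially unbiased; the bias the paper addresses appears when the stimulus levels are chosen adaptively from the subject's previous responses.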

  7. Scale construction utilising the Rasch unidimensional measurement model: A measurement of adolescent attitudes towards abortion.

    PubMed

    Hendriks, Jacqueline; Fyfe, Sue; Styles, Irene; Skinner, S Rachel; Merriman, Gareth

    2012-01-01

    Measurement scales seeking to quantify latent traits, such as attitudes, are often developed using traditional psychometric approaches. Application of the Rasch unidimensional measurement model may complement or replace these techniques, as the model can be used to construct scales and check their psychometric properties. If the data fit the model, then a scale with invariant measurement properties, including interval-level scores, will have been developed. This paper highlights the unique properties of the Rasch model, using items developed to measure adolescent attitudes towards abortion to exemplify the process. Ten attitude and intention items relating to abortion were answered by 406 adolescents aged 12 to 19 years, as part of the "Teen Relationships Study". The sampling framework captured a range of sexual and pregnancy experiences. Items were assessed for fit to the Rasch model, including checks for Differential Item Functioning (DIF) by gender, sexual experience or pregnancy experience. Rasch analysis of the original dataset initially demonstrated that some items did not fit the model. Rescoring of one item (B5) and removal of another (L31) resulted in fit, as shown by a non-significant item-trait interaction total chi-square and a mean log residual fit statistic for items of -0.05 (SD=1.43). No DIF existed for the revised scale. However, the items did not distinguish as well amongst persons with the most intense attitudes as they did for other persons. A person separation index of 0.82 indicated good reliability. Application of the Rasch model produced a valid and reliable scale measuring adolescent attitudes towards abortion, with stable measurement properties. The Rasch process provided an extensive range of diagnostic information concerning item and person fit, enabling changes to be made to scale items. This example shows the value of the Rasch model in developing scales for both the social science and health disciplines.

  8. A User’s Manual for Fiber Diffraction: The Automated Picker and Huber Diffractometers

    DTIC Science & Technology

    1990-07-01

    Fragments (extraction residue): "Layer line scan of degummed silk (Bombyx mori) showing layers 0 through 6." ... "If the fit is rejected, new values for ..." ... "originally made at intervals larger than 0.010. The smoothing and interpolation is done by a least-squares polynomial fit to segments of the data."

  9. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    ERIC Educational Resources Information Center

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  10. New fit of thermal neutron constants (TNC) for 233,235U, 239,241Pu and 252Cf(sf): Microscopic vs. maxwellian data

    NASA Astrophysics Data System (ADS)

    Pronyaev, Vladimir G.; Capote, Roberto; Trkov, Andrej; Noguere, Gilles; Wallner, Anton

    2017-09-01

    An IAEA project to update the Neutron Standards is near completion. Traditionally, the Thermal Neutron Constants (TNC) evaluated by Axton for thermal-neutron scattering, capture and fission on four fissile nuclei, together with the total nu-bar of 252Cf(sf), are used as input in the combined least-squares fit with the neutron cross section standards. The evaluation by Axton (1986) was based on a least-squares fit of both thermal-spectrum-averaged cross sections (Maxwellian data) and microscopic cross sections at 2200 m/s. There is a second Axton evaluation based exclusively on measured microscopic cross sections at 2200 m/s (excluding Maxwellian data). The two evaluations disagree beyond the quoted uncertainties for the fission and capture cross sections and total multiplicities of the uranium isotopes. Two factors may lead to such a difference: the Westcott g-factors with estimated 0.2% uncertainties used in Axton's fit, and the deviation of the thermal spectra from a Maxwellian shape. To exclude or mitigate the impact of these factors, a new combined GMA fit of the standards was undertaken with Axton's TNC evaluation based on 2200 m/s data used as a prior. New microscopic data at the thermal point, available since 1986, were added to the combined fit. Additionally, an independent evaluation of the TNC was undertaken using the CONRAD code. The GMA and CONRAD results are consistent within quoted uncertainties. The new evaluation shows a small increase of the fission and capture thermal cross sections, and a corresponding decrease in the evaluated thermal nu-bar, for the uranium isotopes and 239Pu.

  11. Improvements in Spectrum's fit to program data tool.

    PubMed

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
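
    The estimation step can be sketched by fitting a simple logistic incidence trend to yearly counts of newly reported cases by minimum chi-squared distance, one of the two estimation options the FPD tool offers. The functional form, parameter values, and data below are illustrative, not Spectrum's actual parameterization.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hedged sketch: minimum chi-squared fit of a simple logistic trend to
# synthetic yearly counts of newly reported cases.
t = np.arange(1990, 2016).astype(float)
s = t - 2000.0                        # center time to condition the optimization

def logistic(theta):
    a, b, t0 = theta
    return a / (1.0 + np.exp(-b * (s - t0)))

theta_true = (5000.0, 0.4, 0.0)
obs = rng.poisson(logistic(theta_true)).astype(float)

def chi2(theta):
    expected = logistic(theta)
    if theta[0] <= 0 or np.any(expected <= 0):
        return np.inf
    # Pearson chi-squared distance between observed and expected counts
    return np.sum((obs - expected) ** 2 / expected)

fit = minimize(chi2, x0=[4000.0, 0.3, -2.0], method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-6})
a_hat, b_hat, t0_hat = fit.x
```

    A double logistic form would add a second transition to capture a later decline; maximum-likelihood estimation replaces the chi-squared distance with a Poisson log-likelihood over the same counts.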

  12. Associations among selected motor skills and health-related fitness: indirect evidence for Seefeldt's proficiency barrier in young adults?

    PubMed

    Stodden, David F; True, Larissa K; Langendorfer, Stephen J; Gao, Zan

    2013-09-01

    This exploratory study examined the notion of Seefeldt's (1980) hypothesized motor skill "proficiency barrier" related to composite levels of health-related physical fitness (HRF) in young adults. A motor skill competence (MSC) index composed of maximum throwing speed, maximum kicking speed, and jumping distance in 187 young adults aged 18 to 25 years was evaluated against a composite index of five health-related fitness (HRF) test scores. MSC (high, moderate, and low) and HRF indexes (good, fair, and poor) were categorized according to normative fitness percentile ranges. Two separate 3-way chi-square analyses were conducted to determine the probabilities of skill predicting fitness and fitness predicting skill. Most correlations among HRF and MSC variables by gender demonstrated low-to-moderate positive correlations in both men (12/15; r = .23-.58) and women (14/15; r = .21-.53). Chi-square analyses for the total sample, using the composite indexes, demonstrated statistically significant predictive models, chi2(1, N = 187) = 66.99, p < .001, Cramer's V = .42. Only 3.1% of low-skilled individuals (2 of 65) were classified as having "good" HRF. Only one participant (out of 65) who demonstrated high MSC was classified as having "poor" HRF (1.5%). Although correlations among individual MSC and HRF measures were low to moderate, these data provide indirect evidence for the possibility of a motor skill "proficiency barrier" as indicated by low composite HRF levels. This study may generate future research to address the proficiency-barrier hypothesis in youth as well as adults.

  13. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts with tighter confidence intervals than weighted least-squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
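
    The maximum-likelihood side of this analysis can be sketched as follows, assuming a two-parameter log-normal for storm-maximum -Dst values and a Carrington-class threshold. The sample here is synthetic, not the 1957-2012 record, and the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)

# Hedged sketch: maximum-likelihood log-normal fit to storm-maximum -Dst
# values, then an exceedance probability for a Carrington-class threshold.
dst = rng.lognormal(mean=5.0, sigma=0.6, size=300)     # synthetic -Dst maxima (nT)

# scipy parameterizes lognorm by shape s (= sigma of the log) and
# scale (= exp(mu)); fixing loc at 0 gives the two-parameter log-normal.
s, loc, scale = lognorm.fit(dst, floc=0)
p_exceed = lognorm.sf(850.0, s, loc=loc, scale=scale)  # P(-Dst > 850 nT) per storm
```

    Bootstrap confidence limits, as in the paper, would come from refitting many resampled versions of `dst` and collecting the spread of `p_exceed`.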

  14. Power and sensitivity of alternative fit indices in tests of measurement invariance.

    PubMed

    Meade, Adam W; Johnson, Emily C; Braddy, Phillip W

    2008-05-01

    Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known not to be invariant. The results indicate that AFIs are much less sensitive to sample size and more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general value of change in CFI (.002) seemed to perform well in the analyses, condition-specific change values for McDonald's NCI exhibited better performance than a single change value. Tables of these values are provided, as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  15. Perception of competence in middle school physical education: instrument development and validation.

    PubMed

    Scrabis-Fletcher, Kristin; Silverman, Stephen

    2010-03-01

    Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and its scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N=1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores for POC measurement in middle school PE.

  16. Does Positivity Mediate the Relation of Extraversion and Neuroticism with Subjective Happiness?

    PubMed Central

    Lauriola, Marco; Iani, Luca

    2015-01-01

    Recent theories suggest an important role of neuroticism, extraversion, attitudes, and global positive orientations as predictors of subjective happiness. We examined whether positivity mediates the hypothesized relations in a community sample of 504 adults between the ages of 20 and 60 years old (females = 50%). A model with significant paths from neuroticism to subjective happiness, from extraversion and neuroticism to positivity, and from positivity to subjective happiness fitted the data (Satorra–Bentler scaled chi-square (38) = 105.91; Comparative Fit Index = .96; Non-Normed Fit Index = .95; Root Mean Square Error of Approximation = .060; 90% confidence interval = .046, .073). The percentage of subjective happiness variance accounted for by personality traits was only about 48%, whereas adding positivity as a mediating factor increased the explained amount of subjective happiness to 78%. The mediation model was invariant by age and gender. The results show that the effect of extraversion on happiness was fully mediated by positivity, whereas the effect of neuroticism was only partially mediated. Implications for happiness studies are also discussed. PMID:25781887

  17. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the squared differences between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust, reproducible and user-independent solution for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  18. Multispectrum Analysis of 12CH4 in the v4 Band: I. Air-Broadened Half Widths, Pressure-Induced Shifts, Temperature Dependences and Line Mixing

    NASA Technical Reports Server (NTRS)

    Smith, MaryAnn H.; Benner, D. Chris; Predoi-Cross, Adriana; Venkataraman, Malathy Devi

    2009-01-01

    Lorentz air-broadened half widths, pressure-induced shifts and their temperature dependences have been measured for over 430 transitions (allowed and forbidden) in the v4 band of 12CH4 over the temperature range 210 to 314 K. A multispectrum nonlinear least-squares fitting technique was used to simultaneously fit a large number of high-resolution (0.006 to 0.01 cm-1) absorption spectra of pure methane and of methane diluted with dry air. Line mixing was detected for pairs of A-, E-, and F-species transitions in the P- and R-branch manifolds and quantified using the off-diagonal relaxation matrix element formalism. The measured parameters are compared to air- and N2-broadened values reported in the literature for the v4 and other bands. The dependence of the various spectral line parameters upon the tetrahedral symmetry species and rotational quantum numbers of the transitions is discussed. All data used in the present work were recorded with the McMath-Pierce Fourier transform spectrometer located at the National Solar Observatory on Kitt Peak.

  19. Feasibility of using a miniature NIR spectrometer to measure volumic mass during alcoholic fermentation.

    PubMed

    Fernández-Novales, Juan; López, María-Isabel; González-Caballero, Virginia; Ramírez, Pilar; Sánchez, María-Teresa

    2011-06-01

    Volumic mass, a key component of must quality control tests during alcoholic fermentation, is of great interest to the winemaking industry. Transmittance near-infrared (NIR) spectra of 124 must samples over the range of 200-1,100 nm were obtained using a miniature spectrometer. The performance of this instrument in predicting volumic mass was evaluated using partial least squares (PLS) regression and multiple linear regression (MLR). The validation statistics, coefficient of determination (r(2)) and standard error of prediction (SEP), were r(2) = 0.98 and 0.96 (n = 31), and SEP = 5.85 and 7.49 g/dm(3), for the PLS and MLR equations developed to fit reference data for volumic mass to spectral data. Comparison of results from MLR and PLS demonstrates that an MLR model with six significant wavelengths (P < 0.05) fit volumic mass data to transmittance (1/T) data slightly worse than a more sophisticated PLS model using the full scanning range. The results suggest that NIR spectroscopy is a suitable technique for predicting volumic mass during alcoholic fermentation, and that a low-cost NIR instrument can be used for this purpose.

  20. Analysis of the spectrum of the (5d6+5d56s) -(5d56p+5d46s6p) transitions of two times ionized osmium (Os III)

    NASA Astrophysics Data System (ADS)

    Azarov, Vladimir I.; Tchang-Brillet, W.-Ü. Lydia; Gayasov, Robert R.

    2018-05-01

    The spectrum of osmium was observed in the (225-2100) Å wavelength region. The (5d6 + 5d56s) - (5d56p + 5d46s6p) transition array of two times ionized osmium, Os III, has been investigated, and 1039 spectral lines have been classified in this region. The analysis has led to the determination of the 5d6, 5d56s, 5d56p and 5d46s6p configurations. Fifty-eight levels of the 5d6 and 5d56s configurations in the even system and 142 levels of the 5d56p and 5d46s6p configurations in the odd system have been established. The orthogonal operators technique was used to calculate the level structure and transition probabilities. The energy parameters have been determined by a least-squares fit to the observed levels. Calculated transition probabilities and energy values, as well as LS-compositions obtained from the fitted parameters, are presented.

  1. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the squared differences between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust, reproducible and user-independent solution for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  2. Retrieval of complex χ(2) parts for quantitative analysis of sum-frequency generation intensity spectra

    PubMed Central

    Hofmann, Matthias J.; Koelsch, Patrick

    2015-01-01

    Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to the probing geometry and to properties of the system under investigation, such as the absolute square of the second-order susceptibility, |χ(2)|2. A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG and heterodyne detection methods have been introduced to measure the real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase-sensitive experimental data shows excellent agreement between the two methods. PMID:26450297

  3. Discrete square root smoothing.

    NASA Technical Reports Server (NTRS)

    Kaminski, P. G.; Bryson, A. E., Jr.

    1972-01-01

    The basic techniques applied in the square root least squares and square root filtering solutions are applied to the smoothing problem. Both conventional and square root solutions are obtained by computing the filtered solutions, then modifying the results to include the effect of all measurements. A comparison of computation requirements indicates that the square root information smoother (SRIS) is more efficient than conventional solutions in a large class of fixed interval smoothing problems.

  4. Sea surface mean square slope from Ku-band backscatter data

    NASA Technical Reports Server (NTRS)

    Jackson, F. C.; Walton, W. T.; Hines, D. E.; Walter, B. A.; Peng, C. Y.

    1992-01-01

    A surface mean-square-slope parameter analysis is conducted for 14-GHz airborne radar altimeter near-nadir, quasi-specular backscatter data, which in raw form, obtained by least-squares fitting of an optical scattering model to the return waveform, show an approximately linear dependence on wind speed over the 7-15 m/sec range. The slope data are used to draw inferences about the structure of the high-wavenumber portion of the spectrum. A directionally integrated model height spectrum that encompasses a wind-speed-dependent k exp -5/2 subrange and a classical Phillips k exp -3 power-law subrange in the range of gravity waves is supported by the data.

  5. Effect of scrape-off-layer current on reconstructed tokamak equilibrium

    DOE PAGES

    King, J. R.; Kruger, S. E.; Groebner, R. J.; ...

    2017-01-13

    Methods are described that extend fields from reconstructed equilibria to include scrape-off-layer current through extrapolated parametrized and experimental fits. The extrapolation includes the effects of both the toroidal-field and pressure gradients, which produce scrape-off-layer current after recomputation of the Grad-Shafranov solution. To quantify the degree to which inclusion of scrape-off-layer current modifies the equilibrium, the chi-squared goodness-of-fit parameter is calculated for cases with and without scrape-off-layer current. The change in chi-squared is found to be minor when scrape-off-layer current is included; however, flux surfaces are shifted by up to 3 cm. The impact of these scrape-off-layer modifications on edge modes is also found to be small, and the importance of these methods to nonlinear computation is discussed.

  6. The rotational elements of Mars and its satellites

    NASA Astrophysics Data System (ADS)

    Jacobson, R. A.; Konopliv, A. S.; Park, R. S.; Folkner, W. M.

    2018-03-01

    The International Astronomical Union (IAU) defines planet and satellite coordinate systems relative to their axis of rotation and the angle about that axis. The rotational elements of the bodies are the right ascension and declination of the rotation axis in the International Celestial Reference Frame and the rotation angle, W, measured easterly along the body's equator. The IAU specifies the location of the body's prime meridian by providing a value for W at epoch J2000. We provide new trigonometric series representations of the rotational elements of Mars and its satellites, Phobos and Deimos. The series for Mars are from a least squares fit to the rotation model used to orient the Martian gravity field. The series for the satellites are from a least squares fit to rotation models developed in accordance with IAU conventions from recent ephemerides.

  7. Phase aberration compensation of digital holographic microscopy based on least squares surface fitting

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo

    2009-10-01

    Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration is also introduced into the object wavefront, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using fewer sample points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated on samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double exposure method.
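
    The core operation, fitting a low-order surface to a sparse set of unwrapped phase samples and subtracting it, can be sketched with numpy. This is a minimal illustration with a synthetic quadratic aberration, not the authors' exact procedure:

```python
import numpy as np

def fit_phase_surface(x, y, phase, order=2):
    """Least squares fit of a 2-D polynomial surface to sampled phase values."""
    # Design matrix with all monomials x^i * y^j, i + j <= order
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, phase, rcond=None)
    return coef

def eval_surface(x, y, coef, order=2):
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols) @ coef

# Synthetic aberration: quadratic wavefront curvature plus tilt and offset,
# sampled at far fewer points than the full hologram matrix
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 500))
aberr = 0.8 * x**2 + 0.5 * y**2 + 0.3 * x * y + 0.2 * x - 0.1
coef = fit_phase_surface(x, y, aberr)
resid = aberr - eval_surface(x, y, coef)   # compensated phase
```

    In practice the fitted surface is evaluated over the whole reconstructed image and subtracted from the unwrapped phase, leaving the object phase.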

  8. Multi-Gaussian fitting for pulse waveform using Weighted Least Squares and multi-criteria decision making method.

    PubMed

    Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan

    2013-11-01

    Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All of these methods decompose a single-period pulse waveform into a fixed number (such as 3, 4 or 5) of individual waves, and they pay little attention to the estimation error of the key points in the pulse waveform, even though the estimation of human vascular conditions depends on the positions of those key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
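
    A weighted multi-Gaussian fit of this kind can be sketched with scipy; the waveform, weights, and starting values below are synthetic stand-ins, and the MCDM weight-selection step is replaced by a simple hand-chosen weighting:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(t, *p):
    """Sum of Gaussians; p = [a1, mu1, s1, a2, mu2, s2, ...]."""
    y = np.zeros_like(t)
    for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-(t - mu) ** 2 / (2 * s ** 2))
    return y

# Synthetic two-Gaussian "pulse" with additive noise
t = np.linspace(0.0, 1.0, 200)
true = [1.0, 0.25, 0.06, 0.5, 0.55, 0.10]
y = multi_gauss(t, *true) + 0.01 * np.random.default_rng(1).normal(size=t.size)

# Weighted least squares: smaller sigma -> larger weight near the early peak
sigma = np.where(t < 0.4, 0.005, 0.02)
p0 = [0.8, 0.3, 0.05, 0.4, 0.5, 0.1]
popt, _ = curve_fit(multi_gauss, t, y, p0=p0, sigma=sigma)
nrmse = np.sqrt(np.mean((multi_gauss(t, *popt) - y) ** 2)) / (y.max() - y.min())
```

    Passing `sigma` to `curve_fit` turns the ordinary least squares objective into the weighted one, so the fit prioritizes the region around the key points.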

  9. A calculation and uncertainty evaluation method for the effective area of a piston rod used in quasi-static pressure calibration

    NASA Astrophysics Data System (ADS)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2018-04-01

    This paper describes the merits and demerits of different sensors for measuring propellant gas pressure, the applicable range of the frequently used dynamic pressure calibration methods, and the working principle of absolute quasi-static pressure calibration based on the drop-weight device. The main factors affecting the accuracy of pressure calibration are analyzed from two aspects: the force sensor and the piston area. To calculate the effective area of the piston rod and to evaluate the uncertainty of the relation between the force sensor output and the corresponding peak pressure in the absolute quasi-static pressure calibration process, a method based on the least squares principle is proposed. From the relevant quasi-static pressure calibration experimental data, the least squares fitting model between the peak force and the peak pressure, the effective area of the piston rod, and its measurement uncertainty are obtained. The fitting model is tested on an additional group of experiments, with the peak pressure obtained by the existing high-precision comparison calibration method taken as the reference value. The test results show that the peak pressure obtained by the least squares fitting model is closer to the reference value than the one calculated directly from the cross-sectional area of the piston rod. When the peak pressure is higher than 150 MPa, the percentage difference is less than 0.71%, which meets the requirements of practical application.
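
    The idea of extracting an effective area as the slope of a least squares line through (peak pressure, peak force) pairs can be illustrated with hypothetical calibration data (the numbers below are not from the paper):

```python
import numpy as np

# Hypothetical calibration pairs: peak pressure p (MPa), peak force F (kN)
p = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
F = np.array([9.98, 20.05, 29.97, 40.02, 50.01])

# Least squares line F = A_eff * p + b; the slope is the effective area
A = np.column_stack([p, np.ones_like(p)])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)
A_eff, b = coef
# Unit bookkeeping: 1 kN/MPa = 1e3 N / 1e6 Pa = 1e-3 m^2,
# so an effective area in mm^2 is A_eff * 1e3
area_mm2 = A_eff * 1e3
```

    Inverting the fitted line then converts a measured peak force to a peak pressure, which is the model the paper validates against the comparison calibration method.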

  10. Eigen model with general fitness functions and degradation rates

    NASA Astrophysics Data System (ADS)

    Hu, Chin-Kun; Saakian, David B.

    2006-03-01

    We present an exact solution of Eigen's quasispecies model with a general degradation rate and general fitness functions, including a square-root decrease of fitness with increasing Hamming distance from the wild type. The behavior found for the model with a degradation rate is analogous to that of a viral quasispecies under attack by the immune system of the host. Our exact solutions also revise the known results on neutral networks in quasispecies theory. To explain the existence of mutants with large Hamming distances from the wild type, we propose three different modifications of the Eigen model: mutation landscape, multiple adjacent mutations, and frequency-dependent fitness, in which the steady state solution shows a multi-center behavior.

  11. Automatic evaluation of interferograms

    NASA Technical Reports Server (NTRS)

    Becker, F.

    1982-01-01

    A system for the evaluation of interference patterns was developed. For digitizing and processing the interferograms from classical and holographic interferometers, a picture analysis system based on a computer with a television digitizer was installed. Depending on the quality of the interferograms, four different picture enhancement operations may be used: signal averaging; spatial smoothing; subtraction of the overlaid intensity function; and removal of distortion patterns using a spatial filtering technique in the frequency spectrum of the interferograms. The extraction of fringe loci from the digitized interferograms is performed by a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections, which appear when there is insufficient contrast in the holograms. The reconstruction of the object function from the fringe field uses a least squares approximation with a spline fit. Applications are given.
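
    The final step, a least squares spline approximation of the object function from noisy fringe data, maps directly onto scipy's least squares spline class. The profile below is synthetic:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic fringe profile: smooth object function plus measurement noise
x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(2)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)

# Least squares cubic spline with a fixed set of interior knots
knots = np.linspace(0.1, 0.9, 7)
spl = LSQUnivariateSpline(x, y, knots, k=3)
rms = np.sqrt(np.mean((spl(x) - np.sin(2 * np.pi * x)) ** 2))
```

    Because the spline has far fewer free coefficients than there are data points, the least squares fit averages out the noise while following the smooth object function.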

  12. Electronic Transitions of Palladium Monoboride and Platinum Monoboride

    NASA Astrophysics Data System (ADS)

    Ng, Y. W.; Pang, H. F.; Wong, Y. S.; Qian, Yue; Cheung, A. S.-C.

    2012-06-01

    The electronic transition spectra of palladium monoboride (PdB) and platinum monoboride (PtB) have been studied using the techniques of laser-ablation/reaction free jet expansion and laser induced fluorescence spectroscopy. The metal monoborides were produced by reacting laser-ablated metal atoms with diborane (B2H6) seeded in argon. Five and six vibrational bands were observed for the PdB and PtB molecules, respectively. Preliminary analysis of the rotationally resolved structure showed that both molecules have X2Σ+ ground states. A least-squares fit of the measured line positions yielded molecular constants for the electronic states involved. The molecular and electronic structures of PdB and PtB are discussed using a molecular orbital energy level diagram. Financial support from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU 701008P) is gratefully acknowledged.
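
    Fitting measured line positions to molecular constants is, in the simplest case, a linear least squares problem. The sketch below uses hypothetical line positions for a band between two Σ states, ignoring spin-rotation splitting; the constants are illustrative, not the PdB/PtB values:

```python
import numpy as np

# For a simple band, line positions follow
# nu(m) = nu0 + (B' + B'') * m + (B' - B'') * m**2   (m = -J for P, J+1 for R)
def line_positions(m, nu0, Bsum, Bdiff):
    return nu0 + Bsum * m + Bdiff * m ** 2

m = np.arange(-10, 11)
m = m[m != 0].astype(float)
rng = np.random.default_rng(3)
obs = line_positions(m, 18000.0, 1.05, -0.02) + 0.002 * rng.normal(size=m.size)

# Linear least squares in (nu0, B'+B'', B'-B'')
A = np.column_stack([np.ones_like(m), m, m ** 2])
nu0, Bsum, Bdiff = np.linalg.lstsq(A, obs, rcond=None)[0]
B_upper, B_lower = (Bsum + Bdiff) / 2, (Bsum - Bdiff) / 2
```

    Solving the combination differences for the upper- and lower-state rotational constants is the same algebra spectroscopists apply after the fit.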

  13. Polynomial approximations of thermodynamic properties of arbitrary gas mixtures over wide pressure and density ranges

    NASA Technical Reports Server (NTRS)

    Allison, D. O.

    1972-01-01

    Computer programs for flow fields around planetary entry vehicles require real-gas equilibrium thermodynamic properties in a simple form which can be evaluated quickly. To fill this need, polynomial approximations were found for thermodynamic properties of air and model planetary atmospheres. A coefficient-averaging technique was used for curve fitting in lieu of the usual least-squares method. The polynomials consist of terms up to the ninth degree in each of two variables (essentially pressure and density) including all cross terms. Four of these polynomials can be joined to cover, for example, a range of about 1000 to 11000 K and 0.00001 to 1 atmosphere (1 atm = 1.0133 × 10^5 N/m^2) for a given thermodynamic property. Relative errors of less than 1 percent are found over most of the applicable range.
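
    The polynomial form described, terms in two variables including all cross terms, is the standard bivariate design matrix. The sketch below fits it with ordinary least squares (the baseline the paper's coefficient-averaging technique replaces), at a lower degree and with a synthetic property surface on normalized variables:

```python
import numpy as np

def poly2d_design(x, y, deg):
    """All cross terms x^i * y^j with i, j = 0..deg (the paper uses deg = 9)."""
    return np.column_stack([x ** i * y ** j
                            for i in range(deg + 1) for j in range(deg + 1)])

rng = np.random.default_rng(4)
lp = rng.uniform(-1.0, 0.0, 400)   # normalized log-pressure-like variable
ld = rng.uniform(-1.0, 0.0, 400)   # normalized log-density-like variable
# Smooth synthetic "thermodynamic property" surface (illustrative)
z = 1.4 + 0.1 * lp - 0.05 * ld + 0.01 * lp * ld

A = poly2d_design(lp, ld, 3)
c, *_ = np.linalg.lstsq(A, z, rcond=None)
rel_err = np.max(np.abs(A @ c - z) / np.abs(z))
```

    Evaluating the fitted polynomial is then a single dot product per point, which is exactly the "simple form which can be evaluated quickly" the application demands.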

  14. SUPPLEMENTARY COMPARISON: EUROMET.L-S10 Comparison of squareness measurements

    NASA Astrophysics Data System (ADS)

    Mokros, Jiri

    2005-01-01

    The idea of performing a comparison of squareness measurements resulted from the need to review MRA Appendix C, category 90° square. At its meeting in October 1999 in Prague it was decided to carry out a first comparison of squareness measurements in the framework of EUROMET, numbered #570 and starting in 2000, with the Slovak Institute of Metrology (SMU) as the pilot laboratory. During the preparation stage of the project, it was agreed that it should be submitted as a EUROMET supplementary comparison in the framework of the Mutual Recognition Arrangement (MRA) of the Metre Convention and would boost confidence in calibration and measurement certificates issued by the participating national metrology institutes. The aim of the comparison was to compare and verify the declared calibration and measurement capabilities of the participating laboratories and to investigate the effect of systematic influences in the measurement process and their elimination. Eleven NMIs from the EUROMET region took part. Two standards were calibrated: a granite squareness standard of rectangular shape, and a cylindrical squareness standard of steel with marked positions for the profile lines. The following parameters had to be calibrated: for the granite standard, the interior angle γB between two lines AB and AC (envelope or LS regression) fitted through the measured profiles, and/or the interior angle γLS between two LS regression lines AB and AC fitted through the measured profiles; for the cylindrical standard, the interior angles γ0°, γ90°, γ180°, γ270° between the LS regression line fitted through the measurement profiles at 0°, 90°, 180°, 270° and the envelope plane of the basis (resting on a surface plate); and, for both standards, the local LS straightness deviation for all measured profiles (2 and 4). The results of the comparison are the deviations of the profiles and angles measured by the individual NMIs from the reference values. These resulted from the weighted mean of the data from the participating laboratories, some of which were excluded on the basis of statistical evaluation. Graphical interpretations of all deviations are contained in the Final Report. To allow the individual deviations to be compared (25 profiles for the granite square and 44 profiles for the cylinder), graphical illustrations of standard deviations and of both extreme values (maximum and minimum) of the deviations were created. This regional supplementary comparison has provided independent information about the metrological properties of the measuring equipment and methods used by the participating NMIs. The Final Report does not contain En values: participants could not estimate some contributions in the uncertainty budget on the basis of previous comparisons, since no comparison of this kind had ever been organized, so the En value would not reflect the actual state of a given NMI. Instead of En, an analysis was performed by means of the Grubbs test according to ISO 5725-2. This comparison provided information about the state of metrological services in the field of large-square measurement. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by EUROMET, according to the provisions of the Mutual Recognition Arrangement (MRA).
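
    The central computation, an interior angle between two lines fitted through measured profiles, can be sketched with synthetic profile data. For simplicity the sketch uses the orthogonal-regression (total least squares) direction via an SVD rather than the report's exact LS regression convention, and it reports only the magnitude of the deviation from a right angle; a real evaluation needs consistent orientation conventions to fix the sign:

```python
import numpy as np

def profile_direction(x, y):
    """Unit direction of the orthogonal-regression line through a profile."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0]   # first principal direction

# Two synthetic profiles meeting at an interior angle of 90.01 degrees
t = np.linspace(0.0, 1.0, 100)
rng = np.random.default_rng(7)
x1, y1 = t, 1e-6 * rng.normal(size=t.size)
theta = np.radians(90.01)
x2 = t * np.cos(theta) + 1e-6 * rng.normal(size=t.size)
y2 = t * np.sin(theta)

u, v = profile_direction(x1, y1), profile_direction(x2, y2)
angle = np.degrees(np.arccos(np.clip(abs(u @ v), 0.0, 1.0)))  # acute angle
squareness_dev = 90.0 - angle   # |deviation| from a right angle, degrees
```

    With straightness and form errors present, the fitted directions, and hence the recovered angle, depend on the chosen fitting convention, which is exactly why the comparison distinguishes envelope and LS-regression definitions.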

  15. Line shape parameters of PH3 transitions in the Pentad near 4-5 μm: Self-broadened widths, shifts, line mixing and speed dependence

    NASA Astrophysics Data System (ADS)

    Malathy Devi, V.; Benner, D. Chris; Kleiner, Isabelle; Sams, Robert L.; Fletcher, Leigh N.

    2014-08-01

    Accurate knowledge of spectroscopic line parameters of PH3 is important for remote sensing of the outer planets, especially Jupiter and Saturn. In a recent study, line positions and intensities for the Pentad bands of PH3 were reported from analysis of high-resolution, high signal-to-noise room-temperature spectra recorded with two Fourier transform spectrometers (2014) [1]. The results presented in this study were obtained during the analysis of positions and intensities, but here we focus on the measurements of spectral line shapes (e.g. widths, shifts, line mixing) for the 2ν4, ν2 + ν4, ν1 and ν3 bands. A multispectrum nonlinear least squares curve fitting technique employing a non-Voigt line shape, including line mixing and speed dependence of the Lorentz width, was used to fit the spectra simultaneously. The least squares fits were performed on five room-temperature spectra recorded at various PH3 pressures (∼2-50 Torr) with the Bruker IFS-125HR Fourier transform spectrometer (FTS) located at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington. Over 840 Lorentz self-broadened half-width coefficients, 620 self-shift coefficients and 185 speed dependence parameters were measured. Line mixing was detected for transitions in the 2ν4, ν1 and ν3 bands, and its values were quantified for 10 A+A- pairs of transitions via the off-diagonal relaxation matrix element formalism. The dependences of the measured half-width coefficients on the J and K rotational quanta of the transitions are discussed. The self-width coefficients for the ν1 and ν3 bands from this study are compared to the self-width coefficients for transitions with the same rotational quanta (J, K) reported for the Dyad (ν2 and ν4) bands. 
    The measurements from the present study should be useful for the development of reliable theoretical modeling of pressure-broadened widths, shifts and line mixing in symmetric top molecules with C3v symmetry in general, and in PH3 in particular.
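
    The defining feature of a multispectrum fit is that spectra recorded at different pressures are fitted simultaneously with shared physical parameters. The toy sketch below fits two synthetic Lorentzian spectra that share the line position and a pressure-proportional broadening coefficient; the line shape, pressures, and values are illustrative only (the study uses a non-Voigt profile with line mixing and speed dependence):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz(nu, nu0, area, gamma):
    return area * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

# Two synthetic spectra at different pressures share the line position nu0
# and the self-broadening coefficient g0 (HWHM = g0 * p); areas differ.
p1, p2 = 10.0, 40.0                       # pressures, arbitrary units
nu = np.linspace(-1.0, 1.0, 400)
nu2 = np.concatenate([nu, nu])            # stacked frequency axis

def multispectrum(nu_stacked, nu0, g0, a1, a2):
    half = nu_stacked[: nu_stacked.size // 2]
    return np.concatenate([lorentz(half, nu0, a1, g0 * p1),
                           lorentz(half, nu0, a2, g0 * p2)])

true = (0.02, 0.004, 1.0, 3.5)
rng = np.random.default_rng(6)
data = multispectrum(nu2, *true) + 0.002 * rng.normal(size=nu2.size)
popt, _ = curve_fit(multispectrum, nu2, data, p0=(0.0, 0.005, 0.8, 3.0))
```

    Because both spectra constrain the same g0, the broadening coefficient is determined more tightly than from either spectrum alone, which is the practical advantage of the multispectrum technique.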

  16. Line shape parameters of PH 3 transitions in the Pentad near 4–5 μm: Self-broadened widths, shifts, line mixing and speed dependence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malathy Devi, V.; Benner, D. C.; Kleiner, Isabelle

    2014-08-01

    Accurate knowledge of spectroscopic line parameters of PH3 is important for remote sensing of the outer planets, especially Jupiter and Saturn. In a recent study, line positions and intensities for the Pentad bands of PH3 were reported from analysis of high-resolution, high signal-to-noise room-temperature spectra recorded with two Fourier transform spectrometers (2014) [1]. The results presented in this study were obtained during the analysis of positions and intensities, but here we focus on the measurements of spectral line shapes (e.g. widths, shifts, line mixing) for the 2ν4, ν2 + ν4, ν1 and ν3 bands. A multispectrum nonlinear least squares curve fitting technique employing a non-Voigt line shape, including line mixing and speed dependence of the Lorentz width, was used to fit the spectra simultaneously. The least squares fits were performed on five room-temperature spectra recorded at various PH3 pressures (~2-50 Torr) with the Bruker IFS-125HR Fourier transform spectrometer (FTS) located at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington. Over 840 Lorentz self-broadened half-width coefficients, 620 self-shift coefficients and 185 speed dependence parameters were measured. Line mixing was detected for transitions in the 2ν4, ν1 and ν3 bands, and its values were quantified for 10 A+A- pairs of transitions via the off-diagonal relaxation matrix element formalism. The dependences of the measured half-width coefficients on the J and K rotational quanta of the transitions are discussed. The self-width coefficients for the ν1 and ν3 bands from this study are compared to the self-width coefficients for transitions with the same rotational quanta (J, K) reported for the Dyad (ν2 and ν4) bands. 
    The measurements from the present study should be useful for the development of reliable theoretical modeling of pressure-broadened widths, shifts and line mixing in symmetric top molecules with C3v symmetry in general, and in PH3 in particular.

  17. Confirmatory Factor Analysis of the Patient Reported Outcomes Measurement Information System (PROMIS) Adult Domain Framework Using Item Response Theory Scores.

    PubMed

    Carle, Adam C; Riley, William; Hays, Ron D; Cella, David

    2015-10-01

    To guide measure development, National Institutes of Health-supported Patient-Reported Outcomes Measurement Information System (PROMIS) investigators developed a hierarchical domain framework. The framework specifies health domains at multiple levels. The initial PROMIS domain framework specified that physical function and symptoms such as Pain and Fatigue indicate Physical Health (PH); Depression, Anxiety, and Anger indicate Mental Health (MH); and Social Role Performance and Social Satisfaction indicate Social Health (SH). We used confirmatory factor analyses to evaluate the fit of the hypothesized framework to data collected from a large sample. We used data (n=14,098) from PROMIS's wave 1 field test and estimated domain scores using the PROMIS item response theory parameters. We then used confirmatory factor analyses to test whether the domains corresponded to the PROMIS domain framework as expected. A model corresponding to the domain framework did not provide ideal fit [root mean square error of approximation (RMSEA)=0.13; comparative fit index (CFI)=0.92; Tucker Lewis Index (TLI)=0.88; standardized root mean square residual (SRMR)=0.09]. On the basis of modification indices and exploratory factor analyses, we allowed Fatigue to load on both PH and MH. This model fit the data acceptably (RMSEA=0.08; CFI=0.97; TLI=0.96; SRMR=0.03). Our findings generally support the PROMIS domain framework. Allowing Fatigue to load on both PH and MH improved fit considerably.
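
    The fit indices quoted here are simple functions of the model and baseline chi-squares. A sketch of the standard maximum-likelihood formulas (numbers below are illustrative, not from this study):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square,
    its degrees of freedom, and the sample size (standard ML formula)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index from model (m) and baseline (b) chi-squares."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, num, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Illustrative values: a model chi-square of 300 on 100 df with n = 500
example_rmsea = rmsea(300.0, 100.0, 500)
example_cfi = cfi(120.0, 100.0, 2100.0, 120.0)
```

    These formulas make the conventional cutoffs concrete: RMSEA falls as chi-square approaches its degrees of freedom, and CFI compares the model's misfit to that of the baseline (independence) model.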

  18. The long-solved problem of the best-fit straight line: application to isotopic mixing lines

    NASA Astrophysics Data System (ADS)

    Wehr, Richard; Saleska, Scott R.

    2017-01-01

    It has been almost 50 years since York published an exact and general solution for the best-fit straight line to independent points with normally distributed errors in both x and y. York's solution is highly cited in the geophysical literature but almost unknown outside of it, so that there has been no ebb in the tide of books and papers wrestling with the problem. Much of the post-1969 literature on straight-line fitting has sown confusion not merely by its content but by its very existence. The optimal least-squares fit is already known; the problem is already solved. Here we introduce the non-specialist reader to York's solution and demonstrate its application in the interesting case of the isotopic mixing line, an analytical tool widely used to determine the isotopic signature of trace gas sources for the study of biogeochemical cycles. The most commonly known linear regression methods - ordinary least-squares regression (OLS), geometric mean regression (GMR), and orthogonal distance regression (ODR) - have each been recommended as the best method for fitting isotopic mixing lines. In fact, OLS, GMR, and ODR are all special cases of York's solution that are valid only under particular measurement conditions, and those conditions do not hold in general for isotopic mixing lines. Using Monte Carlo simulations, we quantify the biases in OLS, GMR, and ODR under various conditions and show that York's general - and convenient - solution is always the least biased.
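
    York's iterative solution is short enough to state in full. The sketch below follows the commonly cited unified formulation (weights from the x and y error variances, optional error correlation r); variable names are ours, and a production version would also propagate the parameter uncertainties:

```python
import numpy as np

def york_fit(x, y, sx, sy, r=0.0, tol=1e-12, maxiter=100):
    """Best-fit line y = a + b*x with errors in both coordinates (York)."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2     # weights of each x and y value
    b = np.polyfit(x, y, 1)[0]            # OLS slope as the starting value
    alpha = np.sqrt(wx * wy)
    for _ in range(maxiter):
        W = wx * wy / (wx + b * b * wy - 2.0 * b * r * alpha)
        Xb, Yb = np.sum(W * x) / np.sum(W), np.sum(W * y) / np.sum(W)
        U, V = x - Xb, y - Yb
        beta = W * (U / wy + b * V / wx - (b * U + V) * r / alpha)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = Yb - b * Xb
    return a, b
```

    Setting all sx to (near) zero recovers OLS, and other weight choices recover GMR and ODR, which is the sense in which those methods are special cases of York's solution.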

  19. A structural equation model of perceived and internalized stigma, depression, and suicidal status among people living with HIV/AIDS.

    PubMed

    Zeng, Chengbo; Li, Linghua; Hong, Yan Alicia; Zhang, Hanxi; Babbitt, Andrew Walker; Liu, Cong; Li, Lixia; Qiao, Jiaying; Guo, Yan; Cai, Weiping

    2018-01-15

    Previous studies have shown a positive association between HIV-related stigma and depression, suicidal ideation, and suicide attempts among people living with HIV/AIDS (PLWH), but few studies have examined the mechanisms linking HIV-related stigma, depression, and suicidal status (suicidal ideation and/or suicide attempt) in PLWH. The current study examined the relationships among perceived and internalized stigma (PIS), depression, and suicidal status among PLWH in Guangzhou, China, using structural equation modeling. A cross-sectional study using convenience sampling was conducted, and 411 PLWH were recruited from the Number Eight People's Hospital in Guangzhou, China, from March to June 2013. Participants were interviewed about their PIS, depressive symptoms, suicidal status, and socio-demographic characteristics. PLWH who had had suicidal ideation or suicide attempts since HIV diagnosis were considered to be suicidal. A structural equation model was fitted to examine the direct and indirect associations between PIS and suicidal status. Indicators used to evaluate the goodness of fit of the structural equation model included the chi-square statistic, Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA), Standardized Root Mean Square Residual (SRMR), and Weighted Root Mean Square Residual (WRMR). More than one-third (38.4%) of the PLWH had depressive symptoms and 32.4% reported suicidal ideation and/or attempt since HIV diagnosis. The global model showed good fit (chi-square value = 34.42, CFI = 0.98, RMSEA = 0.03, WRMR = 0.73). The structural equation model revealed that the direct pathway from PIS to suicidal status was significant (standardized pathway coefficient = 0.21), and that the indirect pathway from PIS to suicidal status via depression was also significant (standardized pathway coefficient = 0.24); depression thus partially mediated the association between PIS and suicidal status. 
    Our findings suggest that PIS is associated with increased depression and a higher likelihood of suicidal status. Depression is in turn positively associated with suicidal status and plays a mediating role between PIS and suicidal status. Therefore, to reduce suicidal ideation and attempts in PLWH, targeted interventions to reduce PIS and improve the mental health status of PLWH are warranted.

  20. A comparison of model-based imputation methods for handling missing predictor values in a linear regression model: A simulation study

    NASA Astrophysics Data System (ADS)

    Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah

    2017-08-01

    In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome it because they are easy to implement. However, these methods require assumptions about the data that rarely hold in practice. Model-based methods such as Maximum Likelihood (ML) using the expectation-maximization (EM) algorithm and Multiple Imputation (MI) are more promising when dealing with the difficulties caused by missing data. At the same time, inappropriate methods of missing value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of the missing data concept to help researchers select appropriate missing data imputation methods. A simulation study was performed to assess the effects of different missing data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution, and the dependent variable was generated as a combination of the explanatory variables. Missing values in a covariate were simulated using a missing at random (MAR) mechanism, at four levels of missingness (10%, 20%, 30% and 40%). The ML and MI techniques available within SAS software were investigated. A linear regression model was fitted, and the model performance measures MSE and R-squared were obtained. Results of the analysis showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Both methods are unable to handle levels of missingness larger than 30%.
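
    The simulation design, generating multivariate-normal covariates, imposing MAR missingness, and comparing how different handling strategies affect the fitted coefficients, can be sketched in a few lines. The study compares ML-EM and MI in SAS; this numpy toy instead contrasts complete-case analysis with naive mean imputation under the same kind of MAR mechanism, to illustrate why the choice of method matters:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
# Two correlated covariates and outcome y = 1 + 2*x1 + 3*x2 + noise
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
x1, x2 = X[:, 0], X[:, 1]
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(0.0, 1.0, n)

# MAR mechanism: the chance that x2 is missing depends only on observed x1
miss = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(x1 - 0.5)))

def ols(x1, x2, y):
    A = np.column_stack([np.ones_like(x1), x1, x2])
    return np.linalg.lstsq(A, y, rcond=None)[0]

cc = ~miss                                   # complete-case analysis
beta_cc = ols(x1[cc], x2[cc], y[cc])

x2_imp = np.where(miss, x2[cc].mean(), x2)   # naive mean imputation
beta_mean = ols(x1, x2_imp, y)
```

    Here complete-case analysis stays close to the true coefficient because the missingness depends only on a covariate in the model, while mean imputation visibly attenuates the x2 coefficient, the kind of bias the abstract warns about.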
